Published: Aug. 29, 2024

Banner image: A polling place generated using artificial intelligence. (Credit: Adobe Stock)

On Aug. 18, former president and current presidential candidate Donald Trump posted an unusual endorsement to his social media account on Truth Social. Amid a series of photos, he included an image of pop megastar Taylor Swift wearing an Uncle Sam hat and declaring: "Taylor wants you to vote for Donald Trump."


The problem: It wasn't real. Swift hadn't, and still hasn't, endorsed a candidate for the 2024 presidential election. The image may have been generated by artificial intelligence.

Casey Fiesler, associate professor in the Department of Information Science at CU Boulder, sees the rise of AI in politics as a worrying trend. This month, for example, NPR reporter Huo Jingnan tested out Grok, a new AI platform launched by the social media company X. She was able to create surprisingly realistic security camera images of people stuffing envelopes into ballot drop boxes in the dead of night.

"It's not like fake images weren't a thing before this year," Fiesler said. "The difference is that it's so much easier to do now. AI is democratizing this type of bad acting."

To help voters navigate this new and perilous election information landscape, CU Boulder Today spoke to Fiesler and other experts in AI and media literacy. They include Kai Larsen, professor of information systems in the Leeds School of Business, and Toby Hopp, associate professor in the Department of Advertising, Public Relations and Media Design.

These experts discuss how you can find out if a photo you're seeing online is the real deal, and how to talk to friends and family members who are spreading misinformation.

Yes, AI really is that good

In the past, AI-generated images often left behind "artifacts," such as hands with six fingers, that eagle-eyed viewers could spot. But those sorts of mistakes are getting easier to fix in still images. Video is not far behind, said Fiesler, who covers the ethics of AI in a course she's teaching this fall called "Ethical and Policy Dimensions of Information and Technology."

"At some point soon, you will be able to see an AI-generated video of the head of the CDC giving a press conference, and it will totally fool you," she said.

At the same time, the algorithms that govern social media platforms like TikTok and Instagram can trap users in downward spirals of misinformation, Larsen said. He's the co-author of a 2021 book.

"Algorithms, at least historically, have been driving people into these rabbit holes," Larsen said. "If you are willing to believe one piece of misinformation, then the algorithm is now finding out that you like conspiracy theories. So why not feed you more of them?"

Tech probably won't save us

A range of companies now offer services that, they claim, can detect AI-generated content, including fake images. But just like human eyes, those tools can be easily tricked, Larsen said. Some critics of AI have also urged tech companies to add digital "watermarks" to AI content. These watermarks would flag photos or text that had originally come from an AI platform.

"The problem with watermarks is that they are often fairly easy to get rid of," Larsen said. "Or you can just find another large language model that doesn't use them."

Google it

When it comes to AI images, a little searching online can go a long way, Fiesler said.

In his post to Truth Social, Trump included several AI-generated images of fake Trump supporters, alongside a real photo (upper right) of a woman wearing a "Swifties for Trump" T-shirt.

In a post to X, Trump shared an AI-generated image of a woman who looks like Kamala Harris speaking to a gathering of Communists.

Earlier in August, Trump's campaign accused Kamala Harris' team of using AI to make the crowd size look bigger in a photo of one of her rallies. Fiesler ran a quick Google image search on the photo. She discovered that numerous news organizations had covered the same event, and dozens of other photos and videos existed, all showing the same large crowd.

"Google it," Fiesler said. "Find out: Are news organizations writing about this same event? Do other photographs exist?"

Hopp, a scholar who studies fake news, or what he prefers to call "countermedia," cautions social media users to beware of posts that try to trigger our worst impulses. In 2016, troll farms in Russia posted thousands of misleading ads about the presidential election to social media. Many tried to tap into negative emotions, pitting Americans on the right and left against each other.

"We can evaluate a piece of information and ask ourselves: 'Is this trying to make me angry? Is this trying to make me upset?'" Hopp said. "If so, we may want to ask: 'Is there a possibility that this might be misleading?'"

What about friends and family?

It's a familiar problem for many people: a friend or family member who won't stop sharing misleading social media posts. Dealing with that kind of loved one can be a minefield, Hopp said. Research shows that simply challenging people on their false beliefs (say, that the Earth is flat) often won't change their minds. It may even make them double down.

He and other researchers have experimented with giving social media users basic information on how to tell fact from fiction. Such interventions can help, but not as much as Hopp would hope.

"I do think that sober, empathetic and caring discussions with those who are important to us about media literacy can be important for helping people use different strategies when they're on social media platforms," he said. "But there's no silver bullet."

What can we do in the long term?

Can anything help to slow the spread of misleading AI images online?

Fiesler sees an urgent need for the federal government to step in to regulate the rapidly growing AI industry. She said that a starting point could be the Blueprint for an AI Bill of Rights, which the White House's Office of Science and Technology Policy drafted in 2022.

This blueprint, which has not been passed into law, includes recommendations like: "You should know that an automated system is being used and understand how and why it contributes to outcomes that impact you."

Hopp, for his part, believes that a lot of the responsibility for stopping political misinformation comes down to another group: politicians. It's time to cool down the temperature of political discussions in the United States.

"There's a role for our political leaders to discourage the use of hyper-partisan, divisive information," Hopp said. "Embracing this type of misleading information creates conditions that are fairly ripe for its spread."