Banner image: Image of a polling place generated using artificial intelligence. (Credit: Adobe Stock)
On Aug. 18, former president and current presidential candidate Donald Trump posted an unusual endorsement to his social media account on Truth Social. Amid a series of photos, he included an image of pop megastar Taylor Swift wearing an Uncle Sam hat and declaring: "Taylor wants you to vote for Donald Trump."
The problem: It wasn't real. Swift hadn't, and still hasn't, endorsed a candidate for the 2024 presidential election. The image may have been generated by artificial intelligence.
Casey Fiesler, associate professor in the Department of Information Science at CU Boulder, sees the rise of AI in politics as a worrying trend. This month, for example, NPR reporter Huo Jingnan tested out Grok, a new AI platform launched by the social media company X. She was able to create surprisingly realistic security camera images of people stuffing envelopes into ballot drop boxes in the dead of night.
"It's not like fake images weren't a thing before this year," Fiesler said. "The difference is that it's so much easier to do now. AI is democratizing this type of bad acting."
To help voters navigate this new and perilous election information landscape, CU Boulder Today spoke to Fiesler and other experts in AI and media literacy. They include Kai Larsen, professor of information systems in the Leeds School of Business, and Toby Hopp, associate professor in the Department of Advertising, Public Relations and Media Design.
These experts discuss how you can find out if a photo you're seeing online is the real deal, and how to talk to friends and family members who are spreading misinformation.
Yes, AI really is that good
In the past, AI-generated images often left behind "artifacts," such as hands with six fingers, that eagle-eyed viewers could spot. But those sorts of mistakes are getting easier to fix in still images. Video is not far behind, said Fiesler, who covers the ethics of AI in a course she's teaching this fall called "Ethical and Policy Dimensions of Information and Technology."
"At some point soon, you will be able to see an AI-generated video of the head of the CDC giving a press conference, and it will totally fool you," she said.
At the same time, the algorithms that govern social media platforms like TikTok and Instagram can trap users in downward spirals of misinformation, Larsen said. He's the co-author of a 2021 book on the subject.
"Algorithms, at least historically, have been driving people into these rabbit holes," Larsen said. "If you are willing to believe one piece of misinformation, then the algorithm is now finding out that you like conspiracy theories. So why not feed you more of them?"
Tech probably wonāt save us
A range of companies now offer services that, they claim, can detect AI-generated content, including fake images. But just like human eyes, those tools can be easily tricked, Larsen said. Some critics of AI have also urged tech companies to add digital "watermarks" to AI content. These watermarks would flag photos or text that had originally come from an AI platform.
"The problem with watermarks is that they are often fairly easy to get rid of," Larsen said. "Or you can just find another large language model that doesn't use them."
Google it
When it comes to AI images, a little searching online can go a long way, Fiesler said.
Earlier in August, Trump's campaign accused Kamala Harris' team of using AI to make the crowd size look bigger in a photo of one of her rallies. Fiesler ran a quick Google image search on the image. She discovered that numerous news organizations had covered the same event, and dozens of other photos and videos existed, all showing the same large crowd.
"Google it," Fiesler said. "Find out: Are news organizations writing about this same event? Do other photographs exist?"
Hopp, a scholar who studies fake news, or what he prefers to call "countermedia," cautions social media users to beware of posts that try to trigger our worst impulses. In 2016, troll farms in Russia posted thousands of misleading ads about the presidential election to social media. Many tried to tap into negative emotions, pitting Americans on the right and left against each other.
"We can evaluate a piece of information and ask ourselves: 'Is this trying to make me angry? Is this trying to make me upset?'" Hopp said. "If so, we may want to ask: 'Is there a possibility that this might be misleading?'"
What about friends and family?
It's a familiar problem for many people: a friend or family member who won't stop sharing misleading social media posts. Dealing with that kind of loved one can be a minefield, Hopp said. Research shows that simply challenging people on their false beliefs (say, that the Earth is flat) often won't change their minds. It may even make them double down.
He and other researchers have experimented with giving social media users media literacy interventions, or basic information on how to tell fact from fiction. Such interventions can help, but not as much as Hopp would like.
"I do think that sober, empathetic and caring discussions with those who are important to us about media literacy can be important for helping people use different strategies when they're on social media platforms," he said. "But there's no silver bullet."
What can we do in the long term?
Can anything help to slow the spread of misleading AI images online?
Fiesler sees an urgent need for the federal government to step in to regulate the rapidly growing AI industry. She said that a starting point could be the Blueprint for an AI Bill of Rights, which the White House's Office of Science and Technology Policy drafted in 2022.
This blueprint, which has not been passed into law, includes recommendations like: "You should know that an automated system is being used and understand how and why it contributes to outcomes that impact you."
Hopp, for his part, believes that a lot of the responsibility for stopping political misinformation comes down to another group: politicians. It's time, he said, to cool down the temperature of political discussions in the United States.
"There's a role for our political leaders to discourage the use of hyper-partisan, divisive information," Hopp said. "Embracing this type of misleading information creates conditions that are fairly ripe for its spread."