Trump Posted Fake Taylor Swift Image. AI and Deepfakes Are Only Going to Get Worse This Election Cycle – The Vacaville Reporter

Queenie Wong and Wendy Lee | (TNS) Los Angeles Times

The patriotic image shows megastar Taylor Swift dressed as Uncle Sam and falsely suggests she supports Republican presidential candidate Donald Trump.

“Taylor wants you to vote for Donald Trump,” reads the image, which appears to have been generated by artificial intelligence.

Over the weekend, Trump amplified the lie further, sharing the image, along with other photos of Swift fans expressing their support, with his 7.6 million followers on his social network Truth Social.

Deception has long played a role in politics, but the rise of artificial intelligence tools that allow people to quickly generate fake images or videos by typing a sentence adds another layer of complexity to a familiar problem on social media. These digitally altered images and videos, known as deepfakes, can make it appear as if someone is saying or doing something they are not.

As the battle between Trump and Democratic candidate Kamala Harris intensifies, disinformation experts are sounding the alarm about the risks of generative AI.

“I’m concerned that as we get closer to the election, this is going to explode,” said Emilio Ferrara, a professor of computer science at the USC Viterbi School of Engineering. “It’s going to be much worse than it is now.”

Platforms like Facebook and X, formerly known as Twitter, have rules against manipulated images, audio and video, but they’ve struggled to enforce these policies as AI-generated content floods the internet. Faced with accusations that they censor political speech, they’ve focused more on labeling content and fact-checking, rather than removing posts. And there are exceptions to the rules, such as satire, that allow people to create and share fake images online.

“We have all the problems of the past, all the myths and disagreements and general stupidity, that we’ve been dealing with for 10 years,” said Hany Farid, a UC Berkeley professor who focuses on misinformation and digital forensics. “Now we’ve supercharged it with generative AI and we’re really, really biased.”

As interest grows in OpenAI, the maker of the popular generative AI tool ChatGPT, tech companies are encouraging people to adopt new AI tools that can generate text, images and video.

Farid, who analyzed the Swift images Trump shared, said they appear to be a mix of both real and fake images, a “cunning” way to spread misleading content.

People share fake images for different reasons. Some do it simply to go viral on social media or to troll others. Visual imagery is a powerful part of propaganda and can distort people’s views on politics, including the legitimacy of the 2024 presidential election, he said.

On X, images that appear to be generated by AI show Swift hugging Trump, holding his hand or singing a duet while the Republican strums a guitar. Social media users have also used other methods to falsely claim Swift supported Trump.

X called a video falsely claiming Swift supported Trump “manipulated media.” The video, posted in February, uses footage of Swift at the 2024 Grammys and makes it appear as if she’s holding a sign that reads, “Trump won. Democrats cheated!”

Political campaigns are preparing for the impact of AI on elections.

Vice President Harris’ campaign has an interagency team “to prepare for the potential impacts of AI in this election, including the threat of malicious deepfakes,” spokeswoman Mia Ehrenberg said in a statement. The campaign is only authorizing the use of AI for “productivity tools” such as data analytics, she added.

Trump’s campaign did not respond to a request for comment.

Part of the challenge in curbing fake or manipulated videos is that the federal law governing social media activities doesn’t specifically address deepfakes. Section 230 of the Communications Decency Act of 1996 shields social media companies from liability for content posted by their users, as long as they don’t aid or control those who posted it.

But over the years, tech companies have been criticized for what appears on their platforms. To address this, many social media companies have created content moderation guidelines, such as banning hate speech.

“It really is a tightrope walk for social media companies and online operators,” said Joanna Rosen Forster, a partner at law firm Crowell & Moring.

Lawmakers are trying to address this problem by introducing bills that would require social media companies to remove unauthorized deepfakes.

Gov. Gavin Newsom said in July that he supports legislation that would make it illegal to use AI to alter someone’s voice in a campaign ad. The comments were in response to a video shared by billionaire Elon Musk, owner of X, in which AI is used to clone Harris’ voice. Musk, who has supported Trump, later clarified that the video he shared was a parody.

The Screen Actors Guild-American Federation of Television and Radio Artists is one of the groups pushing for laws against deepfakes.

Duncan Crabtree-Ireland, national executive director and chief negotiator for SAG-AFTRA, said social media companies are not doing enough to address the problem.

“Disinformation and outright lies spread by deepfakes can never truly be undone,” Crabtree-Ireland said. “Especially because elections are often decided by small margins and through complex, arcane systems like the Electoral College, these deepfake-powered lies can have devastating real-world consequences.”

Crabtree-Ireland has experienced the problem first-hand. Last year, he was the subject of a deepfake video that circulated on Instagram during a campaign to ratify a contract. The video, which featured fake footage of Crabtree-Ireland urging members to vote against a contract he had negotiated, was viewed tens of thousands of times. And even though it was captioned as a “deepfake,” he received dozens of messages from union members asking him about it.

It took several days for Instagram to remove the deepfake video, he said.

“I found it very insulting,” Crabtree-Ireland said. “They shouldn’t be stealing my voice and my face to argue a case that I don’t agree with.”

With Harris and Trump in a close race, it’s not surprising that both candidates are leaning on celebrities to appeal to voters. Harris’ campaign embraced pop star Charli XCX’s portrayal of the candidate as a “brat” and used popular songs like Beyoncé’s “Freedom” and Chappell Roan’s “Femininomenon” to promote the Democratic candidate, who is Black and Asian American. Musicians Kid Rock, Jason Aldean and Ye, formerly known as Kanye West, have thrown their support behind Trump, who was the target of an assassination attempt in July.

Swift, who has previously been targeted by deepfakes, has not publicly endorsed a candidate in the 2024 presidential election, but she has criticized Trump in the past. In the 2020 documentary “Miss Americana,” Swift says in a tearful conversation with her parents and team that she regrets not speaking out against Trump during the 2016 election, and criticizes Tennessee Republican Marsha Blackburn, who was then running for U.S. Senate, as “Trump in a wig.”

Swift’s publicist, Tree Paine, did not respond to a request for comment.

AI-powered chatbots from Meta, X and OpenAI make it easy for people to create fabricated images. While news organizations have found that X’s AI chatbot Grok can generate images of election fraud, other chatbots are more restrictive.

Meta AI’s chatbot refused to create images of Swift supporting Trump.

“I am not allowed to generate images that could be used to spread misinformation or give the impression that a public figure has supported a particular political candidate,” the Meta AI chatbot responded.

Meta and TikTok cited their efforts to label AI-generated content and work with fact-checkers. TikTok, for example, said an AI-generated video that falsely depicts a public figure endorsing a political candidate is not allowed. X did not respond to a request for comment.

When asked how Truth Social moderates AI-generated content, the platform’s parent company, Trump Media and Technology Group Corp., accused journalists of “demanding more censorship.” Truth Social’s community guidelines include rules against posting fraud and spam, but do not specify how it handles AI-generated content.

As social media platforms face threats of regulation and lawsuits, some misinformation experts are skeptical that social networks are doing a good job of moderating misleading content.

Social networks make most of their money from advertising, so keeping users on the platforms longer is “good for business,” Farid said.

“What people are concerned with is the absolute most conspiratorial, hateful, lustful, angry content,” he said. “That’s who we are as human beings.”

It’s a harsh reality that even Swifties can’t escape.

____

Mikael Wood contributed to this report.

©2024 Los Angeles Times. Visit latimes.com. Distributed by Tribune Content Agency, LLC.