AI hasn’t done much to influence voters — with one notable exception

But the massive devastation experts expected hasn’t quite materialized. Instead of deepfakes of political candidates fooling voters and creating fact-checking nightmares, AI has mostly been used by supporters to generate obvious meme art.

Perhaps the biggest impact of AI this year was helping prompt Taylor Swift to endorse Democratic presidential candidate Kamala Harris.

In an Instagram post supporting Harris on Tuesday, the megastar said her endorsement was partly influenced by an AI image Trump posted of her, in which the pop star wore a ridiculously oversized American flag hat with the message “Taylor wants you to vote for Donald Trump.”

“It really brought to the forefront my fears around AI and the dangers of spreading misinformation,” Swift wrote in her post. “It made me realize that I need to be completely transparent about my true plans for this election as a voter. The easiest way to combat misinformation is with the truth.”

And the AI-shy Swift is in good company: experts and the media have raised the alarm that AI could cause a “tech-enabled Armageddon,” that we’ve only seen the “tip of the iceberg,” and that “deepfakes threaten to disrupt global elections.”

But while there have been attempts to use AI to influence voters — like the fake Joe Biden robocall in New Hampshire, or a deepfake campaign video of Kamala Harris — they don’t seem to have fooled many people.

Many AI creations came in the form of fairly obvious memes and satirical videos shared on social media. Moreover, fact-checkers (including fact-checkers on platforms like X’s Community Notes) have been quick to shoot down any AI content that was even remotely convincing.

Even the most sinister attempts, in which foreign actors used AI to spread disinformation, appear to have had less impact than feared.

For example, Meta wrote in its most recent Adversarial Threat Report that while Russian, Chinese, and Iranian disinformation campaigns have used AI, their “GenAI-powered tactics” have yielded “only incremental gains in productivity and content generation.”

And Microsoft, in its most recent Threat Intelligence Report from August, also debunked the idea that AI has made foreign influence campaigns more effective.

Microsoft writes that in identifying Russian and Chinese influence operations, it found that both “have deployed generative AI, but with limited to no impact.” Microsoft also said that another Russian operation, first reported by the company in April, “has repeatedly deployed generative AI in its campaigns, but with little effect.”

“In aggregate,” Microsoft continued in its report, “we saw nearly all actors seeking to integrate AI content into their operations, but recently many actors have reverted to techniques that have proven effective in the past: simple digital manipulations, misrepresentation of content, and the use of familiar labels or logos on top of false information.”

And it’s not just in the US; recent elections around the world have been barely affected by AI.

The Australian Strategic Policy Institute, which analysed instances of AI-generated disinformation surrounding the UK election in July, concluded in a recent report that voters never faced the feared “tsunami of fake AI messages targeting political candidates.”

“In the UK, there were only a few examples of this kind of content going viral during the campaign period,” explains Sam Stockwell, a researcher at ASPI.

But, he added, “while there is no evidence that these examples influenced a large number of votes,” there were “spikes in online harassment of the people targeted by the fakes” and “confusion among the public about whether the content was authentic.”

A study published in May by the UK’s Alan Turing Institute found that of the 112 national elections that have taken place since the start of 2023 or are upcoming, only 19 showed signs of AI interference.

“Existing examples of AI misuse in elections are scarce and often amplified by the mainstream media,” the paper’s authors wrote. “This can serve to amplify public fears and inflate the perceived threat of AI to electoral processes.”

But while the researchers found that the “current impact of AI on specific election outcomes is limited,” the threats are certainly present and, they warned, “show signs of damage to the broader democratic system.”