Opinion: How to prevent AI-powered election manipulation attempts


This election cycle, headlines have been dominated by unprecedented events, including Donald Trump’s criminal conviction, the attempt on his life, Joe Biden’s disastrous debate performance, and his replacement on the Democratic ticket by Vice President Kamala Harris. Unsurprisingly, other major political developments have been overshadowed, including the steady trickle of AI-enhanced attempts to influence voters.

During the presidential primaries, a fake robocall imitating Joe Biden urged New Hampshire voters to wait until November to cast their ballots. In July, Elon Musk shared a video that included a voice imitating Kamala Harris's, saying things she never said. Originally labeled as a parody, the clip quickly morphed into an untagged post on X with more than 130 million views, highlighting the challenge facing voters.

More recently, Trump has wielded concerns about AI as a weapon of his own, falsely claiming that a photo from a Harris rally was AI-generated and that the crowd was not real. And a doctored photo of the attempted assassination of the former president altered the faces of Secret Service agents so that they appear to be smiling, promoting the false theory that the shooting was staged.

It is clear that when it comes to AI manipulation, the voting public has to be prepared for anything.

Voters wouldn’t be in this situation if candidates had clear policies on the use of AI in their campaigns. Written guidelines about when and how campaigns intend to use AI would let people compare candidates’ actual use of the technology with their stated policies, helping voters assess whether candidates practice what they preach. If a politician pushes for watermarking AI-generated content so people can identify it, for example, they should apply such labels to their own AI-generated ads and other campaign materials.

Policy statements on AI can also help people protect themselves from malicious actors trying to manipulate their votes. And the lack of reliable means to evaluate AI use undermines the value the technology could bring to elections if deployed properly, fairly, and with full transparency.

It’s not that campaigns aren’t using AI. In fact, companies such as Google and Microsoft have acknowledged training dozens of campaigns and political groups on the use of generative AI tools.

Earlier this year, major tech companies released a set of principles intended to guide the use of AI in elections. They also promised to develop technology to detect and label realistic content created with generative AI and to educate the public about its use. However, these commitments lack any means of enforcement.

Government regulators have responded to concerns about AI’s effect on elections. In February, following the fake New Hampshire robocall, the Federal Communications Commission took action to make such tactics illegal. The consultant who planned the call was fined $6 million, and the telecom company that placed the calls was fined $2 million. But while the FCC wants to require disclosure of AI use in broadcast ads, the chairman of the Federal Election Commission announced last month that the agency was ending its consideration of rules on AI in political ads. FEC officials said such regulation would exceed their authority and that they would await direction from Congress on the issue.

California and some other states require disclaimers when the technology is used, but only when there is malicious intent. Michigan and Washington require disclosure of any use of AI, while Minnesota, Georgia, Texas and Indiana have passed outright bans on the use of AI in political ads.

It’s probably too late in this election cycle to expect campaigns to begin disclosing their AI practices. So the onus is on voters to remain vigilant about AI, much in the same way that other technologies, such as self-checkouts at grocery stores and other retailers, have shifted the responsibility to consumers.

Voters cannot rely on the election information arriving in their mailboxes, inboxes, and social media feeds to be free of technological manipulation. They should take note of who has funded the distribution of such materials and look for obvious signs of AI use in images, such as missing fingers or mismatched earrings. Voters should be aware of the source of the information they are consuming, how it was verified, and how it is being shared. All of this will contribute to greater information literacy, which, along with critical thinking, is a skill voters will need to fill out their ballots this fall.

Ann G. Skeet is the senior director of leadership ethics and John P. Pelissero is the director of government ethics at the Markkula Center for Applied Ethics at Santa Clara University. They are among the co-authors of “Voting Ethically: A Guide for American Voters,” from which parts of this piece were adapted.
