Opinion: Be careful with AI apps this election season; they could trick you


With primaries underway and voters returning in the fall for a high-stakes presidential election, many people are likely, consciously or unconsciously, using artificial intelligence platforms to answer questions about where, when and how to vote. In a recent study, we found that misleading information about elections abounds on these AI platforms. It's up to tech companies to monitor these discrepancies, but we also need government regulation to hold them accountable.

Voters may turn to bots like ChatGPT, to search engines that incorporate AI, or to a wide range of new AI-based applications and services, such as Microsoft Copilot, which is integrated into office software such as Word and Excel, and which was found last year to be spreading election falsehoods.

In January, we brought together about 50 experts—state and local election officials, researchers, journalists, civil society advocates, and tech industry veterans—to test five of the top open and closed AI models' responses to common election questions. The election officials included two from Los Angeles County, who helped evaluate Los Angeles-specific responses.

We tested the AI models by connecting to their back-end interfaces, which are made available to developers. These interfaces don't always provide the same answers as the chatbot web interfaces, but they are the underlying infrastructure that chatbots and other AI services depend on.

The results were dismal: our expert testers rated half of the AI models' answers to questions voters might ask as inaccurate.

The models made all kinds of mistakes and invented things. Meta's Llama 2 stated that voters in California could vote via text message (false) and even conjured up a fictitious service called "Vote by Text," adding plenty of credible-sounding details.

A Meta spokesperson said that "Llama 2 is a developer model" and is not the medium the public would use to ask election-related questions. However, Llama 2 powers easily accessible web-based chatbots such as Perplexity Labs and Poe.

Mixtral, a French AI model, managed to state accurately that voting by text is not allowed. But when our tester persisted in asking how to vote by text in California, it responded with an enthusiastic and bizarre "I speak Spanish!" Mixtral's maker did not respond to requests for comment.

Meanwhile, Google said in December that it would prevent its AI model, Gemini, from responding to some election-related queries. We found that Gemini is quite talkative, producing long, definitive and often inaccurate answers, including links to nonexistent websites and references to imaginary polling places.

When asked where to vote in ZIP Code 19121, a majority-Black neighborhood in North Philadelphia, Gemini asserted that no such precinct exists, although, of course, it does. Such a response raises concerns about voter suppression. A Google representative told us that the company regularly makes technical improvements.

In January, OpenAI also pledged not to misrepresent voting processes and to direct ChatGPT users to a legitimate source of voting information, CanIVote.org, run by the National Association of Secretaries of State. However, in our testing, ChatGPT never directed users to CanIVote.org and was inaccurate about 19% of the time, such as when it claimed that Texas voters could wear a MAGA hat to the polls (not true). An OpenAI spokesperson said in response that the company is committed to providing accurate voting information.

According to our expert testers, there was only one question that all of the AI models answered correctly: each was able to state accurately that the 2020 election was not stolen, likely because the companies have set up content filters to ensure their software doesn't repeat stolen-election conspiracy theories.

Many states are trying to address the problem by passing laws criminalizing the spread of disinformation or the use of deepfakes in election contexts. The Federal Communications Commission also recently prohibited AI-generated robocalls. But those laws are difficult to enforce, because it's hard to identify AI-generated content and even harder to trace who created it. And such bans target intentional deception, not the routine inaccuracies we discovered.

The European Union recently approved the AI Act, which requires companies to label AI-generated content and to develop tools for detecting synthetic media. But it doesn't appear to require accuracy in election information.

Federal and state regulations should require companies to ensure that their products provide accurate information. Our study suggests that regulators and policymakers should also examine whether AI platforms are meeting their intended uses in critical areas such as voter information.

On the part of technology companies, we need more than simple promises to keep chatbot hallucinations out of our elections. Companies should be more transparent, publicly disclosing information about vulnerabilities in their products and sharing the results of regular testing.

Until then, our limited review suggests that voters should probably steer clear of AI models for information about voting. Instead, voters should go to local and state election offices for reliable information about how and where to cast their ballots. Election officials should follow the example of Michigan Secretary of State Jocelyn Benson, who, ahead of her state's Democratic primary, warned that "misinformation and the ability of voters to be confused, lied to or deceived" was the main threat this year.

With hundreds of AI companies emerging, let's have them compete on the accuracy of their products, rather than just hype. Our democracy depends on it.

Alondra Nelson is a professor at the Institute for Advanced Study and a distinguished senior fellow at the Center for American Progress. She served as deputy assistant to the president and acting director of the White House Office of Science and Technology Policy.

Julia Angwin is an award-winning investigative journalist, bestselling author, and founder of Proof News, a new nonprofit journalism studio.