'Lack of trust': How deepfakes and AI could shake up the US election


On January 21, Patricia Gingrich was about to sit down to dinner when her landline rang. The New Hampshire voter answered and heard a voice telling her not to vote in the upcoming presidential primary.

“As I was listening to it, I thought, God, that sounds like Joe Biden,” Gingrich told Al Jazeera. “But the fact that he was saying to save your vote and not use it in the next election, I knew that Joe Biden would never say that.”

The voice may have sounded like that of the president of the United States, but it wasn't him: it was a deepfake, generated by artificial intelligence (AI).

Experts warn that deepfakes (audio, video or images created with artificial intelligence tools, intended to deceive) pose a high risk to American voters ahead of the November general election, not only by injecting false content into the race but by eroding public trust.

Gingrich said she wasn't fooled by the Biden deepfake, but fears it suppressed voter turnout. The message reached nearly 5,000 New Hampshire voters just days before the state's primary.

“This could be bad for people who aren't as informed about what's going on with the Democrats,” said Gingrich, chair of the Barrington Democratic Committee in New Hampshire.

“If they really thought they shouldn't vote for something and Joe Biden told them not to, maybe they wouldn't show up to vote.”

US President Joe Biden's voice was spoofed in a robocall sent to New Hampshire primary voters. [Leah Millis/Reuters]

Vulnerable online groups

The Biden call was not the only deepfake so far in this election cycle. Before Florida Governor Ron DeSantis canceled his presidential bid, his campaign shared a video containing artificial intelligence-generated images of Donald Trump hugging immunologist Anthony Fauci, two figures who clashed publicly during the COVID-19 pandemic.

And in September, a different robocall was sent to 300 voters expected to participate in the South Carolina Republican primary. This time, recipients heard an AI-generated voice imitating Senator Lindsey Graham and asking them who they were voting for.

The practice of altering or falsifying content, especially for political gain, has existed since the dawn of American politics. Even the country's first president, George Washington, had to deal with a series of “spurious letters” that appeared to show him questioning the cause of American independence.

But AI tools are now advanced enough to convincingly imitate people quickly and cheaply, raising the risk of misinformation.

A study published earlier this year by researchers at George Washington University predicted that “AI attacks” would become a daily occurrence by mid-2024, posing a threat to the November general election.

The study's lead author, Neil Johnson, told Al Jazeera that the biggest risk comes not from the recent and obviously fake robocalls, which contained surprising messages, but rather from more convincing deepfakes.

“It will be nuanced images, doctored images, not completely false information because false information attracts the attention of disinformation fact-checkers,” Johnson said.

The study found that online communities are linked in a way that allows bad actors to send large amounts of manipulated media directly to the mainstream.

Communities in swing states could be especially vulnerable, as could parenting groups on platforms like Facebook.

“The role of parenting communities will be important,” Johnson said, pointing to the rapid spread of vaccine misinformation during the pandemic as an example.

“I think we will suddenly face a wave of [disinformation] — many things that are not false but that stretch the truth.”

An AI-generated image posted by Ron DeSantis' campaign appeared to show Donald Trump, right, hugging Anthony Fauci, left.

Eroding public trust

However, voters themselves are not the only targets of deepfakes. Larry Norden, senior director of the Government and Elections Program at the Brennan Center for Justice, has been working with election officials to help them detect false content.

For example, Norden said bad actors could use artificial intelligence tools to order poll workers to close a polling place prematurely, by manipulating the sound of their boss's voice or by sending a message that appears to come from a supervisor's account.

He is teaching poll workers to protect themselves by verifying the messages they receive.

Norden emphasized that bad actors can create misleading content without AI. “The thing about AI is that it makes it easier to do it at scale,” he said.

Last year, Norden illustrated the capabilities of AI by creating a fake video of himself for a presentation on the risks posed by the technology.

“It didn't take long,” Norden said, explaining that all he had to do was feed his previous television interviews into an app.

His avatar wasn't perfect (his face was a little blurry, his voice a little breathy), but Norden noted that AI tools are improving rapidly. “Since we recorded that, the technology has become more sophisticated and I think it's getting harder and harder to tell.”

The technology itself is not the only problem. As deepfakes become more common, the public will become more aware of them and more skeptical of the content it consumes.

That could erode public trust, as voters would be more likely to reject true information. Political figures could also abuse that skepticism for their own purposes.

Legal scholars have called this phenomenon the “liar's dividend”: Concern about deepfakes could make it easier for subjects of legitimate audio or video footage to claim that the recordings are fake.

Norden pointed to Access Hollywood audio that emerged before the 2016 election as an example. In the clip, then-candidate Trump is heard talking about his interactions with women: “You can do anything. Grab them by the pussy.”

The tape, which was very real, was seen as damaging to Trump's prospects among female voters. But if similar audio were leaked today, Norden said a candidate could easily call it fake. “It would be easier for the public to dismiss that kind of thing than it was a few years ago.”

Norden added: “One of the problems we have now in the United States is a lack of trust, and this can only make things worse.”

Steve Kramer, center, has been charged with 13 felony counts of voter suppression, as well as misdemeanors, for his role in the New Hampshire robocall. [Steven Senne/AP Photo, pool]

What can be done about deepfakes?

While deepfakes are a growing concern in US elections, relatively few federal laws restrict their use. The Federal Election Commission (FEC) has yet to limit deepfakes in elections, and bills in Congress remain stalled.

Individual states are scrambling to fill the void. According to a legislation tracker published by consumer advocacy organization Public Citizen, 20 state laws have been enacted so far to regulate deepfakes in elections.

Several more bills have been passed (in Hawaii, Louisiana and New Hampshire) and are awaiting their governors' signatures.

Norden said he wasn't surprised to see individual states act before Congress. “States are supposed to be the laboratories of democracy, so it's proving true again: states are acting first. We all know it's really hard to get anything passed in Congress,” he said.

Voters and political organizations are also taking action. After Gingrich received the phony Biden call in New Hampshire, she joined a lawsuit, led by the League of Women Voters, seeking accountability for the alleged deception.

The source of the call turned out to be Steve Kramer, a political consultant who stated that his intention was to draw attention to the need to regulate AI in politics. Kramer also admitted to being behind the robocall in South Carolina, imitating Senator Graham.

Kramer came forward after NBC News revealed that he had commissioned a magician to use publicly available software to generate Biden's deepfake voice.

According to the lawsuit, the deepfake took less than 20 minutes to create and cost just $1.

Kramer, however, told CBS News that he received “$5 million of exposure” for his efforts, which he hoped would allow AI regulations to “develop themselves or at least start to pay for themselves.”

“My intention was to make a difference,” he said.

Paul Carpenter, a New Orleans magician, said he was hired to create a deepfake of President Biden's voice. [Matthew Hinton/AP Photo]

Potential to enforce existing laws

But Kramer's case shows that existing laws can be used to limit deepfakes.

The Federal Communications Commission (FCC), for example, ruled earlier this year that robocalls using AI-generated voices fall under the Telephone Consumer Protection Act of 1991 and are therefore illegal in most circumstances.

The commission ultimately proposed a $6 million fine against Kramer for the illegal robocall.

The New Hampshire Department of Justice also charged Kramer with felony voter suppression and impersonating a candidate, which could result in up to seven years in prison. Kramer has pleaded not guilty. He did not respond to a request for comment from Al Jazeera.

Norden said it's significant that none of the laws Kramer is accused of breaking were designed specifically for deepfakes. “The criminal charges against him have nothing to do with AI,” he said. “Those laws exist regardless of the technology that is used.”

However, those laws are not as easy to apply to bad actors who are unidentifiable or located outside the US.

“We know from intelligence agencies that they are already seeing China and Russia experimenting with these tools. And they expect them to be used,” Norden said. “In that sense, this problem is not going to be solved through legislation.”

Both Norden and Johnson believe the lack of regulation makes it more important for voters to educate themselves about deepfakes and learn how to find accurate information.

As for Gingrich, she said she knows manipulative deepfakes will become more ubiquitous. She also believes voters should educate themselves about the risk.

Her message to voters? “I would tell people to make sure they know they can vote.”
