Judge blocks California law that targeted deepfake campaign ads


With deepfake video and audio making their way into political campaigns, California enacted its strictest restrictions yet in September: a law banning political ads within 120 days of an election that include misleading, digitally generated or altered content, unless the ads are labeled "manipulated."

On Wednesday, a federal judge temporarily blocked the law, saying it violated the First Amendment.

Other laws against misleading campaign ads remain in effect in California, including one that requires candidates and political action committees to disclose when ads use artificial intelligence to create or substantially alter content. But the preliminary injunction granted against Assembly Bill 2839 means there will be no broad ban on using artificial intelligence to clone a candidate's image or voice and falsely portray them without revealing that the images or words are fake.

The injunction was sought by Christopher Kohls, a conservative commentator who has created a series of fake videos lampooning Democrats, including the party's presidential candidate, Vice President Kamala Harris. Gov. Gavin Newsom cited one of those videos, which showed clips of Harris while a fake version of her voice talked about being the "ultimate diversity hire" and professed ignorance and incompetence, when he signed AB 2839. But the measure was actually introduced in February, long before Kohls' Harris video went viral on X.

When asked on X about the ruling, Kohls said: “Freedom prevails! For now.”

Deepfake videos lampooning politicians, including one directed at Vice President Kamala Harris, have gone viral on social media.

(Darko Vojinovic/Associated Press)

The ruling by U.S. District Judge John A. Mendez illustrates the tension between efforts to protect against AI-powered forgery that could influence elections and the Bill of Rights' strong safeguards for political speech.

In granting a preliminary injunction, Mendez wrote: "When political speech and electoral politics are at stake, the First Amendment has almost unequivocally dictated that courts allow speech to flourish rather than supporting the State's attempt to stifle it. ... [M]ost of AB 2839 acts as a hammer rather than a scalpel, serving as a blunt tool that hinders humorous expression and unconstitutionally stifles the free and unfettered exchange of ideas that is so vital to American democratic debate."

Robert Weissman, co-president of Public Citizen, responded: “The First Amendment should not tie our hands in addressing a serious, foreseeable and real threat to our democracy.”

Robert Weissman, co-president of Public Citizen, speaks at a press conference.

Robert Weissman of the consumer advocacy organization Public Citizen says 20 other states have adopted laws similar to AB 2839, but there are key differences.

(Nick Wass/Associated Press)

Weissman said 20 states had adopted laws that followed the same core approach: requiring that ads that use AI to manipulate content be labeled as such. But AB 2839 had some unique elements that could have influenced Mendez's thinking, Weissman said, including the requirement that the disclosure be displayed as large as the largest text seen in the ad.

In his ruling, Mendez noted that the First Amendment's protections extend even to false and misleading speech. Even on an issue as important as safeguarding elections, he wrote, lawmakers may regulate expression only through the least restrictive means.

AB 2839, which required political videos to continually display disclosures that their content had been manipulated, did not use the least restrictive means of protecting election integrity, Mendez wrote. A less restrictive approach would be "counter-speech," he wrote, although he did not explain what that would entail.

Weissman responded: "Counter-speech is not an adequate remedy." The problem with deepfakes is not that they make false claims or insinuations about a candidate, he said; "the problem is that they show the candidate saying or doing something they never actually said or did." Targeted candidates are left with the nearly impossible task of explaining that they didn't do or say those things, he said, which is considerably harder than countering a false accusation made by an opponent or a political action committee.

Requiring disclosure of manipulation is not a perfect solution to the challenges deepfake ads create, he said, but it is the least restrictive remedy.

Liana Keesing of Issue One, a democracy advocacy group, said the creation of deepfakes is not necessarily the problem. “What matters is the amplification of that false and misleading content,” said Keesing, the group's campaign director.

Alix Fraser, tech reform director at Issue One, said the most important thing lawmakers can do is address how tech platforms are designed. “What are the barriers around that? There are basically none,” he said, adding: “That's the core problem as we see it.”
