AI is fueling disinformation attacks against voters, especially in communities of color.

As the general election campaign begins in earnest, we can expect disinformation attacks targeting voters, especially in communities of color. This has happened before: in 2016, for example, Russia's disinformation programs focused on African Americans, creating Instagram and Twitter accounts posing as Black voices and producing fake news websites such as blacktivist.info, blacktolive.org and blacksoul.us.

Technological advances will make these efforts more difficult to recognize. Imagine those same fake accounts and websites featuring hyper-realistic videos and images intended to sow racial division and mislead people about their right to vote. With the advent of generative artificial intelligence, this is possible at little or no cost, fueling the type of misinformation that has always targeted communities of color.

It's a problem that candidates, election offices and voter outreach groups will grapple with in the coming months. But ultimately, voters themselves will have to figure out what is real and what is fake, what is authentic and what is AI-generated.

For immigrants and communities of color, who often face language barriers, distrust democratic systems, and lack access to technology, the challenge is likely more significant. Across the country, and especially in states like California with large communities of immigrants and people with limited English proficiency, the government needs to help these groups identify and avoid misinformation.

Asian Americans and Latinos are particularly vulnerable. About two-thirds of the Asian American and Pacific Islander population are immigrants, and according to a Pew Research Center report, 86% of Asian immigrants ages 5 and older "speak a language other than English at home." The same dynamic applies to Latinos: only 38% of foreign-born Latinos in the U.S. report being proficient in English.

Targeting non-English-speaking communities has several advantages for those who spread misinformation. These groups are often isolated from mainstream news sources, which have the greatest resources to debunk deepfakes and other misinformation, and instead prefer to interact online in their native languages, where moderation and fact-checking are less prevalent. Forty-six percent of Latinos in the U.S. use WhatsApp, while many Asian Americans prefer WeChat, a platform that Wired reported "is used by millions of Chinese Americans and people with friends, family or businesses in China, including as a political organizing tool."

Disinformation targeting immigrant communities is poorly understood and difficult to track and counter, but it is becoming easier to create. In the past, producing fake content in languages other than English was labor-intensive and often of low quality. Now, AI tools can create hard-to-trace disinformation in any language at lightning speed, without the vulnerabilities and scale constraints of human labor. Despite this, much research on misinformation and disinformation focuses on English-language content.

Attempts to target communities of color and non-English-speaking communities with misinformation are aided by many immigrants' heavy reliance on cellphones to access the internet. Mobile user interfaces are particularly vulnerable to misinformation because many desktop design and branding elements are minimized in favor of content on smaller screens. Given that 13% of Latinos and 12% of African Americans rely on mobile devices to access broadband, compared with 4% of white smartphone owners, they are more likely to receive (and share) false information.

Previous efforts by social media companies to counter voter misinformation have failed. Meta's announcement in February that it would flag AI-generated images on Facebook, Instagram and Threads is a positive but minor step toward stopping AI-generated misinformation, especially for ethnic and immigrant communities who may know little about its effects. It is clear that a stronger government response is needed.

The California Initiative for Technology and Democracy, or CITED, where we serve on the board of directors, will soon unveil a legislative package that would require broad transparency for generative AI content, ensuring that social media users know which videos, audio and images are made by AI tools. The bills would also require AI-assisted labeling of political misinformation on social media, ban the use of the technology in campaign ads close to an election, and restrict anonymous trolls and bots.

Additionally, CITED plans to host a series of community forums in California with partner organizations rooted in their regions. The groups will speak directly to community leaders of color, union leaders, local elected officials, and other trusted messengers about the dangers of AI-generated misinformation likely to be circulating this election season.

The hope is that this information will spread at the community level, making the state's voters more aware and skeptical of false or misleading content, building confidence in the electoral process, election results, and our democracy.

Bill Wong is a campaign strategist and author of “Better to Win: Hardball Lessons in Leadership, Influence, & the Craft of Politics.” Mindy Romero is a political sociologist and director of the Center for Inclusive Democracy at the USC Price School of Public Policy.
