AI can rescue people from conspiracy theory rabbit holes


About half of Americans subscribe to some kind of conspiracy theory, and their peers have had little success in drawing them out of their burrows.

Maybe they could learn a thing or two from an AI-powered chatbot.

In a series of experiments, the AI chatbot managed to make more than a quarter of people feel unsure about their most cherished conspiracy belief. The average conversation lasted less than eight and a half minutes.

The results were published Thursday in the journal Science.

The failure of facts to convince people that we really landed on the moon, that al-Qaeda really was responsible for the 9/11 attacks, and that President Biden really won the 2020 election, among other things, has fueled anxiety about a post-truth era that favors personal beliefs over objective evidence.

“People who believe in conspiracy theories rarely, if ever, change their minds,” said study leader Thomas Costello, a psychologist at American University who researches political and social beliefs. “In some sense, it’s better to believe that there’s a secret society that controls everything than to believe that entropy and chaos rule.”

But the study suggests the problem isn't the persuasive power of facts: It's our inability to assemble the right combination of facts to counter someone's specific reasons for skepticism.

Costello and his colleagues attributed the chatbot's success to the detailed, personalized pitches it prepared for each of the 2,190 study participants it interacted with.

For example, a person who doubted that the twin towers could have been brought down by planes because jet fuel does not burn hot enough to melt steel was informed that the fuel reaches temperatures of up to 1,832 degrees Fahrenheit, enough for steel to lose its structural integrity and cause a collapse.

One person who did not believe Lee Harvey Oswald had the skills to assassinate President John F. Kennedy was told that Oswald had been a crack shot in the Marines and would have had little trouble getting off an accurate shot from about 100 yards away.

And one person who believed Princess Diana was murdered so Prince Charles could remarry was reminded of the eight-year gap between Diana's fatal car crash and the future king's second wedding, undermining the argument that the two events were linked.

The findings suggest that “any kind of belief that people have that is not based on good evidence could be modified,” said study co-author Gordon Pennycook, a cognitive psychologist at Cornell University.

“It’s really reassuring to know that evidence matters,” he said.

Researchers began by asking Americans to rate the extent to which they subscribed to 15 common conspiracy theories, including that the virus responsible for COVID-19 was created by the Chinese government and that the U.S. military has been hiding evidence of a UFO landing in Roswell, New Mexico. After completing an unrelated task, participants were asked to describe a conspiracy theory they found particularly compelling and explain why they believed it.

The prompt led 72% of them to describe a conspiracy theory they believed. Among this group, 60% were randomly assigned to discuss it with the large language model GPT-4 Turbo.

A screenshot of the chatbot used by researchers to test whether AI could help change people's minds about conspiracy theories. (Thomas H. Costello)

The conversations began with the chatbot summarizing the human's description of the conspiracy theory. The human then rated the degree to which they agreed with the summary on a scale of 0 to 100.

From there, the chatbot set about proving that there was nothing fishy going on. To make sure it wasn’t stretching the truth to be more persuasive, the researchers hired a professional fact-checker to evaluate 128 of the bot’s claims about a variety of conspiracies. One was judged to be misleading, and the rest were true.

The chatbot also turned on the charm. In one instance, it praised a participant for “critically examining historical events” and reminded him that “it is vital to distinguish between what could theoretically be possible and what is supported by evidence.”

Each conversation included three rounds of evidence from the chatbot, each followed by a response from the human. Afterward, participants re-rated their agreement with their condensed conspiracy statements. Their ratings dropped by an average of 21%.

In 27% of cases, the drop was large enough that researchers could say the person “lost confidence in their belief in the conspiracy.”

Meanwhile, the other 40% of participants, who served as controls, were also given summaries of their preferred conspiracy theory and rated them on a scale of 0 to 100. They then spoke to the chatbot about neutral topics, such as the U.S. medical system or the relative merits of cats and dogs. When these people were asked to reconsider their conspiracy theory summaries, their ratings dropped by just 1%, on average.

The researchers checked back with the participants 10 days and 2 months later to see if the effects had worn off. They hadn't.

The team repeated the experiment with another group, asking participants about their beliefs in conspiracy theories in a more indirect way. This time, when they discussed the chosen theory with the bot, their rating dropped by 19.4%, compared with a 2.9% drop for those who talked about something else.

The conversations “really fundamentally changed people’s minds,” said co-author David Rand, a computational social scientist at MIT who studies how people make decisions.

“The effect didn’t vary significantly depending on which conspiracy was named and discussed,” Rand said. “It worked for classic conspiracies, like the JFK assassination, moon landing hoaxes and the Illuminati, things like that. And it also worked for modern, more politicized conspiracies, like those involving 2020 election fraud or COVID-19.”

Moreover, after the AI chatbot talked them through one conspiracy theory, people became more skeptical of others. After the conversations, their affinity for the 15 common theories from the initial survey decreased significantly more than it did in the control group.

“That was making people less conspiracy-minded overall,” Rand said. “It also increased their intentions to do things like ignore or block social media accounts that shared conspiracy theories or, you know, argue with people who supported those conspiracy theories.”

In another encouraging sign, the chatbot was unable to dissuade people from believing in conspiracies that were actually true, such as the CIA's covert MK-Ultra project, which used unwitting subjects to test whether drugs, torture or brainwashing could improve interrogations. In some cases, the chatbot conversations made people believe these conspiracies even more.

“It wasn’t mind control, forcing people to believe whatever it wanted,” Rand said. “Basically, it was about following the facts.”

Researchers not involved in the study called it a welcome advance.

In an essay that accompanied the study, psychologist Bence Bago of Tilburg University in the Netherlands and cognitive psychologist Jean-François Bonnefon of the Toulouse School of Economics in France wrote that the experiments show that “a scalable intervention to recalibrate erroneous beliefs may be within our reach.”

But they also raised several concerns, including whether it would work with a conspiracy theory that is so new that there isn't much data for an AI bot to use.

The researchers put that question to the test the week after the July 13 assassination attempt on former President Trump. After helping the AI program find credible information about the attack, they found that talking to the chatbot reduced people’s belief in related conspiracy theories by 6 or 7 percentage points, which Costello called “a remarkable effect.”

Bago and Bonnefon also questioned whether conspiracy theorists would be willing to engage with a bot. Rand said he did not think it was an insurmountable problem.

“One advantage of this is that conspiracy theorists are often not ashamed of their beliefs,” he said. “You could imagine just going to conspiracy forums and inviting people to do their own research by talking to the chatbot.”

Rand also suggested buying ads on search engines so that when someone types a query about, say, the “deep state,” they see an invitation to discuss it with an AI chatbot.

Robbie Sutton, a social psychologist at the University of Kent in England who studies why people adopt conspiracy beliefs, called the new work “an important step forward” but noted that most people in the study persisted in their beliefs despite receiving “high-quality factual refutations” from a “highly competent and respectful chatbot.”

“Seen this way, there is more resistance than open-mindedness,” he said.

Sutton added that the findings don't shed much light on what draws people to conspiracy theories in the first place.

“Interventions like this are basically an ambulance at the bottom of the cliff,” he said. “We need to focus more of our efforts on what happens at the top of the cliff.”
