Can AI make us less gullible, or is it a conspiracy?

AI chatbots may have a hard time avoiding hallucinations of their own, but new research suggests they could be useful in countering unfounded ideas in human minds. Scientists from MIT Sloan and Cornell University have published a paper in Science claiming that conversing with a chatbot powered by a large language model (LLM) reduces belief in conspiracy theories by about 20%.

To see how an AI chatbot might affect conspiracy thinking, the scientists arranged for 2,190 participants to discuss conspiracy theories with a chatbot running OpenAI’s GPT-4 Turbo model. Participants were asked to describe a conspiracy theory they found credible, including the reasons and evidence they believed supported it. The chatbot, which was instructed to be persuasive, then offered counterarguments personalized to those details over the course of the conversation. The study addressed the perennial problem of AI hallucinations by having a professional fact-checker assess 128 claims made by the chatbot during the study; 99.2% were accurate, which the researchers attributed to the extensive online documentation of these conspiracy theories in the model’s training data.
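The paper doesn’t include the study’s code, but the setup it describes, a model primed with the participant’s own statement of the theory and asked to rebut it persuasively, maps onto a short loop against OpenAI’s chat API. The sketch below is an illustration of that pattern, not the researchers’ implementation; the prompt wording and function names are assumptions.

```python
# Illustrative sketch of the study's conversational setup (an assumption,
# not the authors' code): GPT-4 Turbo is primed with the participant's own
# description of a conspiracy theory and asked to rebut it persuasively.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def start_debunking_chat(theory: str, evidence: str) -> list[dict]:
    """Build the opening message history for one participant."""
    system = (
        "You are a persuasive assistant. The user believes this conspiracy "
        f"theory: {theory}. Their stated evidence: {evidence}. Respond with "
        "accurate, tailored counterarguments."
    )
    return [{"role": "system", "content": system}]


def reply(history: list[dict], user_msg: str) -> str:
    """Send the user's latest message and append the model's answer."""
    history.append({"role": "user", "content": user_msg})
    response = client.chat.completions.create(
        model="gpt-4-turbo",
        messages=history,
    )
    answer = response.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    return answer
```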

The idea behind turning to AI to debunk conspiracy theories was that its deep stores of information and adaptive conversational style could reach people by personalizing the approach. According to follow-up assessments conducted ten days and two months after the first conversation, it worked. Most participants showed reduced belief in the conspiracy theories they had espoused, ranging “from classic conspiracies involving the assassination of John F. Kennedy, aliens, and the Illuminati, to those related to current events such as COVID-19 and the 2020 U.S. presidential election,” the researchers found.

Fun with FactBot

The results came as a real surprise to the researchers, who had hypothesized that people are largely unreceptive to evidence-based arguments debunking conspiracy theories. Instead, the findings show that a well-designed AI chatbot can present counterarguments effectively, leading to a measurable shift in beliefs. The researchers concluded that AI tools could be a boon in combating misinformation, though caution is needed, since the same technology could just as easily be used to mislead people.

The study supports the value of projects with similar goals. For example, the fact-checking site Snopes recently launched an AI tool called FactBot to help people determine whether something they’ve heard is real. FactBot draws on Snopes’ archive and generative AI to answer questions directly, so users don’t have to sift through articles with more traditional search methods. The Washington Post created Climate Answers to clear up confusion around climate change, drawing on its climate journalism to directly answer questions on the topic.
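Neither outlet has published its tool’s internals, but systems like these typically follow a retrieval-augmented pattern: find the archive articles most relevant to a question, then have the model answer using only that material. Here is a toy illustration of that general pattern, with a hypothetical archive and naive keyword scoring standing in for a real search index; none of it reflects any actual product’s implementation.

```python
# Toy retrieval-augmented answering loop in the spirit of tools like
# FactBot or Climate Answers. The archive, scoring, and prompt are all
# illustrative stand-ins, not any real product's implementation.
from openai import OpenAI

client = OpenAI()

# Hypothetical archive: a real system would query a search index built
# over the publication's fact checks or reporting.
ARCHIVE = [
    {"title": "Example fact check A", "body": "..."},
    {"title": "Example fact check B", "body": "..."},
]


def retrieve(question: str, k: int = 3) -> list[dict]:
    """Rank articles by naive keyword overlap with the question."""
    words = set(question.lower().split())
    scored = sorted(
        ARCHIVE,
        key=lambda a: len(words & set(a["body"].lower().split())),
        reverse=True,
    )
    return scored[:k]


def answer(question: str) -> str:
    """Answer a question using only the retrieved archive material."""
    context = "\n\n".join(
        f"{a['title']}:\n{a['body']}" for a in retrieve(question)
    )
    response = client.chat.completions.create(
        model="gpt-4-turbo",
        messages=[
            {
                "role": "system",
                "content": (
                    "Answer using only the provided articles. If they do "
                    "not cover the question, say so rather than guessing."
                    "\n\n" + context
                ),
            },
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content
```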

“Many people who strongly believe in conspiracy beliefs seemingly incompatible with the facts may change their minds when presented with compelling evidence. From a theoretical perspective, this paints a surprisingly optimistic picture of human reasoning: Conspiracy rabbit holes can have a way out,” the researchers wrote. “Practically, by demonstrating the persuasive power of LLMs, our findings emphasize both the potential positive impacts of generative AI when deployed responsibly and the pressing importance of minimizing opportunities for this technology to be used irresponsibly.”
