OpenAI proudly introduced ChatGPT search in October as the next stage for search engines. The company boasted that the new feature combined ChatGPT's conversational abilities with the best web search tools, offering real-time information in a more useful form than any list of links. According to a recent review by Columbia University's Tow Center for Digital Journalism, that celebration may have been premature. The report found that ChatGPT has a somewhat laissez-faire attitude toward accuracy, attribution, and ground truth when searching for news.
What's especially notable is that the issues arise regardless of whether a publication blocks OpenAI's web crawlers or has an official licensing agreement with OpenAI for its content. The study took 200 quotes from 20 publications and asked ChatGPT to identify their sources. The results were all over the map.
Sometimes the chatbot got it right. Other times, it attributed quotes to the wrong outlet or simply made up a source. OpenAI partners, including The Wall Street Journal, The Atlantic, and publications owned by Axel Springer and Meredith, sometimes fared better, but not consistently.
Gambling on accuracy when asking ChatGPT about the news is not what OpenAI or its partners want. The licensing deals were announced as a way for OpenAI to support journalism while improving ChatGPT's accuracy. Yet when ChatGPT pulled quotes from Politico, which is published by Axel Springer, the words were often not from the person the chatbot cited.
AI news to lose
The root of the problem lies in how ChatGPT finds and digests information. The web crawlers ChatGPT uses to collect data may work perfectly, but the AI model underneath can still make mistakes and hallucinate. Licensed access to content doesn't change that basic fact.
And when a publication blocks OpenAI's web crawlers, ChatGPT doesn't simply come up empty; it can confidently invent an answer instead. Outlets that use robots.txt files to keep ChatGPT away from their content, such as The New York Times, left the AI to flounder and fabricate sources rather than admit it had no answer. More than a third of the responses in the report fit this description. That's more than a small coding fix. Arguably worse, when ChatGPT couldn't access legitimate sources, it would turn to places where the same content had been republished without permission, perpetuating plagiarism.
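For readers curious about the blocking mechanism at issue here: robots.txt is a plain text file at the root of a website that tells crawlers what they may fetch, and GPTBot is the user-agent OpenAI documents for its crawler. Below is a minimal sketch using Python's standard urllib.robotparser; the rule text and the article URL are illustrative, not taken from any real publisher's file.

```python
from urllib.robotparser import RobotFileParser

# Illustrative robots.txt rule of the kind publishers use to block
# OpenAI's documented crawler user-agent, GPTBot.
ROBOTS_TXT = """\
User-agent: GPTBot
Disallow: /
"""

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# A well-behaved crawler checks permission before fetching a page.
article = "https://example.com/news/some-article"
print(parser.can_fetch("GPTBot", article))        # False: GPTBot is blocked
print(parser.can_fetch("SomeOtherBot", article))  # True: no rule applies
```

Note that robots.txt is purely advisory: it only works when the crawler chooses to honor it, which is why a blocked model that still wants an answer may fall back on fabrication or on unauthorized copies hosted elsewhere.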
Ultimately, any single misattributed quote matters less than the broader implications for journalism and for AI tools like ChatGPT. OpenAI wants ChatGPT search to be the place people go for fast, reliable answers, linked and cited appropriately. If it can't deliver, it undermines trust in both the AI and the journalism it summarizes. For OpenAI's partners, the revenue from a licensing deal might not make up for the traffic lost to unreliable links and citations.
So while ChatGPT search can be a great help for many tasks, be sure to check the links it provides if you want to be certain the AI isn't hallucinating its answers from the Internet.