Microsoft has introduced a new tool that aims to stop AI models from generating factually incorrect content, more commonly known as hallucinations.
The new correction feature builds on Microsoft’s existing Groundedness Detection, which essentially cross-references AI-generated text against a supporting document supplied by the user. The tool will be available as part of Microsoft’s Azure AI Content Safety API and can be used with any text-generating AI model, such as OpenAI’s GPT-4o and Meta’s Llama.
The service flags potentially serious errors and then checks whether they are accurate by comparing the text against grounding documents supplied by the user (e.g. uploaded transcripts). In effect, users tell the AI what it should treat as true by providing source documents.
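To make the workflow concrete, here is a minimal sketch of what a groundedness check against a user-supplied source document might look like when called over REST from Python. The endpoint path, API version, request fields, and the optional correction flag are assumptions for illustration rather than the confirmed contract, and should be checked against the Azure AI Content Safety documentation.

```python
# Minimal sketch: ask the service whether a model's output is grounded in a source
# document. Endpoint path, API version, and field names are assumptions for
# illustration only; verify against the Azure AI Content Safety docs.
import requests

ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"  # hypothetical resource
API_KEY = "<your-content-safety-key>"                             # hypothetical key

def check_groundedness(generated_text: str, source_document: str) -> dict:
    """Return the service's verdict on whether generated_text is supported by source_document."""
    url = f"{ENDPOINT}/contentsafety/text:detectGroundedness"      # assumed route
    params = {"api-version": "2024-09-15-preview"}                 # assumed preview version
    body = {
        "domain": "Generic",
        "task": "Summarization",
        "text": generated_text,                 # the AI output to verify
        "groundingSources": [source_document],  # the document the user treats as ground truth
        # "correction": True,                   # assumed flag for the new correction feature
    }
    headers = {"Ocp-Apim-Subscription-Key": API_KEY, "Content-Type": "application/json"}
    response = requests.post(url, params=params, json=body, headers=headers, timeout=30)
    response.raise_for_status()
    return response.json()  # typically indicates whether (and where) ungrounded spans were found

if __name__ == "__main__":
    result = check_groundedness(
        generated_text="The meeting was moved to Friday at 3 pm.",
        source_document="Transcript: the meeting was rescheduled to Thursday at 3 pm.",
    )
    print(result)
```

The key idea is that the grounding sources, not the model’s training data, define what counts as true for the check, which is why the feature can sit in front of any underlying model.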
A provisional measure
Experts warn that while the tool may be helpful in its current form, it does not address the root cause of hallucinations: AI models do not actually “know” anything; they only predict what comes next based on the examples they were trained on.
“Empowering our customers to understand and take action against unsubstantiated content and hallucinations is critical, especially as the demand for trustworthiness and accuracy in AI-generated content continues to increase,” Microsoft said in its blog post.
“Building on our existing Groundedness Detection feature, this innovative capability enables Azure AI Content Safety to identify and remediate hallucinations in real time before users of generative AI applications encounter them.”
The release, available in preview now, is part of Microsoft’s broader effort to make AI more trustworthy. Generative AI has so far struggled to win public trust, with deepfakes and misinformation damaging its image, so renewed efforts to make the technology safer will be welcome.
The updates also include “Assessments,” a proactive risk assessment tool, as well as confidential inference, which ensures that sensitive information remains secure and private during inference, the stage at which the model makes decisions and predictions based on new data.
Microsoft and other tech giants have invested heavily in AI technology and infrastructure and plan to invest further, with a new $30 billion investment recently announced.