Microsoft's new AI tool aims to detect and correct factually incorrect AI-generated text

Microsoft has introduced a new tool designed to detect and correct AI-generated content that is not factually accurate, errors more commonly known as hallucinations.

The new correction feature builds on Microsoft's existing "groundedness detection," which cross-references AI-generated text against a supporting document supplied by the user. The tool will be available as part of Microsoft's Azure AI Content Safety API and can be used with any text-generating AI model, such as OpenAI's GPT-4o and Meta's Llama.
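In practice, the workflow is a single API call: the model's answer and the grounding document go to the Content Safety endpoint, which flags ungrounded claims and, when correction is requested, returns a revised version of the text. The Python sketch below illustrates this flow; the endpoint path (`text:detectGroundedness`), the preview API version, and the request and response field names (`groundingSources`, `correction`, `ungroundedDetected`, `correctionText`) are assumptions based on Microsoft's preview documentation and may differ in the shipped API.

```python
import requests

# Hypothetical values: replace with your own Azure AI Content Safety
# resource endpoint and subscription key.
ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
API_KEY = "<your-content-safety-key>"


def check_and_correct(answer: str, source_document: str) -> dict:
    """Send model output plus a user-supplied grounding document to the
    (preview) groundedness-detection API and request a corrected rewrite."""
    url = f"{ENDPOINT}/contentsafety/text:detectGroundedness"
    params = {"api-version": "2024-09-15-preview"}  # assumed preview version
    body = {
        "domain": "Generic",
        "task": "Summarization",
        "text": answer,                          # the AI-generated text to check
        "groundingSources": [source_document],   # the reference document
        "correction": True,                      # ask for a grounded rewrite
    }
    headers = {
        "Ocp-Apim-Subscription-Key": API_KEY,
        "Content-Type": "application/json",
    }
    resp = requests.post(url, params=params, json=body,
                         headers=headers, timeout=30)
    resp.raise_for_status()
    return resp.json()


result = check_and_correct(
    answer="The Eiffel Tower was completed in 1899.",
    source_document="The Eiffel Tower was completed in March 1889.",
)
if result.get("ungroundedDetected"):
    # Field names assumed: the corrected text is expected under
    # 'correctionText'; otherwise fall back to the flagged spans.
    print(result.get("correctionText", result.get("ungroundedDetails")))
```

Because the check runs on plain text rather than on a specific model's internals, the same call works regardless of which model produced the answer, which is why Microsoft can offer it for GPT-4o, Llama, or any other text generator.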
