Following criticism over the accuracy of its artificial intelligence tools, Microsoft is now warning users not to place too much trust in its services.
The company has published an updated Services Agreement stating that its AI should be treated as a guide rather than a replacement for professional advice.
The updated agreement, which will take effect at the end of next month, also contains warnings about its health bots, amid concerns that users may place too much trust in the advice provided.
Microsoft says AI is not a substitute for professionals
Microsoft’s revised terms specifically address the limitations of its assistive AI: “AI services are not designed, intended, or to be used as a substitute for professional advice.”
The company added that Health Bots “are not designed or intended to be a substitute for professional medical advice or for use in the diagnosis, cure, mitigation, prevention or treatment of disease or other conditions.”
The updates reflect the rapid adoption of AI in recent months following the launch of tools such as ChatGPT, and the subsequent criticism over accuracy, data security, and privacy.
The agreement also reiterates that Copilot AI Experiences, governed by Bing’s Terms of Use, should not be used to extract data through methods such as scraping or harvesting, unless expressly permitted by Microsoft.
Additionally, the updates impose stricter rules on reverse engineering of AI models and strengthen other protections: “You may not use AI Services to discover any underlying components of models, algorithms, and systems.”
Microsoft also prohibits using its AI services, or data extracted from them, to create or train other AI services.
While the agreement also updates terms for other Microsoft services, the revisions to the AI terms signal that the company is responding to liability concerns and managing user expectations more clearly. They also serve as a gentle reminder that AI technologies are unlikely to replace human professionals anytime soon.