Microsoft's New AI Correction Service: A Step Towards Reliable AI-Generated Content
Microsoft's new AI Correction service aims to enhance the reliability of AI-generated content by flagging and revising factually incorrect text. While the service can significantly improve the trustworthiness of AI outputs, experts caution that it does not address the root cause of AI hallucinations, that its effectiveness depends on the quality of the grounding documents it checks against, and that it may lull users into a false sense of security.
FUTURE TOOLS | HALLUCINATIONS
The AI Maker
4/14/2025 · 2 min read


Artificial Intelligence (AI) has revolutionized various industries, but it comes with its own set of challenges. One of the most significant issues is the tendency of AI to generate factually incorrect information, often referred to as "hallucinations." Microsoft has recently introduced a service called Correction, aimed at addressing this problem.
Correction is part of Microsoft's Azure AI Content Safety API and is currently available in preview. This service flags potentially erroneous AI-generated text and fact-checks it by comparing it with a source of truth, such as uploaded transcripts. It can be used with any text-generating AI model, including Meta's Llama and OpenAI's GPT-4.
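For a sense of what this looks like in practice, here is a minimal sketch of a groundedness detection request against the Azure AI Content Safety REST API. The endpoint path, API version, and payload fields follow Microsoft's public preview documentation as of this writing and may change; the resource name, key, and example text are placeholders.

```python
import requests

# Placeholder values; substitute your own Azure AI Content Safety resource.
ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
API_KEY = "<your-key>"

# Preview request shape; field names and the API version may change before GA.
payload = {
    "domain": "Generic",
    "task": "QnA",
    "qna": {"query": "When was the meeting held?"},
    "text": "The meeting was held on March 3rd.",  # AI-generated answer to check
    "groundingSources": [
        "Transcript: ... the team met on March 5th to review the launch ..."
    ],
    "correction": True,  # ask the service to propose a corrected rewrite
}

resp = requests.post(
    f"{ENDPOINT}/contentsafety/text:detectGroundedness",
    params={"api-version": "2024-09-15-preview"},
    headers={"Ocp-Apim-Subscription-Key": API_KEY},
    json=payload,
    timeout=30,
)
resp.raise_for_status()
print(resp.json().get("ungroundedDetected"))  # True if hallucinations were flagged
```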
Under the hood, Correction uses a combination of small and large language models to align AI outputs with grounding documents. This is particularly valuable in fields where accuracy is paramount, such as medicine. Google's Vertex AI introduced a similar feature this summer, letting customers ground models using data from third-party providers, their own datasets, or Google Search.
However, experts caution that these grounding approaches do not address the root cause of hallucinations. AI models hallucinate because they are statistical systems that identify patterns in a series of words and predict which words come next based on the countless examples they are trained on. As a result, a model's responses are not answers but predictions of how a question would be answered if it were present in the training set.
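A toy example makes this concrete. The sketch below builds a bigram "model" from a three-sentence corpus and then "answers" by emitting the statistically most likely next word. Everything here is invented for illustration, but it shows why a model's output is a prediction rather than a retrieved fact.

```python
from collections import Counter, defaultdict

# A tiny training corpus; real models see trillions of tokens, but the
# mechanism is the same: count patterns, then predict the likeliest next word.
corpus = (
    "the meeting was held on monday . "
    "the meeting was held on friday . "
    "the meeting was held on monday ."
).split()

# Build bigram counts: for each word, how often does each next word follow it?
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the statistically most likely continuation, not a verified fact."""
    return follows[word].most_common(1)[0][0]

# The model "answers" with "monday" because that word followed "on" most often
# in training, whether or not it is true this time.
print(predict_next("on"))  # -> monday
```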
Microsoft's solution involves a pair of cross-referencing, copy-editor-esque meta models designed to highlight and rewrite hallucinations. A classifier model looks for possibly incorrect, fabricated, or irrelevant snippets of AI-generated text (hallucinations). If it detects hallucinations, the classifier ropes in a second model, a language model, that tries to correct for the hallucinations in accordance with specified grounding documents.
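The sketch below illustrates that two-stage control flow. The function names and the word-overlap heuristic are hypothetical stand-ins for Microsoft's proprietary classifier and rewriting models, which are not public; only the shape of the pipeline is the point.

```python
import string

def vocabulary(docs: list[str]) -> set[str]:
    """Collect the lowercased words that appear in the grounding documents."""
    text = " ".join(docs).lower().translate(
        str.maketrans("", "", string.punctuation)
    )
    return set(text.split())

def detect_hallucinations(generated: str, grounding: list[str]) -> list[str]:
    """Stage 1 (classifier): flag terms the sources do not support.
    A real classifier scores semantic support; this word-level check
    only illustrates the control flow."""
    vocab = vocabulary(grounding)
    tokens = generated.lower().translate(
        str.maketrans("", "", string.punctuation)
    ).split()
    return [t for t in tokens if t not in vocab]

def correct(generated: str, flagged: list[str]) -> str:
    """Stage 2 (language model): revise flagged spans per the sources.
    Here we merely mark them; the real service generates a rewritten sentence."""
    for term in flagged:
        generated = generated.replace(term, f"[UNSUPPORTED: {term}]")
    return generated

sources = ["Launch report: the product launched in 2021 in Europe."]
draft = "The product launched in 2019 in Europe."
flags = detect_hallucinations(draft, sources)
if flags:  # the classifier "ropes in" the rewriter only when something is flagged
    draft = correct(draft, flags)
print(draft)  # -> The product launched in [UNSUPPORTED: 2019] in Europe.
```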
While Correction can significantly enhance the reliability and trustworthiness of AI-generated content, it is important to note that groundedness detection does not solve for accuracy; it only helps align generative AI outputs with grounding documents. Some experts, like Os Keyes, a PhD candidate at the University of Washington, argue that trying to eliminate hallucinations from generative AI is like trying to eliminate hydrogen from water: it is an essential component of how the technology works.
Mike Cook, a lecturer at King's College London specializing in AI, adds that even if Correction works as advertised, it threatens to compound the trust and explainability issues around AI. The service might catch some errors, but it could also lull users into a false sense of security, thinking models are being truthful more often than is actually the case.
Moreover, Microsoft's bundling of Correction has a cynical business angle. The feature itself is free, but the groundedness detection required to flag hallucinations for Correction to revise is free only up to 5,000 text records per month; after that, it costs 38 cents per 1,000 text records. Microsoft is under pressure to prove to customers and shareholders that its AI is worth the investment.
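Assuming billing is simple metered overage at those published preview rates (the billing granularity is an assumption), the monthly cost is easy to estimate:

```python
# Estimated monthly cost of groundedness detection at the quoted preview rates:
# first 5,000 text records free, then $0.38 per 1,000 records.
FREE_RECORDS = 5_000
RATE_PER_1000 = 0.38

def monthly_cost(records: int) -> float:
    billable = max(0, records - FREE_RECORDS)
    return billable / 1000 * RATE_PER_1000

print(f"${monthly_cost(100_000):.2f}")  # 100,000 records/month -> $36.10
```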
In conclusion, while Microsoft's Correction service is a promising step towards more reliable AI-generated content, it is not a complete solution to the problem of AI hallucinations. As AI continues to evolve, it is crucial to remain vigilant and critical of its capabilities and limitations.
Cited: https://finance.yahoo.com/news/microsoft-claims-tool-correct-ai-140023855.html