OpenAI’s AI audio transcription tool Whisper is producing frequent “AI hallucinations,” despite its rapid adoption in “high-risk industries” like healthcare, AP News reports.
An AI hallucination occurs when a large language model (LLM) perceives patterns that don’t exist, producing output that sounds plausible but is nonsensical or downright false.
According to the experts who spoke to AP News, Whisper has allegedly invented text that includes “racial commentary, violent rhetoric and even imagined medical treatments.”
Though it is widely accepted that AI transcription tools will make at least some typos, the engineers and researchers said they had never seen another AI-powered transcription tool hallucinate to the same extent as Whisper.
A University of Michigan researcher claimed he found hallucinations in eight out of every 10 audio transcriptions he studied.
OpenAI has publicly stated that the tool is not intended for high-risk use cases, but the reports come as many healthcare providers have begun adopting Whisper for transcription.
AP News reports that over 30,000 clinicians and 40 health systems, including the Mankato Clinic in Minnesota and Children’s Hospital Los Angeles, have started using a Whisper-based tool for transcription.
Alondra Nelson, professor of social science at the Institute for Advanced Study in Princeton, New Jersey, told AP that these types of mistakes could have “really grave consequences” in a medical setting.
“Nobody wants a misdiagnosis,” she told the publication. “There should be a higher bar.”
William Saunders, a research engineer and former OpenAI employee, said: “It’s problematic if you put this out there and people are overconfident about what it can do and integrate it into all these other systems.”
But OpenAI certainly isn’t the only tech giant whose products have been accused of creating hallucinations.
Google’s AI Overviews, a feature that displays AI-generated summaries at the top of search results, was caught advising one X user to add non-toxic glue to their pizza to help the cheese stick.
Apple has also acknowledged the possibility of AI hallucinations being an issue with its future products.
In an interview with The Washington Post, Apple CEO Tim Cook admitted that false results and AI hallucinations could be an issue with Apple Intelligence, Apple’s upcoming suite of generative AI tools.