Artificial intelligence (AI) can generate, or rather “hallucinate,” alternative realities. This phenomenon, known as AI hallucinations, occurs when large language models, such as generative AI chatbots, perceive patterns or objects that are nonexistent or imperceptible to human observers, producing meaningless or entirely inaccurate results, as described by IBM.
While AI generally provides accurate responses, there are instances where its algorithms produce outputs that are not based on training data, are incorrectly decoded by the model’s transformer, or do not follow any identifiable pattern. In other words, the AI “hallucinates” a response.
How Often Do They Occur?
AI hallucinations can be compared with how humans perceive shapes in clouds or faces on the moon. These misinterpretations arise due to factors such as overfitting, bias, inaccuracies in the training data, or the complexity of the model.
Recent studies indicate that chatbots hallucinate between 3% and 27% of the time in simple tasks like summarising news, with variation depending on the model and developer. Despite ongoing efforts by companies such as OpenAI and Google to reduce these errors, AI systems continue to generate unexpected and sometimes nonsensical results.
Hallucinations in Healthcare
To examine the implications of AI hallucinations in the medical field, a study by BHM Healthcare Solutions, a US-based company that provides behavioural health and medical review services, analysed recent data on AI-related errors in healthcare. The study explored how these hallucinations emerge, their consequences, and strategies to mitigate associated risks.
While several incidents demonstrate the potential impact of AI-induced hallucinations in healthcare, these cases remain isolated. The study concludes that learning from these occurrences can help implement safeguards to prevent similar mistakes in the future.
Some reported incidents include an AI system incorrectly flagging benign nodules as malignant in 12% of cases, leading to unnecessary surgical interventions. Another study identified instances where language-based AI models fabricated entire patient summaries, including nonexistent symptoms and treatments. Similarly, an AI-powered drug interaction checker flagged interactions that did not exist, causing clinicians to avoid effective drug combinations unnecessarily.
Health Risks
The BHM study highlights that AI hallucinations in healthcare can lead to misdiagnoses and inappropriate treatments, directly compromising patient safety. Repeated errors may also erode trust in AI tools among healthcare professionals, reducing their willingness to adopt AI-driven decision support. Furthermore, mistakes attributed to AI hallucinations could result in malpractice lawsuits or increased regulatory scrutiny.
However, by acknowledging that AI hallucinations exist and identifying their root causes, healthcare organisations can implement proactive measures to minimise risks. Establishing robust training protocols, ensuring human oversight, and promoting transparency in AI-generated outputs can help mitigate these challenges and foster confidence in AI-driven healthcare solutions.
Are AI Hallucinations Beneficial?
Despite their dangers, AI hallucinations may have unexpected benefits. Some researchers have argued that AI’s ability to generate surprising or unconventional ideas can drive creativity.
Anand Bhushan, a senior IT architect and member of IBM’s Open Innovation Community, suggests that in a business or research setting, AI hallucinations can serve as a powerful tool for idea generation. AI’s ability to produce unconventional or unexpected outputs can inspire new thought processes, encouraging creativity and innovation.
He adds that when AI generates novel or unconventional information, it can prompt users to explore topics more deeply, fostering critical thinking and a fuller understanding.
In healthcare, Bhushan explained, AI hallucinations can help create dynamic and engaging user experiences in virtual environments and on digital platforms. For example, chatbots and digital assistants can generate unique responses, personalising interactions and ultimately improving patient satisfaction.
A Tool for Discovery
The New York Times explored this issue in a report, noting that in the scientific world, AI hallucinations are proving to be extraordinarily useful. The article explains how incorrect or misleading results from AI models have helped researchers track cancer, design drugs, invent medical devices, and uncover meteorological phenomena by “dreaming up” new concepts to investigate.
In the report, Amy McGovern, a computer science and meteorology professor and the director of the US National Science Foundation AI Institute for Research on Trustworthy AI in Weather, Climate, and Coastal Oceanography, stated: “The public thinks it’s all bad, but in reality, it provides scientists with new ideas. It gives them the opportunity to explore ideas they might not otherwise have considered.”
The New York Times concluded that AI-generated unrealities are helping advance scientific research, from cancer tracking to drug development, designing medical devices, and studying weather patterns, and could contribute to future Nobel Prize-winning discoveries in medicine.
This story was translated from El Médico Interactivo using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication.