Hallucination rates soar in new AI models, undermining real-world use

Recent “reasoning upgrades” to AI chatbots have unexpectedly worsened their hallucination problems, highlighting the persistent challenge of making large language models reliable. Testing reveals that newer models from leading companies like OpenAI and DeepSeek actually produce more factual errors than their predecessors, raising fundamental questions about whether AI systems can ever fully overcome their tendency to present false information as truth. This development signals a critical limitation for industries hoping to deploy AI for research, legal work, and customer service.

The big picture: OpenAI’s technical evaluation reveals its newest models exhibit dramatically higher hallucination rates than previous versions, contradicting expectations that AI systems would improve with each iteration.

  • OpenAI’s o3 model hallucinated 33 percent of the time when summarizing facts about people, while the o4-mini model performed even worse at 48 percent—significantly higher than the previous o1 model’s 16 percent rate (a sketch of how such rates are computed follows this list).
  • This regression isn’t isolated to OpenAI, as models from other developers like DeepSeek have shown similar double-digit increases in hallucination rates.
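For a concrete sense of what these percentages measure, here is a minimal sketch of how a hallucination rate can be computed from a graded question-answer benchmark. The class, function, and field names are illustrative assumptions for this example, not OpenAI's actual PersonQA evaluation harness.

```python
# Illustrative sketch only: each benchmark item is a question about a person,
# the model's answer is graded against reference facts, and the hallucination
# rate is simply errors divided by total questions. Names and data are made up.
from dataclasses import dataclass


@dataclass
class GradedAnswer:
    question: str
    model_answer: str
    is_factually_correct: bool  # assigned by a human or automated grader


def hallucination_rate(graded: list[GradedAnswer]) -> float:
    """Fraction of graded answers marked factually incorrect."""
    if not graded:
        return 0.0
    errors = sum(1 for g in graded if not g.is_factually_correct)
    return errors / len(graded)


# Example: 16 wrong answers out of 100 questions -> 16 percent,
# comparable to the o1 figure quoted above.
sample = [GradedAnswer(f"q{i}", f"a{i}", is_factually_correct=(i >= 16)) for i in range(100)]
print(f"hallucination rate: {hallucination_rate(sample):.0%}")
```

The arithmetic is trivial; the hard part in real evaluations is the grading step, which benchmarks like the one above perform against reference facts about real people.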

Why this matters: Persistent hallucination problems threaten to derail critical applications where factual accuracy is essential.

  • Research assistants, paralegal tools, and customer service bots all become actively harmful when they confidently present false information as fact.
  • These limitations may fundamentally constrain how AI can be safely deployed in high-stakes environments.

The terminology gap: “Hallucination” covers a broader range of AI errors than many realize.

  • Beyond simply inventing facts, hallucinations include providing factually accurate but irrelevant answers or failing to follow instructions.
  • Understanding these distinctions helps clarify the full scope of reliability challenges facing current AI systems.

What they’re saying: Experts suggest we may need to significantly limit our expectations of what AI chatbots can reliably do.

  • Some recommend only using these models for tasks where fact-checking the AI’s answer would still be faster than conducting the research yourself.
  • Other experts propose a more conservative approach, suggesting users should “completely avoid relying on AI chatbots to provide factual information.”

The bottom line: Despite technological advancements, the AI industry appears to be confronting a persistent limitation that may require fundamental rethinking of how these systems are designed, trained, and deployed.

Source article: AI hallucinations are getting worse – and they're here to stay
