AI assistants misrepresent news content in 45% of responses

Leading AI assistants misrepresent news content in nearly half their responses, according to new research from the European Broadcasting Union and BBC that studied 3,000 responses across 14 languages. The findings reveal significant accuracy and sourcing problems that could undermine public trust as more people turn to AI assistants for news instead of traditional search engines.

What you should know: The study assessed major AI assistants including ChatGPT, Copilot, Gemini, and Perplexity for their ability to accurately report news content.

  • Overall, 45% of AI responses contained at least one significant issue, and 81% exhibited some form of problem.
  • A third of responses showed serious sourcing errors such as missing, misleading, or incorrect attribution.
  • Accuracy issues, including outdated information, appeared in 20% of responses.

The worst performer: Google’s Gemini showed the most serious sourcing problems among all AI assistants tested.

  • Some 72% of Gemini responses had significant sourcing issues, compared with under 25% for every other assistant tested.
  • Examples included Gemini incorrectly stating changes to a law on disposable vapes and ChatGPT reporting Pope Francis as the current Pope several months after his death.

Why this matters: AI assistants are increasingly replacing traditional search engines for news consumption, particularly among younger users.

  • Some 7% of all online news consumers and 15% of those aged under 25 use AI assistants to get their news, according to the Reuters Institute’s Digital News Report 2025.
  • “When people don’t know what to trust, they end up trusting nothing at all, and that can deter democratic participation,” said Jean Philip De Tender, media director at the European Broadcasting Union.

The research scope: Twenty-two public-service media organizations from 18 countries participated in the comprehensive study.

  • Countries included France, Germany, Spain, Ukraine, Britain, and the United States.
  • The study tested the assistants across 14 languages for accuracy, sourcing, and the ability to distinguish opinion from fact.

Company responses: Major AI companies have previously acknowledged these issues and claim to be working on improvements.

  • OpenAI and Microsoft have said hallucinations, instances in which AI generates incorrect or misleading information, are an issue they are seeking to resolve.
  • Gemini states on its website that it welcomes feedback to improve the platform.
  • Perplexity claims that one of its “Deep Research” modes achieves 93.9% factual accuracy.

What’s next: The report calls for AI companies to be held accountable and for improvements in how their assistants respond to news-related queries as their role in information distribution continues to grow.
