AI risks to patients prompt researchers to urge medical caution

AI-driven healthcare prediction models risk creating harmful “self-fulfilling prophecies” when accuracy is prioritized over patient outcomes, according to new research from the Netherlands. The study reveals that even highly accurate AI systems can inadvertently worsen health disparities if they’re trained on data reflecting historical treatment biases, potentially leading to reduced care for already marginalized patients. This warning comes at a critical time as the NHS increasingly adopts AI for diagnostics and as the UK government pursues its “AI superpower” ambitions.

The big picture: Researchers demonstrate that AI outcome prediction models (OPMs) can lead to patient harm even when they achieve high accuracy scores after deployment.

  • OPMs use patient health histories and lifestyle information to help clinicians evaluate treatment options, but mathematical modeling shows they can reinforce existing healthcare disparities.
  • The study, published in data-science journal Patterns, calls for a fundamental shift in AI healthcare development priorities, moving away from predictive performance toward improvements in treatment approaches and patient outcomes.

Real-world implications: Professor Ewen Harrison illustrated the potential harms with a practical example of how prediction can become self-fulfilling.

  • An AI system that predicts poor recovery prospects for certain patients might lead clinicians to provide less rehabilitation support, ultimately causing “a slower recovery, more pain and reduced mobility.”
  • This feedback loop could disproportionately affect patients from groups that have historically received inequitable healthcare based on race, gender, or socioeconomic factors; the toy simulation below sketches the dynamic.
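
To see why post-deployment accuracy can mislead, consider a deliberately simplified simulation. This is a sketch of the general mechanism, not the study's actual model: the prediction threshold, the care-intensity values, and the function names are all invented for illustration. A hypothetical model flags high-risk patients as unlikely to recover, flagged patients receive less rehabilitation, and that reduced care itself produces the poor outcomes the model predicted.

```python
import random

random.seed(0)

def simulate_patient(baseline_risk, model_threshold=0.5):
    """One patient's pass through a hypothetical OPM-guided care pathway."""
    # Hypothetical model: predicts poor recovery from baseline risk alone.
    predicted_poor_recovery = baseline_risk > model_threshold

    # The prediction influences care: flagged patients get less rehab support.
    # (0.3 vs 1.0 are made-up intensities for the sketch.)
    rehab_intensity = 0.3 if predicted_poor_recovery else 1.0

    # The true outcome depends on baseline risk AND the care actually delivered.
    recovery_prob = max(0.0, 1.0 - baseline_risk) * rehab_intensity
    recovered = random.random() < recovery_prob

    return predicted_poor_recovery, recovered

# Evaluate the model "after deployment", i.e. after it has shaped treatment.
n, correct = 10_000, 0
for _ in range(n):
    risk = random.random()
    predicted_poor, recovered = simulate_patient(risk)
    # The model scores a hit when a flagged patient indeed fails to recover.
    if predicted_poor == (not recovered):
        correct += 1

print(f"post-deployment accuracy: {correct / n:.2%}")
# The accuracy is high partly because flagged patients received less support,
# which itself lowered their recovery probability: a self-fulfilling prophecy.
```

Because the prediction changed the treatment, evaluating the model on post-deployment outcomes overstates its value: the high accuracy score partly measures the harm the model caused. That is the core of the researchers' argument for judging these systems on patient outcomes rather than predictive performance alone.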

Why human oversight matters: The research emphasizes that human clinical judgment remains essential when implementing AI-driven healthcare systems.

  • Researchers highlighted the “inherent importance” of applying “human reasoning” to algorithmic predictions to prevent reinforcing biases.
  • Dr. Catherine Menon warned that without proper oversight, these models risk “worsening outcomes for patients who have typically been historically discriminated against in medical settings.”

Current applications: AI is already being used throughout England’s National Health Service for various diagnostic functions.

  • The technology currently assists clinicians in reading X-rays and CT scans and helps accelerate stroke diagnoses.
  • Prime Minister Sir Keir Starmer has positioned AI as a potential solution to NHS waiting lists as part of his broader vision to establish the UK as an “AI superpower.”