AI experts predict human-level artificial intelligence by 2047

A new survey of 2,778 AI experts predicts human-level artificial intelligence could arrive by 2047, marking a dramatic 13-year acceleration from previous estimates. The research, published in the Journal of Artificial Intelligence Research by AI Impacts and the universities of Oxford and Bonn, represents the largest expert survey to date on AI timelines and reveals growing urgency around both capabilities and safety concerns.

What you should know: The timeline for advanced AI has compressed significantly, with experts now giving a 50% probability that systems capable of performing all tasks better and more cheaply than humans will be feasible by 2047.

  • A 10% probability was placed on such systems arriving as early as 2027.
  • Within the decade, experts expect AI systems capable of autonomously fine-tuning large language models (the technology behind ChatGPT), building complex online services such as payment-processing websites, or writing songs indistinguishable from those of hit artists.
  • Despite optimism about technical capabilities, full automation of all occupations wasn’t expected until 2116, highlighting a long lag between feasibility and societal transformation.

The big picture: AI experts are simultaneously excited and anxious about the technology’s trajectory, with widespread concerns about near-term risks outpacing confidence in long-term safety measures.

  • Around 68% said positive outcomes from advanced AI were more likely than negative ones, but 48% of these optimists still assigned at least a 5% chance of catastrophic outcomes.
  • Between 38% and 51% of respondents estimated at least a 10% probability that advanced AI could cause human extinction or permanent loss of control.

Key concerns: Experts identified misinformation and manipulation as the most pressing near-term risks, with authoritarian misuse and economic inequality following closely behind.

  • 86% highlighted misinformation, such as deepfakes (realistic but fake videos or audio), as an area of “substantial” or “extreme” concern.
  • 79% pointed to the manipulation of public opinion as a major risk.
  • 73% cited authoritarian misuse of AI technology.
  • 71% warned that AI could widen global economic disparities.

Transparency challenges: Researchers expressed deep skepticism about AI system interpretability in the near future.

  • Only 5% believed that by 2028, leading AI models would truthfully explain their reasoning in ways humans can understand.
  • This opacity concern compounds worries about oversight and control of increasingly powerful systems.

Growing safety urgency: The survey reveals a sharp increase in experts prioritizing AI safety research compared to previous years.

  • More than 70% of respondents said AI safety research deserves greater priority, up from 49% in 2016.
  • However, experts remain deeply divided on what alignment and oversight should look like in practice.

Institutional response: The findings align with broader warnings from major organizations about the gap between AI progress and governance frameworks.

  • The Stanford HAI AI Index 2025 noted that governance and interpretability lag behind capability growth despite record investment levels.
  • The World Economic Forum’s Global Future Council on Artificial General Intelligence is calling for early frameworks to manage cross-border risk.
  • A PYMNTS survey found that 70% of executives said AI has increased their exposure to digital risk even as it improved productivity, while only 39% of firms surveyed said they have a formal AI governance framework.