The latest AI safety researcher to quit OpenAI says he’s ‘terrified’

OpenAI safety researcher Steven Adler left the company in mid-November 2024, citing grave concerns about the rapid pace of AI development and the risks posed by the race toward artificial general intelligence (AGI).

Key context: The departure comes amid growing scrutiny of OpenAI’s safety and ethics practices, particularly following the death of former researcher turned whistleblower Suchir Balaji.

  • Multiple whistleblowers have filed complaints with the SEC regarding allegedly restrictive nondisclosure agreements at OpenAI
  • The company faces increasing pressure over its approach to AI safety and development speed
  • Recent political developments include Trump’s promise to repeal Biden’s AI executive order, characterizing it as hindering innovation

Adler’s concerns: Having spent four years at OpenAI leading safety-related research and programs, Adler expressed deep apprehension about the current state of AI development.

  • He described the industry as stuck in a “bad equilibrium” where competition forces companies to accelerate development despite safety concerns
  • Adler emphasized that no lab currently has a solution to AI alignment
  • His personal concerns extend to fundamental life decisions, questioning humanity’s future prospects

Expert perspectives: Leading voices in the AI community have echoed Adler’s concerns about the risks associated with rapid AI development.

  • UC Berkeley Professor Stuart Russell warned that the AGI race is heading toward a cliff edge, with potentially extinction-level consequences
  • The contrast between researchers’ concerns and industry leaders’ optimism is stark, with OpenAI CEO Sam Altman recently celebrating new ventures like Stargate

Recent developments: OpenAI continues to expand its offerings and partnerships despite internal safety concerns.

  • The company has launched ChatGPT Gov for U.S. government agencies
  • A new AI project called Stargate involves collaboration between OpenAI, SoftBank Group, and Oracle Corp.

Critical analysis: The growing divide between AI safety researchers and corporate leadership points to fundamental tensions in the industry’s approach to development. While companies push for rapid advancement and market dominance, those closest to the technology’s development are increasingly raising alarm bells about the potential consequences of unchecked progress. This disconnect may signal deeper structural issues in how AI development is governed and managed.
