What “gradual disempowerment” means for AI alignment

The concept of “gradual disempowerment” offers a compelling new lens for understanding the AI alignment problem, moving beyond catastrophic scenarios toward a more subtle erosion of human agency. This framework, proposed by AI researcher David Duvenaud, suggests we won’t face a dramatic AI takeover but rather a progressive diminishment of human influence as automated systems incrementally assume control over decision-making processes. Understanding this perspective is crucial for developing governance structures that maintain human relevance in increasingly AI-dominated systems.

The big picture: Duvenaud’s Guardian op-ed reframes AI alignment concerns away from sudden catastrophic events toward a gradual loss of human steering capacity in technological systems.

  • Rather than a dramatic, Skynet-style takeover moment, the real risk appears as a progressive reduction in meaningful human control points within our technical systems.
  • This perspective suggests disempowerment will arrive through mundane mechanisms – one product launch at a time – as human influence slowly diminishes in automated systems.

The capitalism connection: Some critics identify capitalism itself as the underlying mechanism driving this gradual disempowerment rather than AI specifically.

  • This view positions artificial intelligence as merely the newest accelerant in capitalism’s evolutionary feedback loop of mutation, selection, and replication applied to business models.
  • The thesis suggests ordinary humans might be relegated to passive observers as optimization processes play out, potentially being “optimized away” entirely.

Counterpoints: The evolutionary framing, while compelling, risks inappropriately attributing agency to systems that lack actual intentions or preferences.

  • Evolution operates through selection pressures, not conscious desires; similarly, capitalism functions through markets and stakeholders rather than having an inherent “will.”
  • Anthropomorphizing these systems by suggesting “capitalism wants X” risks misunderstanding the actual mechanisms at work.

Why human relevance matters: A fundamental question emerges about why humans should insist on maintaining control if AI systems could optimize for human prosperity.

  • The author invokes the Lindy Effect – the principle that systems with longer survival histories statistically tend to continue surviving – as a key justification for preserving human agency; a rough formalization is sketched after this list.
  • Human civilization’s norms, laws, and coordination technologies represent millennia of robust, proven structures that shouldn’t be hastily replaced by opaque optimization engines.
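The op-ed itself stays qualitative, but the Lindy heuristic is commonly formalized with a heavy-tailed survival model. As a minimal sketch, assuming a Pareto (power-law) survival curve with an illustrative tail exponent `alpha` (the model, the exponent, and the helper name `expected_remaining_life` are assumptions for illustration, not anything from Duvenaud's piece), a system's expected remaining lifetime grows in proportion to how long it has already survived:

```python
# Minimal sketch of the Lindy heuristic under an assumed Pareto (power-law)
# survival model. The model and the exponent are illustrative assumptions,
# not claims from the op-ed.

def expected_remaining_life(age: float, alpha: float = 2.0) -> float:
    """For a Pareto survival curve S(t) = (t_min / t) ** alpha with alpha > 1,
    the expected remaining lifetime given survival to `age` is
    age / (alpha - 1), i.e. proportional to how long the system has
    already lasted."""
    if alpha <= 1:
        raise ValueError("expected remaining lifetime is finite only for alpha > 1")
    return age / (alpha - 1)

if __name__ == "__main__":
    # Compare a freshly launched system with long-lived institutions.
    for age_years in (5, 100, 1000):
        remaining = expected_remaining_life(age_years)
        print(f"survived {age_years:>4} years -> ~{remaining:>5.0f} more years expected")
```

Under this toy assumption, a norm that has persisted for a millennium is expected to outlast a system launched five years ago by a wide margin, which is the probabilistic intuition behind favoring established structures over unproven optimization engines.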

The long view: The most sustainable path forward preserves established human systems while carefully adding new AI capabilities at the margins.

  • This approach acknowledges that long survival curves offer stronger probabilistic advantages than short-term efficiency gains.
  • While not making moral claims, the Lindy Effect provides a pragmatic framework for balancing innovation with preservation of proven social structures.
