AI discussions evolve: 10+ year veterans share insights

The evolution of AI discussions on LessWrong reflects the dramatic acceleration of artificial intelligence capabilities in recent years. As generative AI has moved from theoretical concept to everyday reality, the community’s concerns, predictions, and areas of focus have naturally shifted to address emerging challenges and revelations. This retrospective inquiry seeks to understand how perspectives on AI alignment, development difficulty, and key concepts have evolved within one of the internet’s pioneering AI safety communities.

The big picture: A LessWrong community member is seeking insights from long-term participants about how AI discussions have evolved over the past decade, particularly contrasting pre-ChatGPT era thinking with current perspectives.

  • The inquiry specifically targets members who participated in LessWrong discussions 10+ years ago to track shifts in both individual and community thinking on AI alignment, development, and key concepts.
  • This retrospective analysis comes at a pivotal moment when theoretical AI concerns from the past are now being tested against rapidly emerging capabilities and real-world implementations.

Key questions posed: The inquiry focuses on six specific areas where opinions may have shifted over the past decade.

  • The questions probe changes in perspectives on alignment difficulty, community consensus, the relevance of older concepts, abandoned discussion topics, views on AGI development difficulty, and surprising developments.
  • The inquirer specifically asks which concepts from earlier discussions (like “pivotal act” and CEV) remain relevant and which have become obsolete.

Personal surprise noted: The inquirer expresses particular surprise about the counterintuitive capabilities and limitations of modern language models.

  • They highlight the paradox that current LLMs can write complex code yet struggle with basic counting tasks, citing an example where Gemini 2.5 Pro produced 401 words when asked for exactly 269.
  • This observation underscores the unexpected and sometimes contradictory nature of AI progress, where advanced capabilities can emerge before more seemingly fundamental ones are mastered.
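The counting failure above is easy to verify mechanically, since word count is trivial for conventional software even though LLMs often miss it. Below is a minimal sketch of such a check; the function names and the 5% tolerance are illustrative choices, not part of any cited tooling.

```python
def word_count(text: str) -> int:
    """Count whitespace-separated words, the same way a simple reviewer would."""
    return len(text.split())

def meets_target(text: str, target: int, tolerance: float = 0.05) -> bool:
    """Return True if the word count is within +/- tolerance of the target."""
    return abs(word_count(text) - target) <= target * tolerance

# Mirroring the anecdote: 401 words delivered when 269 were requested.
delivered, requested = 401, 269
print(abs(delivered - requested))                      # overshoot of 132 words
print(meets_target("word " * delivered, requested))    # False: far outside tolerance
```

A check like this makes the gap concrete: the model's output misses the target by roughly 50%, a deviation a one-line program catches instantly.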

Why this matters: Tracking the evolution of expert thinking on AI safety and development provides valuable context for understanding current challenges and potential solutions.

  • Historical perspective on AI discussions can reveal which concerns proved prescient, which were misplaced, and how the community’s understanding has matured with actual technological developments.
  • Identifying shifting perspectives among those who have thought deeply about AI for over a decade may reveal important insights about what we still don’t understand about artificial intelligence development.
Source post: “Questions for old LW members: how have discussions about AI changed compared to 10+ years ago?” (LessWrong)
