Benchmarks Are Memes: How What We Measure Shapes AI—and Us

Benchmarks warp AI research: should we care?

In the fast-paced world of AI development, researchers often chase performance metrics that don't necessarily translate to real-world utility. This tension between measurable progress and actual value sits at the heart of Alex Duffy's thought-provoking presentation on AI benchmarks. As the race for artificial general intelligence accelerates, Duffy challenges us to reconsider what we're measuring and why it matters for the technologies that increasingly shape our world.

  • Benchmarks function as memes – they replicate, spread, and shape research behavior through competitive dynamics, potentially distorting progress toward genuinely useful AI
  • Goodhart's Law dominates AI research – when a measure becomes a target, it ceases to be a good measure, leading to optimization for the benchmark rather than real capabilities
  • Current benchmarks favor prediction over reasoning – they reward models that can predict the next token in existing human-generated content, not necessarily models that can think or reason (a brief sketch of this objective follows the list)
  • Multimodal capabilities are becoming the new frontier – as AI expands beyond text to include vision, audio and other modalities, our benchmarking approach needs to evolve
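
To make the third bullet concrete, here is a minimal sketch, not taken from Duffy's presentation, of the standard next-token training objective: minimize the average negative log-probability the model assigns to each observed token given its prefix. The per-token log-probabilities below are invented purely for illustration.

```python
def next_token_loss(token_log_probs: list[float]) -> float:
    """Average cross-entropy (nats/token) over one sequence.

    token_log_probs[t] is the model's log p(token_t | tokens_<t);
    the values used below are made up purely for illustration.
    """
    return -sum(token_log_probs) / len(token_log_probs)

# Hypothetical per-token log-probabilities for a short sentence.
example_log_probs = [-0.2, -1.5, -0.7, -3.1, -0.4]
print(f"cross-entropy: {next_token_loss(example_log_probs):.3f} nats/token")
```

Nothing in this objective requires reasoning; it only rewards probabilities that match the statistics of existing human-written text, which is the gap the bullet points at.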

The benchmark paradox we can't escape

The most compelling insight from Duffy's presentation is how benchmarks create self-reinforcing feedback loops that shape not just AI development but also our conception of intelligence itself. When we decide that solving a specific puzzle or answering certain questions constitutes "intelligence," we begin optimizing our systems toward those narrow goals. The result? Technologies that excel at specific tasks without necessarily advancing toward the general capabilities we actually desire.

This matters tremendously because billions of dollars and countless research hours flow toward improving performance on these metrics. As language models reach human-level performance on tests like MMLU or TruthfulQA, we must ask whether we're actually building more capable, aligned AI or simply constructing sophisticated pattern-matching systems that game our evaluation methods.
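
For readers unfamiliar with how scores on tests like MMLU are produced, the sketch below shows the usual shape of multiple-choice evaluation: pick the option the model scores highest and report plain accuracy. This is a hedged illustration, not any benchmark's real API; the `Question` class and the `score_option` callback are hypothetical stand-ins for a real model call.

```python
from dataclasses import dataclass

@dataclass
class Question:
    prompt: str
    options: list[str]
    answer_index: int  # index of the correct option

def benchmark_accuracy(questions, score_option) -> float:
    """Fraction of questions where the model's highest-scoring option is correct."""
    correct = 0
    for q in questions:
        scores = [score_option(q.prompt, opt) for opt in q.options]
        if scores.index(max(scores)) == q.answer_index:
            correct += 1
    return correct / len(questions)

# Toy run with a hard-coded stand-in scorer. Whatever produces high scores --
# genuine reasoning, memorized test items, or shallow pattern-matching --
# looks identical in the single accuracy number that gets reported.
toy_set = [Question("2 + 2 = ?", ["3", "4", "22"], answer_index=1)]
scorer = lambda prompt, option: 1.0 if option == "4" else 0.0
print(benchmark_accuracy(toy_set, scorer))  # 1.0
```

Because the whole evaluation collapses into one accuracy figure, that figure is what labs end up optimizing, which is exactly the Goodhart dynamic Duffy describes.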

What the presentation missed: the social dimension

While Duffy expertly dissects the technical challenges of benchmarking, there's an important social dimension to consider. Academic and industry research communities are tightly bound by publishing expectations and funding requirements that demand quantitative progress. A research lab can't easily secure additional funding by saying, "We've been thinking deeply about what intelligence really is." Reviewers, grant committees, and leaderboards all ask for numbers, and benchmark scores are the numbers most readily at hand.

Recent Videos

May 6, 2026

Hermes Agent Master Class

https://www.youtube.com/watch?v=R3YOGfTBcQg
Welcome to the Hermes Agent Master Class — an 11-episode series taking you from zero to fully leveraging every feature of Nous Research's open-source agent. In this first episode, we install Hermes from scratch on a brand new machine with no prior skills or memory, walk through full configuration with OpenRouter, tour the most important CLI and slash commands, and run our first real task: a competitor research report on a custom children's book AI business idea. Every future episode will build on this fresh install so you can see the compounding value of the agent in real time....

Apr 29, 2026

Andrej Karpathy – Outsource your thinking, but you can’t outsource your understanding

https://www.youtube.com/watch?v=96jN2OCOfLs
Here's what Andrej Karpathy just figured out that everyone else is still dancing around: we're not in an era of "better models." We're in a different era of computing altogether. And the difference between understanding that and not understanding it is the difference between being a vibe coder and being an agentic engineer. Last October, Karpathy had a realization. AI didn't stop being ChatGPT-adjacent. It fundamentally shifted. Coherent agentic workflows actually started to work. And he's spent the last three months living in side projects, vibe coding, exploring what's actually possible. What he found is a framework that explains...

Mar 30, 2026

Andrej Karpathy on the Decade of Agents, the Limits of RL, and Why Education Is His Next Mission

A summary of key takeaways from Andrej Karpathy's conversation with Dwarkesh Patel. In a wide-ranging conversation with Dwarkesh Patel, Andrej Karpathy — former head of AI at Tesla, founding member of OpenAI, and creator of some of the most popular AI educational content on the internet — shared his views on where AI is headed, what's still broken, and why he's now pouring his energy into education. Here are the key takeaways. "It's the Decade of Agents, Not the Year of Agents": Karpathy's now-famous quote is a direct pushback on industry hype. Early agents like Claude Code and Codex are...