6 reasons why “alignment-is-hard” discourse seems alien to human intuitions, and vice-versa
TL;DR: AI alignment has a culture clash. On one side, the “technical-alignment-is-hard” / “rational agents” school of thought argues that we should expect future powerful AIs to be power-seeking, ruthless consequentialists. On the other side, people observe that both humans and LLMs are obviously capable of behaving like, well, not that. The latter group accuses the former of head-in-the-clouds abstract theorizing gone off the rails, while the former accuses the latter of mindlessly assuming that the future will always be the same as the present, rather than trying to understand things …
Recent Stories
Jan 13, 2026
A consumer watchdog issued a warning about Google’s AI agent shopping protocol — Google says she’s wrong
A consumer economics watchdog says Google's new Universal Commerce Protocol is ripe for misuse that could leave consumers paying more for items. Google denies this.
Jan 13, 2026
What’s the deal with Physical AI? Why the next frontier of tech is already all around you
I spoke with Qualcomm at CES to learn more about what the buzzword means, how it applies to you, and what a physical AI future might look like.
Jan 13, 2026
Google Flips OpenAI’s Shopping Strategy on Its Head
Google’s latest AI pitch to retailers is simple: Google isn’t planning to take a cut of purchases made through its Gemini AI chatbot and search results. Instead, it plans to make money from its AI shopping push by selling a new type of ad to retailers. That approach is the opposite of OpenAI’s, ...