Researchers discover a shortcoming that makes LLMs less reliable
MIT researchers find that large language models sometimes mistakenly link grammatical patterns to specific topics, then rely on those learned associations when answering queries. This can cause LLMs to fail on unfamiliar tasks and could be exploited by adversarial agents to trick an LLM into generating harmful content.
Recent Stories
Jan 14, 2026
Corporate legal departments are cutting costs with AI
Corporate legal teams are becoming eager adopters of AI tools that cut tasks from days to minutes.
Jan 14, 2026
Nvidia Gets U.S. Approval to Ship AI Chips to China. Now It Waits on Beijing.
Nvidia stock was reacting to news that the Trump administration had finalized the requirements for the chip maker to sell its H200 chips in China.
Jan 14, 2026