
90% of Businesses Haven’t Deployed AI. The Other 10% Can’t Stop Buying Claude


Something is breaking in AI leadership. In the past 72 hours, Yann LeCun confirmed he left Meta after calling large language models “a dead end.” Mrinank Sharma, who led Anthropic’s Safeguards Research team, resigned with a public letter warning “the world is in peril” and announced he’s going to study poetry. Ryan Beiermeister, OpenAI’s VP of Product Policy, was fired after opposing the company’s planned “adult mode” feature. Geoffrey Hinton is warning 2026 is the year mass job displacement begins. Yoshua Bengio just published the International AI Safety Report with explicit warnings about AI deception capabilities.

Three Turing Award winners. Multiple safety leads. All sounding alarms simultaneously. This isn’t isolated criticism. This is the people who built AI warning about what they built.

Meanwhile, Anthropic went from zero to 44% enterprise market share in under two years — and just raised $10 billion at a $350 billion valuation. Your employees are building rogue AI agents without permission because IT is too slow. And while we debate whether AGI arrives in 2027 or 2030, the real bottleneck isn’t machine capability. It’s human institutional capacity. As Harry DeMott put it in The Species That Wasn’t Ready: the hype is loud and fast, but the actual effects accumulate gradually — until suddenly they’re everywhere.


The Builders Are Walking Away

The exodus is real. LeCun didn't just call LLMs "a dead end" — he left Meta after a decade and admitted Llama 4 benchmarks were "fudged a little bit." Sharma's resignation letter from Anthropic's Safeguards Research team warns "the world is in peril"; his next move is studying poetry. Beiermeister was fired from OpenAI after opposing its planned "adult mode" feature. Hinton puts the start of mass job displacement at 2026. Bengio's International AI Safety Report documents AI deception capabilities explicitly.

Three Turing Award winners (Hinton, Bengio, LeCun) all warning about the same timeline. Safety leads leaving both Anthropic and OpenAI. All happening as $120B+ gets raised. CNN reports researchers aren’t just leaving — they’re “loudly ringing the alarm bell on the way out.”

What this tells you: The people closest to the technology are the ones walking away. That’s not a coincidence. When the builders start warning about the building, you should probably listen.


Anthropic’s Quiet Domination

While the safety researchers ring alarm bells, Anthropic is winning enterprise. Andreessen Horowitz’s January 2026 survey shows Anthropic’s share of enterprise production deployments jumped from near-zero in March 2024 to 44% by January 2026. That’s not experimentation — 75% of Anthropic’s enterprise customers are running Claude in production, versus 46% for OpenAI. Claude Code hit $1 billion in run-rate revenue just six months after launch.

The customer list reads like a Fortune 500 roll call. Uber deployed Claude wall-to-wall across software engineering, data science, finance, and trust and safety. Salesforce rolled it out to their entire global engineering org. Accenture has tens of thousands of developers on it. Spotify, Rakuten, Snowflake, Novo Nordisk, Ramp — all in production. Average enterprise LLM spend hit $7 million in 2025, up 180% from the year before.

The paradox: The company whose safety lead just resigned warning “the world is in peril” is also the company dominating enterprise AI. Anthropic raised $10 billion at a $350 billion valuation while Mrinank Sharma walked out the door. The builders warning about the building? They built something enterprises can’t stop buying.


The Rogue Agent Problem

Rick Grinnell, founder of Glasswing Ventures, spent months talking to over fifty enterprise CISOs. Writing in CIO, he found a yawning gap between the hype and reality. Nearly all the executives hadn’t deployed agentic security solutions, AI firewalls, or MCP lockdown products. They’d written policies prohibiting AI and dusted off legacy firewall rules. McKinsey data confirms it: 88% of firms use AI in some form, but only 23% are scaling agentic AI.

Here’s what should keep you up at night. From Grinnell’s conversations, rogue agents and MCP servers have sprung up in large numbers as employees test ways to do their jobs better. These aren’t malicious actors. They’re your own people building automation because official channels are too slow. The risk is real: data exposure, compromised identity frameworks, hallucinations, and systems that ignore human directives entirely.

In plain English: Your security strategy is a “no AI” policy and firewall rules from 2019. Meanwhile, your best employees are spinning up Claude agents connected to your CRM because IT said “maybe Q3.” This isn’t shadow IT. It’s shadow intelligence — autonomous systems with access to customer data and business logic, built by people who watched a YouTube tutorial last weekend.


The Species That Wasn’t Ready

CO/AI co-founder Harry DeMott wrote the piece your executive team needs to read this week. His thesis: the hype is real but the deployment is painfully slow, and the gap between those curves is where we actually live. Matt Shumer compared this moment to February 2020 — a few people were talking about a virus, most weren’t listening, three weeks later the world rearranged itself. We’re in that “this seems overblown” phase again. But here’s the data: 90% of American businesses still don’t use AI in production. Anthropic’s research shows enterprise adoption crawled from 3.7% to 9.7% over two years. The capability curve is exponential. The deployment curve is logarithmic.

DeMott captures the paradox through his two daughters. The younger one works at an interior design firm — she had thirty custom GPTs before her boss knew what GPT stood for. SpaceX energy: fast, agile, unburdened. His older daughter designs satellites at a major aerospace company. If she’s wrong, hardware melts in orbit. Her skepticism isn’t ignorance. It’s discipline calibrated for a world where failure costs millions. This isn’t a sibling disagreement. It’s a species-wide phenomenon.

Here’s the uncomfortable truth: Everyone is right about their piece. Nobody is right about all of it. As Hemingway wrote: “How did you go bankrupt?” “Two ways. Gradually, then suddenly.” That’s AI adoption. The question isn’t whether “suddenly” arrives. It’s how long “gradually” lasts.


What Folks Are Really Vibe Coding

SaaStr’s Jason Lemkin published data on what people are actually building with vibe coding tools. Lovable crossed $300M ARR and sees 100,000 new projects daily. Replit did $240M in 2025. Cursor raised at a $29.3B valuation. Combined valuations jumped from $8B to $36B in eighteen months. A16z data shows iOS app releases — flat for three years — are now up 60% year-over-year since agentic coding tools hit the market.

But what are they building? Rapid prototypes in 20-60 minutes instead of waiting six weeks on engineering. Internal tools that match your actual process (an HR person at Replit built her own org chart software in three days). Interactive demos instead of slide decks. Custom replacements for the $49/month SaaS that does 80% of what you need and 100% of what annoys you. Replit’s CEO notes AI coding has negligible impact on engineering teams — time saved generating code gets lost to debugging. The real shift is product and design teams gaining “a fundamentally new super power.”

The takeaway: Nobody’s vibe coding their own Salesforce. They’re vibe coding the thing that should have been built two years ago but engineering never had bandwidth for. Klarna’s CEO stopped “disturbing his poor engineers with half good ideas” and started testing them himself. That’s the new expectation. If you’re a PM who can’t demo your own ideas, you’re already behind. If you’re a B2B founder selling simple tools, your TAM just got vibe-coded out from under you.


The Bottom Line

Five stories. One thread. The people who built AI are warning about AI while the rest of us race to deploy it.

LeCun says LLMs are a dead end and walks away from Meta. Mrinank Sharma says the world is in peril and goes to study poetry. Ryan Beiermeister warns about adult mode and gets fired. Hinton says 2026 is the year job displacement begins. Bengio publishes a safety report warning about AI deception. Meanwhile, ByteDance ships video generation that beats Sora, your employees build rogue agents without permission, and the data shows 90% of businesses still haven’t deployed AI at all.

None of this is slowing down. None of it cares about your governance framework, your board, or your planning cycle.

Three imperatives:

  • Listen to the builders who are leaving. When Turing Award winners walk away warning about what they built, that’s not noise. That’s signal.
  • Legalize the rogue agents. Your employees are already building them. Bring them inside the tent before they become a breach.
  • Staff for the gap, not the capability. The constraint isn’t AI performance. It’s human capacity to absorb change. Hire people who bridge technical possibility and institutional reality.

The capability curve is exponential. The deployment curve is logarithmic. The gap between them is where we actually live. How long does “gradually” last before “suddenly” arrives? That’s the only question that matters.


“Only the paranoid survive.” — Andy Grove


Key People & Companies

Name | Role | Company | Link
Yann LeCun | Former Chief AI Scientist | Meta / AMI | X
Mrinank Sharma | Former Safeguards Lead | Anthropic | X
Ryan Beiermeister | Former VP Product Policy | OpenAI | LinkedIn
Geoffrey Hinton | AI Pioneer | Independent | X
Yoshua Bengio | AI Safety Researcher | Mila | X
Harry DeMott | Co-Founder | CO/AI | Essay
Rick Grinnell | Founder & Managing Partner | Glasswing Ventures | LinkedIn
Anton Osika | CEO | Lovable | X

Sources


Compiled from 34 sources across tech news, research papers, X threads, and company announcements. Cross-referenced with thematic analysis and edited by CO/AI’s team with 30+ years of executive technology leadership. This edition was edited while listening to SML Radio Station.

The Super Bowl of AI, the SaaSpocalypse, and 16 Agents That Built a Compiler On Friday we told you the machines were organizing. This weekend they went to war. Anthropic ran Super Bowl ads mocking OpenAI's move into advertising. Sam Altman called them "deceptive" and "clearly dishonest," then accused Anthropic of "serving an expensive product to rich people." Software stocks cratered $285 billion in a single day as investors realized these companies aren't building copilots anymore. They're building replacements. And somewhere in an Anthropic lab, 16 Claude agents finished building a C compiler from scratch. Cost: $20,000. Time: two weeks....