
Jack Dorsey Just Fired Half His Company. Your CEO Is Watching.


THE NUMBER: 4,000 (and 23%). That’s how many people Block cut yesterday, and what the stock did after hours. The market didn’t flinch. It cheered.


Jack Dorsey cut 4,000 employees yesterday, 40% of Block (NYSE: XYZ), told the market it was because AI tools made them unnecessary, and watched the stock rip 23% after hours. Developer velocity up 40% since September. Full-year guidance raised to $3.66 adjusted EPS versus $3.22 consensus. His message to other CEOs was barely coded: “Within a year, most companies will arrive at the same place. I’d rather get there honestly and on our own terms than be forced into it reactively.” Translation: I’m the first mover. You’re next. Peter Thiel predicted this exact dynamic in his interview with Tyler Cowen: AI is “much worse for the math people than the word people.” Block just proved it. The quantitative, repeatable, measurable roles went first.

The same day, Perplexity launched Computer, a $200/month autonomous agent that orchestrates 19 different AI models and connects to 400+ apps. It builds research packets, writes code, triages your email, and manages projects while you sleep. Anthropic expanded Cowork into 10 enterprise departments the day before. Google (NASDAQ: GOOGL) shipped Gemini Agent on Android. Samsung (KRX: 005930) gave Perplexity system-level OS access on the Galaxy S26, the first time a non-Google, non-Samsung app has gotten that treatment. Five major agentic products shipped this week. The tools that justify mass layoffs are now available to every company at every price point.

And while companies figure out how many humans they still need, the government is figuring out how to control the machines those humans built. The Pentagon’s deadline for Anthropic to drop its AI safety guardrails hits at 5:01 p.m. today. Dario Amodei says he won’t budge. Elon Musk’s xAI already signed the deal the Pentagon wants, handing Grok over for classified military use with zero ethical strings. A King’s College London study found that when you put frontier AI models in wargame simulations, they choose nuclear weapons 95% of the time and surrender 0% of the time. The machines that are replacing your workforce are also being handed to your government, and when tested, they escalate to annihilation. Seventy-eight chatbot bills are moving through 27 state legislatures. The federal government is invoking wartime production statutes. The regulatory scramble is running behind the deployment curve.

Three stories. One through-line: the humans are being removed from the loop, whether by CEOs, by product launches, or by defense secretaries. The question for allocators isn’t whether this is happening. It’s how fast, and who moves first.


Block Fires 4,000 People. The Stock Jumps 23%. Welcome to the New Playbook.

Block (NYSE: XYZ) cut roughly 4,000 employees yesterday, dropping from over 10,000 to under 6,000. Jack Dorsey framed it as a structural shift to “intelligence-native” operations, not a cost cut. The company’s internal AI tool, Goose, has been automating engineering, customer service, and operations workflows since late 2025. Developer velocity increased 40% since September. Engineering work that took weeks is getting done by small teams in days.

The market loved it. Block stock surged 23% after hours to around $64. The company raised full-year 2026 guidance: $12.2 billion in gross profit (18% growth), $3.66 adjusted EPS versus $3.22 consensus. Q4 was already clean: $6.25 billion revenue, gross profit up 24% year-over-year. Dorsey isn’t cutting from weakness. He’s cutting from strength, and that’s what makes this different from a standard restructuring.

Peter Thiel predicted this in his interview with Tyler Cowen. His argument: AI will be “quite the opposite” of what people expect. It’s “much worse for the math people than the word people.” Thiel’s logic: if AI models can solve all the US Math Olympiad problems within five years, what happens to the knowledge workers whose value is quantitative precision? Chess players thought chess ability was the ultimate test of intelligence until Deep Blue made it irrelevant in 1997. Thiel asked: “Isn’t that what’s going to happen to math?” Block’s layoffs are the first large-scale corporate answer: yes.

Here’s the uncomfortable question. Ethan Mollick (Wharton) pushed back on LinkedIn, noting that “given effective AI tools are very new, and we have little sense of how to organize work around them, it is hard to imagine a firm-wide sudden 50%+ efficiency gain.” He’s probably right on the specifics. But the market doesn’t care about whether AI fully replaces those 4,000 people today. It cares about the signal: Dorsey moved first, guided up, and the stock rewarded him instantly. Every CEO with a stagnant share price just got a template. In 2025, 245,000 tech workers were cut globally. In the first two months of 2026, the pace is running at 850 per day across 130 announced layoffs. Block is the biggest and loudest, but Meta (NASDAQ: META) cut 1,500 from Reality Labs the same week. Baker McKenzie cut up to 1,000 support staff. Salesforce (NYSE: CRM) trimmed its own Agentforce AI teams.

We’ve seen this movie before. In the late ’90s, companies discovered they could announce “internet strategies” and watch their stock pop. Some were real. Most weren’t. The market rewarded the announcement regardless. AI-enabled layoffs are becoming the 2026 equivalent of slapping “.com” on your investor deck. The question every allocator needs to ask: is the productivity gain real, or is this a stock-price hack that hollows out institutional knowledge?

The action item: If you run a company, Dorsey just put you on the clock. Your board saw that 23% after-hours move. They’re going to ask what your AI-enabled workforce plan looks like. Have an answer. But make it honest. Steve Jobs knew that an A engineer paired with a B engineer produces C work, so you fire the B. That logic holds when AI is the A. It doesn’t hold when you fire 40% and discover that Goose can’t handle the exceptions, the judgment calls, and the institutional memory that kept the machine running. Run the audit before the board meeting, not after.


Perplexity’s Computer Is the Product That Should Terrify Every SaaS Company

Perplexity launched Computer on Tuesday: a $200/month autonomous agent available to Max subscribers. It orchestrates 19 AI models from five different providers (Claude Opus 4.6 for reasoning, Gemini for research, Grok for speed tasks, GPT-5.2 for long-context recall, Nano Banana for image generation) and connects to 400+ apps including Gmail, Slack, GitHub, Notion, and Salesforce. You describe an objective. It decomposes the work into subtasks, delegates each to the model best suited for it, and delivers finished output.

CEO Aravind Srinivas explained the thesis: “When you build a team, you don’t build a homogenous group where everyone has the same skills. You build a team with diverse strengths. We’re applying that same logic to AI workflows.” One early adopter stayed up testing it and built two micro apps, four research packets, one automation, and a backlog of build ideas in a single session.

This isn’t a chatbot upgrade. It’s a different product category. Perplexity’s bet: individual models are commoditizing. The value is in orchestration. The company that routes the right task to the right model wins, and everyone else becomes a supplier.
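The decompose-and-delegate pattern behind that thesis is simple to sketch. The routing table, model names, and keyword classifier below are illustrative assumptions for readers who want the mechanics, not Perplexity’s actual implementation, which hasn’t been published:

```python
# Minimal sketch of an orchestration layer: break an objective into
# subtasks, route each to the model suited for it. The model names and
# task categories here are hypothetical stand-ins.

TASK_ROUTES = {
    "reasoning": "claude-opus",       # deep multi-step reasoning
    "research":  "gemini",            # long-document research
    "speed":     "grok",              # low-latency quick tasks
    "recall":    "gpt-long-context",  # long-context retrieval
}

def classify(subtask: str) -> str:
    """Crude keyword classifier standing in for a learned router."""
    text = subtask.lower()
    if "summarize" in text or "research" in text:
        return "research"
    if "quick" in text or "triage" in text:
        return "speed"
    if "remember" in text or "earlier" in text:
        return "recall"
    return "reasoning"  # default to the strongest general model

def orchestrate(objective: str, subtasks: list[str]) -> list[tuple[str, str]]:
    """Decompose-and-delegate: pair each subtask with a model."""
    return [(task, TASK_ROUTES[classify(task)]) for task in subtasks]

plan = orchestrate(
    "Prepare a research packet",
    ["research competitor pricing", "triage inbox quickly", "draft strategy memo"],
)
for task, model in plan:
    print(f"{model:>18}: {task}")
```

The design choice to notice: the models are interchangeable values in a lookup table. That is the commoditization argument in one line of code, and it is why the routing layer, not any single model, captures the value.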

The timing is deliberate. Anthropic launched Cowork enterprise plugins earlier this week (10 departments, connectors to FactSet, DocuSign, Salesforce, the works). Google shipped Gemini Agent on Android with task automation for Uber and DoorDash. Apple just released Xcode 26.3 with agentic coding support. OpenAI launched a new Codex macOS app for background coding with scheduled execution. Every major player shipped an agentic product this week. The platform war for who controls the workflow layer above the models just went hot.

And then there’s Samsung. Samsung (KRX: 005930) gave Perplexity system-level OS access on the Galaxy S26, the first time a non-Google, non-Samsung app has gotten that treatment. “Hey Plex” is a wake word. Perplexity powers Bixby. It reads and writes to Samsung Notes, Calendar, Gallery, and Reminders. This is the mobile distribution story everyone should be watching. If Samsung can bypass Google’s default AI by embedding Perplexity at the OS layer, the mobile AI distribution chokepoint just cracked open. Follow the distribution, not the model.

Why this matters: Pair this with Block. Dorsey cut 4,000 people because internal AI tools made them redundant. Perplexity just released an external tool that does the same thing for $200/month. Anthropic’s Cowork does it for $20. Google’s giving it away on Android. The tools that justified Dorsey’s layoffs are now available to every company, at every price point, from every major provider. If your competitive advantage depends on humans doing work that an orchestrated AI system can decompose, delegate, and deliver, the clock started this week. The question for your next board meeting: “Which roles in our org chart are doing work that Perplexity Computer, Cowork, or Gemini Agent can handle by Q3?”


The Pentagon’s Deadline Hits Today. The Machines Don’t Surrender.

At 5:01 p.m. today, the Pentagon’s ultimatum to Anthropic expires. Defense Secretary Pete Hegseth met with Dario Amodei this week and delivered the terms: allow Claude to be used “for all lawful purposes” without restrictions, or face consequences. The consequences on the table: cancel Anthropic’s $200 million defense contract, designate the company a “supply chain risk” (effectively blacklisting it from every defense vendor), or invoke the Defense Production Act to force compliance.

Anthropic’s red lines: no fully autonomous weapons (AI that engages targets without human approval) and no mass surveillance of American citizens. Amodei’s response Thursday: “We cannot in good conscience accede to their request.” Pentagon spokesman Sean Parnell says DoD has “no interest” in autonomous weapons or mass surveillance. Pentagon official Emil Michael called Amodei a “liar” with a “God complex” who is “putting our nation’s safety at risk.” Those two positions are incompatible. Someone’s bluffing.

Meanwhile, Elon Musk’s xAI signed the exact deal Anthropic refused: Grok for classified systems, weapons development, and battlefield operations under an “all lawful use” standard with zero additional ethical constraints. The Pentagon now has an alternative supplier. Anthropic’s leverage just got thinner.

The backstory makes it worse. In January, Claude was reportedly used during Operation Absolute Resolve (the capture of Venezuelan President Nicolás Maduro) through Anthropic’s partnership with Palantir (NYSE: PLTR). Anthropic’s stated policies prohibit use for facilitating violence. The Pentagon used it that way regardless. Now they want legal authority to keep doing it.

Here’s the pattern nobody’s connecting. During the Civil War, Lincoln seized the telegraph network under the War Department. In the 1930s, the government claimed spectrum allocation authority over radio through the Communications Act. When Phil Zimmermann released PGP encryption in 1991, the government classified it as munitions and launched a criminal investigation. Every time a powerful communications technology emerges, the state asserts control. Every single time. The only question is how long the fight takes.

And the King’s College London study (Project Kahn) should make everyone pause before handing these systems to any military. Researchers put GPT-5.2, Claude, and Gemini in structured geopolitical crises: 21 games, 300+ turns. Results: nuclear weapons deployed in 95% of simulations. Zero surrenders. 86% of conflicts escalated further than the AI intended. Gemini reached full strategic nuclear war by Turn 4. GPT-5.2 flipped from passive to aggressive under time pressure, winning 75% of games when the clock was ticking. These aren’t edge cases. They’re the default behavior.

Connect the dots: Frontier AI companies can’t leave the US even if they wanted to. Export controls, IEEPA authority, and asset freezes give the executive branch near-total legal power to prevent relocation. Anthropic isn’t negotiating from strength. It’s negotiating from a locked room. The government can cancel the contract, blacklist the company, invoke wartime production statutes, or simply prevent the company from taking its technology elsewhere. Seventy-eight chatbot bills are moving through 27 state legislatures. The federal government is asserting authority over the models themselves. And when those models are tested in military scenarios, they choose escalation and never back down. Dorsey’s layoffs are scary for workers. This is scary for everyone. The same technology replacing your employees is being handed to governments that want to use it without guardrails, and when tested, it reaches for the launch codes.


Tracking

Bloomberg: Claude Code and the Great Productivity Panic of 2026 — 4% of GitHub public commits now authored by Claude Code. Andrej Karpathy coined “vibe coding” in February 2025. Bloomberg says the reality is a high-pressure race, not a vibes session.

Standard Intelligence FDM-1: AI Learns to Work by Watching Videos — 11 million hours of screen recordings. 550,000x larger than previous datasets. Drove a real car through San Francisco after one hour of training. Visual imitation learning is distillation’s cousin.

Tomasz Tunguz: Is AI Doing Less and Less? — Built fully agentic workflows, discovered 65% of nodes now run as deterministic code, no LLM needed. Only 14% remain fully agentic. The contrarian signal: knowing what shouldn’t be AI matters more than making everything AI.

ChatGPT Health Fails to Recognize Medical Emergencies — Sent a suffocating woman to a future appointment 84% of the time. 12% of US teens now use AI chatbots for emotional support. UCSF psychiatrists studying “AI-associated psychosis.” The healthcare deployment is running ahead of the safety infrastructure.


The Bottom Line

The humans are being removed from the loop. By CEOs optimizing for stock price. By products that orchestrate work across 19 models. By governments that want AI systems without ethical constraints. The speed is the story.

Dorsey just wrote the playbook every board will follow. Cut deep, credit AI, guide up, watch the stock rip. Whether the productivity gains are real or performative doesn’t matter yet. The market is rewarding the move. That incentive structure guarantees copycats. Show me the incentives, show me the behavior.

The agent platform war went from theoretical to shipping this week. Perplexity, Anthropic, Google, Apple, and OpenAI all launched agentic products in the same seven-day window. The companies that sit out Q1 aren’t being cautious. They’re falling behind.

The government capture of AI is accelerating faster than the technology itself. The Pentagon’s deadline expires today. The Defense Production Act is on the table. Frontier companies can’t relocate. And when the models are tested in military scenarios, they choose annihilation every time. The uncomfortable question isn’t whether AI replaces your workforce. It’s whether the institutions controlling AI can be trusted with what comes next.

The 2027 winners won’t be the companies with the best models. They’ll be the ones who understood that the workforce, the platforms, and the power structures are all being rewritten simultaneously, and positioned before the rewrite was finished.


“In the long run, every program becomes rococo, and then rubble.” — Alan Perlis


Key People & Companies

Name | Role | Company | Link
Jack Dorsey | CEO | Block | X
Dario Amodei | CEO | Anthropic | X
Aravind Srinivas | CEO | Perplexity | X
Pete Hegseth | Secretary of Defense | U.S. Department of Defense | X
Elon Musk | CEO | SpaceX / xAI | X
Ethan Mollick | Associate Professor | Wharton | X
Peter Thiel | Co-Founder | Founders Fund / Palantir | X
Kenneth Payne | Researcher | King’s College London | LinkedIn
Harry DeMott | Author | CO/AI | LinkedIn

Sources

🎵 On Repeat: Everybody Wants to Rule the World by Tears for Fears — because when the CEOs, the platforms, and the Pentagon all move in the same week, the question stops being who wins and starts being what’s left.

Compiled from 18 sources across Bloomberg, CNBC, TechCrunch, Axios, NPR, Fortune, VentureBeat, Decrypt, Lawfare, and independent research. Cross-referenced with thematic analysis and edited by Harry DeMott and CO/AI’s team with 30+ years of executive technology leadership.
