Today's Briefing for Monday, March 30, 2026
Everyone’s arguing about who builds the best AI model. That’s the wrong race. The winner of the AI era will be whoever builds the best router.

THE NUMBER: 1.52 billion — the number of active iPhones in the world right now. One in four smartphones on Earth. A 92% user retention rate. Nearly 70% of all global consumer app spending. And after last week’s news, every single one of them is about to become a switchboard for artificial intelligence. Apple doesn’t need to build the best model. It just needs to decide which model to call — and that decision, made 1.52 billion times over, is worth more than any model ever will be.
A few weeks ago, we published a piece called “Elon Musk Is a Router.” The thesis: Musk doesn’t run six companies. He runs one routing function — taking in information across SpaceX, Tesla, xAI, Neuralink, and the rest, and making real-time decisions about where to send resources, attention, and capital. Professional CEOs operate through abstraction layers — quarterly reports, summaries, committee readouts. Musk performs deep packet inspection. He sees the raw data and routes it himself. That’s not a management style. It’s a competitive advantage.
We didn’t realize at the time that we were describing the defining pattern of the entire AI era. But this week, the evidence showed up from every direction at once — and it all points to the same conclusion.
The value isn’t in the intelligence. It’s in the routing of intelligence.
🍎 The Biggest AI Story Nobody’s Talking About
On Wednesday, Bloomberg reported that iOS 27 will open Siri to third-party AI models through a new “Extensions” system. Claude, Gemini, ChatGPT, Grok — any AI chatbot downloaded from the App Store will be able to plug directly into Siri. Apple is also building its own chatbot, codenamed Campos, powered by Google’s Gemini.
Most coverage treated this as a product update. Siri gets smarter. Cool. Move on.
That’s missing the forest for a very large tree.
What Apple actually did is declare itself the operating system for intelligence at planetary scale. Think about what’s happening architecturally: a query comes in from any of 1.52 billion devices. Apple’s system decides whether to route it to Claude for reasoning, to Gemini for research, to a local model running on the A-series chip, or to its own Campos chatbot. The user never has to know. They just talk to Siri.
This isn’t even a new pattern. OpenAI already does this internally — routing queries across its own model family, sending simple questions to lighter models and complex reasoning tasks to heavier ones, all invisible to the user. Perplexity does the same thing across other companies’ models, choosing Claude for one task, Gemini for another, GPT-5.4 for a third. The routing layer already exists. Apple is just about to put it in the hands of every human who owns an iPhone.
Now pair that with what Princeton researchers published this month: specialist models are 10,000x more efficient than general-purpose reasoning models at their target tasks. You don’t need one god-model. You need the intelligence to pick the right small model for the right query. Apple just built the consumer-facing version of that insight — a routing layer that sits on top of every model and captures the relationship with the user.
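The routing pattern described above can be sketched in a few lines. Everything here is a placeholder assumption (the model names, costs, and keyword classifier are invented for illustration, not any vendor's actual API); the point is the shape: classify the query, dispatch to a cheap specialist when one matches, and fall back to a general-purpose model otherwise.

```python
# Minimal sketch of a model-routing layer. All names and numbers are
# hypothetical; a real router would replace the keyword classifier
# with a small learned model.

from dataclasses import dataclass

@dataclass
class Route:
    name: str             # which model handles the query
    cost_per_call: float  # relative cost, to make the economics visible

# Specialist routes keyed by a crude intent label.
SPECIALISTS = {
    "summarize": Route("local-summarizer", cost_per_call=0.001),
    "code": Route("code-specialist", cost_per_call=0.01),
}
GENERAL = Route("frontier-general", cost_per_call=1.0)

def classify(query: str) -> str:
    # Toy intent classifier: keyword matching stands in for a
    # small on-device model.
    q = query.lower()
    if "summarize" in q or "tl;dr" in q:
        return "summarize"
    if "function" in q or "bug" in q or "code" in q:
        return "code"
    return "general"

def route(query: str) -> Route:
    # Prefer a matching specialist; otherwise fall back to the
    # expensive general-purpose model.
    return SPECIALISTS.get(classify(query), GENERAL)
```

Even in this toy version the economics are visible: the general-purpose fallback costs three orders of magnitude more per call than the local specialist, which is exactly why whoever owns the classify-and-dispatch step captures the margin.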
And here’s the piece most people are sleeping on: privacy is the moat within the moat.
Apple has spent the better part of a decade positioning itself as the privacy company. “What happens on your iPhone stays on your iPhone” was a billboard campaign. App Tracking Transparency gutted Facebook’s ad business. Now think about what happens when AI handles increasingly sensitive queries — your medical questions, your financial decisions, your legal concerns, your relationship problems. The company that can say “this never left your device” has an advantage that no cloud-based model company can match. On-device inference isn’t just an efficiency play. It’s the natural extension of Apple’s most valuable brand promise.
@MilkRoadAI had the sharpest take on the architecture: if Apple succeeds at running pruned frontier models at 30-60 tokens per second on-device, everyday AI — rewriting, summarizing, basic reasoning — never touches a data center. Look at the chip architecture in the latest M-series MacBooks and the refreshed Mac Mini line: these machines are built for on-device inference. Your laptop becomes your personal AI server. Pair that with something like Tailscale — a mesh VPN that makes your devices accessible from anywhere — and suddenly your Mac at home is your edge compute node, running inference for your iPhone while you’re on the train. If Apple were smart, they’d buy or clone that capability yesterday. Your personal routing layer, your personal compute, your personal privacy. No cloud required for the everyday stuff.
That doesn’t just change the user experience. It undermines the entire capex thesis justifying $700 billion in hyperscaler spending this year. Why would Apple funnel queries to OpenAI’s cloud when it can run a specialist model locally, keep the data private, and cut the cloud providers out entirely?
As @birdabo put it in a post that hit 1.2 million views: “Apple just turned Siri into a wrapper. The most genius business move in AI this year.”
Wrapper understates it. Apple turned Siri into a toll booth that sits between 1.52 billion humans and every AI model on earth. With a 92% retention rate. And the App Store’s 30% cut on every AI subscription.
Perplexity already proved the routing thesis at a smaller scale. As Aakash Gupta documented in a thread that hit nearly a million views: Perplexity built zero AI models. Zero. It sits on top of 19 models from other companies — Claude for reasoning, Gemini for research, GPT-5.4 for long context, Grok for lightweight tasks. It has 400+ app connectors with read/write access. One prompt can scrape competitors, pull live financials, query data warehouses, and push reports to Google Slides. Perplexity’s valuation: $20 billion. For a routing layer.
Perplexity is Stripe for intelligence — it didn’t build banks, it made the complexity of moving money disappear. Apple is about to do the same thing for AI, except its “routing table” already has 1.52 billion entries in it.
And what about everyone who doesn’t have an iPhone? They have an Android. And Android means Google. Between Apple and Google, you’re looking at essentially every consumer on earth. Google is arguably even better positioned because they’re playing both sides — building frontier models (Gemini) and routing infrastructure (Vertex AI) and running the world’s largest ad business to fund all of it indefinitely. If models commoditize, Google wins as a router. If one model dominates, Google has a shot at being that model. The Apple-Google duopoly isn’t just mobile operating systems anymore. It’s the two routing layers that sit between every human and every AI.
The uncomfortable question: if you’re building an AI model company without distribution, what exactly is your moat? Apple can slot your competitor into every iPhone on earth with a settings toggle. Google can do the same on Android. This is the browser wars, except the browser is in everyone’s pocket and the “search engine” is whichever AI model the duopoly makes the default. Google pays Apple roughly $20 billion a year for default search on Safari. Apple would happily take another $20 billion for default AI on Siri. Now imagine Anthropic, OpenAI, and Google all bidding for that slot. Show me the incentives and I’ll show you the behavior.
🏢 The Org Chart Was Always a Router — Just a Terrible One
The same pattern playing out at the consumer level is playing out inside every company that’s deploying AI. And it’s dismantling the org chart from the inside.
Last week, Zencoder published a case study that should be required reading for anyone running a software team. A product manager built and shipped a production feature in one day — no engineering involvement. A designer fixed visual UI drift by opening an agent, rather than filing a JIRA ticket. An engineer doubled throughput. The cycle time dropped from weeks to days to hours.
The bottleneck moved. It’s no longer engineering capacity. It’s decision velocity — the speed at which someone identifies what needs to happen and routes it to the right place. And increasingly, “the right place” is the person who had the idea in the first place.
Think about what a traditional org chart actually is. It’s a routing system. A query comes in — a bug, a feature request, a customer complaint — and it gets routed through layers. PM to engineering. Engineering to design. Design back to PM for review. Each handoff adds latency. Each handoff loses information. Each handoff exists because we assumed that specialized teams were the only way to get specialized work done.
That assumption just broke. The question used to be: which team is most likely to get the right answer, and which team is fastest to implement it? Those used to be two different teams, which is why the org chart had so many boxes and arrows. Increasingly, the answer to both questions is the same person — the one closest to the problem, armed with agents.
@samwoods captured this perfectly in a tweet that went around this weekend. An operator asked him to help build an AI agent that could “do what Sarah does.” Sarah does 15 different things across the business. Sam’s reframe changed everything: don’t ask “can AI replace Sarah?” Ask “which of Sarah’s 15 recurring processes can run end-to-end without human judgment at every step?” Instead of building one impossible agent, the operator built three simple ones that actually work. The principle: role replacement fails. Process replacement compounds.
That’s the org-chart version of Apple’s play. Apple doesn’t need one god-model — it needs to route to the right specialist model. Your company doesn’t need one superhuman employee — it needs to route each process to the right agent, the right tool, or the right person. The skill isn’t doing the work. The skill is knowing where to send it.
Gartner sees this becoming a $15 billion market by 2029 — up from less than $5 million today. Agent management platforms: the enterprise version of Apple’s routing layer. Identity, lifecycle, governance, context, orchestration. Google, Anthropic, Microsoft, Salesforce are all racing to own this layer because the agents themselves are commoditizing. The margin is in the orchestration. The margin is always in the routing.
🚪 The Adverse Selection Death Spiral
Here’s where it gets uncomfortable, and it’s the part nobody else will say.
The people who are naturally great at routing — who hold context across domains, who instinctively know where to send the query, who can operate without a hierarchy telling them what to do — those are exactly the people leaving big companies right now. Because the cost of starting something just went to zero.
@signulll posted a thread this weekend about the structural challenges big tech faces in hiring top AI talent. The argument: the best people are either founding their own companies or already misaligned with corporate incentives. What’s left is an available executive pool that skews toward people who came from other large companies — individuals who tend to struggle with uncertainty, fast pace, and resource constraints. They can’t function without a lot of people telling them what to do. They are, by definition, not routers. They are nodes that need routing. It’s an adverse selection loop. The router-shaped hole gets bigger, and the people who could fill it keep walking out the door.
This connects to a number we’ve been tracking. SaaStr now runs 25+ AI agents in production, shifted from 20+ humans to 3 humans plus agents, and posted 47% year-over-year revenue growth. Shopify requires employees to demonstrate why AI can’t do a job before requesting headcount. Salesforce cut customer support from 9,000 to 5,000. A Yale CEO survey shows two-thirds of CEOs expect to maintain or reduce headcount due to AI.
But Michael Girdley offered the counterpoint worth hearing: across his portfolio, AI isn’t killing jobs. Companies are getting better at satisfying customers more efficiently. Same thing that happened with virtually every tech innovation since 1900.
Both things are true. And the tension between them reveals the real story: the question isn’t whether AI replaces humans. It’s what kind of human becomes indispensable. The answer is the router — the person who can look at a problem and know instantly whether to send it to an agent, a specialist model, a human expert, or just handle it themselves. That person is starting companies. That person is joining fast-growing startups. That person is not sitting in a committee meeting waiting for the VP to decide which team gets the ticket.
And here’s the kicker — the thing we wrote about tacit knowledge this week suddenly connects. An AI system watching your company’s workflows doesn’t just capture tribal knowledge. It sees the exceptions. Who breaks the process and gets away with it? Where do they sit in the org chart? Are they getting away with it because they’re a natural router who figured out a better path, or because they’re senior and nobody challenges them? That’s not knowledge capture. That’s an organizational MRI. And it reveals something most companies don’t want to see: the real routing logic has nothing to do with the org chart on the wall.
🔍 The Honest Pushback
We believe the routing thesis. But we don’t trust any thesis we haven’t tried to break. Here’s where it could be wrong.
What if models don’t commoditize? The entire argument assumes models become interchangeable and the routing layer captures the value. But what if one model becomes so much better that routing is irrelevant — everyone just wants that one? Google Search in 2004 didn’t need a “search router.” It was just better. If someone ships a model that is qualitatively, undeniably, orders-of-magnitude superior — not 10% better on benchmarks, but “this thing can run my company while I sleep” better — the routing thesis collapses. You don’t need a switchboard when there’s only one destination. The Claude Mythos leak suggests Anthropic may be attempting exactly this kind of breakaway. Watch whether users feel the difference or whether it just shows up in evals nobody reads.
Apple is historically bad at this. Let’s be honest. Siri has been a punchline for a decade. Apple Music trails Spotify. Apple Maps was a catastrophe at launch. Pages and Numbers are fine but nobody chooses them. The company that nailed hardware, retail, and services has consistently faceplanted on consumer software and AI. Distribution without execution is a wasting asset — ask Microsoft about Internet Explorer versus Google Search. Microsoft had 95% browser market share and still lost the search war because the product in the browser wasn’t good enough. Apple having 1.52 billion devices means nothing if Siri’s routing intelligence is bad — if it sends your coding question to Gemini when Claude would’ve been 10x better, users will just open the Claude app directly and bypass Siri entirely.
That said — Siri doesn’t need to be smart software. It needs to be a good concierge. It’s just an agent you talk to in iMessage, and iMessage still dominates consumer messaging in the US. Apple doesn’t need to build great software. It needs to route to great software. That’s a lower bar, and it’s a bar shaped exactly like the one thing Apple has always been good at: controlling the interface.
Enterprise doesn’t go through Apple. This is probably the strongest counterargument to the thesis. The consumer routing story is compelling. But enterprise — where the serious money is — buys directly from Anthropic, OpenAI, Google Cloud. No Fortune 500 CTO is routing their company’s AI strategy through Siri. The enterprise routing layer will be owned by whoever wins the agent management platform war, not the device manufacturer. If the majority of AI revenue concentrates in B2B (which it might), Apple’s consumer moat is valuable but not dominant.
The general-purpose model might just win. History often favors the general-purpose technology because it improves faster than the specialist. Mainframes lost to PCs. PCs lost to smartphones. Each time, people said “you need specialized hardware for serious work” and each time the general-purpose device got good enough. If frontier models keep improving while costs drop — Google’s TurboQuant already cut memory requirements by 6x — the efficiency advantage of specialist models could shrink to irrelevance. Models might incorporate their own internal routing too, calling out to specialist capabilities as needed, and the external routing layer becomes redundant.
Regulation could redraw the map. Apple’s 30% App Store toll is already under siege — EU Digital Markets Act, the Epic lawsuit precedent. If regulators force Apple to reduce fees or allow sideloading, the toll booth economics erode. That said, that’s what lobbyists are for — and Apple could make the toll a little smaller for something as important as AI. Everyone’s going to download AI regardless. A 15% cut on a billion transactions is still an extraordinary business.
We think the weight of evidence favors the routing thesis. But the honest position is: if one model breaks away from the pack in a way users genuinely feel, or if Apple’s execution on AI continues its decade-long mediocrity, or if the enterprise market proves more important than the consumer market — the thesis needs revision. We’ll tell you if it does.
What This Means For You
This is the week the routing thesis went from theory to evidence. The pattern is the same at every scale — consumer, enterprise, individual — and if you’re allocating capital, time, or human potential, you need to decide which side of the routing layer you’re on.
If you’re building an AI company, distribution is now existential. Apple’s Extensions system means every model company is one settings toggle away from being defaulted or delisted on 1.52 billion devices. Google can do the same on the other 5 billion. Perplexity proved a $20B company can be built on pure routing with zero proprietary models — but a router without distribution is a feature, not a company, and it’s one acquisition away from being absorbed by someone who has it. The moat isn’t your model. It’s your access to the end user. If you don’t have distribution, you’re a commodity supplier to someone who does.
If you’re running a company, audit your routing latency. How many handoffs does it take for a decision to go from “someone noticed a problem” to “someone fixed it”? Every handoff is latency. Every layer of approval is a slow router. The companies winning right now are the ones where the person with the intent is also the person with the tools. The PM ships the feature. The designer fixes the drift. The founder routes the query. Collapse the chain.
If you’re building your career, become a router. The skill that compounds in the age of AI isn’t coding, designing, or managing. It’s the ability to take an ambiguous problem and instantly know where to send it — to which agent, which model, which human, or whether to just do it yourself. That requires broad context, good judgment, and the willingness to own the outcome. It’s the skill Musk runs six companies on. It’s the skill Apple is encoding into every iPhone. It’s the skill that makes the difference between the operator who needs a hierarchy and the one who is the hierarchy.
If you’re investing, follow the routing layer. The companies with resilient businesses behind the AI layer — the ones with existing distribution, existing revenue, and existing customer relationships — are better positioned than the ones burning cash to build models and praying for adoption. The picks-and-shovels play of this cycle isn’t chips. It’s the routing layer that sits between the chip and the customer.
Three Questions We Think You Should Be Asking Yourself
How many layers of routing exist between a customer problem and its resolution in your organization — and how many of those layers exist because of trust deficits rather than genuine complexity? Most org charts are routing tables designed for a world where you couldn’t trust anyone below the VP to make a decision. AI didn’t create this problem. It exposed it. Every layer that exists because “we need sign-off” rather than “this requires specialized judgment” is now pure drag. Map it. Count it. Then ask yourself which layers survive when one person with agents can do the whole chain.
If Apple and Google control which AI model every consumer on earth uses by default, what happens to every company that isn’t Apple or Google? Google pays Apple $20 billion a year for Safari defaults. That deal is about to have an AI equivalent, and the bidding war will reshape the economics of every foundation model company. The model companies become suppliers. The routing layer becomes the platform. We’ve seen this movie before — it’s the same thing that happened to music labels when Apple launched iTunes, and to publishers when Google launched Search. The people who make the thing and the people who distribute the thing are different, and the distributor always ends up with more leverage.
Are the best routers in your organization being promoted, leaving, or being buried in process — and do you even know which one is happening? The person who instinctively knows where to send the query is the most valuable person in your company. They’re also the person most likely to leave, because in a world where starting a company costs nearly nothing, why would a natural router sit in a hierarchy? If you can’t name your top three routers without thinking about it, they’ve probably already updated their LinkedIn.
“People who are really serious about software should make their own hardware.”
— Alan Kay (1982)
People who are really serious about intelligence should own the routing layer.
— The lesson of 2026
— Harry and Anthony
Sources
- “Elon Musk Is a Router” — CO/AI
- Apple iOS 27 Siri Extensions — Bloomberg / MacRumors
- Milk Road AI on Apple’s on-device inference strategy — X
- @birdabo “Apple turned Siri into a wrapper” — X
- Aakash Gupta on Perplexity’s routing model — X
- Aakash Gupta on OpenAI DRAM phantom orders — X
- signüll on big tech adverse selection — X
- Sam Woods on process vs. role replacement — X
- Michael Girdley on AI and employment — X
- a16z on AI adoption through local heroes — X
- Sukh Saroy on AI energy consumption and specialist models — X
- Tech Layoff Tracker on Stanford CS employment — X
- Apple 1.52 billion active iPhones — Counterpoint Research
- Apple 2.5 billion active devices — 9to5Mac
- iPhone 92% retention rate — SQ Magazine
- iPhone captures 68.6% of global app spending — Backlinko
- Gartner $15B agent management forecast — Shelly Palmer
- When Product Managers Ship Code — Zencoder
- SaaStr 25+ agents in production — SaaStr
- Princeton specialist model efficiency research — Sukh Saroy thread
- OpenAI internal model routing — OpenAI
- Perplexity multi-model routing — Aakash Gupta
- Apple privacy positioning — Apple
- Google Vertex AI agent management — Google Cloud
Past Briefings
AI’s Blind Geniuses
Everyone's measuring AI adoption. Nobody's measuring AI results. If Jensen Huang and Alfred Lin can't agree on a scorecard, that tells you more about the state of AI than any benchmark can. THE NUMBER: 0.37% or 100% — the gap between the best score any AI achieved on ARC-AGI-3 (Gemini 3.1 Pro's 0.37%) and Jensen Huang's claim that we've already reached AGI. Even among the most credible voices in AI, nobody can agree on whether we're at the starting line or the finish line. That uncertainty isn't a bug. It's the operating environment. And it's exactly why the question of...
Mar 25, 2026OpenAI Killed Sora 30 Minutes After a Disney Meeting. The Kill List Is the Strategy Now.
$15M/day to run, $2.1M lifetime revenue. The pivot to Codex puts them behind Claude Code — in a market China is about to commoditize from below. THE NUMBER: $15 million / $2.1 million — the daily operating cost of Sora vs. its lifetime revenue. When a product costs 2,600x more to run per day than it has ever earned, killing it isn't a choice. It's arithmetic. The question is what that arithmetic tells you about everything else OpenAI is doing. OpenAI killed Sora this week. Not quietly — 30 minutes after a working session with Disney, whose $1 billion investment...
SignalNoise
Mar 23, 2026OpenAI Guarantees PE Firms 17.5%. The Bonfire Gets a Bigger Tent
THE NUMBER: 17.5% — the guaranteed minimum return OpenAI is offering private equity firms to raise $4 billion in new capital. For context, the S&P 500 has averaged 10.5% annually over the last decade. When a pre-IPO company expected to go public at over $1.5 trillion has to promise returns that beat the market by 70% just to get investors in the door, the incentive structure is telling you something the press release isn't. The Opening Two stories landed today that look separate but aren't. OpenAI is offering PE firms a guaranteed 17.5% return with downside protection to raise $4...
Mar 22, 2026Jensen Huang Just Told Every Company What to Build. Most Aren’t Listening.
THE NUMBER: 250,000 — GitHub stars for OpenClaw in weeks, not years. Jensen Huang called it the most successful open-source project in history and the operating system for personal AI. Every enterprise company, he said, needs an OpenClaw strategy. But the real question isn't whether you have one. It's whether your business can even be read by one. At GTC last week, Jensen Huang didn't just announce products. He announced a new competitive requirement. Every company needs a claw strategy — a plan for deploying AI agents and, just as critically, a plan for making their business accessible to the...
Mar 19, 2026The Moat Was the Cost of Building Software. Claude Code Just Mass-Produced a Bridge
THE NUMBER: $100 billion — The amount Jeff Bezos is reportedly raising to buy manufacturing companies and automate them with AI, per the Wall Street Journal. Yesterday we wrote about Travis Kalanick's Atoms venture — $1 billion raised on a $15 billion valuation to bring AI to the physical world. Today one of the richest people on the planet walked into the same room at nearly 100x the scale. The atoms economy just got its first mega-fund. A VC told Todd Saunders something this week that lit up X like a signal flare: "The moat in software was the cost...
Mar 18, 2026Bill Gurley Says the AI Bubble Is About to Burst. Travis Kalanick’s Timing Says He’s Right.
THE NUMBER: $300 billion — HSBC's estimate of cumulative cash burn by foundational AI model companies through 2030. Bill Gurley sat on Uber's board while it burned $2 billion a year and says it gave him "high anxiety." OpenAI and Anthropic make Uber's bonfire look like a birthday candle. "God bless them," Gurley told CNBC. "It's a scary way to run a company." Travis Kalanick showed up on the All-In podcast this week with a new robotics venture called Atoms and opinions about who's winning the autonomy race. That's the headline most people caught. But the deeper signal is the...
Mar 17, 2026Anthropic Is Winning the Product War. The $575 Billion Question Is Whether Anyone Can Afford to Keep Fighting
THE NUMBER: 12x — For every dollar the hyperscalers earn from AI today, they're spending twelve dollars building more capacity. That's $575 billion in capex this year. Alphabet just issued a century bond — the first by a tech company since Motorola in 1997 — to fund it. The debt matures in 2126. The chips it buys will be obsolete by 2029. Anthropic now wins 70% of new enterprise deals in direct matchups with OpenAI, according to Ramp's March 2026 AI Index. Claude Code generates $2.5 billion in annualized revenue. OpenAI's Codex manages $1 billion. OpenAI's enterprise share dropped from...
Mar 16, 2026Chamath Says Your Portfolio Is Worth 75% Less Than You Think. Karpathy’s Data Suggests He’s Right.
THE NUMBER: 60-80% — the share of a typical equity valuation derived from terminal value. That's the portion of every stock price that assumes competitive advantages persist for a decade or more. Chamath Palihapitiya just argued that AI makes that assumption unpriceable. If he's even half right, the math doesn't bend. It breaks. Chamath Palihapitiya posted a note this weekend titled "The Collapse of Terminal Value" that should be required reading for anyone who allocates capital — including the capital of their own career. His thesis: AI accelerates disruption so fast that no company can credibly project cash flows beyond five...
Mar 15, 2026Ethan Mollick Says the Bots Took Over. Karpathy Just Scored Every Job in America. One of Them Is Yours.
THE NUMBER: 4.9 out of 10 — the average AI automation exposure score across all 342 U.S. occupations, according to Andrej Karpathy's weekend project. Jobs paying over $100,000 average 6.7. Jobs under $35,000 average 3.4. The people most worried about AI replacing workers are the ones least likely to lose theirs. The people who should be worried aren't paying attention. Ethan Mollick spent the weekend posting what amounts to a eulogy for the public internet. The comments on his posts, on both X and LinkedIn, are no longer worth reading. Not because of trolls. Because of bots. "Meaning-shaped attention vampires," he called them. Not...
Mar 13, 2026Everybody Adopted Moneyball. The Edge Lasted Five Years.
THE NUMBER: 10x the individual productivity improvement AI is delivering right now, according to Hebbia CEO George Sivulka. The firm-level productivity improvement? Zero. Same thing happened with electric motors in the 1890s. The gap lasted 30 years. John Henry, the owner of the Boston Red Sox, told Billy Beane something in 2002 that every CEO in America should hear this week: "Anybody who's not tearing their team down right now and rebuilding it using your model, they're dinosaurs. They'll be sitting on their ass on the sofa in October, watching the Boston Red Sox win the World Series." He was...
Mar 11, 2026Karpathy Says Stop Coding. A Fastenal Vending Machine Explains Why He’s Right.
THE NUMBER: $230 billion — the current market cap of Cisco, the company that didn't build websites but built the routing layer that made websites possible. The agent era needs the same thing. Nobody's building it yet. Andrej Karpathy — the man who taught a generation of engineers to build neural networks from scratch through Stanford lectures and YouTube tutorials — just told everyone to stop writing code. Manage the agents that write it for you, he said. The guy who wrote the playbook just rewrote it. This isn't a theoretical shift. The data is proving it right now. Tomasz...
Mar 10, 2026Who Checks the Checker? The correction loop is the most valuable thing in AI right now. Nobody is capturing it.
THE NUMBER: 30x — the productivity multiplier between Boris Cherny, creator of Claude Code, shipping 20-30 PRs per day with five parallel AI instances, and a traditional engineer shipping 3 PRs per week. That's not a rounding error. That's a different species of worker. Three things converged this week that tell a single story — and it's the most important story in AI right now. Karpathy open-sourced autoresearch, a 630-line tool that lets AI agents run 100 ML experiments overnight while you sleep. Shopify's CEO adapted it and got a 19% improvement on first pass. Anthropic shipped Code Review —...
Mar 9, 2026The Plumber Figured Out AI Before the Enterprise Did
A plumber in a Facebook group asked if anyone was using AI voice recorders on job sites. He walks around dictating notes and material lists into a $169 pin on his shirt. AI transcribes everything, organizes it, and sends it to his team before he's back in the truck. Every single comment on the thread was another plumber already doing it. That's not a Silicon Valley story. That's a $130 billion industry where 98% of the workforce is male, most never went to college, and the AI adoption curve just went vertical — without a single keynote or product launch....
Mar 8, 2026: The AI Agents Are Already Here
They're unmasking your employees, running your sales floor, and making decisions nobody audited. The governance gap isn't coming. It arrived. You have AI agents operating in your organization right now. Some of them you know about. Some you don't. A few have login credentials. One or two are sending emails to your customers on your behalf, at this moment, without a human reading them first. Meanwhile, researchers at ETH Zurich and Anthropic just published a paper showing that AI agents can unmask pseudonymous social media accounts for $1 to $4 per person, at 67% accuracy with 90% precision. The whole...
Mar 6, 2026: Software Has Opinions Now
NVIDIA stopped writing checks, Apple spent 98% less than everyone else, and GPT-5.4 redesigned a system nobody asked it to touch. NVIDIA just told OpenAI and Anthropic they’re on their own. Jensen Huang announced this week that his company is done making direct investments in AI labs, citing approaching IPOs. Read between the lines: NVIDIA carried the frontier model race on its balance sheet through circular financing (invest cash, labs buy NVIDIA chips), and now the market is mature enough to self-fund. But the bigger signal is where NVIDIA’s attention is shifting. While two labs fight over who owns general-purpose...
Mar 4, 2026: AI Stopped Being Theoretical This Week — and It Hit Your Workforce, Your Knowledge Base, and the Companies You Trust All at Once.
TLDR Anthropic CEO Dario Amodei told an audience this week that AI will eliminate half of all entry-level white-collar jobs. That's not a pundit guessing. That's the CEO of the company whose chatbot just hit #1 on the U.S. App Store, whose revenue just crossed $20B ARR, and whose product is currently replacing junior knowledge workers in real time. He's not predicting the future. He's describing his sales pipeline. Meanwhile, Microsoft (NASDAQ: MSFT) is planning a new 365 tier that charges for AI agents as if they were human employees. Read that again. When you price a machine as a...
Mar 3, 2026: The AI Race Is a Physics Problem
The treadmill just doubled in speed. Most CEOs are still calibrated to walk. Apple (NASDAQ: AAPL) launched the M5 Pro and M5 Max today with a stat that should stop every AI investor mid-scroll: 4x faster LLM prompt processing than last year's chips. That's not a spec bump. That's Apple telling the cloud inference industry it plans to make their margin structure irrelevant. Buy the MacBook, run the model, pay zero tokens forever. The 14-inch M5 Pro starts at $2,199 with neural accelerators baked into the GPU cores and unified memory that eliminates the CPU-GPU bottleneck killing every other local...
Mar 2, 2026: The system card OpenAI hoped you wouldn’t read
THE NUMBER: 9 — days until the FTC defines "reasonable care" for AI. OpenAI shipped a model it rated a cybersecurity risk on Friday. TL;DR OpenAI released GPT-5.3-Codex last week with a "high" cybersecurity risk rating in its own system card — the first OpenAI model to ship with documented evidence of potential real-world cyber harm. Deployment proceeded. The FTC drops AI policy guidance March 11. Whatever "reasonable care" means in that document, every enterprise running GPT-5.3-Codex in production will need to reconcile it with the system card their vendor already published. Anthropic, fresh off being blacklisted by the Pentagon, bid...
Mar 2, 2026: AI Never Once Backed Down. That Should Terrify Everyone Building With It.
THE NUMBER: 0%. The surrender rate of frontier AI models across 300+ turns in military wargame simulations. They nuked the world 95% of the time. They never once backed down. Last week Anthropic told the Pentagon no. OpenAI said the same things publicly and took the contract privately. Elon Musk's xAI signed without conditions. The government got its AI. It just had to make two phone calls. Over the weekend, 300+ employees at Google (NASDAQ: GOOGL) and OpenAI signed an open letter backing Anthropic's position, which tells you something important: the people building these systems know what they do under pressure, and they're scared enough to publicly side with a...
Feb 27, 2026: Jack Dorsey Just Fired Half His Company. Your CEO Is Watching.
THE NUMBER: 4,000 (and 23%). That's how many people Block cut yesterday, and what the stock did after hours. The market didn't flinch. It cheered. Jack Dorsey cut 4,000 employees yesterday (40% of Block (NYSE: XYZ)), told the market it was because AI tools made them unnecessary, and watched the stock rip 23% after hours. Developer velocity up 40% since September. Full-year guidance raised to $3.66 adjusted EPS versus $3.22 consensus. His message to other CEOs was barely coded: "Within a year, most companies will arrive at the same place. I'd rather get there honestly and on our own terms than be forced...
Feb 25, 2026: Burry Was Right About the Chips. He Didn’t Know About the Software.
THE NUMBER: 10x (and 0). That's the efficiency gain of NVIDIA's next-gen Vera Rubin chip over current hardware, and the book value of every GPU it replaces. Last night NVIDIA (NASDAQ: NVDA) reported Q4 earnings: $68.1 billion in revenue, up 73% year over year, $62.3 billion from data centers alone, and guided Q1 to $78 billion (Street expected $73 billion). Jensen Huang declared "the agentic AI inflection point has arrived" and coined a new line: "Compute equals revenues." Every newsletter tomorrow morning will lead with the beat. They'll miss the real story. Vera Rubin samples shipped to customers this week. The next-gen rack delivers 5x...
Feb 24, 2026: OpenAI Deleted ‘Safely.’ NVIDIA Reports. Karpathy Is Still Learning.
THE NUMBER: 6 — times OpenAI changed its mission in 9 years. The most recent edit deleted one word: safely. TL;DR Andrej Karpathy — the engineer who wrote the curriculum that trained a generation of developers, ran AI at Tesla, and helped found OpenAI — posted in December that he's never felt so behind as a programmer. Fourteen million people saw it. Tonight, NVIDIA reports Q4 fiscal 2026 earnings after market close: analysts expect $65.7 billion in revenue, up 67% year over year. The numbers will almost certainly land. What matters is what Jensen Huang says about the next two quarters to...
Feb 23, 2026: Altman lied about a handshake on camera. CrowdStrike fell 8%. Google just killed the $3,000 photo shoot.
Sam Altman told reporters he was "confused" when Narendra Modi grabbed his hand at the India AI Impact Summit. He said he "wasn't sure what was happening." The video, which has been watched by tens of millions of people, shows Altman looking directly at Dario Amodei before raising his fist. He knew exactly what was happening. He chose not to do it, and then he lied about it. On camera. In multiple interviews. With the footage playing on every screen behind him. That would be a minor character note in any other industry. In this one, it isn't. Because on...