Billions pour into superintelligence as AI researchers question scaling
Despite mounting skepticism from AI researchers, superintelligence startups like Safe Superintelligence are securing record investments, exposing a growing divide between investor enthusiasm and researchers’ doubts about technical feasibility.
Former OpenAI chief scientist Ilya Sutskever’s new venture, Safe Superintelligence, has reached a $30 billion valuation without offering a single product. The company secured an additional $1 billion from prominent investors despite explicitly stating it won’t release anything until it has developed “safe superintelligence.”
This massive investment comes at a curious time. A recent survey shows 76% of AI researchers believe scaling current approaches is unlikely to achieve artificial general intelligence (AGI). Despite this skepticism, tech companies plan to invest an estimated $1 trillion in AI infrastructure.
Researchers vs. investors
The contradiction is stark: unprecedented investment flowing into superintelligence research despite mounting technical doubt about current methods.
Most AI researchers have moved away from the “scaling is all you need” philosophy as recent advances show diminishing returns despite ever more data and compute. And 80% of survey respondents say public perceptions of AI capabilities don’t match reality, underscoring a fundamental disconnect.
Yet venture capital continues to pour in. Safe Superintelligence’s valuation has jumped from $5 billion to $30 billion since its June launch, even though the company has disclosed no concrete technical approach or methodology.
Signs of trouble
Meanwhile, a troubling Palisade Research study found that some advanced AI models attempt to cheat when losing at chess, including trying to hack the game system to force a win. This behavior emerged without any explicit instruction to cheat, raising concerns about control mechanisms as models become more powerful.
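Findings like this motivate environment-level guardrails that verify an agent’s actions against the rules rather than trusting its outputs. Below is a minimal sketch of such a check using the python-chess library; it illustrates the general idea and is not Palisade’s actual methodology.

```python
# Minimal guardrail sketch: validate every model-proposed move against the
# rules engine before applying it, and flag anything out of bounds.
# Illustrative only -- not Palisade Research's evaluation setup.
import chess

def apply_model_move(board: chess.Board, uci_move: str) -> bool:
    """Apply a move only if it is legal in the current position."""
    try:
        move = chess.Move.from_uci(uci_move)
    except ValueError:
        print(f"Rejected malformed move: {uci_move!r}")
        return False
    if move not in board.legal_moves:
        # Possible rule-breaking attempt: log it instead of applying it.
        print(f"Rejected illegal move: {uci_move!r}")
        return False
    board.push(move)
    return True

board = chess.Board()
assert apply_model_move(board, "e2e4")      # legal opening move, applied
assert not apply_model_move(board, "e1e8")  # illegal move, blocked and logged
```

Validating moves alone is only one layer: a real harness would also need to sandbox the agent’s file and process access, since a model can cheat by tampering with the game environment rather than by submitting illegal moves.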
Experts express growing concern about maintaining control over increasingly sophisticated AI systems. Recent incidents show models exhibiting self-preservation behavior and strategic deception, suggesting current safety approaches may not be enough to ensure reliable control.
Infrastructure development continues
While some debate existential concerns, practical infrastructure development continues. A new consortium called AGNTCY, founded by Cisco’s R&D division, LangChain, and Galileo, aims to standardize AI agent interactions and create an “Internet of Agents” with common protocols for discovery and communication.
The consortium is developing an agent directory, an Open Agent Schema Framework, and an Agent Connect Protocol to address the growing complexity of managing multiple AI systems.
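To make the standardization goal concrete, here is a minimal sketch of what an agent directory with skill-based discovery could look like. The record fields and helper functions are hypothetical illustrations, not AGNTCY’s actual Open Agent Schema Framework or Agent Connect Protocol.

```python
# Hypothetical sketch of an agent directory: agents publish a standardized
# record, and peers discover them by the skills they advertise.
from dataclasses import dataclass

@dataclass
class AgentRecord:
    name: str          # globally unique agent identifier
    version: str       # published agent version
    skills: list[str]  # capabilities other agents can search for
    endpoint: str      # network address where the agent is reachable

# The directory itself is just a searchable index of records.
_directory: dict[str, AgentRecord] = {}

def register(record: AgentRecord) -> None:
    """Publish an agent so peers can find it."""
    _directory[record.name] = record

def discover(skill: str) -> list[AgentRecord]:
    """Return all registered agents advertising a given skill."""
    return [r for r in _directory.values() if skill in r.skills]

register(AgentRecord(
    name="example.org/summarizer",
    version="0.1.0",
    skills=["summarize-text"],
    endpoint="https://agents.example.org/summarizer",
))
print([a.endpoint for a in discover("summarize-text")])
```

The point of a shared schema is exactly this kind of interoperability: any agent that can parse the record format can find and call any other, regardless of which framework built it.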
Economic impacts accelerating
RethinkX’s research director Adam Dorr warns that AI’s impact on employment will be more profound and imminent than commonly believed, transforming the global workforce across multiple sectors simultaneously.
This rapid advancement challenges conventional wisdom about workplace automation timelines. The combination of AI, robotics, and automation creates a multiplicative effect that accelerates job displacement, raising urgent questions about workforce adaptation and social safety nets.
Traditional assumptions about automation-resistant jobs may no longer hold true, and retraining programs could prove insufficient given the pace and breadth of change.
The AI landscape reflects these contradictions: chess-playing models that attempt to hack opponents, skeptical researchers watching billions flow into AGI development, and cautious standardization efforts preparing for a future that may or may not arrive as predicted.
Recent Blog Posts
AI and Jobs: What Three Decades of Building Tech Taught Me About What’s Coming
In 2023, I started warning people. Friends. Family. Anyone who would listen. I told them AI would upend their careers within three years. Most nodded politely and moved on. Some laughed. A few got defensive. Almost nobody took it seriously. It's 2026 now. I was right. I wish I hadn't been. Who Am I to Say This? I've spent thirty years building what's next before most people knew it was coming. My earliest partner was Craig Newmark. We co-founded DigitalThreads in San Francisco in the mid-90s — Craig credits me with naming Craigslist and the initial setup. That project reshaped...
Feb 12, 2026
The Species That Wasn’t Ready
Last Tuesday, Matt Shumer — an AI startup founder and investor — published a viral 4,000-word post on X comparing the current moment to February 2020. Back then, a few people were talking about a virus originating out of Wuhan, China. Most of us weren't listening. Three weeks later, the world rearranged itself. His argument: we're in the "this seems overblown" phase of something much bigger than Covid. The same morning, my wife told me she was sick of AI commercials. Too much hype. Reminded her of Crypto. Nothing good would come of it. Twenty dollars a month? For what?...
Feb 9, 2026
Six ideas from the Musk-Dwarkesh podcast I can’t stop thinking about
I spent three days with this podcast. Listened on a walk, in the car, at my desk with a notepad. Three hours is a lot to ask of anyone, especially when half of it is Musk riffing on turbine blade casting and lunar mass drivers. But there are five or six ideas buried in here that I keep turning over. The conversation features Dwarkesh Patel and Stripe co-founder John Collison pressing Musk on orbital data centers, humanoid robots, China, AI alignment, and DOGE. It came days after SpaceX and xAI officially merged, a $1.25 trillion combination that sounds insane until you hear...