Nvidia CEO: Reasoning AI needs 100x more compute, contradicting market fears

Nvidia CEO Jensen Huang’s recent clarification about DeepSeek’s new reasoning AI model reveals a significant shift in understanding AI computing requirements. Contrary to initial market reactions that caused a massive tech stock selloff, Huang explains that advanced reasoning models actually demand substantially more computational power than previously estimated—reinforcing Nvidia’s position in the high-performance computing market rather than undermining it. This revelation has important implications for the future of AI infrastructure investment and validates Nvidia’s strategic focus on building more powerful computing systems.

The big picture: DeepSeek’s R1 model represents a fundamental advancement in AI as “the first open-sourced reasoning model,” according to Huang, but will require 100 times more computing power than non-reasoning AI—contradicting initial market assumptions.

Market misinterpretation: In January 2025, DeepSeek’s model triggered a massive AI stock selloff that wiped roughly $600 billion from Nvidia’s market value in a single trading session.

  • Investors initially feared the Chinese startup’s model could match competitor performance while using significantly less energy and computational resources.
  • Huang clarified that the opposite is true, suggesting reasoning AI will drive even greater demand for high-performance computing infrastructure.

What they’re saying: “This reasoning AI consumes 100 times more compute than a non-reasoning AI,” Huang told CNBC’s Jim Cramer, calling the model “fantastic” while emphasizing it reached “the exact opposite conclusion that everybody had.”

Behind the numbers: Reasoning models like DeepSeek’s R1 are computationally intensive because they break problems down step-by-step, generate multiple candidate answers, and verify their own accuracy (a rough illustration of the arithmetic follows below).

  • These capabilities require substantially more processing power than previous AI architectures focused primarily on pattern matching or text generation.
  • The increased computational demands align with Nvidia’s core business of producing high-performance AI chips and infrastructure.
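To make the scaling intuition concrete, here is a minimal back-of-envelope sketch in Python. The token counts, candidate counts, and function names are purely illustrative assumptions, not figures from Nvidia, DeepSeek, or R1’s actual inference pipeline; the point is only that sampling several long reasoning chains and verifying them multiplies the tokens generated per query compared with a single direct answer.

```python
# Back-of-envelope sketch: why reasoning-style inference multiplies compute.
# All numbers and names below are illustrative assumptions, not measured
# figures from DeepSeek's R1 or Nvidia.

DIRECT_ANSWER_TOKENS = 50        # a non-reasoning model emits a short reply
REASONING_CHAIN_TOKENS = 1_000   # a step-by-step chain of thought runs much longer
NUM_CANDIDATE_CHAINS = 4         # sample several chains and compare their answers
VERIFICATION_TOKENS = 250        # extra pass to check each candidate's answer


def tokens_for_direct_answer() -> int:
    """A conventional model generates one short response per query."""
    return DIRECT_ANSWER_TOKENS


def tokens_for_reasoning_answer() -> int:
    """A reasoning model samples several long chains, then verifies each one."""
    per_candidate = REASONING_CHAIN_TOKENS + VERIFICATION_TOKENS
    return NUM_CANDIDATE_CHAINS * per_candidate


if __name__ == "__main__":
    direct = tokens_for_direct_answer()
    reasoning = tokens_for_reasoning_answer()
    print(f"direct answer:    {direct} tokens")
    print(f"reasoning answer: {reasoning} tokens")
    print(f"ratio:            {reasoning / direct:.0f}x more generated tokens")
```

With these illustrative numbers the ratio works out to 100x, matching Huang’s figure only by construction; the real multiplier depends on chain length, how many candidates are sampled, and model size.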

Looking ahead: Huang predicts global computing capital expenditures will reach one trillion dollars by decade’s end, with the majority dedicated to AI infrastructure.

  • “Our opportunity as a percentage of a trillion dollars by the end of this decade is quite large,” Huang said, adding, “We’ve got a lot of infrastructure to build.”
  • Nvidia is expanding its AI offerings beyond chips, announcing new infrastructure for robotics and enterprise applications with partners including Dell, HPE, Accenture, ServiceNow and CrowdStrike.
