OpenAI and Oracle seek billions in Nvidia chips for Stargate

OpenAI and Oracle’s massive AI infrastructure project is advancing rapidly with their first Stargate venture data center in Texas. The $100 billion partnership aims to deploy 64,000 of Nvidia’s advanced GB200 chips in Abilene by the end of 2026, with the initial 16,000 chips expected to be operational by summer 2025. This deployment represents one of the most significant AI computing infrastructure investments to date and highlights the growing competition for computational resources among tech giants.

The big picture: OpenAI and Oracle are accelerating the deployment of their $100 billion Stargate infrastructure venture with a massive data center in Abilene, Texas that will house tens of thousands of Nvidia’s most advanced AI chips.

  • The facility is being prepared to accommodate 64,000 of Nvidia’s GB200 semiconductors by the end of 2026, according to a person familiar with the plans.
  • The chips will be installed in phases, with the first 16,000 units scheduled to be operational by summer 2025.

Why this matters: This deployment represents one of the largest concentrations of cutting-edge AI computing power in a single location, underscoring the enormous computational resources required for advanced AI development.

  • The scale of investment highlights the strategic importance tech giants are placing on securing AI computing capacity amid increasing competition and chip shortages.
  • Oracle’s partnership with OpenAI potentially positions the cloud provider as a more significant competitor to AWS, Google Cloud, and Microsoft Azure in the AI infrastructure space.

Behind the numbers: The 64,000 GB200 chips would provide OpenAI with unprecedented computing capacity for training and running increasingly sophisticated AI models.

  • Nvidia’s GB200 chips represent the company’s most advanced AI accelerators, designed specifically for large language model training and inference.
  • The phased deployment approach suggests a strategic rollout that balances immediate needs with long-term scaling, allowing for adjustments as technology evolves.

Reading between the lines: The significant investment in physical infrastructure signals OpenAI’s commitment to a hardware-intensive approach to AI advancement despite emerging research on more efficient AI training methods.

  • The location in Abilene, a smaller Texas city, likely offers advantages in energy costs, land availability, and potential tax incentives, all crucial considerations for data centers of this scale.
  • The partnership structure between a leading AI lab and a major cloud provider could become a template for future industry collaborations as AI computing demands continue to grow.
