OpenAI and Microsoft reportedly planning $100 billion datacenter project for an AI supercomputer

The OpenAI logo displayed on a smartphone, with the Microsoft logo visible on a screen in the background, in a photo illustration taken in Brussels, Belgium (Image credit: Getty Images)

Microsoft and OpenAI are reportedly working on a massive datacenter to house an AI-focused supercomputer featuring millions of GPUs. The Information reports that the project could cost "in excess of $115 billion" and that the supercomputer, currently dubbed "Stargate" inside OpenAI, would be U.S.-based. 

The report says that Microsoft would foot the bill for the datacenter, which could be "100 times more costly" than some of the biggest operating centers today. Stargate would be the largest in a string of datacenter projects the two companies plan to build over the next six years, and executives hope to have it running by 2028.

OpenAI and Microsoft are building these supercomputers in phases, the report says, with Stargate planned as a phase 5 system. A less expensive phase 4 system could launch as soon as 2026, The Information's sources say, possibly in Mt. Pleasant, Wisconsin. Stargate could need so much power ("at least several gigawatts") that Microsoft and OpenAI are considering alternative energy sources, such as nuclear.

Sources suggested a datacenter of this scale would be challenging, partially because the existing designs require "putting many more GPUs into a single rack than Microsoft is used to, to increase the chips' efficiency and performance." That means also devising novel ways to keep everything cool.

It sounds like the companies may also use this phase of design to reduce their reliance on Nvidia. The report claims that OpenAI wants to avoid using Nvidia's InfiniBand cables in Stargate, even though Microsoft uses them in current projects; OpenAI would reportedly prefer Ethernet cables.

Much is still to be determined: the price and plans could change, and it's unclear when details will be finalized. The Information also states that it has yet to be decided where the computer will be located, and whether it will be built as a single datacenter or as "multiple datacenters in close proximity."

Earlier this year, reports stated that OpenAI CEO Sam Altman had ambitions to build AI chips and was looking to raise as much as $7 trillion to build foundries to produce them. Last year, Microsoft revealed its 128-core Arm datacenter CPU and Maia 100 GPUs specifically for AI projects. There have also been reports of Microsoft developing its own networking gear for AI datacenters. As AI has taken off, Nvidia's GPUs have been in high demand, so it makes sense that companies like Microsoft and OpenAI would want other options.

"We are always planning for the next generation of infrastructure innovations needed to continue pushing the frontier of AI capability," Microsoft chief communications officer Frank Shaw told The Information, though he apparently did not comment directly on the supercomputing plans.

Microsoft has poured billions of dollars into its partnership with OpenAI, largely in the form of computing power to run its models. If Stargate or something like it comes to pass, the partnership will only grow deeper as the investments get larger — and more complicated.

Andrew E. Freedman is a senior editor at Tom's Hardware focusing on laptops, desktops and gaming. He also keeps up with the latest news. A lover of all things gaming and tech, his previous work has shown up in Tom's Guide, Laptop Mag, Kotaku, PCMag and Complex, among others. Follow him on Threads @FreedmanAE and Mastodon @FreedmanAE.mastodon.social.

  • Evildead_666
    Maybe the next one after Stargate will be colloquially called SkyNet.
    Or maybe that's Stargate's codename...
    Reply
  • DougMcC
    Author please rewrite this sentence: "The system could require several Stargate could need so much power ("at least several gigawatts") that Microsoft and OpenAI are considering alternative sources of power, such as nuclear."
    Reply
  • Evildead_666
    Yes, it should be written Nukiller :) lol

    WarGames is back in town
    Reply
  • vanadiel007
    All you need is Yoda, who has more wisdom than AI has GPU's.
    Reply
  • Diogene7
    Although I understand the push to bigger and bigger infrastructure for AI, I really don’t understand why Microsoft, Google, Amazon,… persist in using energy inefficient silicon transistors.

    At this stage, it seems obvious that there is a need to scale up spintronics MRAM manufacturing (Avalanche Technology, Everspin,…) because it is a key step in enabling spintronics technology, a beyond-CMOS technology that could enable much more energy-efficient digital logic computing, while having the great advantage of also being more amenable to AI.
    Beyond-CMOS technologies like spintronics and Non-Volatile-Memory MRAM (SOT-MRAM, VCMA MRAM,…) would help enable things like Intel's MESO concept, and I guess current AI infrastructure could actually be used to fast-track MRAM R&D (to find the right combination of materials,…).
    Reply
  • Findecanor
    For some reason, the comments to this article are just as insane as the topic of the article .... :-þ
    Reply
  • brandonjclark
    Nothing says sustainability like a $100 billion dollar datacenter.

    Gotta love it..
    Reply
  • scottsoapbox
    Diogene7 said:
    Although I understand the push to bigger and bigger infrastructure for AI, I really don’t understand why Microsoft, Google, Amazon,… persist in using energy inefficient silicon transistors.

    At this stage, it seems obvious that there is a need to scale up spintronics MRAM manufacturing (Avalanche Technology, Everspin,…) because it is a key step in enabling spintronics technology, a beyond-CMOS technology that could enable much more energy-efficient digital logic computing, while having the great advantage of also being more amenable to AI.
    Beyond-CMOS technologies like spintronics and Non-Volatile-Memory MRAM (SOT-MRAM, VCMA MRAM,…) would help enable things like Intel's MESO concept, and I guess current AI infrastructure could actually be used to fast-track MRAM R&D (to find the right combination of materials,…).
    Because they want to build this TODAY.
    Reply
  • watzupken
    Stargate? It sounds like it requires as much power as those "Stargate" that we see in movies. I wonder how much power we use to power all these power hungry hardware globally. I seriously wonder if we can ever win the climate change "war" or just some wishful thinking.
    Reply
  • Diogene7
    scottsoapbox said:
    Because they want to build this TODAY.

    Sure I understand that, but they should actually ALSO at the same time, significantly increase the budget allocated to scale-up manufacturing of emerging spintronics related technology (like Non-Volatile-Memory MRAM which is likely the low hanging fruit as a 1st step) because it is a key enabler for much lower power digital logic and AI computing.

    It is as if all those companies persisted in using vacuum tubes when silicon transistors were the emerging next-generation lower-power computing technology: sure, you could build a supercomputer with vacuum tubes, but it would be very, very energy inefficient.

    The CHIPS Act should actually allocate much, much more funding to scale-up High Volume Manufacturing (HVM) of beyond CMOS spintronics related technologies like MRAM, or Intel MESO concept as it would provide the US a unique opportunity to regain leadership by investing early in the next generation computing technology. It is what China did with batteries, EVs, solar panels,… which they are now the leaders…
    Reply