Nvidia and OpenAI have unveiled what they call the largest AI infrastructure project in history, a partnership to deploy 10 gigawatts of data centres powered by millions of Nvidia’s advanced chips.
The deal could see Nvidia invest up to $100bn (£73.99bn) in Sam Altman’s company, even as OpenAI spends billions of dollars buying those very same processors.
The structure has made headlines: Nvidia, the world’s most valuable company with a market capitalisation of nearly $4.5tn, will progressively inject capital into OpenAI as each gigawatt of computing power comes online, starting with the first phase in 2026 on its new Vera Rubin platform.
In return, OpenAI will lock itself further into Nvidia’s orbit, relying on its GPUs to train and run the next generation of artificial intelligence models.
“Everything starts with compute”, Altman said. Nvidia boss Jensen Huang described the partnership as “the next leap forward” after ChatGPT, cementing years of close collaboration between the two tech behemoths.
‘Circular dynamics’ warnings
Markets reacted positively to the news, which sent Nvidia shares nearly four per cent higher on Monday.
Analysts at Hargreaves Lansdown estimated the project could generate $500bn in Nvidia sales if fully realised. “This move cements Nvidia’s position as the undisputed king of AI”, said Matt Britzman, senior equity analyst.
But others were more cautious. Stacy Rasgon from Bernstein warned of “circular dynamics” in which Nvidia funds a customer that then spends the money buying its own products.
“The size of the deal will clearly start to raise some questions”, he told CNBC.
That “infinite money loop” has already been noted by investors, as OpenAI juggles its vast infrastructure commitments with a business model that has yet to turn a profit.
Revenues are estimated at around $13bn a year, but the company has pledged 49 per cent of its profits to Microsoft in exchange for $13bn of backing, while separately agreeing a $300bn deal with Oracle to build 4.5 gigawatts of data centre capacity under the Stargate project.
The scale of the challenge
Beyond the financial engineering hurdles lies an equally daunting physical challenge.
Ten gigawatts of new data centre capacity is equivalent to the power output of ten nuclear plants, or nearly half of all utility-scale electricity generation added in the US during the first half of this year.
Even with looser permitting rules, building such capacity could take years, straining energy grids already under pressure from electrification and the integration of renewables.
Altman himself admitted the “unprecedented infrastructure challenge” would involve not just securing enough chips, but also building the power supply to keep them running.
Meanwhile, the demand side of the equation remains unproven. OpenAI boasts 700 million weekly users, making ChatGPT the most popular AI product globally, but enterprise adoption has been patchy.
Reports have suggested that many corporate pilots fail to deliver measurable returns, while GPT-5 has drawn a muted response compared with the early hype.
The bigger picture
For Nvidia, the benefits are immediate. Each gigawatt deployed equates to tens of billions of dollars in chip sales, reinforcing its grip on the AI supply chain at a time when hyperscalers and startups are experimenting with custom silicon.
Locking in OpenAI as a “preferred strategic compute partner” will help stave off rivals and ensure its GPUs remain the industry standard.
For OpenAI, on the other hand, the arrangement buys time, as well as silicon.
With Microsoft, Oracle, SoftBank and others in the mix, the company is at the centre of a huge interdependent web of Big Tech and Wall Street money.
But with costs spiralling and constraints ahead, the question is whether this boom can sustain itself long enough to deliver the promised returns.