Some say the circular nature of the deals is ‘bull,’ while others say they’re the best way to keep pace with the breakneck speed of AI infrastructure’s development.
There’s daily debate now about the interconnectedness of the Nvidia and OpenAI deals, but is this the best way to keep pace with AI infrastructure innovation? Should we care if Nvidia invests in OpenAI, which buys cloud computing from Oracle, whose chips come from Nvidia, which has a stake in CoreWeave, which supplies AI infrastructure to OpenAI?
Not long after our who’s-who roundup of recent AI infrastructure deals, an illustrative graphic and video by NBC News and the Financial Times crystallized the web of interrelationships that has spun out from the $100 billion Nvidia-OpenAI partnership. The biggest deals so far, enumerated in the list below, have raised each company’s valuation to historic levels and stirred controversy about whether we are headed toward a “bubble” or simply setting the stage for returns that will exceed heavy up-front investments. Goldman Sachs, for example, estimates AI could unlock $8 trillion in new revenue in the U.S. alone. Bain & Company, on the other hand, projects a shortfall as big as $800 billion if $2 trillion of annual revenue isn’t earned by 2030.
According to Acadian Asset Management’s “straight talk” blog, the interconnectedness of a small number of companies relying on each other for capital and compute isn’t necessarily dubious, but rather a way for the most innovative to achieve the breakneck growth they say is necessary to meet AI infrastructure demand. That’s the rub, isn’t it? Is the demand for AI compute overstated by the companies that stand to benefit most? Or are reasoning and inference models, along with widespread AI adoption, evolving fast enough to justify the buildout plans, especially if AI can accelerate scientific discovery, transform industries and augment human capabilities, possibly compressing decades of progress into a few years?
AI leaders at companies like CoreWeave contend the deals are not “circular,” but rather a necessity for meeting the capital, compute and fundamental architectural needs of AI innovation. Under the theory of accelerating returns, AI could trigger more innovation in the next 10 years than we’ve seen in the last 100, a shift possibly similar to the Industrial Revolution and the consolidation of wealth that took place around steel, oil, and railroads.
What’s different in the AI Revolution is the normalization of companies investing in the suppliers from which they buy or to which they sell. Is the reciprocal-financing risk offset by the fact that key stakeholders have more capital behind them than most steel or railroad magnates had? Though John D. Rockefeller controlled a greater share of the U.S. economy than any billionaire today (around 2.3% of GDP), the risks today’s biggest AI innovators take are largely their own, and those of their shareholders and investors.
Just this week, during IMF and World Bank meetings in Washington, the International Monetary Fund’s chief economist, Pierre-Olivier Gourinchas, said a bubble or a bust “won’t be a systemic crisis” like in the dot-com era, which was financed by debt and tied directly to the broader financial and banking system. Because very cash-rich companies, along with shareholders and equity holders, are the ones taking the risk, they are the ones who may lose, or win.
The areas of AI infrastructure in which the “little guy” is most at risk are “local issues” like water rights, electricity costs, land use, air quality, noise levels, roads, and safety. That’s where each state and municipality determines whether the distribution of jobs and wealth benefits or harms local economies and lives. Those decisions are being made across Virginia, Texas, California, Illinois, Oregon, Arizona, and other places with existing or planned data center projects.
It’s important to keep an eye on growing investor exuberance while also recognizing the need for buildouts that could fuel the next industrial revolution. The situation is complex, and we will continue to report on emerging partnerships and trends in AI infrastructure.
The biggest deals so far
- Nvidia and OpenAI (the latter of which has signed $1 trillion in deals this year), with Nvidia investing up to $100 billion in OpenAI, a large portion of which might be used for leasing Nvidia’s GPUs;
- OpenAI is to pay Oracle $300 billion for computing infrastructure over five years (part of Stargate, the $500 billion data center buildout of which Japan’s SoftBank is also part);
- OpenAI signed a $22 billion deal with CoreWeave for the use of its data centers, which are packed with Nvidia GPUs;
- OpenAI inked a deal with Google, but for an undisclosed amount;
- OpenAI also forged a relationship with Broadcom to develop and deploy racks of OpenAI-designed chips, again for an undisclosed amount;
- Microsoft has also invested approximately $14 billion in OpenAI since 2019;
- OpenAI also struck a recent deal with AMD, agreeing to purchase AMD chips in exchange for an equity stake that could amount to 10% of AMD over time;
- Nvidia became a stakeholder in CoreWeave, amending a previous deal so that it could buy $6.3 billion of unsold cloud computing capacity through 2032; CoreWeave, in turn, gets most of the GPUs it rents out to customers from Nvidia;
- As part of the Stargate project, Oracle agreed to purchase about $40 billion of Nvidia chips to build a data center for OpenAI;
- SoftBank also has a $3 billion stake in Nvidia;
- Meta invested $14 billion in Scale AI, committed $10 billion to Google for the use of Google cloud servers, and committed $14 billion to CoreWeave for its AI cloud infrastructure.