OpenAI and AMD to deploy 6GW of GPUs under multi-year deal

OpenAI will use AMD as a core compute partner for large-scale AI workloads, extending an existing collaboration

In sum – what to know:

6GW of AMD GPUs – The multi-year agreement begins with a 1GW deployment of MI450 GPUs in 2026, expanding to 6GW across future generations.

Equity incentives – OpenAI receives up to 160 million AMD shares, vesting as deployment, stock price, and performance milestones are met.

Multibillion upside – AMD projects “tens of billions” in revenue and strong earnings growth from the partnership’s large-scale AI infrastructure rollout.

AMD and OpenAI have announced a long-term strategic partnership to deploy up to six gigawatts of AMD Instinct GPUs to power OpenAI’s next-generation AI infrastructure, OpenAI said in a release. The collaboration spans multiple hardware generations, beginning with an initial 1GW deployment of AMD Instinct MI450 GPUs scheduled for the second half of 2026.

Under the terms of the agreement, the AI company will use AMD as a core compute partner for large-scale AI workloads, extending an existing collaboration that began with the MI300X and MI350X GPU series. The pair also plans to jointly optimize hardware and software roadmaps to support expanding AI model demands.

Under the deal, AMD also granted OpenAI a warrant for up to 160 million shares of AMD common stock, vesting as OpenAI scales deployments from 1GW to 6GW. The warrant structure ties milestones not only to deployment targets but also to AMD’s share price and OpenAI’s technical and commercial progress.

Sam Altman, co-founder and CEO of OpenAI, described the partnership as “a major step in building the compute capacity needed to realize AI’s full potential,” adding that AMD’s expertise in high-performance chips will enable OpenAI to “accelerate progress and bring the benefits of advanced AI to everyone faster.”

Jean Hu, executive vice president and chief financial officer at AMD, said: “This partnership is expected to deliver tens of billions of dollars in revenue for AMD while accelerating OpenAI’s AI infrastructure buildout.”

OpenAI is getting ready to begin large-scale production of its own artificial intelligence chips next year, in partnership with U.S. semiconductor company Broadcom, according to a recent report in the Financial Times. The report noted that the initiative is aimed at easing the firm’s reliance on Nvidia hardware while meeting surging demand for computing capacity to train and run AI models.

The chips are expected to be used internally rather than sold to external customers, according to the report.

