Anthropic’s $21 billion chip deal with Broadcom


Multi-year Broadcom partnership bypasses cloud intermediaries to lock in massive AI scale

In sum – what we know:

  • The terms of the deal – Anthropic has committed $21 billion to custom chip orders via Broadcom, securing nearly 1 million Google TPU v7p units for delivery by late 2026.
  • Broadcom’s strategy – Broadcom is pivoting to sell fully assembled “Ironwood Racks” directly to AI companies, bypassing its historical role as just a component supplier to Google.
  • Deals everywhere – This order follows a similar partnership with OpenAI, positioning Broadcom’s custom ASICs as a cost-efficient, high-performance alternative to Nvidia’s general-purpose GPUs.

Anthropic has made a major AI infrastructure commitment, locking in $21 billion worth of custom chips through Broadcom. CEO Hock Tan announced the news during Broadcom’s fiscal year 2025 earnings call, revealing that the deal covers nearly 1 million Google TPU units scheduled to arrive by late 2026. The announcement also solved a lingering mystery — Anthropic turns out to be the unnamed customer behind a $10 billion order that surfaced back in Q3 fiscal 2025.

Major AI labs are scrambling to lock down compute capacity, and the deals are stacking up fast. Broadcom inked a multi-year partnership with OpenAI back in October 2025, and is reportedly in conversations with Microsoft. For Anthropic’s part, this Broadcom arrangement slots into a broader multi-cloud infrastructure play that already spans Google TPUs, Amazon Trainium chips, and Nvidia GPUs.

The $21 billion commitment

The order landed in two waves — $10 billion committed during Q3 fiscal 2025, then another $11 billion stacked on top in Q4. What’s notable is how Broadcom plans to deliver. Instead of shipping loose components, it will supply fully assembled “Ironwood Racks”: complete rack-level AI systems that Anthropic can drop straight into its data centers.

The scale here is hard to overstate. Anthropic is looking at bringing over 1 gigawatt of new AI compute capacity online by late 2026. The deal also appears to connect back to Anthropic’s October 2025 cloud partnership with Google, which opened access to up to 1 million TPUs — a commitment that Broadcom’s direct supply channel is now helping fulfill.
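Taken at face value, the article’s headline figures imply some rough per-unit numbers. A quick sketch of that arithmetic (illustrative only — the deal’s exact unit count, pricing, and power breakdown aren’t public, so these are ballpark divisions of the reported totals):

```python
# Back-of-envelope arithmetic from the figures reported in the article.
# Assumptions (illustrative): the full $21B covers the ~1M TPU units,
# and the ~1 GW capacity figure maps onto those same units.
total_commitment_usd = 21e9   # $21 billion order
tpu_units = 1e6               # "nearly 1 million" TPU units
capacity_watts = 1e9          # "over 1 gigawatt" of new compute capacity

cost_per_unit = total_commitment_usd / tpu_units   # implied $ per unit
watts_per_unit = capacity_watts / tpu_units        # implied W per unit

print(f"~${cost_per_unit:,.0f} per unit, ~{watts_per_unit:,.0f} W per unit")
```

On these rough assumptions, the deal works out to roughly $21,000 and about a kilowatt per unit — figures that include the full rack-level systems, not bare chips, which is consistent with Broadcom delivering complete Ironwood Racks.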

Broadcom’s shift

Broadcom and Google go way back. Since 2016, Broadcom has been the behind-the-scenes design partner for Google’s custom silicon, quietly building the chips while Google handled the commercial side. 

The strategic bet here is on application-specific integrated circuits, also known as ASICs. These aren’t like Nvidia’s general-purpose GPUs, which dominate the market through sheer versatility. ASICs get custom-built for specific computational tasks, which means efficiency and cost advantages for the right workloads, but less flexibility if your needs change. For AI companies willing to commit to a particular architecture years out, the math can work in their favor.

Broadcom’s stock has climbed nearly sevenfold over three years, pushing the company’s market cap briefly past $1 trillion. Fiscal 2025 sales topped $63.9 billion, up more than 50% over two years, with AI revenue hitting $20 billion. The AI-related backlog alone now exceeds $73 billion.

That said, Broadcom isn’t operating in a vacuum. Nvidia still dominates AI computing, backed by an ecosystem of software tools and years of developer familiarity that custom ASIC providers can’t easily replicate. Broadcom’s model asks customers to place architectural bets years in advance — a calculated risk that won’t fit every company’s appetite.

Conclusions

The Anthropic deal doesn’t exist in isolation. Broadcom’s AI partnerships keep expanding — the October 2025 OpenAI agreement covers accelerator and networking systems over multiple years, and the company has disclosed landing a fifth “XPU customer” through a $1 billion order for late 2026 delivery. Meta is in the mix too, a relationship that gained another dimension when Hock Tan joined Meta’s board in February 2024. Microsoft, meanwhile, is reportedly exploring collaboration on future chip designs.

For Anthropic, this order is about diversification, not exclusivity. The company keeps workloads running across Google TPUs, Amazon Trainium, and Nvidia GPUs — a multi-cloud approach that builds in flexibility and avoids over-reliance on any single supplier. Whether custom ASIC efficiency gains outweigh the flexibility trade-offs remains an open question, one that won’t have a clear answer until this infrastructure actually comes online.

Networking capability also factors into Broadcom’s competitive positioning. The company’s Tomahawk 6 switch pushes 102 terabits per second, a throughput currently unmatched in the market. With roughly $20 billion of backlog tied to networking and optical components, Broadcom’s value proposition reaches beyond chips to the connective tissue linking them together. As AI clusters grow larger and more distributed, that networking edge may end up mattering just as much as the compute itself.
