New Broadcom agreements secure long-term TPU supply and 3.5 gigawatts of future capacity for Anthropic
In sum – what we know:
- Massive capacity expansion – Anthropic has secured 3.5 gigawatts of next-gen TPU compute capacity starting in 2027, building on its existing multi-cloud strategy.
- Long-term silicon stability – Broadcom will design and supply Google’s custom Tensor Processing Units and networking components through at least 2031.
- Broadcom’s infrastructure leverage – By designing custom silicon and networking for both Google and OpenAI, Broadcom is projected to pull in $42 billion in AI revenue from Anthropic alone in 2027.
The Broadcom-Google-Anthropic love triangle continues. The three companies have announced three major new deals, all centered on Broadcom. Under the first agreement, Broadcom will design and supply Google’s custom Tensor Processing Units through 2031. The second deal locks in networking components and other hardware for Google’s next-gen AI racks over the same period.
Then there’s the third agreement, which arguably overshadows the other two. Anthropic is getting access to roughly 3.5 gigawatts of next-generation TPU-based AI compute capacity starting in 2027.
Anthropic’s compute expansion
The announcement builds on an October 2025 deal between Anthropic and Google for over 1 gigawatt of capacity that’s already spinning up in 2026. Anthropic called this expansion its “most significant compute commitment to date,” emphasizing that most of the new infrastructure will sit on U.S. soil. The company positioned the deal as a continuation of its November 2025 pledge to pour $50 billion into American computing infrastructure.
Anthropic’s run-rate revenue has blown past $30 billion — a dramatic jump from roughly $9 billion at the close of 2025. The enterprise side is driving a lot of that. More than 1,000 business customers are now spending over $1 million annually, double the count the company reported in February 2026. On the funding side, Anthropic recently wrapped a $30 billion Series G at a $380 billion valuation. Claude, its flagship AI model family, has been the engine behind much of this growth.
There’s a catch, though. Broadcom’s own filing flagged that Anthropic’s consumption of the expanded capacity is contingent on the company’s “continued commercial success” and “continued commercial performance.” The fact that Broadcom felt it necessary to spell this out for regulators highlights that committing to 3.5 gigawatts of compute is fundamentally a bet that demand stays enormous for years. The U.S. Defense Department separately labeling Anthropic a supply-chain risk doesn’t exactly simplify the picture either.
Growing competition
None of this replaces Anthropic’s existing cloud relationships. Amazon Web Services is still Anthropic’s primary cloud and training partner through Project Rainier, the Trainium 2-powered supercluster in Indiana. The Google-Broadcom capacity is purely additive. Rather than going all-in on a single provider, Anthropic is deliberately building out a multi-cloud, multi-chip strategy. The company continues running Nvidia hardware alongside TPUs, matching workloads to whichever silicon fits best.
From Google’s perspective, this is part of a much larger play to establish its custom TPUs as a real alternative to Nvidia across the booming AI infrastructure market. Nvidia still dominates AI training and inference hardware by a wide margin. But by developing its own silicon, and now selling access to that silicon to outside customers like Anthropic, Google represents either a genuine competitive threat or, at minimum, a meaningful diversification path for AI companies wary of depending on a single supplier.
Broadcom, meanwhile, is turning into the connective tissue running through multiple frontier AI efforts. CEO Hock Tan has projected north of $100 billion in AI chip revenue for 2027 alone. Mizuho analysts estimate Broadcom will pull in $21 billion in AI revenue from Anthropic in 2026, climbing to $42 billion in 2027. Broadcom has a separate $10 billion custom silicon program with OpenAI, part of a 10-gigawatt co-development effort announced in October 2025.
Is Broadcom the biggest player in AI?
Broadcom may not be training models or building clouds, but it is designing and manufacturing the custom silicon inside Google’s TPUs, supplying the networking fabric that ties AI racks together at scale, and doing comparable work for OpenAI. When two of the three leading frontier labs rely on your hardware to train and serve their most advanced systems, that’s extraordinary leverage — the kind of position that reshapes who actually holds power in the AI stack.
Important caveats apply, of course. Nvidia remains the dominant force in AI chips across revenue, market share, and ecosystem depth. Its CUDA software stack alone creates a moat that custom silicon hasn’t come close to matching for general-purpose workloads. Broadcom’s custom chip business also depends on a small number of massive customers, which introduces concentration risk that a more diversified supplier simply doesn’t face. If Anthropic’s commercial performance stumbles, or Google pivots its chip strategy, those analyst revenue projections could deflate.
Even so, the sheer scale of what’s been committed here is difficult to wave away. Broadcom has quietly built itself into a foundational layer of the AI infrastructure stack, and the new deals make that role more visible and more consequential.