A deal with Marvell could mark a big expansion for Google’s custom AI chip efforts
In sum – what we know:
- A strategic diversification – Google is looking to add Marvell as another major custom chip partner to design specialized hardware that complements its current Tensor Processing Units.
- The inference priority – New chip designs focus on inference and memory processing to reduce the massive operational costs of serving AI models across Google’s consumer and enterprise products.
- Marvell’s market momentum – Following record data center revenues and a recent $2 billion investment from Nvidia, Marvell is cementing its role as a critical designer for the world’s largest cloud providers.
Google is reportedly in active talks with Marvell Technology about building two new custom AI chips: a memory processing unit designed to complement Google's existing Tensor Processing Units, and a dedicated inference TPU. Marvell's role would be design services, similar to what MediaTek handles for Google's Ironwood TPU today.
It's worth noting that nothing is official yet, and even if a deal is made and announced, it may be some time before any co-developed chips enter production. If a deal does happen, Marvell would become Google's third custom chip design partner, alongside Broadcom and MediaTek, with TSMC handling fabrication. In other words, rather than cutting Google's dependence on any one company, the move would more likely expand its supplier base than replace existing partners. Broadcom, for its part, just locked in an extended agreement with Google covering TPU and networking development through 2031, further underlining that Google is diversifying rather than swapping partners out.
Shifting priorities to inference infrastructure
Google's custom silicon ambitions are increasingly aimed at inference. Google now runs AI models across many parts of its business, powering Search, Gemini, and a rapidly growing portfolio of consumer and enterprise products. As a result, inference is becoming the dominant infrastructure cost by a wide margin. Training a frontier model is expensive, sure, but serving that model continuously to hundreds of millions of users is where the real spend lives.
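To make that intuition concrete, here is a minimal back-of-the-envelope sketch in Python. Every figure in it (training cost, user count, queries per user, cost per query) is a purely illustrative assumption, not a number from any report; the point is only that a one-time training cost gets dwarfed once a small per-query cost is multiplied across hundreds of millions of daily users.

```python
# Back-of-the-envelope comparison of one-time training cost vs. ongoing
# inference cost. Every figure below is an illustrative assumption.

TRAINING_COST = 100_000_000        # assumed one-time frontier-model training cost, USD
DAILY_USERS = 200_000_000          # assumed daily active users being served
QUERIES_PER_USER_PER_DAY = 10      # assumed average queries per user per day
COST_PER_QUERY = 0.001             # assumed serving cost per query, USD

daily_inference_cost = DAILY_USERS * QUERIES_PER_USER_PER_DAY * COST_PER_QUERY
annual_inference_cost = daily_inference_cost * 365

print(f"Daily inference cost:  ${daily_inference_cost:,.0f}")
print(f"Annual inference cost: ${annual_inference_cost:,.0f}")
print(f"Days until inference spend exceeds training cost: "
      f"{TRAINING_COST / daily_inference_cost:.0f}")
```

Under these made-up assumptions, serving costs pass the entire training bill in about 50 days, which is why squeezing cost out of inference hardware matters so much more than training hardware at this scale.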
Google plans to manufacture millions of Ironwood TPU units in 2026. Marvell-designed chips would supplement that output rather than replace it, potentially targeting different workload profiles or hitting cost points that make better economic sense for specific use cases.
The custom ASIC market is maturing fast around inference workloads, with projections pointing to 45% growth in 2026 and a market worth an estimated $118 billion by 2033. Every major hyperscaler is pouring resources into custom silicon to reduce Nvidia dependence and get better cost control, and inference optimization sits right at the center of that push.
Marvell’s growing data center presence
Google and Marvell actually have a fair bit of shared history. Marvell was reportedly working on a project code-named "Maple," said to be related to Google's Axion Arm CPU efforts. And back in 2022, the two collaborated on an internal project code-named "Granite Redux" that aimed to use Marvell in place of Broadcom for certain workloads. That earlier effort didn't spark any public drama: Google described Broadcom at the time as "an excellent partner" and said it was "productively engaged with Broadcom and multiple other suppliers for the long term."
Marvell's data center business, meanwhile, has been on a tear. The company posted a record $6.1 billion in data center revenue for the fiscal year ending February 2026, part of $8.2 billion in total revenue, a 42% year-over-year jump. It now counts 18 cloud-provider design wins, with hardware partnerships spanning Amazon (Trainium processors), Microsoft (the Maia AI accelerator), and Meta (a new data processing unit). Its custom silicon business alone is running at a $1.5 billion annual rate.
Two recent moves have raised Marvell’s profile even further. In December 2025, the company acquired Celestial AI for up to $5.5 billion, picking up photonic interconnect technology that could prove critical for the connectivity demands of large-scale AI clusters. Then in late March 2026, Nvidia made a $2 billion investment in Marvell — positioning it at the intersection of the GPU and custom ASIC ecosystems and opening the door for integration with Nvidia’s NVLink Fusion interconnect fabric.
Landing a Google inference TPU contract would be another big win for Marvell and would arguably cement its position as one of the most important custom AI chip designers globally. But the deal isn't done, and the competitive landscape remains complex: Broadcom is still deeply embedded with Google, and MediaTek has demonstrated clear value on cost-optimized variants.