The deal focuses on Xeon processors and custom IPUs
In sum – what we know:
- A focus on balance – Intel and Google are prioritizing “balanced AI infrastructure,” moving away from a GPU-only focus to include CPUs and IPUs for better system orchestration.
- Custom hardware co-development – The deal extends the 2021 collaboration on custom ASIC-based infrastructure processing units (IPUs) to offload networking and storage tasks from the main CPU.
- Strategic market defense – This multi-year commitment helps Intel maintain its data center footprint amid a growing CPU shortage and new competition from Arm’s self-developed chips.
Intel and Google have announced a new multi-year deal expanding their AI infrastructure partnership, under which the two will collaborate on AI and cloud infrastructure. The deal puts a spotlight on the role CPUs and custom infrastructure processing units play in scaling modern AI systems — the future of AI isn’t just about stacking GPUs. Intel didn’t share any pricing details.
At the heart of this is what both companies are calling a shift toward balanced AI infrastructure. The basic idea is that accelerators alone aren’t enough for AI. Modern AI workloads lean heavily on CPUs for orchestration and data processing, and they depend on purpose-built infrastructure accelerators to handle the networking and storage plumbing running underneath everything. AI at scale requires far more than raw training horsepower.
Xeon processors and infrastructure processing units
Google Cloud will be running Intel’s latest Xeon 6 processors to power its C4 and N4 instances. This isn’t a new relationship — Google has been building on Intel’s Xeon line for decades. But the deal locks the two together across multiple future chip generations, and through the next few years of AI deployment. Within this collaboration, Xeon handles large-scale AI training coordination, latency-sensitive inference, and general-purpose computing. In other words, the connective tissue that keeps massive AI deployments from falling apart.
The partnership goes beyond CPUs, too. It also expands co-development of custom ASIC-based infrastructure processing units, or IPUs. Intel and Google have been working together on chip development since 2021, so there’s history here. IPUs are programmable accelerators built to offload networking, storage, and security tasks from host CPUs.
“AI is reshaping how infrastructure is built and scaled. Scaling AI requires more than accelerators — it requires balanced systems,” said Intel CEO Lip-Bu Tan, in a statement. “CPUs and IPUs are central to delivering the performance, efficiency and flexibility modern AI workloads demand.”
By taking on infrastructure work that would otherwise eat into CPU cycles, IPUs free up effective compute capacity. For a hyperscaler like Google, that translates directly into better utilization, improved efficiency, and more predictable performance across the board. Xeon CPUs paired with IPUs create an integrated platform where general-purpose compute and purpose-built infrastructure acceleration work in concert.
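The capacity argument above can be sketched with back-of-envelope arithmetic. The numbers here are purely illustrative assumptions, not figures from Intel or Google: suppose networking and storage "plumbing" consumes a fixed share of host CPU cycles when no IPU handles it.

```python
def effective_capacity(total_cores: int, infra_overhead: float, offloaded: bool) -> float:
    """Cores' worth of compute left for application work.

    infra_overhead is the fraction of host cycles spent on networking,
    storage, and security tasks when they are NOT offloaded to an IPU.
    """
    overhead = 0.0 if offloaded else infra_overhead
    return total_cores * (1.0 - overhead)

# Hypothetical host: 128 cores, 30% of cycles lost to infrastructure tasks.
without_ipu = effective_capacity(128, 0.30, offloaded=False)  # ~89.6 cores' worth
with_ipu = effective_capacity(128, 0.30, offloaded=True)      # 128.0

print(f"usable compute gain: {with_ipu / without_ipu:.2f}x")
```

Under these assumed numbers, offloading returns roughly 1.43x more usable compute per host, which is the kind of utilization gain the companies are pointing to, multiplied across a hyperscale fleet.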
The competition
This deal lands at a moment when the semiconductor industry is running into a growing shortage of CPUs — the chips needed not just for running models but for managing the broader AI infrastructure around them. GPUs still get most of the attention because they’re the workhorses behind model development and training, but the less glamorous CPU side of the equation has been under real strain. Across the industry, companies have been turning increased attention to CPUs as AI deployments scale and the demand for orchestration, inference, and system-level management keeps climbing.
“CPUs and infrastructure acceleration remain a cornerstone of AI systems—from training orchestration to inference and deployment,” said Amin Vahdat, Google SVP and Chief Technologist for AI Infrastructure. “Intel has been a trusted partner for nearly two decades, and their Xeon roadmap gives us confidence that we can continue to meet the growing performance and efficiency demands of our workloads.”
The competitive picture is shifting too. Arm recently unveiled its first self-developed chip, the Arm AGI CPU, which introduces direct competition right in the middle of a worldwide CPU crunch. That puts additional pressure on Intel, which has dominated the data center CPU market for years but now faces Arm-based architectures steadily gaining ground on power efficiency and performance-per-watt. Locking in a deep, multi-year commitment from a customer the size of Google is a meaningful vote of confidence for Intel’s roadmap. That said, it’s worth remembering that Google also designs its own custom TPUs, so it’s clearly not putting all its eggs in one basket.