Table of Contents
New 102.4 Tbps Silicon One promises efficiency gains for hyperscalers
In sum – what we know:
- A new switch for AI – Cisco has unveiled the Silicon One G300, a 102.4 Tbps switching chip purpose-built for AI networking.
- High-speed connectivity – The G300 features in-house 224 Gbps SerDes and supports up to 512 ports.
- Intense competition – The chip goes head-to-head with Broadcom’s Tomahawk 6 and Nvidia’s switch silicon.
It’s easy to focus on the GPU arms race when talking about AI infrastructure, but the networking connecting all those accelerators is arguably almost as important. You can rack up as many GPUs as your budget allows — but if the data can’t shuttle between them fast enough, you won’t be able to take full advantage of them. Cisco thinks the Silicon One G300 solves that problem. The new chip is designed specifically for AI, offering 1.6 Tbps per port across 64 ports. It’s set to power the company’s new N9000 and 8000 series systems when it ships later in 2026.
Hyperscalers, neoclouds, and sovereign cloud operators are all scrambling to stand up AI infrastructure at massive scale, and they need networking silicon that doesn’t become the weak link behind billions of dollars in GPU investment. Cisco says design work with customers is already underway. That said, Cisco is making bold claims in a space where Broadcom and Nvidia have a clear advantage.
What can it do?
The standout spec is support for up to 512 ports in high-radix configurations (200 Gbps each, given the chip's 102.4 Tbps total), letting operators wire far more GPUs together within a flatter network topology. Fewer hops between compute nodes means lower latency, and for AI workloads where GPUs are constantly synchronizing, that physical and logical proximity matters. The chip hits its headline 1.6 Tbps per port using an integrated 224 Gbps SerDes that Cisco designed in-house.
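To see why radix flattens the network, a back-of-the-envelope sketch helps. The host-capacity formulas below are the textbook non-blocking Clos/fat-tree results (k²/2 hosts for two tiers, k³/4 for three), not Cisco figures:

```python
def max_hosts(radix: int, tiers: int) -> int:
    """Non-blocking host capacity of a Clos fabric built from
    radix-k switches: k^2/2 for two tiers, k^3/4 for three."""
    if tiers == 2:
        return radix ** 2 // 2
    if tiers == 3:
        return radix ** 3 // 4
    raise ValueError("sketch only covers 2- and 3-tier fabrics")

# A 64-port switch outgrows a two-tier fabric (3 switch hops worst
# case) far sooner than a 512-port switch does, forcing a third
# tier and two extra hops on every cross-fabric packet.
print(max_hosts(64, 2))   # 2048 hosts in two tiers
print(max_hosts(512, 2))  # 131072 hosts in two tiers
```

At 512 ports, a two-tier fabric covers GPU counts that would otherwise demand a three-tier build, which is the latency argument in a nutshell.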
Underneath, Cisco has baked in what it’s calling “Intelligent Collective Networking.” The fully shared packet buffer is engineered to soak up the bursty traffic that’s common in AI workloads. When packets drop during those bursts, the whole job stalls waiting on retransmission, and expensive GPU cycles go to waste. Path-based load balancing pushes this further by responding faster to link failures, dynamically rerouting traffic instead of letting congestion pile up on degraded paths.
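The general idea behind path-based load balancing can be shown with a toy model. This is not Cisco's algorithm (which it hasn't published in detail), just an illustration of steering traffic away from failed and congested links instead of hashing flows onto a fixed path:

```python
def pick_path(paths: dict) -> str:
    """Toy path-based load balancer: skip failed links and send the
    next burst down the least-congested surviving path."""
    alive = {name: state for name, state in paths.items()
             if not state["failed"]}
    return min(alive, key=lambda name: alive[name]["queue_depth"])

# Hypothetical spine links with current queue occupancy.
paths = {
    "spine-1": {"queue_depth": 40, "failed": False},
    "spine-2": {"queue_depth": 5,  "failed": False},
    "spine-3": {"queue_depth": 0,  "failed": True},   # link down
}
print(pick_path(paths))  # spine-2
```

A static hash would keep pinning flows to spine-3 until routing reconverged; reacting per-decision to link state is what keeps congestion from piling up on degraded paths.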
Cisco’s own testing points to a 33% bump in network utilization and 28% faster job completion versus non-optimized setups. If those numbers hold outside the lab, that’s significant. CDW highlighted the “programmability and telemetry to optimize every flow” and how the G300 enhances this with “industry-leading buffers, power efficiency, and 1.6T port density.” Of course, controlled benchmarks and messy production data centers are very different things, and customers will want to prove this out themselves.
Power and sustainability
Power is arguably the single biggest operational headache for hyperscale data center operators right now, and it’s getting worse as AI infrastructure keeps expanding. Cisco is going after this directly: the new G300-based systems are 100% liquid-cooled, and the company claims they are nearly 70% more energy efficient than delivering the same bandwidth today with six 51.2T air-cooled systems.
On top of the chip itself, Cisco is introducing new 800G Linear Pluggable Optics that slash module power consumption by 50% versus retimed optical modules. Pair that with liquid cooling, and total switch power draw can fall by up to 30%. But liquid cooling isn’t without trade-offs — it brings infrastructure complexity, and not every facility is set up for it. Moving from air-cooled to liquid-cooled represents a meaningful capital investment on its own, which could slow adoption for some operators even when the long-term economics clearly favor the switch.
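The jump from a 50% module-level saving to an "up to 30%" system-level saving is just weighted arithmetic. The calculation below assumes optics account for roughly 60% of total switch power draw, which is a plausible share for high-port-count optical switches but an assumption on our part, not a Cisco figure:

```python
def system_power_saving(optics_fraction: float,
                        optics_saving: float) -> float:
    """Fraction of total switch power saved when only the optics
    share of the power budget is reduced."""
    return optics_fraction * optics_saving

# Cisco's headline: LPO modules draw 50% less than retimed optics.
# If optics are ~60% of total draw (our assumption), the
# system-level saving lands at the quoted ~30%.
print(system_power_saving(0.6, 0.5))  # 0.3
```

The takeaway is that the 30% ceiling depends entirely on how much of the power budget the optics represent; a switch with a smaller optics share saves proportionally less.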
The competition
The G300 drops right into the path of two entrenched competitors — Broadcom’s Tomahawk 6 and Nvidia’s switch silicon. Broadcom has owned the merchant switching silicon market for years, while Nvidia has been pushing hard into networking from its GPU base, especially since buying up Mellanox. Cisco is positioning the G300 not just for its own systems but as merchant silicon available to other equipment manufacturers — a direct challenge to Broadcom. Whether OEMs will actually adopt Cisco as a silicon supplier when Cisco simultaneously competes with them on the systems side is a fair question, but the intent is clear.
The chip will power Cisco’s N9000 and 8000 series platforms, aimed at everyone from hyperscalers to enterprises. Cisco is wrapping this launch in the narrative of what it calls the “agentic era” — a future where autonomous AI agents tackle complex tasks in real time and need deterministic, low-latency networking to operate reliably.