In sum – what to know:
AI clusters are outgrowing electrical networks – Explosive GPU growth and rapid port-speed transitions are stressing traditional switching layers and accelerating upgrade cycles across back-end AI fabrics.
Speed-agnostic, low-power interconnects – Optical paths cut latency, eliminate optical-electrical-optical (O-E-O) conversions, and avoid repeated hardware refreshes as the industry moves from 800 Gbps to 1.6 Tbps and 3.2 Tbps.
A proven technology returns to relevance – Long used in carrier WANs, OCS is now positioned as a mature, scalable architecture for next-generation AI data centers.
The race to build next-generation AI infrastructure is shifting from GPUs themselves to the networks that connect them, according to a new report from Dell’Oro Group.
As hyperscalers and sovereign cloud providers scale up massive AI clusters, the real point of differentiation is becoming how efficiently those GPUs can communicate — pushing emerging optical technologies back into the spotlight.
As clusters expand, the network becomes the bottleneck. The number of interconnects grows far faster than the GPU count itself, driving up cost, energy use, and latency. Back-end AI networks now refresh on a roughly two-year cycle, far faster than traditional enterprise refresh timelines, because port speeds are climbing rapidly. Dell’Oro expects 800 Gbps ports to dominate in 2025, moving to 1.6 Tbps by 2027 and 3.2 Tbps by 2030, requiring continual replacement of electrical switching layers.
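To make the scaling pressure concrete, here is a minimal back-of-envelope sketch. The folded-Clos topology, the 64-port switch radix, and the GPU counts below are illustrative assumptions, not figures from the Dell’Oro report; the point is only that fabric links grow with both cluster size and tier count, and every electrical switch the model counts is replaced at each port-speed transition.

```python
import math

def clos_fabric(num_gpus: int, radix: int = 64):
    """Back-of-envelope size of a non-blocking folded-Clos back-end fabric.

    Assumes each electrical switch has `radix` ports, half facing down and
    half facing up, so every extra tier adds one fabric link per GPU.
    (Illustrative model only; parameters are not from the Dell'Oro report.)
    """
    down = radix // 2
    tiers, reach = 1, down
    while reach < num_gpus:          # add tiers until the fabric spans all GPUs
        tiers += 1
        reach *= down
    links = tiers * num_gpus                        # one link per GPU per tier
    ports = num_gpus + 2 * (tiers - 1) * num_gpus   # host links use 1 switch port, fabric links 2
    return tiers, links, math.ceil(ports / radix)

for n in (1_024, 32_768, 262_144):
    tiers, links, switches = clos_fabric(n)
    print(f"{n:>7,} GPUs: {tiers} tiers, {links:>9,} links, {switches:>6,} switches")
```

Even in this simplified model, a 262,144-GPU cluster implies over a million fabric links and tens of thousands of electrical switches, all of which turn over on each speed step.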
In this context, Optical Circuit Switches (OCS), also known historically as Optical Cross-Connects (OXC), are gaining fresh attention, according to Dell’Oro. These devices create direct optical paths between endpoints, eliminating packet-switching overhead and bypassing optical-electrical-optical conversions. The result is extremely low latency, high bandwidth efficiency, and significantly reduced power consumption.
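A toy model helps show what “circuit” means here. The class below is a hypothetical sketch, not any vendor’s API: an OCS is essentially a reconfigurable map from input ports to output ports, with no packet parsing, buffering, or O-E-O conversion anywhere in the data path.

```python
class OpticalCircuitSwitch:
    """Toy model of an OCS: a reconfigurable input-port -> output-port map.

    A real device steers light with MEMS mirrors or LCOS elements; it never
    parses frames, so there are no packet buffers, no O-E-O conversions,
    and nothing in the configuration that depends on the line rate.
    """

    def __init__(self, num_ports: int):
        self.num_ports = num_ports
        self.circuits: dict[int, int] = {}  # in_port -> out_port

    def connect(self, in_port: int, out_port: int) -> None:
        """Establish a circuit; physically, tilt a mirror, then carry traffic."""
        if not (0 <= in_port < self.num_ports and 0 <= out_port < self.num_ports):
            raise ValueError("port out of range")
        if out_port in self.circuits.values():
            raise ValueError(f"output port {out_port} already in use")
        self.circuits[in_port] = out_port

    def forward(self, in_port: int, signal: bytes) -> tuple[int, bytes]:
        """Light follows the configured path untouched: no inspection, no queueing."""
        return self.circuits[in_port], signal


# Rewiring the topology is a configuration change, not a hardware swap:
ocs = OpticalCircuitSwitch(num_ports=128)
ocs.connect(0, 64)      # e.g. pod 0 <-> pod 4 for training job A
ocs.circuits.clear()    # job done; mirrors move, no equipment is touched
ocs.connect(0, 96)      # pod 0 <-> pod 6 for job B, same hardware
print(ocs.forward(0, b"any line rate"))
```

Because nothing in that configuration encodes a data rate, the same physical switch can carry 800 Gbps signals today and 3.2 Tbps signals later, which is the speed-agnostic property described next.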
Dell’Oro highlighted that Google was the first major cloud provider to deploy OCS at scale nearly a decade ago, using it to dynamically rewire topologies and ease pressure on electrical fabrics. Because OCS operates entirely in the optical domain, it is inherently speed-agnostic — unlike traditional switches that must be refreshed each time the industry steps up to a faster data rate.
Dell’Oro also noted that OCS technology has been widely used in carrier wide-area networks for more than a decade, leveraging proven MEMS and LCOS optical components. Its long field history under demanding conditions makes it a credible candidate for the rapidly evolving requirements of AI back-end fabrics, it added.
As AI data centers scale beyond the limits of electrical switching, vendors have introduced new OCS systems specifically designed for AI and HPC environments, according to Dell’Oro.
“We expect hyperscalers to adopt OCS/OXC first because they have massive, centralized scale and the engineering depth to integrate new optical architectures. Sovereign clouds will follow later, adopting these technologies more slowly due to smaller scale, and limited in-house optical engineering resource,” Sameh Boujelbene, vice president at Dell’Oro Group, told RCR Wireless News.
The executive also noted that OCS may modestly ease power-availability constraints, but GPUs remain the dominant driver of overall power demand. “Its primary benefit is improving cluster utilization and training performance by reducing network latency,” she added.
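A very rough calculation shows why the GPUs dominate. Every figure below is a generic industry ballpark chosen for illustration, not a number from the Dell’Oro report or the interview:

```python
# Illustrative power split for one GPU's slice of an AI cluster.
# All figures are rough assumed ballparks, not from the Dell'Oro report.
GPU_W = 700            # one training GPU
MODULES_PER_GPU = 6    # assumed optical transceivers per GPU across a 3-tier fabric
MODULE_W = 15          # one 800G pluggable module, roughly
SWITCH_SHARE_W = 50    # assumed per-GPU share of electrical switch ASICs and cooling

network_w = MODULES_PER_GPU * MODULE_W + SWITCH_SHARE_W   # 140 W
total_w = GPU_W + network_w                               # 840 W
print(f"network share of power: {network_w / total_w:.0%}")  # ~17%
```

Under these assumptions the network draws well under a fifth of the power per GPU, so even removing an entire electrical switching tier trims watts at the margin, consistent with Boujelbene’s point that utilization and latency, not power, are the primary gains.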
The report concludes that with AI architectures diverging rapidly from traditional data center design, network innovation must accelerate even faster. OCS, Dell’Oro argues, is not experimental — it is a mature, field-proven option that merits serious consideration as operators build the next generation of AI clusters.