AI scaling drives new pressure across transport networks


The rapid deployment of AI compute infrastructure is not always matched by network readiness, according to Ciena

In sum – what to know:

Network pressure rises – AI demand is driving bandwidth needs across metro, long-haul, and subsea networks, with capacity constraints emerging across all layers.

Compute-first gap – Neocloud providers prioritize GPU deployment, often leaving network infrastructure to catch up in early stages of scaling.

Capacity efficiency critical – Limited fiber ownership is pushing demand for higher-capacity optics and more efficient use of existing infrastructure.

The rapid expansion of AI infrastructure is placing growing pressure across all layers of transport networks, from metro and long-haul links to subsea capacity, as operators race to keep up with demand.

According to Mark Bieberich, vice president of portfolio marketing at Ciena, the impact is already visible across the network stack. “We actually see bandwidth demand in all parts of the network right now, whether it’s metro… regional or long-haul… and of course, in submarine.”

The scale of this demand varies depending on the type of operator. Some neocloud providers operate within a single metro footprint, while others are building global networks. “Some of our large neoscaler customers… see demand on submarine links, on long-haul links, in addition to the metro links,” he told RCRTech.

At the same time, the rapid deployment of AI compute infrastructure is not always matched by network readiness. “Most of our neoscaler customers are laser-focused on getting their compute infrastructure in place… and many times the network actually follows.”

This lag reflects the early stage of the market, but it is expected to evolve. “Over time… you’re going to see a stronger bond between the compute side and the network side,” Bieberich said, as operators align infrastructure more closely with GPU deployments.

Another key constraint is fiber availability. Many neocloud providers operate without extensive fiber assets, increasing the importance of maximizing capacity. “Most of the neoscalers don’t have a lot of fiber… they need to maximize the capacity they have on the fiber that they’ve acquired.”

This is driving demand for higher-capacity optical technologies. “We have now the world’s leading coherent optical technology available now at 1.6 terabits per second,” Bieberich said, highlighting the importance of scaling bandwidth efficiently.

Beyond technology, operational support is becoming a critical differentiator. Vendors are increasingly expected to help design, deploy, and manage networks as neocloud providers scale. “From the design of the network to operating the network… that level of expertise is unmatched.”

As AI infrastructure continues to expand, the challenge is no longer limited to adding compute. It now extends across the entire transport network, requiring coordinated scaling of capacity, architecture, and operations.
