From factory tokens to traffic tokens


Is there something more coordinated here than just a burst of AI enthusiasm from AT&T – and the rest of the telco pack – over the last several months? MWC has just been, and so too now has GTC, and just maybe a playbook is emerging from all the noise about how telcos might reposition themselves in the AI value chain. AT&T’s latest work with Cisco and Nvidia signals a move to embed AI inference into the network edge, and a shift from connectivity as transport to connectivity as execution – where networks don’t just move data, but act on it.

Parallel alignments with AWS in the metro, Ericsson in the RAN, and Microsoft at the edge suggest something more deliberate: a disaggregated, multi-partner AI platform spanning the full network stack. At the same time, Nvidia’s push into AI-RAN with Nokia and T-Mobile extends the logic into the radio layer itself. The cell site, long treated as a cost centre, might become a distributed compute node – so the logic goes – part of a wider “AI grid” designed to run inference closer to the action. The jury is still out on AI-RAN, but it is not going away, either.

You might remember RCR caught up with Verizon (Business) at MWC, which had lots to say about all of this. Its take brings clarity, and some constraint. Where the rest are sketching a vision of a distributed AI compute fabric – in press notes, anyway, until RCR speaks with them – Verizon talks about a three-tier system combining centralised training, metro/long-haul orchestration, and edge inference. It is probably the cleanest articulation of the architecture so far. Its emphasis on programmable latency tiers (edge, metro, cloud) hints at an engineered hierarchy with explicit trade-offs.

The message aligns with AT&T’s macro-edge strategy and Nvidia’s “AI grid” concept: telcos are the middle layer, linking training and inference, placed according to workload requirements, bandwidth constraints, and considerations like security and sovereignty. A deeper economic logic is emerging, too. Jensen Huang’s framing of AI around tokens as a unit of value – produced in data centres, consumed by applications; discussed here yesterday – implicitly introduces the notion of latency as a pricing variable. Not all tokens are equal: some serve real-time decisions, and must arrive fast.

So programmable latency tiers – as articulated directly by Verizon, and indirectly by AT&T – move beyond engineering into economics, where each layer (attached to a relevant compute engine) maps to a different class of token. The network therefore also shapes how and where tokens are delivered – and how they are monetised. The architecture looks like a regular supply chain: data centres as token factories, edge environments as token dispensaries, and the network – particularly the metro layer – as the quality-of-service engine in between.
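To make the idea concrete, the tiered supply chain can be sketched, loosely, as a routing-and-pricing rule: pick the cheapest compute tier that still meets a workload’s latency budget, and bill tokens at that tier’s rate. The tier names, latency budgets, and per-token rates below are purely illustrative assumptions, not figures from Verizon, AT&T, or anyone else.

```python
# Illustrative sketch only: latency tiers as token price classes.
# All tier names, latency budgets, and rates are hypothetical.

TIERS = [
    # (name, max round-trip latency in ms, price per 1,000 tokens in $)
    ("edge",  10,  0.40),   # real-time inference near the cell site
    ("metro", 50,  0.15),   # regional aggregation / orchestration layer
    ("cloud", 200, 0.05),   # centralised data-centre compute
]

def place_workload(latency_budget_ms: float) -> tuple[str, float]:
    """Pick the cheapest tier that can still meet the latency budget."""
    eligible = [(name, rate) for name, max_ms, rate in TIERS
                if max_ms <= latency_budget_ms]
    if not eligible:
        # Tighter than even the edge tier can promise
        raise ValueError("latency budget not servable")
    # Cheapest eligible tier wins
    return min(eligible, key=lambda t: t[1])

def token_bill(latency_budget_ms: float, tokens: int) -> float:
    """Price a batch of tokens at the rate of its assigned tier."""
    tier, rate = place_workload(latency_budget_ms)
    return tokens / 1000 * rate
```

The point of the sketch is simply that, under this framing, latency stops being a network side-effect and becomes a first-class pricing input – the same tokens cost more when they have to arrive in ten milliseconds than when they can wait for the cloud.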


Something like that, anyway. But the question is whether telcos can make a business out of it, by translating a functional role into a commercial one – by turning latency and determinism into billable traits – or whether they just carry increasingly valuable token flows without capturing much value. This is going to be the story from this point, same as it has been to this point. It is up to them to change the narrative. Credit to the US telcos, though: they are closer to it, and they have a lead on it.


James Blackman
Executive Editor
RCR Wireless News

RCR Top Stories

Telco AI infra: AT&T, Cisco and Nvidia are working to integrate AI inference into telecom networks, enabling enterprises to process data at the edge and support real-time applications across industrial and connected device environments.

Nvidia in orbit: All about Nvidia’s new Space-1 module, its proposal for an orbital data center for LLMs: with 25 times the compute of the H100, Nvidia is looking to eliminate the latency and expense of downlinking raw datasets to ground stations.

Italy tower JV: TIM and Fastweb + Vodafone plan a joint venture to build up to 6,000 mobile towers in Italy, aiming to expand 5G coverage, improve efficiency, and support open-access infrastructure models.

IoT scaling myths: As cellular IoT deployments grow from thousands to millions, the limits of hardware, not software, come into focus, writes IoT connectivity provider Onomondo. Connectivity infrastructure must evolve to adapt, it says.

Token economics: The message from Nvidia chief Jensen Huang at GTC is that AI is no longer about models or chips, but about monetizing inference at scale – where tokens are the core unit of value and data centers are revenue factories.

In partnership with

AI-Powered Telecom Infrastructure
Supermicro, in collaboration with NVIDIA, delivers AI-powered infrastructure tailored for telcos, enhancing operational efficiency, network management, and customer experiences. Explore now 

Beyond the Headlines

Tech-co platforms: Red Hat says telcos are evolving into tech-driven platforms, powered by AI and automation. Common cloud foundations, digital sovereignty, and 6G revenue opportunities are accelerating the shift, it tells RCR Wireless.

Wi-Fi 8 for AI era: Wi‑Fi 8 shifts focus from raw speed to ultra reliability, says Qualcomm; it brings consistent, low‑latency, high-throughput performance for mission-critical AI applications like robotics, XR, and industrial automation.

Cyient at MWC: Cyient used MWC to promote a “human + AI” approach to autonomous networks, arguing that combining AI with human expertise will help telcos advance toward Level-4 autonomy and new revenue opportunities.

ZTE gets gongs: RCR sits down with ZTE at MWC to discuss its GLOMO wins, and explore its innovations, including for robotics and broadcasting, plus how AI and advanced RAN are shaping new use cases and business opportunities.

Wi-Fi 7, AI PCs: Intel says its next‑gen Wi‑Fi tech (Wi‑Fi 7 and beyond) will support richer connectivity and complement cellular 5G, especially for capacity‑heavy enterprise and AI‑driven applications.

What We're Reading

5G core surge: Omdia says 5G core spending surged 83% in Q4 as operators accelerate SA deployments, prioritising scalable, cloud-native architectures to enable new services, improve efficiency, and support growing network demands.

IBM spends $11bn: IBM has completed its $11 billion acquisition of Confluent, integrating real-time data streaming into its platform to power enterprise AI and autonomous agents, strengthening its hybrid cloud strategy.

Azure expansions: Microsoft has expanded its Azure AI infra and Foundry with deeper Nvidia integration, enabling scalable agentic and physical AI systems that connect real-time data, simulation and cloud platforms for enterprise deployment.

Supermicro adds: Supermicro has expanded its AI infra portfolio with new systems featuring Nvidia’s RTX PRO Blackwell Server Edition GPUs, enabling scalable AI computing across data centres, edge deployments, and enterprise AI factories.

Roaming Japan: Japan’s five mobile operators will launch a nationwide emergency roaming service from April 2026, enabling users to connect to other networks during outages or disasters to maintain essential communications.

Events

Virtual Program
Explore the technologies, tools, strategies and partnerships powering the next generation of intelligent systems. This is where the backbone of AI innovation takes center stage. Register now 
 
Wi-Fi Forum, January 20th 2026
Join this RCR Wireless News event to understand the current state of Wi-Fi as we examine a myriad of evolving use cases and monetization strategies being deployed by the industry. Register now

Industry Resources

Webinar, September 18th
The journey to a fully autonomous network – The evolution of network automation and how Amdocs is leading the way

What you need to know in 5 minutes

Join 37,000+ professionals receiving the AI Infrastructure Daily Newsletter

