Telco AI: Pipes to platforms
Sponsored by
NOTE FROM THE EDITOR

James Blackman
August 12, 2025
Telcos are taking a focused, low-risk approach to AI by prioritizing deployments in OSS/BSS and enhancing data correlation through generative models, while maintaining a clear line of sight on ROI. Hybrid AI architectures are emerging as the preferred model, blending large and small models across core and edge infrastructure to balance power and efficiency. While generative AI is rapidly evolving, truly autonomous agentic AI remains out of reach, leaving telcos to focus on more immediate challenges. These include: how to optimize their networks and assets for AI, how to define new roles in the AI ecosystem, and how to rationalize cost and benefit (and demand) in the AI supply chain. Lots to consider, then – as below.
Three takeaways:
1 | Pragmatic progress
Telcos are focused on AI in OSS/BSS for fast, low-risk returns – especially with generative AI, which is already delivering strong results in data correlation and contextualization. Adoption is deliberate, ROI-driven, and risk-aware – especially when it comes to core infrastructure.
2 | People in the loop, AI at the edge
Human oversight remains critical – for model governance, regulatory compliance, and organisational change. Hybrid AI is driving innovation: big models for broad understanding, small models for specific tasks. Leaner AI is heading to the edge – and potentially back into telco hands.
3 | Bigger ambition, bigger questions
AI is reshaping network and infrastructure strategy. Telcos see opportunity in becoming AI infrastructure providers – but need better fiber, better 5G, and clearer monetization. Trust, reputation, and talent will be as important as technology in determining who wins.

Making AI for telecom real: Perspectives from Google Cloud Next

Kelly Hill
April 9, 2025
Google Cloud Next shows how network operators and vendors are using AI for telecom
We already know that businesses are pushing hard to figure out how artificial intelligence can benefit them. In telecom specifically, an Nvidia survey earlier this year found that 37% of respondents said their companies were investing in AI for telecom to improve network planning and operations, and 33% said that there had already been investments in AI for telecom field operations.
At this week’s Google Cloud Next event, some of Google Cloud’s specific work on advancing the use of AI for telecom is being highlighted, as well as AI for enterprises in other verticals. Here are three perspectives on how companies are leveraging Google Cloud AI for telecom.
–Verizon has integrated Google Cloud’s generative AI into customer operations, which the two companies said has led to “95% comprehensive answerability for customer inquiries” and “demonstrably more efficient and effective customer care interactions.” Verizon is making use of a Google Cloud conversational AI agent that leverages Google Cloud’s Vertex AI, Gemini models and Agent Assist Panel in order to give its customer service reps “real-time, context-aware, and personalized answers” to customer questions, as opposed to the reps having to do manual searches of knowledge bases.
Verizon has deployed this across 28,000 customer care reps and retail stores. Additional features like automated summaries of conversations, and automated follow-up action reminders, are also in the process of being rolled out. Those internal AI agents are in addition to several genAI tools that are directly customer-facing and use Google’s Gemini models for virtual assistants which use natural language.
“Our collaboration with Google Cloud and the integration of Gemini into our customer care platforms mark a significant advancement in our commitment to providing exceptional customer experiences,” said Sampath Sowmyanarayan, CEO of Verizon Consumer. “The tangible results demonstrate the power of AI to enhance efficiency and empower our customer care teams.”
“The impact of Gemini on Verizon’s customer service operations is a testament to our deep partnership and Verizon’s commitment to continued innovation,” said Thomas Kurian, CEO of Google Cloud. “These results demonstrate the potential of AI to not just improve operations, but to create more meaningful and helpful interactions for customers everywhere, ultimately driving significant value for businesses.”
-“We’re really trying hard to build an autonomous network which is going to be obviously driven by AI more and more over time,” said David Sauvageau, director of software development, data and AI for Bell Canada, in a briefing call prior to Google Cloud Next.
In general, he continued, the network operator sees AI as a “fundamental building block for the future of telecom, for innovation, for customer operations and for network operations.”
He said in the short-term, Bell Canada is prioritizing AI applications for telecom which “really focus on optimizing our customer experience and network ops.” For the latter in particular, Sauvageau said, “Our goal is really to leverage all the data that we have available in AI in order to predict and prevent issues before they actually impact our customers. So, really moving from reactive to preventive customer experience.”
Additionally, Sauvageau said that Bell Canada believes AI is crucial for managing the increased complexity of telecom networks. “We need to be much more efficient, [and have] faster innovations in areas like edge computing, digital twin, 6G or any other future network technologies. And ultimately … our goal is to build and accelerate the development of our AI-powered autonomous networks.”
-Razvan Teslaru, VP and head of strategy and portfolio for cloud software and services at Ericsson, said that the use of AI for telecom encompasses both increased efficiency and increased monetization—the latter being magic words for the telecom industry right now.
Autonomy is both about enabling differentiated connectivity at scale, he said, and about monetizing that differentiated connectivity. At this year’s Mobile World Congress in Barcelona, one of the things Ericsson spotlighted was how it integrates its service orchestration with Google Cloud and its Vertex AI, Teslaru noted.
It’s now possible, he said, for a user to express their intent in natural language: A request to configure a specific type of connectivity to, say, a hospital, over a dedicated slice. “You can have Gemini interpret that intent and break it into service orders. … Then AI agents can take over these service orders and take them into execution, and also observe the results, so we can take corrective action autonomously,” he said.
So ultimately, then, network autonomy is about delivering personalized (or differentiated) connectivity with efficiency, while also doing so at a scale that means more effective and large-scale monetization.
Google Cloud emphasizes AI agents for telecom
Erwan Menard, director of outbound product management for cloud AI at Google Cloud, said that “When we look ahead, we envision networks that can largely manage themselves through the use of intelligent software agents.” He continued: “Think of these agents as digital experts that can automatically handle tasks like network configuration, problem solving and resource allocation. This would free up telcos from any routine operational burden, allowing human experts to focus on innovation and new services, ultimately leading to greater efficiency and productivity in network operations.”
In particular, he emphasized, telecoms face the unique challenge of having a “mission-critical duty to maintain knowledge over technologies which could be in service for decades. So the notion of being able to tap accurately through the knowledge of the company to help onboard a new resource or to assist somebody who has a decision to make—technical or business—is quite an important challenge in the telecom industry.” Google Cloud’s Google Agentspace was designed for such knowledge-heavy industries, Menard continued, to retrieve information across multiple systems with the idea of using “agentic workflows to be able to invent a new way of work.”
This week, the company is spotlighting its new Agent Gallery, so that employees across a telecom organization can get personalized suggestions on which AI agents would be useful to them—whether out-of-the-box agents from Google Cloud, agents built by the company itself, or third-party AI agents. Google Cloud is also offering the ability to design and build agents with a no-code interface, in which people can essentially walk through a series of steps and “convert that sequence into an agent that then becomes your personalized assistant,” Menard explained.

Note to telcos: sort your data (how to get to level-4 autonomy and agentic AI)

James Blackman
May 8, 2025
In sum – what to know:
Clean data drives AI intent – by breaking telcos’ silos, creating global visibility, and enabling contextual models; and by providing a unified foundation for AI models to understand their desired outcomes.
AI intent drives automation – by shifting from manual telco scripts to declarative outcomes across the network; and by translating their high-level goals into self-directed actions across network layers.
AI automation drives growth – by raising telcos’ service delivery, customer experience, and order volumes; and by driving their innovation, service offerings, and measurable business impact.
It is shameful, perhaps, but a new RCR study about telco AI (Using AI and Supporting AI – out next week) makes a footnote, effectively, of the crucial point that clean data is everything. It is made at the death; an obvious statement as a final reminder, but one that also gets forgotten in the excitement and confusion. Data (and people) have to be organised first – the message goes. Because in the end, after all the strategic knowhow and guesswork – about applications and architectures, supply and demand, sure-bets and wild things – the killer for AI is the data.
If it’s right, the AI works, and business transforms; if it’s wrong, it doesn’t, and a whole lot of money is wasted. And here endeth the lesson – type of thing. A quote follows in the report from Nelson Englert-Yang, an analyst at ABI Research. “That is one of the most central components of this entire discussion – about where telcos get their data and how they organize it,” he says. “Because telco data is messy and it is useless if it’s messy. So telcos need robust processes for gathering it, cleaning it, and then training it for it to be useful.”
And that’s about the sum of it in the report – regrettably or not, because the narrative flow takes its own course. But yesterday (May 7) at FutureNet World in London, Blue Planet, the digital-change division of US fibre outfit Ciena, took to the stage to say the same, and expand on the point in forceful fashion. If telcos want to get to level-four (‘Level 4’) autonomy in their networks (meaning: pervasive AI, minimal human intervention), as defined by TM Forum and adopted by analyst firms, then they need to sort out their data, it said – first and fast.
Everything else follows from there: breaking data silos, orchestrating data flows, building digital twins, and unleashing intent-based data networks – where complexity is reduced, teams are unburdened, and some kind of autonomy is enabled. Kailem Anderson, vice president for portfolio and engineering at the firm, said: “The key… is to bring intent and declarative models with clean and structured data and [clean and structured] data models. [Because] then you have a foundation to apply these cool agentic and generative AI use cases.”
In other words, if your data is ‘clean’ – where it is accurate, consistent, and formatted – then your data can be trusted, structured, and shared, and your data models can be logical, contextual, and stateful. And then telcos can do clever things – and maybe make mad-cap business dreams come true. “When you have a data model that is context aware, topology aware, stateful, and relationship-aware, then you can untap the power of generative [and agentic] AI,” said Anderson. Again, it is 101-stuff, but it was neatly told on stage at FutureNet World.
He zoomed out to explain the role of declarative intent-based networking in level-four autonomous networks – where you declare the outcome, and set the intent, rather than specify how (the technical steps) to do it. As it stands, most networks rely on imperative automation – manual scripts, integrations, and oversight. “Let’s be honest,” he said. “Intent-based models are foundational. Modern DevOps is built on it. Cloud infrastructure management has inherent intent-based and declarative models as a part of it. We just need to apply it to the network.”
He went on: “The long descriptive scripts our industry is built on for automation, which may have been successful in the past, will be unscalable as we move to level-four autonomous operations. Intent must be captured from the users and actuated across all layers in the network business. It needs to be reconciled to ensure the desired state matches the operational state. From a process standpoint, it means the industry has to make profound change – which is driven through intent, and not manually in the network. This is going to be very important to achieve our goals.”
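The declare-and-reconcile pattern Anderson describes – declare the desired state as intent, compare it with the observed operational state, and derive corrective actions from the difference – can be sketched in a few lines. The sketch below is purely illustrative; the slice name, parameters, and values are hypothetical and not drawn from any vendor system.

```python
# Minimal sketch of declarative, intent-based reconciliation: the operator
# declares outcomes, and a loop derives the corrective actions needed to
# make the operational state match. All names/values here are hypothetical.

def reconcile(desired: dict, operational: dict) -> list:
    """Return (key, current, target) tuples for every declared
    attribute whose operational state diverges from the intent."""
    actions = []
    for key, want in desired.items():
        have = operational.get(key)
        if have != want:
            actions.append((key, have, want))
    return actions

# Declared intent: the outcome, not the steps to get there.
desired = {"hospital-slice.latency_ms": 10, "hospital-slice.bandwidth_mbps": 500}
# Observed state, e.g. pulled up from a service inventory layer.
operational = {"hospital-slice.latency_ms": 25, "hospital-slice.bandwidth_mbps": 500}

for key, have, want in reconcile(desired, operational):
    print(f"corrective action: set {key} from {have} to {want}")
```

The point of the pattern is that the same loop runs continuously: reconciliation keeps the desired and operational states matched, which is what distinguishes it from a one-shot imperative script.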
And from there, Anderson presented a case study of sorts with an unnamed cable operator in North America, which has followed the Blue Planet script – to clean and sort data, break silos and cross sources, build a data fabric, create contextual models, feed automation models, and so on. The upshot is that its client has reduced its “order-to-cash cycle” (service design and provisioning) from around 45 days to a couple of days, and also seen its order volumes for optical and ethernet services jump by 300 percent and 500 percent, respectively. Or so the story goes.
But Anderson can tell it; the full transcript of this back-end part of his presentation is copied below.
“Let me give an example of the benefits of bringing data-intent and AI together: a North American cable operator, which, like most cable operators in North America, has grown over 20 years by acquiring various assets in different markets – to give it footprints in strategic business areas. It has a patchwork of OSS and BSS systems supported in each market. It has data silos in each market. It has data spanning planning, orchestration, and assurance that is generally not talking – and which it finds very difficult to stitch together to have a global view of things.
“What’s the implication? Very simply, this cable operator had design times of roughly 30 days and provision cycles of roughly 15 days – so order-to-cash cycles in excess of 45 days. It understood if it was going to be competitive, given the market pressure and dynamics we are seeing, that it needed to change all of this. It needed to break down these silos to get a global view and to weaponize its data. It adopted a mantra: automate or die. It understood the profound importance of automation to its business to drive out costs and deliver a better experience.
“So what did it do? It broke down those silos. It implemented a service inventory layer to pull information up from these data silos to have a global view. Once it had that global view, it had a foundation to reconcile what was in the network. So it discovered what was in the network across all its markets, and then reconciled between the planned state and the operational state, and then fed the active operational data back to the systems that needed it – the planning systems, automation systems, services assurance systems.
“What did this do? Because it had stateful data about what was going on in each of its markets, it was able to do very simple things like visibility checks. It was able to start doing pre-reservation on what was in the network – which was a foundation to offer bandwidth-on-demand type services. It then used this data to drive value-added use cases where it started linking operational data to its services assurance systems so, when it had alarms, it could understand the impact on the service path. Because it knew what was in the network, it was able to start doing active testing in the network.
“It started to shift its business from being reactive to proactive. It then kicked off a secondary transformation. Because it understood what was in its network, which moved the needle in terms of the services it offered, it introduced a services orchestration layer to start to feed intent from customers and move from an imperative-based model to a declarative-based model. It did a rip-and-replace of its services assurance system… to leverage the [live, total, consistent, clean] data to do predictive analytics on fault-based use cases to identify failures in the network before they happened. And then it started to apply that to its performance data so it could predict performance trends and apply policy back into the network to do closed-loop actions – basically getting it to level-three autonomy.
“All of this had a profound impact across its business. Design times went from 30 days to 20 hours for its optical services. Order volumes went up 300 percent. Provisioning times went from 12 days to 19 hours. Order volumes for its ethernet services skyrocketed 500 percent, because it was able to package its services differently.
“What is the next step? It wants to take what it has done for the network, automating layer zero through to layer three, and apply that to business services. It wants to stitch its business services layer with its underlay so it can do multilayer automation – AKA, truly getting to level-four autonomous operations.
“Level-four autonomous networks are achievable one step at a time. There are a few key considerations. The first is the role of a consistent data fabric layer to break down data silos. Consistent data across all domains and functions is key. It provides a foundation to stitch together planning, fulfillment, and assurance functions. Once you have that, you have the basics to deliver an AI-enabled OSS and introduce intent and agentic AI into your network… [And thereby] save time, get to market quickly, introduce new services, make customers happier, and increase the number of services you can deliver each month – all while reducing op/ex.”

The critical value gap – AI, 5G, and the long road to reinvent telcos

James Blackman
May 7, 2025
In sum – what to know
Outward reinvention – top telcos discuss the task of going from rigid inward-focused network operations to agile outward-focused service supply – as prompted and hastened by AI.
Differentiated services – while the tech (5G SA) is mostly in place, there remain questions for the sector about resiliency and value – and also just about demand.
Industrial drive – the value of connectivity for critical industry is finally recognized, but telcos still need to build trust and ensure quality – and command their own narrative.
Whether by using AI or by supporting it, FutureNet World in London (May 7-8) is about the same stuff as every other telco event at the moment: telcos talking about telcos in pursuit of their own reinvention. Or digital transformation or industrial change, or network as-a-service, or platform as-a-service – whatever you call it, as now prompted and hastened by AI. Which is not to trivialise the importance or urgency of the task; as Colin Bannon, chief technology officer at BT Business, put it in a pre-lunch panel, it is about “a past that needs to be changed”.
The discipline is to break the industry’s “tightly-integrated vertical stovepipe portfolios”, traditionally “glued together to create solutions”, into “more horizontal capabilities” to engender the kind of service flexibility that allows telcos to go at the same pace as the rest of the digital economy – which has grown up in their wake. As Laurent Leboucher, group chief technology officer at Orange, said on the same panel, it is about how telcos change from being “inside-out to being outside-in” – which, at least in terms of differentiated quality-of-service, is possible “only with AI”, he said.
Both Bannon and Leboucher were also on a mid-morning session, which asked just this: whether the original promise of 5G, to go “beyond connectivity” to deliver differentiated network services, particularly for enterprises, has been realised. Are we there yet? This was the question from the panel chair, Peter Jarich, head of GSMA Intelligence at the GSMA. Leboucher responded: “We’re not there. Definitely [not]. If you look at different approaches around the world, very few are really monetizing connectivity in a way that is differentiated. There are a few, but very few.” Telcos have to solve new technical challenges from new traffic demands, he suggested.
“A very significant part of active content today is generated by AI. A very significant part of the traffic is also generated by AI. That will be the case, increasingly – which means not just traffic for humans, but traffic for AI… We talk a lot about the access network, but it’s also about the backbone – which also has to take that kind of traffic into account, [and to serve] different requirements in terms of quality of service. A lot of traffic [until now has been served by a] best-effort [network]… That can’t be the case anymore… and this is really what we have to address today,” explained Leboucher.
Next to him, Jeanie York, chief technology officer at Virgin Media O2, disagreed – ostensibly. The platform is in place, she said; rather, the industry has challenges with its adoption and monetization. She responded: “We are there, I think. The underlying tech is definitely there with 5G SA… The problem for the last five years [has been that] we’ve spent billions on [the network without] being able to monetize [it]. But with relatively good 5G SA coverage, you have that capability [now] – whether with privatization in manufacturing or… a differentiated premium experience for football fans at the match on Saturday. We have the capability.”
AI, meanwhile (all the while), is one of the “foundational components” to activate and manage such differentiated services, she said. Despite this, York suggested the overall connectivity experience has not changed much in recent years, and reliability and availability – especially for critical enterprise operations, which remains the first-mover on SA-type capabilities, whether in private or public 5G infrastructure – remain critical areas for improvement. “What is more and more important beyond the technology component is the fact that we still have a long way to go to make sure that connectivity is reliable and available anytime and anywhere for consumers and businesses,” she said.
Bannon at BT pointed to a “value gap”, increasingly recognised by enterprises, between services rendered and value received. “It is a systemic issue for the whole industry,” he said. “Most businesses… do not have a plan B without some form of connectivity. You still have this in patches… where billions of pounds of transactions [are] running over [a connectivity service] enterprises are only paying thousands of pounds for – where they haven’t invested in resilience or quality of service or slices, or whatever.” Sharp-elbowed cloud providers have hogged the narrative about resiliency and value, he suggested; the plot twist with SA and AI is telcos are being ushered back to the stage.
“Cloud service providers have done a brilliant job with that narrative… [to] attach value to their services. Telcos have to close that gap. [But] where you might argue the cloud providers were gaslighting the industry, saying [to enterprises] that the cloud will just work by itself, they are now saying the network really, really matters,” argued Bannon, going on to talk about the importance of trust and credibility in light of geopolitical uncertainty, and in the context of the industry’s record for sovereign national service provision. “We’re running the air traffic control, the emergency services, the hospitals; we’re protecting the banks, we’re protecting the nation.”
And as such, that value gap is indeed closing, he suggested. “When [enterprise] procurement teams do their evaluation now, they are considering quality and resiliency and differentiation rather than just cost,” he said. “It is incumbent on the industry to… bridge that [narrative] gap… [about] the importance of the network… as part of that overall operational resilience equation.” The case had been put variously, but the question came again: whether consumers and enterprises – and the rest of the tech ecosystem – still think of networks as a “utility”, or whether the perception has shifted to recognize their strategic importance.
It is symptomatic of the industry’s existential identity crisis of course – just to keep asking the question. This time, in London, the question went to the only non-telco on the panel: Oleg Volpin, president in the European networks division at telco service provider Amdocs. The answer is ‘no, definitely not’ – he reassured the room. People properly understand the critical role of connectivity, he said, and pointed to the power outage last week in Spain, suggesting the consequent network outage (bar satellite) would have been more keenly felt than the electric blackout itself. At the same time, the industry has work to do to make connectivity work well and reliably.
“We need to find this balance and to make it work,” he said.

AI in the RAN (AI RAN) vs AI on the RAN – different concepts, different questions

James Blackman
May 6, 2025
In telecom, ‘AI RAN’ and ‘AI in the RAN’ (AI in RAN) refer to the same concept, effectively: the integration of artificial intelligence (AI) into radio access network (RAN) infrastructure. The first is a more specific term, for which an official industry group (the AI RAN Alliance) has been named and tasked to drive deep integration of AI into RAN hardware, software, and operational processes. It sets out a future where AI is not just an associated tool, but a systematic part of the RAN function. The second is a more general term, encompassing various applications of AI within the RAN.
Either way, the concept – AI RAN; AI in RAN – is important for new standalone 5G (5G SA) networks as it enables advanced features that would be difficult or impossible without AI automation at RAN level. It enables live network traffic prediction, dynamic resource allocation, and predictive maintenance, while also optimizing handovers, slices, and quality of service. It drives cost efficiency through automation, optimized network deployment, and energy savings, and is crucial for enabling advanced use cases like edge computing and network slicing.
But there is a newer concept for RAN-based AI, as well, which plays into the ecosystem narrative about how telcos will leverage their edge assets to rent space and host workloads (about ‘supporting AI’) – which might be termed ‘AI on the RAN’ (‘AI on RAN’) on the grounds that radio networks have under-utilized compute capacity for other (AI) workloads. Actually, it is a sub-set of the whole AI-RAN initiative, born of twin multi-tenancy and orchestration functions – providing the ability to run and manage RAN and AI workloads concurrently on the same infrastructure.
SoftBank and Nvidia have run trials to show that concurrent AI and RAN processing can be done, and can maximize capacity utilization. Nvidia reckons telcos can achieve almost 100 percent RAN-compute utilization, compared to 33 percent for RAN-only workloads – while also implementing dynamic orchestration and prioritization policies to safeguard peak RAN loads. It splits AI-RAN workload distribution into three models – RAN-only (as normal), RAN-heavy, and AI-heavy – according to how capacity is split (2:1 or 1:2) between RAN and AI workloads.
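As a back-of-envelope illustration of those figures, the sketch below assumes a server whose RAN workload averages the 33 percent utilization Nvidia cites, with the remaining headroom filled by AI workloads that yield to RAN at peaks. The function and the numbers plugged into it are hypothetical illustrations, not Nvidia’s actual model.

```python
# Illustrative arithmetic for AI-RAN utilization: RAN is prioritized, and AI
# workloads soak up whatever headroom the RAN leaves (and are preempted at
# RAN peaks). The 0.33 figure is the RAN-only utilization quoted above.

RAN_AVG_LOAD = 0.33  # typical server utilization with RAN-only workloads

def blended_utilization(ran_load: float, ai_demand: float) -> float:
    """Total utilization when spare capacity hosts AI work,
    without ever displacing the RAN workload."""
    headroom = 1.0 - ran_load
    return ran_load + min(ai_demand, headroom)

# RAN-only: utilization is just the RAN load itself (~33%).
print(f"RAN-only: {blended_utilization(RAN_AVG_LOAD, 0.0):.0%}")
# AI-RAN with ample AI demand: headroom is fully used (~100%).
print(f"AI-RAN:   {blended_utilization(RAN_AVG_LOAD, 1.0):.0%}")
# At a RAN peak, AI work is squeezed out but utilization stays high.
print(f"RAN peak: {blended_utilization(0.9, 1.0):.0%}")
```

The commercial question the rest of this piece raises is precisely whether that preemptible headroom can be packaged and sold, not whether the arithmetic works.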
Kanika Atri, senior director of telco marketing at Nvidia, writes in a blog: “From these scenarios, it is evident that AI-RAN is highly profitable as compared to RAN-only solutions, in both AI-heavy and RAN-heavy modes. In essence, AI-RAN transforms traditional RAN from a cost center to a profit center. The profitability per server improves with higher AI use. Even in RAN-only, AI-RAN infrastructure is more cost-efficient than custom RAN-only options.” Indeed, the whole AI-on-RAN concept was a hot topic of conversation at MWC in Barcelona in March.
Stephen Douglas at Spirent comments: “AI RAN has been around for a while for energy efficiency and spectrum management, and to optimize RAN behaviour. … But we are also starting to see these new variants, about AI in the RAN and AI on RAN – in terms of, say, using RIC applications to drive non-real-time network performance – to fine-tune behaviour or improve KPIs – or whether you could free up GPU capacity in future RAN systems for third-party apps; maybe in low-peak periods or smaller deployments.”
But while the logic looks good, the logistics are unclear. He says: “AI in RAN makes absolute sense for better energy efficiency and spectrum utilization, and so on. I am less convinced at the moment about this other concept – that if you build RAN on GPUs as well as CPUs, then you can rent space for other applications during idle RAN periods.” Douglas is not the only one to think this way. “It is a slightly imaginary concept,” responds Robert Curran at Appledore Research.
He goes on, raising questions more generally about the broader network-edge AI concept: “The idea is that if you can resell capacity, then maybe you can make a business out of it. A first generation of boxes is being built with clever software to manage combined telecom and non-telecom workloads. The problem is the monetization angle – to create a spot market for AI compute. The question is how much compute capacity is left over, and how it can be packaged up and monetized? And whether there is even any demand for it.
“Which is the same with the whole [edge angle]. Because Germany, say, can run the whole country with about four data centers – without any latency issues. So the idea that you need compute power very close to the customer [maybe] depends on your geography and use cases… Are companies really sitting around and waiting for this kind of pool of AI capacity – which has to be super cheap and might be taken away at any time because a base station wakes up and bumps you down the list? Are there applications of that nature? There may be, but it is not clear.”

Verizon on AI: ‘Who better to be the hub and highway – than the hub and highway?’

James Blackman
May 7, 2025
In sum – what you need to know
AI network evolution – Verizon is leveraging AI in three key areas – customer support, product personalization, and ecosystem building – to enhance network services, improve customer interactions, and automate operations for internal teams and external clients.
AI partnerships for growth – Verizon is forging partnerships around its infrastructure with major players like Meta, Nvidia, and Google Cloud, enabling businesses to deploy distributed AI workloads at scale and to optimise AI service delivery on fibre and 5G.
AI backbone networks – Verizon says its infrastructure – including networks and data centers – is critical for AI services. It aims to support the AI ecosystem with programmable networks and compute real estate, and to position itself as a key player in the sector.
There are three “buckets” for AI at Verizon, says Verizon – like for most telcos, by extension. “Each is further along in the company than the next – as far as maturity goes,” explains Steve Szabo, vice president for technology enablement in the technology solutions division at Verizon Business. Generally, these cover quite-new AI advances in quite-old ML practices, mostly covering internal functions like customer care and network maintenance. These will advance further with generative AI, and somewhere down the line with agentic AI, where service interactions for both external customers and internal engineers are automated and autonomous, to an extent – and inter-linked, as well.
Otherwise, Verizon Business is using AI to expose more dynamic product service features to its customers, for transparency and management of their airtime and devices, via application programming interfaces (APIs) and as-a-service portals. This work is also expanding to include native network features in its standalone 5G (5G SA) infrastructure as it is upgraded and advanced across the US – so services are scalable and configurable for enterprises “at the click of a button”. And just as with its customer support functions, generative and agentic AI will, over time, make its network services more responsive and powerful.
But the big AI play for operators right now is about how their network assets will map into this global infrastructure build-out to support as-yet unknowable AI services. Verizon is playing its part to construct the new AI ecosystem, says Szabo. In January, it announced a new suite of solutions and products called AI Connect (Verizon AI Connect) to enable businesses to deploy AI workloads at scale. In tandem, it signed a deal with US cloud hosting outfit Vultr to let space in its network infrastructure for Vultr to expand its compute footprint and GPU-as-a-service offer.
It has also announced an expansion to a deal with Meta “across network infrastructure… to build the AI ecosystem”, in some fashion. There is also new work with Nvidia to “reimagine” how GPU-based edge platforms will integrate into its private 5G deployments, as well as a project with Google Cloud around new AI solutions for network maintenance and anomaly detection. This is what the third “bucket” contains. Szabo says: “The strategy is clear – to partner with customers like Vultr, or tier-ones like Meta, so they handle the GPU-level stuff and we get them to where they need to go; and as our network programmability advances, for them to use more network services.”
This discussion about how Verizon Business is using AI to serve enterprise customers is set out below, a bucket at a time. But a quick word from Szabo before we get into all of that, which sets out his company’s position – and tells a story for the whole operator community about how to get a grip on AI to improve internal operations and also to play in the external ecosystem.
Szabo says: “These things will happen over a period of time. Everybody has got work to do; some are more advanced than others. But service providers have built all this stuff already. Our challenge is getting it in a way that the new world is used to consuming – in a more digital-fingertips way. That is the challenge for the telcos: how to take decades of network infrastructure and automate and configure it so it is available at scale. That is where we are making targeted investments. But from a horizontal perspective, I mean, who better to be the hub and the highway – than the hub and the highway? These partnerships are great for both sides. It allows them to tap into what we have, and it forces us to level up, and modernize.”
AI in customer service support
The first of these “buckets” bundles back-end AI to help with customer support functions – “the stuff you’re used to hearing about in this space,” says Szabo. The goal, as always with automation, is to improve operational efficiencies; the context, as always with service, is to reduce “friction” alongside – in this case, by placating disgruntled customers, calling in because they have issues with networks, systems, devices, applications, and so on. “Any friction we can take out of the system is helpful. And these types of things are really important,” he says.
The big change, recent and ongoing, with AI in telecoms-based business support systems (BSS), covering all the software that manages customer-facing activities, is to equip the much-maligned interactive voice response (IVR) system with new brain power – so calls are not just directed to the right department, but to the right person within the department. Szabo explains: “What we’ve been able to do [with AI] is to auto-route their problem to the rep with the most experience in that area. It is matchmaking, if you will – between a service issue and a service rep.”
The AI crawls an expanding database of service calls, which records the nature of each inquiry, along with the source of the problem and the author of the solution. “It is not secret stuff; but it has helped tremendously,” says Szabo. As well, Verizon is using AI to sift information – product manuals, software instructions, network alarms, adjacent BSS and OSS systems – so staff can solve issues faster. Previously, even with the match-making, it was down to staff to know and find information in the back-end system (“on five or six screens”), and to further liaise with domain experts.
Szabo explains: “With AI, we can level-up the rep to quickly access and digest information, and to deliver answers – because the AI can siphon through tens-of-thousands of pages of information very quickly. We are seeing a very high rate, in the 90-plus percentile, in terms of the accuracy of the responses.” So how should we grade the AI in these service enhancements? “It is between early AI and generative AI,” he responds. “We are figuring out how to use agentic AI to support self-help work for customers, where the AI takes steps [by itself, prompted by the customer].”
He adds: “At present, we can give an answer quickly; agentic AI will give customers more power to execute steps and functions – if they get an answer, say, and choose to take an action as a result, without a care agent getting [involved]. Really, the stuff we’re working on now is more about the execution models in the agentic space.”
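The “matchmaking” Szabo describes – pairing a service issue with the rep who has resolved the most similar calls – can be sketched very simply. The sketch below is purely illustrative: the rep names, topics, and counting approach are assumptions for the example, not details of Verizon’s actual system, which presumably uses far richer signals than topic counts.

```python
# Hypothetical sketch of issue-to-rep "matchmaking", assuming a history of
# resolved calls tagged (rep, topic). All names and data are illustrative.
from collections import Counter

def build_expertise(call_history):
    """Count how many calls of each topic every rep has resolved."""
    expertise = {}
    for rep, topic in call_history:
        expertise.setdefault(rep, Counter())[topic] += 1
    return expertise

def route_issue(topic, expertise):
    """Route a new issue to the rep with the most experience in that topic."""
    return max(expertise, key=lambda rep: expertise[rep][topic])

history = [
    ("alice", "billing"), ("alice", "billing"), ("alice", "5g"),
    ("bob", "5g"), ("bob", "5g"), ("bob", "routers"),
]
experts = build_expertise(history)
print(route_issue("5g", experts))       # bob: most 5G resolutions
print(route_issue("billing", experts))  # alice: most billing resolutions
```

The same lookup pattern generalises: swap the topic counter for embeddings of call transcripts and the `max` for a nearest-neighbour search, and you get the retrieval-style assistance over manuals and alarms described above.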
AI in product personalization
The second area of focus for Verizon Business is the personalization of its product suite to meet rising demand – fast mutating into expectation – from data-minded businesses, increasingly born-digital, for transparency and control over their digital services. “We have a good beat on the types of things to level-up customer [service support]; what we need to do is to extend those capabilities into the product suite, itself. They expect low-touch AI management and personalized products, so they don’t have to call up every time they want to make checks and changes,” says Szabo.
As such, the company has been embedding AI into its as-a-service portals, so enterprises can understand and troubleshoot the performance of their networks, routers, devices, and apps. The aim is to bring two-way dynamism to service management and security, and to draw on emerging features in the latest ‘standalone’ version of 5G (5G SA). This is where “lots of time” has gone over the last 24 months, he says.
“We are leveling-up our infrastructure to provide these tools and capabilities and insights, whether that is through APIs, where they can pull [the network] into their ecosystem to use however they want, or through our own management and configuration tools, where they can get insights and control over their devices and networks. Because they don’t just want us to proactively let them know; they want to see it for themselves, and have their own eyes on it. They want complete transparency and visibility, and AI lets us give them that.”
He has an example about this kind of AI chain-of-thought around a sudden data spike on a device in the network. The firm is offering “real-time rating and usage with AI”, he explains, where some root-level AI algorithm prompts actions in response to ML traffic alarms in the network. “If a device goes rogue, and constantly pings the network and racks up a huge bill, then maybe it has been hacked, or maybe it is just on the wrong plan. But now they can evaluate it, right away – whereas previously they wouldn’t have the information until they had their bill,” he says.
So how should this be graded as an AI exercise? Szabo responds: “It is early-stage AI insofar as you can take a lot of information, and deliver visibility and insights [about it]. It is not a one-for-one,” he says, suggesting its value is more profound, and scalable. The point is that it is the same kind of pattern as with customer support – where Verizon’s back-end systems are more accessible, because AI is simplifying information and rendering insights. In this case, they are literally more accessible, because they are being opened to enterprises via APIs.
It is different from just rule-based big data analytics, which has underpinned most service management platforms until now. Szabo responds: “This correlates [responses across] a variety of data sets – network usage and performance, location management, cyber security and cyber threats – to identify potential issues. The network is built, and the first step is to proactively get tools and insights into the hands of customers – to correlate data as AI. The next step is for customers to use AI to automatically change everything on the network – when they want.”
He adds: “But they don’t have those capabilities yet; those are things we’re investing in now.”
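Szabo’s rogue-device example – a device that suddenly “pings the network and racks up a huge bill” – amounts to comparing current usage against the device’s own recent baseline, in near real time rather than at end-of-month billing. Here is a minimal sketch of that idea; the thresholds, data, and three-sigma rule are illustrative assumptions, not Verizon’s actual algorithm.

```python
# Hypothetical sketch of the rogue-device check described above: flag a device
# whose current usage sits far above its own recent baseline.
from statistics import mean, stdev

def is_rogue(history, current, sigma=3.0):
    """Flag usage more than `sigma` standard deviations above the baseline."""
    baseline, spread = mean(history), stdev(history)
    return current > baseline + sigma * max(spread, 1e-9)

# A metered IoT device normally moves ~10 MB/day; today it moved 500 MB.
daily_mb = [9.8, 10.2, 11.0, 9.5, 10.7, 10.1, 9.9]
print(is_rogue(daily_mb, 500))   # True: hacked, or on the wrong plan
print(is_rogue(daily_mb, 11.5))  # False: within normal variation
```

The step Szabo describes as still in the works – letting the customer’s AI act on the flag automatically, say by suspending the SIM or switching plans – would hang off the `True` branch.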
AI in ecosystem building
Which gets us, quickly, into an interesting discussion about the telco’s role in the broader AI ecosystem, and how to make a virtue of its distributed infrastructure, variously incorporating national and regional data centres, metro-edge multi-access edge compute (MEC) sites, and even cell towers and radio equipment. Plus both its fibre backhaul and cellular access networks, of course – as strung between these properties. This is the third “bucket”, as described by Verizon Business, and it pitches the telco talk about AI as-a-service into the digital gold rush on AI infrastructure.
In the end, it is about the interplay between this industry, building powerful AI networks, and the big beasts of AI, building powerful AI engines, and how they connect to deliver powerful AI services – and where and how business is shared between the lines. “Everyone is doing things in this space, but everyone knows they can’t do everything on their own. So the question is: how does everybody work together without stepping on each other’s toes? That is the issue – that everyone is encroaching on everyone else’s revenue streams,” says Szabo.
“Which just drags out our ability to quickly deliver what the market wants; I mean, that is my opinion”, he adds, and he goes on to highlight the work of various of these protagonists to draw the lines, and collaborate on common goals. He flags his firm’s own work to make its data centre assets available for companies to deploy AI hardware, and its new AI Connect suite of products and solutions to “enable businesses to deploy AI workloads at scale”. Google Cloud and Meta are taking network capacity for AI workloads; Verizon claims a $1 billion sales funnel for its AI Connect offerings.
It has just signed with cloud hosting firm Vultr, which is to expand its compute footprint and GPU-as-a-service offer via Verizon’s connectivity infrastructure in the US, and to “hook directly” into its fibre network. Szabo says: “It can use its experience [with] customers, and we will power everything on the backend to get it to the co/los, cloud sites, edges. So we become a crucial component for the transport. We will level-up our programmability so customers can go from 1Gbps to 10Gbps – to 100Gbps, if they need it; they can build the pipes in real time to wherever they want.”
He zooms out: “We bring a lot to the table – if you look at our assets and land; our space, power, and cooling; our networks. We have a ton of traction; our funnel is big in [shared and dedicated] fibre – just to transport these AI workloads everywhere. It is pretty remarkable; the requests are coming our way.” There is an argument to say operators are lucky, to an extent; that AI workloads require distributed cloud systems for purposes of efficiency and performance, and that telecoms networks already go to most places, and distribute compute power along the way.
In ways, as well, the opportunity for operators to rent space in their networks and data centres to the rest of the AI ecosystem builds on their original vision for digital services to be sprung from MEC infrastructure. Certainly, there are lessons from the MEC-era, says Szabo – and also steps that should not be repeated. “MEC was pretty far ahead of its time. The device ecosystem and other things were just not ready. But we know the edge is super critical, and people now understand the values and the outcomes better. But the use cases are different – MEC versus this.
“If you just look at MEC as an edge-play to get closer to the customer and the outcome, then there are similarities. But how it is commercialised, and the types of players is very different. Our ability to partner will likely involve us sharing our space with other companies. But it is not just about hyperscalers and telcos; folks can do different things with GPUs at the edge… A lot of that stuff is still in play – which is why, for us, the meat and potatoes is the network – and the space, power, and cooling – because you need those things just to activate where we go next.”

AI in telecoms – what to know and what to ask (see you tomorrow!)

James Blackman
April 7, 2025
In sum – what you need to know
AI for internal telco efficiencies – AI is being used so successfully in customer care and network maintenance that service agents are solving most problems without raising tickets, and network engineers are reclaiming up to 70 percent of their time.
New roles in the AI ecosystem – telcos are looking outwards to rent their networks and properties for AI transport and workloads; they also propose a new role as AI ‘middle-man’ brokers between AI service providers and SMEs.
AI issues with diversity and scale – with the AI buzz, telcos are looking to scale AI to more numerous and varied use cases; but problems with data organisation, oversight, and software interoperability persist.
Last chance to sign up for the RCR Wireless webinar tomorrow (Tuesday April 8) on AI in telecoms – about ‘supporting AI, using AI’, and getting to pervasive intelligence in telecoms networks. You can do that here. In the meantime and ahead of the session, here are three broad talking points about AI in telecoms – from interviews separately with ABI Research, Red Hat, Spirent Communications, TUPL, and Verizon Business, who will all be on hand during the session tomorrow to extend discussion of the same, plus more. Should be good; don’t miss it. Keep your eyes peeled as well for the attendant editorial report, out late April.
1 | HOW IT’S GOING – OPERATIONAL AUTOMATION
If you want to know where the telecoms industry is with its deployment and usage of different AI disciplines, then you could do worse (and hardly better, actually) than to ask Spain-based telecoms-AI provider TUPL, which has been working with T-Mobile in the US to bring some form of intuitive AI automation to customer care (with its AI Care product) and network maintenance (Network Advisor) since about 2017. “We have never looked back,” says Petri Hautakangas, chief executive at the firm. More recently, it has sold an AI service to Deutsche Telekom in Germany to optimise network energy usage across its international op-co footprint.
He says: “We do anything related to the customer experience – which is really about technical care, rather than standard BSS-style vendor offers around price plans and products. And we have grown the functionality of the product and our stake in the portfolio over the last seven or eight years. All our R&D is to make the system low-code/no-code so we can expand use cases and build use cases, based on a similar flow. Which is how we developed [our network product], which automates the repetitive tasks of engineers handling the radio, core, and transport networks – and our energy savings product, as well.”
Its flagship product, AI Care, initially ran on six data feeds (“low-hanging fruit categories”) within the network, he says; it now uses more than 40. “Nothing is static,” he says. “An ML system doesn’t just run by itself, without oversight and new inputs. Engineers or managers, even VPs, always have new data streams and new use cases – to enhance models, improve granularity, deliver new root causes and decisions. And we always say, ‘yep; makes sense – let’s do it’. So it gets better every month, every quarter, every year.” TUPL is enabling ‘ticket avoidance’, so customer queries do not need to be passed to specialist departments, in 95 percent of cases at T-Mobile US.
Which is good for efficiency – and also for placating disgruntled customers, and improving promoter scores and churn rates. Its spin-off product, Network Advisor, reduces the time it takes for engineers to resolve issues by 30-40 percent, claims Hautakangas – “almost from the get-go, based on the first low-hanging fruit”. Time savings rapidly jump to 50-70 percent, he reckons, as the models are tuned to the network environment. These are major operational jumps, then – hard to calculate, definitively, as they tend to be used to augment reduced engineering teams. Its energy optimisation tool, which orchestrates proprietary systems from the network vendors, is newer.
2 | WHERE IT’S GOING – ECOSYSTEM MONETIZATION
And so AI, in some form at least, is working hard and working well in telecoms. If you want a view from the sharp-end, directly from an operator, then Verizon Business has a good line on it – as talked-up and written-down in a separate post last week, about its service automation, product personalisation, and new adventures in compute and network rentals for the broader AI market. So too does test company Spirent Communications, which responds to a question about whether the RCR webinar title (‘supporting AI, using AI’) is in fact the wrong way around – on the grounds AI is for internal usage first – by saying the industry is looking outwards, at last.
“I get you totally, and I would have agreed until Barcelona,” says Stephen Douglas, head of market strategy at the firm, reflecting on how the narrative around “AI-networks-for-AI” unfolded at MWC last month. “There are a couple of dimensions,” he says. “For some, it is about offering a sovereign or private AI capability using MEC infrastructure – to optimise traffic for AI workloads via a regional breakout point. On top, they are looking to partner with big hyperscaler or specialist service partners to offer GPU as-a-service as well. But a number of operators are also looking to be a broker or federator of AI for small and mid-sized enterprises – which don’t often have resources to understand AI.”
These strategies will be discussed in the upcoming report; the logic, reasons Douglas, is that smaller firms (SMEs), often customers already, want a familiar service provider to somehow abstract complexity from a modish new tech provision that is designed to bring automation and intelligence, and therefore simplicity. Douglas comments: “They are now saying: we will connect you and we will also host the right language model – whether that means the big foundation models or smaller domain models, developed with partners. And they will bundle it as a service with connectivity. Which is quite a unique role, and interesting because big enterprises are not the target.”
It takes the Verizon model, discussed here, even further – by making the same use of uniquely featured and distributed network assets, and offering a unique go-between service for a needy and enormous customer segment. “The beauty is that it utilises network capabilities to cope with AI traffic behaviours, and it utilises network assets to host AI workloads, and it also sets operators as the AI customer touchpoint. It is a valid play, and a really important one – because if they don’t make a move now, they will discover in five years they are just a dumb pipe again, with somebody else just running over the top,” he says.
3 | WHAT TO LOOK FOR – DIVERSIFICATION & INTEROPERABILITY
Douglas notes how the telecoms industry’s new successes with AI have come through familiarity and focus. “It feels like business as usual now. Operators have gone through that first iteration, and narrowed down 10,000 things they could do into a blueprint of four or five they will do, which have a real tangible benefit.” To a degree, this explains the momentum TUPL is finding with T-Mobile in the US, and various others, unnamed. Douglas even uses the same terminology. “It’s the low hanging fruit,” he says; “things they know they can get immediate value from.” But for ABI Research, also on the webinar, progress will come as use cases multiply, and the industry’s reach extends.
This is the case, especially, as AI is given voice with generative AI, increasingly, and agency with agentic AI, provisionally. “There are so many use cases,” says Nelson Englert-Yang, industry analyst for strategic technologies at the analyst firm; he has a list of about a hundred of them, he says. But they are of-a-type – plucked by carriers as “low-hanging fruit” for OSS/BSS applications, he says – and need to be multiplied across functions. “I want to see greater diversification, and also willingness to try agentic AI as it develops. We haven’t seen much commercial activity around that yet, even though it has been developed,” he says.
The pace of development is outpacing the rate of adoption, clearly. It’s brand new technology, effectively – practically every quarter. “The challenge, as always, is with data – about where it comes from, how it’s cleaned and organised. Because telco data is messy – and useless if it’s messy.” There are teething issues anyway, then, which are magnified as AI is put to work on critical infrastructure. The core network, for example, is “mostly hands-off”, he says. As it stands, AI is “mostly concentrated” on OSS/BSS functions, plus “some higher level apps” and, interestingly, around infrastructure design and optimisation. “It is slow and gradual – because of the critical nature of telecoms.”
If anyone knows about the complexity to deliver secure and reliable AI in critical networks, it is Red Hat, surely – on the grounds the whole principle of open-source software is to share and collaborate in the name of simplicity, scalability, and innovation. “The industry wants to scale AI, suddenly, and so the problems start,” says Fatih Nar, chief architect for application platform solutions at the firm. “That is where the conversation usually gets interesting – because things are easier as point solutions, and harder when they all have to talk together.” Red Hat has just published an excellent article on Medium, which gets into everything in this article; the webinar tomorrow will go further, as well.
Watch the RCR webinar on AI in telecoms – supporting AI, using AI, and getting to pervasive intelligence in telecoms – live and on-demand here.