The frenzy to adopt AI often overshadows the need to evolve one’s testing approach, but autonomy cannot be achieved without rethinking testing
Business executives have always been quick to acquire and deploy the latest technologies in the network to make it faster, more performant — and ultimately autonomous. But adoption of AI at a giddy pace has brought up new challenges ranging from operational disruptions to security breaches to costly downtimes, leaving operators scrambling.
While there is no denying that AI capabilities can make a network more intelligent and measurably self-driven, eventually enabling telecom providers to deliver a low-touch, low-friction experience to users, it takes a tangible set of steps to integrate AI effectively without shaking things up. That begins with testing.
“There’s a major journey going on today in our networks. This is a journey towards autonomous networks through the use of AI — and this requires different levels of testing and autonomy,” said Stephen Douglas, head of market strategy at Spirent, during a webinar with RCR Wireless News.
Douglas says that integration of all AI applications must be guided by careful and routine testing — a process that begins at the design phase, and continues through the lifecycle of the applications.
TM Forum’s autonomous network (AN) framework defines six levels of network automation, from Level 0 to Level 5, that can be broadly categorized as manual, guided, and autonomous. When applied to network testing, these levels trace a roadmap toward a fully autonomous network for telcos.
Level 0 is where things are completely manual. Here administrators sift through network analytics and conduct tests by hand. There is no AI or automation involved whatsoever, and humans are in charge.
Level 1 introduces a low level of autonomy, using basic machine learning (ML) to accomplish select testing tasks that are repetitive in nature. Although assisted, this level is still mostly human-driven.
Level 2 is where predictive AI comes into play for the first time. However, its use is very limited, and therefore, only provides partial autonomy. With humans as the drivers, this level sees use of AI in discrete cases, like continuous testing in specific sub-domains with static, closed loops.
Level 3 moves up from a part-human, part-autonomous state to guided automation, driven by a combination of predictive and generative AI capabilities. GenAI is used to perform continuous testing within a given domain in dynamic, closed loops. Required human intervention is minimal at this conditional autonomous level, although the need for supervision is still substantial.
Level 4, termed high autonomy, uses closed-loop automation for select cases. Continuous testing is carried out across domains with dynamic, closed loops, requiring very little human supervision.
Level 5, or fully autonomous, represents the ultimate goal for CSPs. This network anticipates the needs of operators and users, and self-configures in real time to deliver optimal performance. Testing capabilities here are self-adapting and operate across domains and third parties, requiring zero intervention or supervision.
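One way to keep the six-level ladder straight is as a small lookup table. The sketch below is purely illustrative: the level names and field values paraphrase the descriptions above, not TM Forum's formal definitions, and the `level_for` helper is our own construct.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AutonomyLevel:
    level: int
    name: str          # shorthand label, paraphrased from the article
    testing: str       # what testing looks like at this level
    human_role: str    # how much humans are still involved

# Illustrative summary of the L0-L5 ladder described above.
AN_LEVELS = [
    AutonomyLevel(0, "Manual", "hand-run tests, manual analytics review", "fully in charge"),
    AutonomyLevel(1, "Assisted", "basic ML for select repetitive test tasks", "mostly human-driven"),
    AutonomyLevel(2, "Partial", "predictive AI, static closed loops in specific sub-domains", "human as driver"),
    AutonomyLevel(3, "Conditional", "GenAI-driven continuous testing, dynamic closed loops within a domain", "minimal intervention, substantial supervision"),
    AutonomyLevel(4, "High", "continuous testing across domains, dynamic closed loops", "very little supervision"),
    AutonomyLevel(5, "Full", "self-adapting testing across domains and third parties", "none"),
]

def level_for(n: int) -> AutonomyLevel:
    """Return the ladder entry for a given level number (0-5)."""
    if not 0 <= n <= 5:
        raise ValueError("autonomous network levels run from 0 to 5")
    return AN_LEVELS[n]
```

A structure like this makes it easy to check, for any given network domain, which rung its current testing practice actually corresponds to before planning the next step up.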
“Though it is still early days for AI we are already seeing substantial impacts to telecom network infrastructure with every sign pointing to continued high velocity change,” Douglas wrote in a blog.
At present, many telecom providers sit on the lower rungs of autonomy, but the industry is trending toward complete autonomy, which will require progressive use of AI-enabled testing to achieve, he said.
When rigorous testing is embedded throughout a provider's AI adoption journey, it enables clearer evaluation, validation, and monitoring from design to production, a strategy that is key to building trust and transparency in AI systems. However, it's important to remember that its success is ultimately contingent on how well-aligned testing is with your AI maturity model.