Part 2: Where test and measurement companies are actually implementing AI


In a two-part series, RCR pulls back the curtain on how test and measurement vendors are using AI within their business and product lines

With telcos bringing AI into the mainstream and adoption picking up in all parts of the network, it is timely to look at how test and measurement vendors, which usually sit on the other side of the line supporting telcos on their AI journey, are themselves using the technology to improve their processes and better serve their customers. In this two-part series, we examine how AI is transforming testing from the inside out.

Conversations around AI implementation in the T&M sector tend to focus on reducing process overhead and cutting turnaround times. In our previous conversations, we learned that companies are using AI to iron out wrinkles in test processes, reduce work for test engineers, and achieve higher accuracy in test results.

Additionally, we heard from several vendors that they are actively using AI to upgrade their wares to make the test experience less technical and more intuitive for customers.

Sophie Legault, senior director of product management at EXFO, said that her company is infusing AI into its solutions to reduce the burden for customers. “We embed AI directly into our test and measurement solutions and software platforms. This helps our customers operate more efficiently while managing increasing network and data center complexity.”

EXFO is also using AI to lower the barrier to entry for users. “By applying AI to test automation, analytics, and guided workflows, we enable users to execute complex validation tasks with greater accuracy and consistency — regardless of individual skill level,” she said. This helps close knowledge gaps between different personas, allowing them to step into roles and responsibilities formerly reserved for specialists and experienced engineers.

The company is also using AI models to extract intelligence from real-time measurement data through correlation. “Our customers benefit by gaining valuable insights from correlation across large volumes of optical and transport measurements,” she added.

Sameh Yamany, CTO and chief AI officer at Viavi, said that a notable use of AI in T&M relates to the rise of AI-native 5G and 6G architectures. As telcos integrate AI into the core of cellular network architectures, they need predictive intelligence to keep the network running smoothly on auto-pilot.

“This requires the use of AI-powered RAN [radio access network] scenario generators to build high-fidelity digital twins that enable the development and validation of algorithms for the xApps and rApps in these architectures,” Yamany noted. Both xApps and rApps are automation software used for operational efficiency of the RAN.

“We are heavily involved in a number of 6G city-scale network projects across the U.S., Asia, and the EU. These projects seek to more effectively model the effects of network conditions on application performance and the effects of implementing FR3 bands, as well as how to minimize RAN power consumption without affecting QoE [quality of experience],” he added.

Elsewhere, Viavi is using similar approaches to power AIOps and autonomous network management systems, such as dark network operations centers (NOCs), to better manage enterprise and AI network security threats.

In AI model training, garbage in means garbage out. To help avoid that, Viavi serves its customers AI-ready, governed, and contextualized assets for model training, drawing on instruments, sensors, probes, and assurance platforms as authoritative sources across the optical, wireless, RF, IP, timing, and sensing domains.

The inclusion of AI in T&M processes is a natural progression that is more about keeping pace with and sustaining AI innovation, and less about embracing a hot new tech. “The rise of AI, or more specifically the bursty east-west traffic patterns seen in training models, means traditional T&M throughput tests are not enough. We now need to look at how the network handles the traffic jams that occur when thousands of GPUs try to talk to each other at the exact same millisecond,” Yamany said.

“Strategies need to incorporate fabric-aware validation methods that take these factors into account, and must also factor in congestion cascades that build as each failure directly triggers the next, and tail-latency sensitivity to manage the straggler packets that can cause an entire AI training session to grind to a halt,” he said. 
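The tail-latency point is worth making concrete. The toy simulation below (an illustration, not Viavi's method; all numbers are invented) shows why: in a synchronized all-reduce, every training step waits for the slowest of N workers, so step time is governed by the latency tail rather than the average, and even a rare straggler dominates once thousands of GPUs synchronize.

```python
import random
import statistics

# Hypothetical numbers for illustration only.
random.seed(42)
N_WORKERS = 1024   # GPUs synchronizing each step
N_STEPS = 200      # training steps to simulate

def worker_latency_ms():
    """Sample one worker's step latency: ~10 ms typically, with a
    1% chance of a 100 ms straggler (e.g. a congested fabric link)."""
    if random.random() < 0.01:
        return 100.0
    return random.gauss(10.0, 1.0)

step_times = []
all_latencies = []
for _ in range(N_STEPS):
    latencies = [worker_latency_ms() for _ in range(N_WORKERS)]
    all_latencies.extend(latencies)
    # Synchronization barrier: the step finishes only when the
    # slowest worker does.
    step_times.append(max(latencies))

mean_worker = statistics.mean(all_latencies)
mean_step = statistics.mean(step_times)
print(f"mean worker latency: {mean_worker:.1f} ms")
print(f"mean step time:      {mean_step:.1f} ms")
```

With 1,024 workers and a 1% straggler rate, the chance that a step has no straggler is about 0.99^1024, i.e. nearly zero, so almost every step runs at straggler speed (~100 ms) even though the mean worker latency is only ~11 ms. This is why an average-throughput test can look healthy while tail-latency-sensitive AI training grinds to a halt.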
