Spirent said the new feature will allow operators to test clusters’ ability to support 5G cloud-native network functions pre-deployment
In sum – what to know:
A new benchmarking function — Spirent just added an infrastructure benchmarking capability to the larger Landslide test application.
Repeatable test workflows — The feature provides test workflows that can be used to benchmark existing and new hardware.
Integrated insights — Metrics are exposed through Grafana dashboards for continuous visibility.
Service providers are moving from vertically siloed architectures to horizontal ones. This new multi-vendor architecture unlocks significant TCO savings and provides an on-ramp for deploying new and advanced cloud-native network functions (CNFs). However, the reliability and resiliency of the underlying infrastructure is paramount to seamless CNF deployment, as teams often face infrastructure inefficiencies that choke application performance in production.
“Our customers moving into these horizontal stacks are saying, ‘How do I know that there’s enough CPU available when I go to deploy the CNFs? How do I know it’s efficient and not wasting resources?’,” Bill Clark, principal product manager, 5G Cloud-Native Deployment Validation at Spirent, said in an interview with RCRTech, noting the growing demand for deeper visibility into infrastructure resources.
One way to avert infrastructure bottlenecks is to benchmark the infrastructure prior to migrating the workloads. Spirent recently added a new benchmarking solution to its Landslide suite of products that provides an easy way to do this.
The new cloud-native infrastructure benchmarking feature helps operators establish baselines across four dimensions – CPU, memory, storage, and network. Here's how it works: the feature offers prebuilt drag-and-drop benchmarking test cases for testing and quantifying infrastructure resources, while the Landslide Cloud Engine simulates workloads and traffic profiles to benchmark system behavior under real-world scenarios.
The test results include granular metrics – memory utilization, storage I/O, network bandwidth, and CPU throughput – that shed light on how the infrastructure, or specific cloud-based resources, will behave and how efficiently they will perform under a given condition. The tests also capture utilization and allocation-efficiency data from the Kubernetes layer, giving infrastructure teams the information they need to rightsize resources on their end.
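The kind of rightsizing decision this data enables can be sketched as follows. This is a hypothetical illustration of the allocation-efficiency arithmetic – the field names, pod names, and sample values are invented for the example and do not reflect Landslide's actual output schema:

```python
# Hypothetical sketch: computing allocation efficiency from the kind of
# per-pod metrics a Kubernetes-layer benchmark might report.
# Names and values here are illustrative, not Landslide's schema.

def allocation_efficiency(requested_mib: float, used_mib: float) -> float:
    """Fraction of a requested resource actually consumed under load."""
    if requested_mib <= 0:
        raise ValueError("requested amount must be positive")
    return used_mib / requested_mib

pods = [
    {"name": "upf-0", "mem_requested_mib": 4096, "mem_used_mib": 1024},
    {"name": "smf-0", "mem_requested_mib": 2048, "mem_used_mib": 1843},
]

for pod in pods:
    eff = allocation_efficiency(pod["mem_requested_mib"], pod["mem_used_mib"])
    # Pods using well under half of what they requested are rightsizing candidates.
    if eff < 0.5:
        print(f"{pod['name']}: {eff:.0%} of requested memory used -- consider lowering requests")
```

A pod requesting 4 GiB but using 1 GiB under benchmark load scores 25% efficiency, flagging over-allocated requests that waste cluster capacity.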
As noted earlier, operators can use the drag-and-drop, out-of-the-box workflows, or manually configure scenarios to emulate their precise operating conditions and service expectations for accurate testing.
One of the key highlights is repeatable automation. Users can run the same tests across different resources with just a single click. “You could have as many or as few as you want, but that test is automated,” Clark said.
But what if hardware systems get replaced over time? The automation remains functional, Clark said: the same workflows can be triggered to run benchmarking tests on new hardware systems as they are onboarded.
The benchmarking tests are integrated through APIs. “So when there’s a new hardware installed, it can trigger through the pipeline, automatically doing the specific benchmarking test — and automatically tell either through the APIs or through UI, what is the efficiency of this specific server,” Clark explained.
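The trigger-on-onboarding flow Clark describes might look roughly like this in a pipeline script. This is a sketch under stated assumptions – the job structure, suite names, and the `submit` hook are hypothetical stand-ins, not Spirent's actual API:

```python
# Hypothetical sketch of an onboarding hook: when the pipeline detects a new
# server, it assembles one benchmark job per baseline dimension and submits
# each one. Field names and the submit interface are illustrative only.

BASELINE_SUITES = ("cpu", "memory", "storage", "network")

def build_benchmark_jobs(server_id: str) -> list[dict]:
    """Return one benchmark job per baseline dimension for a new server."""
    return [{"server": server_id, "suite": suite} for suite in BASELINE_SUITES]

def on_server_onboarded(server_id: str, submit) -> list[str]:
    """Submit all baseline jobs; in practice `submit` would POST to a test API."""
    return [submit(job) for job in build_benchmark_jobs(server_id)]

# Stand-in for the real API call; a pipeline would POST and poll for results.
def fake_submit(job: dict) -> str:
    return f"queued {job['suite']} benchmark for {job['server']}"

for line in on_server_onboarded("rack7-node3", fake_submit):
    print(line)
```

Passing the submitter as a parameter keeps the trigger logic testable without network access, which is also how such a hook would typically be unit-tested inside a CI/CD pipeline.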
The solution integrates with the CI/CD pipeline, and exposes data through Grafana dashboards.
Clark said trials are underway with Tier-1 operators, and the solution is now generally available on Landslide.
Spirent’s new benchmarking feature gives infrastructure operators and CNF teams a shared platform to measure the performance and readiness of the infrastructure – and to proactively address performance issues, outages, and deployment delays throughout their cloud-native transformation journeys. The solution expands Spirent’s infrastructure validation portfolio for cloud-native 5G networks, making it particularly appealing to operators undergoing this transformation. Overall, the benchmarking capability is a timely addition, as the rest of the industry moves toward embedding test and observability into cloud-native workflows.