Sometimes, as a tester, you might feel like your users are experiencing a completely different system from the one that you’re testing and deploying. You might, for instance, devote countless person-hours and resources to finding areas in your network where improvements to the code could lead to better KPIs for Quality of Service (QoS), reducing jitter and packet loss substantially, only to find that your users haven’t noticed a difference. You might even find that your users experience decay in network quality, despite the objective metrics that say otherwise. What gives?
Simply put, a given user’s experience on your network will never correlate perfectly with objective quality metrics. This can happen for any number of reasons: on new 5G services, differences in latency may be too minute for most humans to notice. By the same token, users will weight small issues with some services (e.g., streaming video or video conferencing) disproportionately relative to others. Whether their perceptions are rational or not, those users are still deciding whether to renew your services at the end of every month, so churn prevention depends on finding a way to keep them happy. How can network operators and testers pull off this feat? One option is to incorporate Quality of Experience (QoE) metrics into your standard suite of KPIs.
Quality of Service (QoS) vs. Quality of Experience (QoE)
How do QoS and QoE differ, and why does this difference matter? QoS tends to involve more traditional, objective metrics that testers can easily measure when extracting data from the System Under Test (SUT). These include jitter, latency, packet loss, and other parameters, all of which are critical indicators of your service’s health.
Not only do these metrics give you the objective facts about what users will experience when they try to access services on your network, but they also tell you whether you have any potential compliance issues to worry about (e.g., whether your 5G speeds fall within the 3GPP’s official range, or whether you’re completing fallbacks and handovers without exceeding the call-drop rates that regulators recommend).
With QoE, things get a bit fuzzier. Since QoE seeks to capture service quality from a subjective, user-focused point of view, you need to move beyond existing QoS metrics. To begin with, you might establish comparative metrics benchmarked against the numbers other service providers are reporting. These metrics need to be truly end-to-end, because many factors in the SUT may be playing a role: if a file download is slow, the end user doesn’t know (and often doesn’t care) whether the cause of the delay is at the server or in the carrier network; they only know that their service isn’t up to the standard of quality they’re paying for. From there, you can begin to develop mean opinion scores for how different users experience various facets of your service.
By combining such QoE estimates with technical QoS measures, you can build a better picture of how network performance translates into customer satisfaction. Since user experience data is such a reliable indicator of churn risk, cross-validating QoE estimates against network performance data lets you home in on the weak links in the service delivery chain and work towards better retention plans.
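Cross-validation of this sort can be as simple as correlating per-cell QoE scores with a QoS metric and flagging the weakest link. A minimal sketch; the cell names and measurements below are hypothetical illustration data, not real results:

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-cell data: QoE (mean opinion score) vs. QoS (packet loss %)
cells = ["cell-A", "cell-B", "cell-C", "cell-D"]
mos = [4.2, 3.1, 4.0, 2.5]
loss_pct = [0.2, 1.8, 0.4, 3.1]

r = pearson(mos, loss_pct)           # strongly negative: loss drags QoE down
weakest = min(zip(mos, cells))[1]    # cell with the lowest opinion score
print(f"MOS vs. packet loss: r = {r:.2f}; weakest link: {weakest}")
```

A strong negative correlation like this tells you the two views of the network agree; cells where QoE drops without a matching QoS signal are the ones worth a closer look.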
How POLQA can inform QoE measures
We alluded briefly above to some potential avenues for developing QoE metrics, but let’s dig a little deeper into one of the existing standards being put to use right now to sketch out QoE estimates: POLQA (Perceptual Objective Listening Quality Analysis). For audio QoE, it is the closest thing telco operators have to a globally agreed-upon benchmarking system.
The POLQA standard has been around since 2011, when the ITU-T standardized it as Recommendation P.863, and it uses a collaboratively designed algorithm to model your network’s audio quality as a mean opinion score (MOS). In this way, you can complement your existing QoS test data to identify potential service faults and address them in a way that speaks directly to users’ issues.
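POLQA itself is a licensed, perceptual algorithm that compares degraded speech against a reference signal, so it can’t be reproduced in a few lines. But the general idea of condensing impairments into a single mean opinion score can be illustrated with a simpler, openly published relative: the ITU-T G.107 E-model’s mapping from its transmission rating factor R to an estimated MOS:

```python
def r_to_mos(r):
    """Map an E-model R-factor to an estimated MOS (ITU-T G.107).

    R bundles impairments (delay, loss, codec distortion) into one number;
    this standard curve converts it to the familiar 1.0-4.5 opinion scale.
    """
    if r <= 0:
        return 1.0
    if r >= 100:
        return 4.5
    return 1 + 0.035 * r + r * (r - 60) * (100 - r) * 7e-6

# The default R of 93.2 for a clean narrowband call maps to roughly MOS 4.41
print(r_to_mos(93.2))
```

Whether the score comes from POLQA or an E-model estimate, the point is the same: a single MOS figure that you can trend, alarm on, and report alongside your QoS KPIs.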
The trick here is to incorporate these measurements seamlessly into your existing testing streams and reports. This requires an automation framework with the flexibility to integrate with other tools and cover a wide range of functionality in an agile way. For starters, you’ll need something that uses real, out-of-the-box mobile devices to orchestrate testing. You should also look for a solution that offers robust reporting and analytics capabilities, so that you can uncover hidden correlations in the data you capture.
QoE Beyond End-to-End
So far, we’ve talked about QoE in terms of end-to-end tests, but what about testing beyond end-to-end? For instance, once you’ve got your test environment automated, you can incorporate an Audio Matrix to begin getting more objective measurements for things like speech and audio quality that aren’t covered by traditional end-to-end tests.
The Audio Matrix is based on virtual sound cards and facilitates tone pattern identification, acoustic fingerprinting, and the POLQA speech quality algorithm using digital audio transport.
Why devote resources to something like this? These audio quality metrics give you the foundation you need to put user experience front and center in your tests. End-to-end tests like tone validation give you a simple pass-fail on whether sound is coming through; audio quality metrics go further, offering real insight into the things your users actually care about. By combining this with tools like trace captures and CDR analyses, you can develop a far more comprehensive picture of network functionality than was possible before. With more knowledge in hand, you can address issues in your network more quickly and effectively. By combining QoS and QoE metrics, it becomes possible to tackle the factors that most significantly drive churn.
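Tone validation of the kind mentioned above is commonly built on single-frequency detectors such as the Goertzel algorithm (the same technique used for DTMF detection). A minimal sketch of the pass-fail check, assuming the captured audio arrives as a mono list of PCM samples:

```python
import math

def goertzel_power(samples, sample_rate, target_freq):
    """Signal power at target_freq, computed via the Goertzel algorithm."""
    n = len(samples)
    k = round(n * target_freq / sample_rate)   # nearest DFT bin
    coeff = 2 * math.cos(2 * math.pi * k / n)
    s_prev = s_prev2 = 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2       # second-order IIR recurrence
        s_prev2, s_prev = s_prev, s
    return s_prev ** 2 + s_prev2 ** 2 - coeff * s_prev * s_prev2

# Pass-fail: is the 1 kHz test tone actually present in the capture?
# (synthetic 50 ms capture at 8 kHz, standing in for a real device recording)
rate = 8000
capture = [math.sin(2 * math.pi * 1000 * t / rate) for t in range(400)]
tone_present = goertzel_power(capture, rate, 1000) > 100 * goertzel_power(capture, rate, 2000)
print("tone validation:", "PASS" if tone_present else "FAIL")
```

The detector only answers “is the tone there?”; layering MOS-style quality metrics on top of the same capture is what turns that binary check into a genuine QoE measurement.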