5 Factors That Impact Telecom Test Quality

It’s no secret that test quality has a direct impact on quality of service, meaning that high-quality tests can and do correlate with telco operators’ ability to attract and retain customers. And yet, as telecommunications networks become more and more complex, maintaining high-quality tests for things like subscriber migrations, new network rollouts, device acceptance, etc. is more difficult and time-consuming than ever.

Obviously, testers need to find a way to maintain coverage and quality levels—even in the face of growing network complexity—but the path to doing so is not always clear.

For that reason, today we’ll be focusing on some of the most important factors that impact telco test quality. Hopefully, this will help you to identify some of the aspects within your own testing operations that could be improved upon, while suggesting potential paths forward for better positioning your test frameworks within the context of network quality and ROI. So, without further ado, let’s dive in!

1. Time Pressure

Telco testing is, of course, a technically complex process to begin with—and we’ll talk about those technical complexities in a bit. But for starters, it’s worth pointing out that one of the biggest hurdles to high test quality is simple time pressure.

If you don’t have enough time to design your test framework, establish relevant use cases, perform the actual service verification, and document your results within whatever timeline has been handed down, you’re going to have to either cut corners, spend money on outsourcing, rope in more engineers, or sacrifice test coverage.

Any of these can lead to problems down the road. After all, if you’re relying on manual tests, there’s not much you can do to speed up service verification without sacrificing quality.

2. Complexity

As if time pressure weren’t bad enough, the increasing complexity of the modern telco environment is actually decreasing the number of test cases that your average test engineer can perform in a given day.

As the number of legacy systems that require testing increases and new devices flood the market, the hurdles inherent in any given deadline are exacerbated. Not only that, but high levels of test coverage become harder to achieve—to say nothing of the added difficulties that stem from the complexity itself.

Your organization needs to keep amassing new knowledge about testing and best practices while internalizing the new standards that bodies like 3GPP are handing down to deal with new technology—no mean feat in an environment where engineers are often already spread too thin.

Unfortunately, many of the tactics that network operators use to improve coverage (e.g., outsourcing) speed things up at the expense of developing this knowledge base. When this happens, your test quality suddenly depends more on vendor quality than on in-house capabilities.

3. Resources/Expertise

This next factor is, in some ways, a synthesis of the first two. Your resources (whether that’s the number of devices and test engineers you can devote to a test or the accumulated domain knowledge within your organization) are ultimately the factors that determine whether a test will cover enough of the right areas to be worth your time and money while effectively verifying service.

To wit, time pressure affects test quality insofar as it impacts the time-bound resources at your disposal; complexity does the same based on your organization’s ability to understand and grapple with that complexity.

Why is this synthesis of ideas useful to you as you try to get a better handle on your test quality? Because it boils things down to a simple question: “Do I have the resources to efficiently and accurately verify service?”

If the answer is yes, then you’re on pace for high quality testing. If the answer is no, then you need to figure out how you can increase or stretch your existing resources to cover the testing demand. If you’re doing things by hand, this might mean looking into automation.

4. Virtual vs. Out-of-the-Box Devices

So far, we’ve been talking about things that could easily apply to end-to-end testing across multiple different fields. This one, however, is a little more specific to the telecom domain: are your tests being performed on the specific equipment that users will actually make use of out in the field (i.e. out-of-the-box devices), or are you using rooted, jailbroken, or simulated devices?

This can be a major determining factor in the accuracy of your tests and your ability to ensure network functionality. The more complex and varied telco networks get (and the more demanding your average customer becomes), the greater the likelihood that small differences between simulations and reality will mask real gaps in your functionality.

If you’re testing VoLTE across a handful of simulated Android devices, for instance, tiny differences in packet loss or latency between the simulated devices and real, end-user devices might lead testers to deem a call successful that, in the field, won’t meet customer standards.
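To make that gap concrete, here is a minimal sketch of the scenario above. The thresholds and metric values are purely illustrative (they are not figures from any standard), but they show how a call can pass against metrics measured on a simulated device while the same call, measured on a real handset, misses the target:

```python
# Hypothetical illustration: a call judged acceptable from simulated-device
# metrics can fail the same targets when measured on a real handset.

def call_acceptable(latency_ms: float, packet_loss_pct: float,
                    max_latency_ms: float, max_loss_pct: float) -> bool:
    """Return True if the measured call metrics meet the given targets."""
    return latency_ms <= max_latency_ms and packet_loss_pct <= max_loss_pct

# Metrics observed for the "same" VoLTE call on two kinds of device.
simulated = {"latency_ms": 140.0, "packet_loss_pct": 0.8}
real_device = {"latency_ms": 165.0, "packet_loss_pct": 1.4}

# Example targets (illustrative only).
TARGET_LATENCY_MS = 150.0
TARGET_LOSS_PCT = 1.0

print(call_acceptable(**simulated,
                      max_latency_ms=TARGET_LATENCY_MS,
                      max_loss_pct=TARGET_LOSS_PCT))    # → True
print(call_acceptable(**real_device,
                      max_latency_ms=TARGET_LATENCY_MS,
                      max_loss_pct=TARGET_LOSS_PCT))    # → False
```

The point isn’t the specific numbers; it’s that a verdict computed from simulated metrics can diverge from the field result even when the deltas look tiny.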

5. Use Case Design

Lastly, we get to use cases. How effectively are you defining use cases that require verification, how successfully are you scripting or otherwise orchestrating those use cases, and how many of the relevant use cases are you running through?

This is partially another way of saying that testing coverage is one of the biggest predictors of testing quality—but it’s also a suggestion that, given the new levels of complexity telcos are dealing with, a use case-based mindset is becoming increasingly crucial.
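One way to operationalize that use case-based mindset is to treat each verification scenario as data, so coverage becomes something you can actually measure rather than estimate. The sketch below is illustrative only; the names and structure are assumptions, not a real test framework:

```python
# A minimal sketch of use-case-driven coverage tracking: define the catalog
# of scenarios that need verification, then measure what a test run exercised.
from dataclasses import dataclass


@dataclass(frozen=True)
class UseCase:
    name: str
    service: str       # e.g. "VoLTE", "SMS", "data"
    steps: tuple       # ordered verification steps


def coverage(defined: list, executed: set) -> float:
    """Fraction of defined use cases that were actually run."""
    if not defined:
        return 0.0
    return sum(1 for uc in defined if uc.name in executed) / len(defined)


# Hypothetical catalog of use cases requiring verification.
catalog = [
    UseCase("volte_basic_call", "VoLTE", ("register", "dial", "verify_audio")),
    UseCase("sms_roaming", "SMS", ("attach_roaming", "send", "verify_receipt")),
    UseCase("data_handover", "data", ("start_session", "change_cell", "verify_continuity")),
]

ran = {"volte_basic_call", "sms_roaming"}
print(f"use-case coverage: {coverage(catalog, ran):.0%}")  # → use-case coverage: 67%
```

Even a toy registry like this makes the gap visible: the scenarios you defined but never ran are exactly where quality problems hide.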

As we move towards more widespread 5G adoption—and sub-millisecond latency times become the norm, driving an influx of IoT and other new devices into the telecom world—use cases will continue to proliferate.

Test engineers will have to get smarter and smarter about defining and carrying out test cases in order to keep pace. If you’re unable to define appropriate use cases as your network’s usage changes, your coverage, and ultimately your test quality, will fall behind.