Okay, you’ve decided to take the plunge. You know that in order to keep pace with all of the new devices, network protocols, and use cases emerging every day in the telecom domain, you need to do something about your testing framework.
Your internal engineers can’t manually test all the required use cases anymore, and service verification is only going to get more complex as technologies like 5G enter the scene. You’ve even done your research via Google and your professional network to see who the worthwhile vendors might be for a telecom testing solution. Now you get to the hard part: how do you choose between them?

Obviously, the right choice will vary a lot from one company to another depending on the particulars of their operational needs and goals. That said, we have a few tips that we think should apply in virtually all cases—comprising a mix of telco-specific suggestions and criteria that should apply to the choice of any sort of vendor.
For decision-making purposes, you can think of these five suggestions as a kind of checklist, comparing your various options based on how well they perform in each category. By doing so, you can gain a clearer picture of the salient differences between different solutions, and ultimately make a choice that works for your business.
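If a lightweight way to operationalize that checklist helps, here is a minimal scorecard sketch. The criteria names mirror the five suggestions below; the vendor names and 1–5 scores are purely hypothetical placeholders, and you may want to weight criteria differently for your own evaluation.

```python
# Minimal vendor-scorecard sketch. Criteria mirror the five suggestions
# in this article; vendor names and scores are hypothetical examples.
CRITERIA = ["scalability", "real_devices", "coverage", "reporting", "ease_of_use"]

vendors = {
    "Vendor A": [4, 5, 3, 4, 4],  # one 1-5 score per criterion
    "Vendor B": [5, 3, 4, 2, 3],
}

def total_score(scores):
    """Sum of per-criterion scores on a 1-5 scale."""
    return sum(scores)

# Rank vendors by total score, highest first.
ranked = sorted(vendors, key=lambda v: total_score(vendors[v]), reverse=True)
for name in ranked:
    print(name, total_score(vendors[name]))
```

Even a simple tally like this forces you to score every option against every category, which is the real point of the checklist.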
1. Look for Scalability
The first test for any potential testing provider—almost any vendor, really—is scalability. How effectively are they able to provide the right deliverables at the right time, even as your needs change?
If, for example, your plan is to start by partnering with an automation provider for your network regression tests (since it’s hard to justify engineer hours on the ongoing basis that effective regression requires), and potentially to expand the scope of automation later to include functional testing, conformance testing, post-update service verification, and so on, you need to be sure that your vendor can accommodate your needs even as they evolve.
If a solution can be scaled up or down as needed with relative ease, it suggests not just flexibility on the vendor’s part, but a strong underlying framework that can be adapted to a variety of situations.
2. Avoid Simulated Tests
Okay, this suggestion is a little bit more specific to telecom testing, but we think it bears mentioning: because end-users are increasingly conscious of things like latency times, testing on simulated devices has the potential to result in significant service verification gaps.
Because the differences between high and low quality of service are getting smaller and smaller, the slight differences between simulated devices and real, out-of-the-box devices that your end-users will be utilizing can quickly become magnified.
A fraction of a second’s difference in sending or receiving a packet can make or break customer perception, and the only way to be sure that you’ll perceive that fraction of a second in testing is to make use of those out-of-the-box devices.
Sure, simulated tests are often touted as an expeditious way to automate testing—but where’s the value in automating something that doesn’t accurately verify service?
3. Consider Test Coverage
Now, one of the most common reasons that companies cite for partnering with a third party for their testing is that, left to their own devices, most teams of test engineers can’t work through very many test cases in a given day.
In recent years, the industry average has been around 6 use cases per tester per day, which means that you essentially need a small army of testers, outsourced testing, or automation in order to gain acceptable levels of test coverage.
Of course, test coverage has a direct impact on network quality—since you can’t fix the bugs you don’t discover. For this reason, when evaluating any sort of solution, it’s critical that you ask yourself: how (and how effectively) will this help me improve my test coverage? This might seem straightforward, but there are some deeper questions underlying it: can the solution in question scale?
Can it run through a sufficient number of use cases per day to be truly effective? Can it cover different types of testing (e.g. conformance, functional, regression, device, etc.)? Start by mapping out all of your various use cases that require testing and seeing which ones fall within the purview of any given solution. Extra points for anyone that offers true end-to-end capabilities.
4. Demand High Quality Reporting
No doubt, many of the vendors you’re evaluating will boast impressive technical specifications, with a host of fancy modules that can barely be explained to the layman. But how do those translate into results for your testing operations?
The answer: through clear, readable reporting. For this reason, when evaluating a solution, it’s critical to find out what its reporting functionality looks like. If, following a completed test (whether it’s via testing-as-a-service or automation workflows that you host on your own servers),
you’re left wondering what all of the results mean and how to improve your service offerings, you’ve essentially reduced your ROI by committing large quantities of time to a process that was supposed to save time in the first place. By contrast, clear and readable reporting helps to bridge the gap between the physical act of testing and the aftermath of that testing in a way that preserves value.
5. Make Life Easy
This last suggestion is related to the one above, but it’s important enough that it deserves its own section. Is the solution under consideration—in addition to being scalable and non-simulated, with high coverage and reporting quality—actually easy to use?
Again, you don’t want to throw good person-hours after bad, so to speak. Ease of use here encompasses not just how readable the reporting is and how effectively it scales, but also how easy it is for engineers on your end (in the case of an automated solution, for instance) to program test cases for the solution to carry out.
How easy or difficult is it to repeat tests, or to modify existing use cases? Finding this information out might require a product demo, or at the very least some more in-depth information from the potential vendor, but it’s a good litmus test for how likely your test engineers are to actually use—and extract value from—whatever solution you choose.
If a vendor can demonstrably provide a high level of ease of use, along with the other things on this list, then it’s fairly likely that they can deliver for your testing needs.