With the advent of new technology like the IoT (Internet of Things) and 5G, the pressure to improve companies’ time-to-market has never been higher. Sure enough, there is likely to be a real first-mover advantage in the coming years for businesses that can roll out new service offerings that take full advantage of sub-millisecond latency times or vast networks of connected devices. As you can imagine, this puts tremendous pressure on testers who have to navigate ever-increasing layers of complexity to verify functionality on their networks.
Since testing velocity is such an essential factor in time-to-market, it’s understandable that companies would continuously be on the lookout for cost-effective ways to speed up testing. This often takes the form of automation—which can improve your test throughput from half a dozen test cases per engineer-day to hundreds—but not all automation is created equal. Below, we’ll discuss some of the ins and outs of automated testing, and how specific techniques, mindsets, and tools can speed up testing.
1. Test Continuously

This first suggestion might seem paradoxical, but test velocity is often negatively impacted by the feeling that testing flows must be started from scratch each time a new service offering is coming down the pike. This is partially a question of test case scripting (which we'll cover shortly), but it's also a matter of how you organize your test flows. If you're waiting to reach a critical mass of code before committing it to your network staging environment, you're more likely to encounter issues and find that some new changes are incompatible with one another, which will ultimately slow down your service rollout. If, by contrast, you adopt a Continuous Integration (CI) mindset—committing smaller pieces of code and then building and testing them automatically as you go, i.e., testing continuously—you can reduce the probability of errors, create a more stable service offering, and shorten your test cycle time.
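The CI loop described above can be sketched in a few lines of Python. This is a minimal illustration, not a real pipeline: the `build` and `smoke_test` functions are hypothetical placeholders for whatever build system and fast sanity checks your environment actually uses.

```python
# Minimal sketch of a continuous-integration test loop: each small commit
# triggers an automated build-and-test pass, instead of batching changes
# and verifying them all at once at the end.

def build(commit):
    """Placeholder build step: returns an artifact for the commit."""
    return {"commit": commit, "built": True}

def smoke_test(artifact):
    """Placeholder smoke test: a fast sanity check run on every commit."""
    return artifact["built"]

def run_pipeline(commits):
    """Build and test each commit as it lands; stop at the first failure
    so the offending commit is isolated immediately."""
    results = []
    for commit in commits:
        artifact = build(commit)
        passed = smoke_test(artifact)
        results.append((commit, passed))
        if not passed:
            break
    return results

# Three small commits tested continuously rather than as one large batch.
print(run_pipeline(["a1f3", "b2e4", "c3d5"]))
```

The key design point is the fail-fast loop: because each commit is small and tested in isolation, an incompatibility surfaces at the commit that introduced it, rather than deep inside a large merged batch.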
2. Eliminate Drive Tests
It’s become something of a commonplace to suggest that the key to successful automation is knowing what to automate. Often, the suggestion here is that you shouldn’t try to automate tests that are better suited to manual workflows. This is undoubtedly true, but with testing, there’s always a risk of doing the opposite: assuming things can’t be easily automated even when they can. Take drive tests, for instance. As a tester, you might wonder how it’s possible to automate something like a fallback to a legacy network, given that manually testing this use case involves long drive tests as testers try to find a weak enough LTE signal that the phone-under-test will initiate the fallback sequence. However, this use case can be easily slotted into an automated workflow by using attenuators to mimic a waning LTE signal. Thus, by making sure you automate the things that can be automated easily (including fallbacks, handovers, roaming, and other mobility-related use cases), you recapture all of the time you would have lost on drive tests.
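The attenuator-based approach above can be sketched as a simple sweep. Note that the `Attenuator` class and the fallback check below are hypothetical stand-ins for whatever instrument API and device telemetry your lab actually exposes; the 60 dB threshold is an arbitrary illustration.

```python
# Sketch of replacing a drive test with a programmable RF attenuator sweep.
# Instead of driving around hunting for a weak LTE signal, we ramp
# attenuation in the lab until the phone under test initiates fallback.

class Attenuator:
    """Hypothetical programmable attenuator controlling LTE signal strength."""
    def __init__(self):
        self.db = 0

    def set_attenuation(self, db):
        self.db = db

def device_reports_fallback(attenuator, threshold_db=60):
    """Stand-in for polling the phone under test: above this threshold the
    LTE signal is too weak and the device falls back to the legacy network."""
    return attenuator.db >= threshold_db

def find_fallback_point(attenuator, start=0, stop=90, step=5):
    """Ramp attenuation in steps, mimicking a waning LTE signal, and return
    the level (in dB) at which the device initiated fallback."""
    for db in range(start, stop + 1, step):
        attenuator.set_attenuation(db)
        if device_reports_fallback(attenuator):
            return db
    return None  # device never fell back within the sweep range

atten = Attenuator()
print(find_fallback_point(atten))
```

The same sweep pattern generalizes to other mobility use cases—handovers and roaming can be triggered by attenuating one cell while boosting another—which is what makes these scenarios good automation candidates.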
3. Implement Keywords for Testing
Another commonplace, at least among testers in more traditional software environments, is that modular testing flows can help speed up the verification process. It is clear that if you can reuse elements of your existing tests instead of laboriously scripting up new ones every time you’re testing a new offering, you can considerably cut down total testing time. The question is, how exactly do you actualize that kind of modularity in a testing environment? One potentially promising option is to adopt keyword-based tests, like the ones powered by Robot Framework. In keyword-based tests, test scripts are built using pre-defined, readable keywords that refer to specific network elements or actions. Thus, instead of writing new test scripts from scratch in some arcane proprietary scripting language, you’re able to reuse existing tests in new circumstances by swapping the relevant keywords in and out. This has the added benefit of making the test process more accessible to less technical users, all while powering readable reporting that can help you identify root causes that much more quickly.
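To make the idea concrete, here is a minimal keyword-driven runner in plain Python. It only illustrates the pattern behind frameworks like Robot Framework—it is not Robot Framework syntax, and the keywords and actions are an invented toy vocabulary, not a real network-test library.

```python
# Minimal sketch of keyword-driven testing: test cases are readable lists
# of keywords, and each keyword maps to a reusable action. Reuse comes from
# swapping keywords in and out rather than rewriting scripts from scratch.

def attach_to_network(state, name):
    state["network"] = name

def start_data_session(state):
    state["session"] = "active"

def verify_session_active(state):
    assert state.get("session") == "active", "data session not active"

# Keyword table: less technical users compose tests from these names
# without touching the underlying implementation.
KEYWORDS = {
    "Attach To Network": attach_to_network,
    "Start Data Session": start_data_session,
    "Verify Session Active": verify_session_active,
}

def run_test(steps):
    """Execute a test case expressed as (keyword, *args) tuples."""
    state = {}
    for keyword, *args in steps:
        KEYWORDS[keyword](state, *args)
    return state

# A readable test case built entirely from predefined keywords.
result = run_test([
    ("Attach To Network", "LTE-lab-cell"),
    ("Start Data Session",),
    ("Verify Session Active",),
])
print(result)
```

Because each step is a named keyword, a failed run reports exactly which keyword failed—which is the readable-reporting benefit mentioned above.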
4. Resist the Temptation to Virtualize Everything
Above, we talked a little about the importance of knowing what to automate. Now, let’s talk about learning what to virtualize. If you’re taking our initial suggestion and implementing some sort of CI workflow, some Network Functions Virtualization (NFV) will probably come in handy. However, it’s essential to make sure you’re still conducting your tests on real, out-of-the-box smartphones in a virtualized network environment. This means avoiding tests done on virtual phones and steering clear of phones that have been rooted or jailbroken. Why is this so important? Because tiny differences in firmware and other specifications (i.e., the results of hacking into a device to gain root access) can have a significant impact when you’re trying to measure, say, your ability to provide sub-millisecond latency times. As those differences in test results accumulate, your risk of missing network issues increases. Though it might speed up tests superficially in the short term, the long-term impact is that you’ll spend much longer troubleshooting your network, since your new service offerings will be built upon buggy code to begin with.
5. Clearly Define Roles and Responsibilities
We tend to think of most issues that impact test velocity as technological issues—but it’s just as crucial to consider the operational elements of quality testing workflows. To wit, one of the things that can most easily slow down verification is confusion over who’s supposed to do what, when. Whatever technology, tools, or techniques you use to test your network and improve your time to market, make sure you have clear guidelines in place for who’s supposed to perform what tests, who’s responsible for root cause analysis, who has access to which test results, etc. Every company is different, but if you have a clear directive and a robust chain of command in place, you’ll be much better positioned to find the testing strategy that works best for you.