It’s been a common refrain in the past several years: “Who will win the race to 5G?” Sometimes media outlets and prognosticators are talking about individual countries—will China or the US be the first to support large-scale rollouts of 5G infrastructure?
And sometimes they’re focused on specific companies—is AT&T offering 5G already?—but the suggestion that there will be a distinct first-mover advantage for rolling out this service remains a constant throughout the discourse.
It seems rare for the public to pay this much attention to telco infrastructure, but within your average telco operator, it’s common to pay close attention to the speed with which you can roll out new services. Sure, not everything is as high profile as 5G.
But if you’re rolling out a new roaming partnership with another network, you want to get it up and running as quickly as possible so you can delight your existing subscribers and start to attract new ones.
Unfortunately, optimizing your time-to-market isn’t just a matter of lighting fires under your engineers. Once the new service has been designed and built out, it needs to be tested, and any reported bugs or errors need to be fixed.
This is where things tend to get hairy—especially if your testers fall into any of these time-to-market pitfalls.
1. Failure to Automate
Okay, we’ll start with the most apparent pitfall that can slow down your time-to-market: manual testing. If you’re rolling out a new HSS database, for instance, and have 200 test cases that require verification before you can hit the go-live button, you’re potentially looking at 30+ person-days to achieve full test coverage. This assumes that testers will be able to power through about six use cases per day, which is a typical manual test velocity (at least in SEGRON’s experience).
This means that you’re either throwing an army of test engineers at the problem to get all of the necessary functionality verified in time—often a costly strategy—or you’re potentially waiting weeks to get your new subscriber database off the ground. Neither seems like a good option, which is why an automated testing framework is becoming a practical necessity.
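The arithmetic behind those figures can be sketched in a few lines. The manual rate of six cases per day comes from the example above; the automated rate of 300 cases per day is an assumption for illustration (the text only says “hundreds per instance per day”):

```python
# Rough time-to-coverage estimate for the 200-test-case HSS example above.
# Assumptions (illustration only): 6 cases/day per manual tester,
# 300 cases/day per automated test instance.
import math

TEST_CASES = 200

def days_to_full_coverage(cases: int, cases_per_day: float, workers: int = 1) -> int:
    """Calendar days needed when `workers` run in parallel."""
    return math.ceil(cases / (cases_per_day * workers))

manual_solo = days_to_full_coverage(TEST_CASES, 6)             # one tester
manual_team = days_to_full_coverage(TEST_CASES, 6, workers=5)  # five testers
automated   = days_to_full_coverage(TEST_CASES, 300)           # one instance

print(f"Manual, 1 tester : {manual_solo} days")   # 34 days
print(f"Manual, 5 testers: {manual_team} days")   # 7 days
print(f"Automated        : {automated} day(s)")   # 1 day
```

Either you pay for the parallelism (five testers still need a week) or you pay in calendar time.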
2. Low Testing Visibility
In an automated testing environment, you should be able to run through hundreds of use cases per instance per day—which is enough to improve time-to-market in and of itself. That said, not all automation environments are created equal. For instance, if you’re running through your tests in a matter of days, only to find that root cause analysis and remediation are taking longer than ever, you might be grappling with poor visibility.
What do we mean by visibility? Essentially, whether a user within your organization who needs to access and understand a piece of information can do so quickly and easily. Creating high visibility is chiefly a matter of two things.
- Creating accessible tests: This requires a relatively centralized framework for stakeholders to access as needed through a functional user interface. Everyone within the operation should know how and where to find test reports.
- Producing readable test reports: When a stakeholder does access a test report, they should be able to understand its contents fairly easily. Which tests were run, which tests failed, and how long the tests took should all be visible.
For more on readability, see our next section below.
3. Complex Testcase Scripting Languages
You can easily imagine why hard-to-read tests would slow down time-to-market, and it’s no doubt just as easy to imagine why the scripting of testcases could be a significant sticking point in the first place.
No matter how quickly you’re able to run through testcases, your overall testing time is still going to be high if you have to spend multiple days scripting up new tests in a complex proprietary language whenever you want to test something. There are two factors at work here:
- Ease of scripting
- Ease of reuse
Ideally, you want both. Testcases that are difficult to script up take longer, but even straightforward scripting takes time. Thus, the more quickly and easily you’re able to reuse your existing scripts, the better. This is a scenario where keyword-based testing (like the sort powered by Robot Framework) can be a boon to your testing operations, as it offers both easy scripting and easy reuse.
As individual network elements change, you can redefine them in the global variable files and keep on using them in your regression tests or other test suites as needed. What's more, it can power automated keyword-based reporting to address the issues brought up above.
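As a sketch of what this looks like in Robot Framework’s tabular syntax—with the caveat that the file, variable, and keyword names here (`network_elements.py`, `${HSS_HOST}`, `Attach Device To Network`, and so on) are hypothetical, not part of any real suite:

```robotframework
*** Settings ***
# Hypothetical variables file holding network-element addresses and test data.
Variables    network_elements.py

*** Test Cases ***
Subscriber Attaches After HSS Migration
    # High-level, reusable keywords; their implementations would live
    # in a shared resource file, not in each test case.
    Attach Device To Network        ${DEVICE_A}
    Verify Subscriber Registered    ${HSS_HOST}    ${TEST_MSISDN}
```

If the HSS moves to a new address, only `network_elements.py` changes; the test cases, and any regression suites built from them, are reused as-is.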
4. Drive Tests
This pitfall mostly speaks to people’s perceptions of automated testing—especially perceptions that don’t correspond to reality. For instance, many test engineers continue laboring under the belief that certain test cases, particularly those related to mobility, can’t be automated.
On the contrary, an automation framework that’s capable of controlling mobile devices, modems, and network elements should also be able to control shielded boxes and attenuators. In this way, it can mimic a waning LTE signal, forcing mobile phones to perform handovers and SRVCC changeovers, just as they would out in the field. This is a case where merely being aware that it’s possible to automate something can help you avoid a slowdown in your time-to-market.
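As a hedged sketch of the idea: ramp up attenuation in the shielded box until the measured signal crosses the handover threshold. The attenuator class and the signal model below are stand-ins—a real setup would drive a programmable attenuator through its vendor API, and the phone itself would report RSRP:

```python
# Sketch: raise attenuation until the (modeled) LTE signal drops below the
# handover threshold. All hardware interaction here is stubbed.

HANDOVER_THRESHOLD_DBM = -110   # assumed RSRP at which the phone hands over
BASELINE_RSRP_DBM = -80         # assumed signal at 0 dB attenuation

class StubAttenuator:
    """Stand-in for a programmable RF attenuator in a shielded box."""
    def __init__(self) -> None:
        self.attenuation_db = 0

    def set_attenuation(self, db: int) -> None:
        self.attenuation_db = db

def measured_rsrp(attenuator: StubAttenuator) -> int:
    # Simplified model: each dB of attenuation lowers RSRP by 1 dB.
    return BASELINE_RSRP_DBM - attenuator.attenuation_db

def force_handover(attenuator: StubAttenuator, step_db: int = 5, max_db: int = 60) -> int:
    """Step attenuation upward; return the level that triggers the handover."""
    for db in range(0, max_db + 1, step_db):
        attenuator.set_attenuation(db)
        if measured_rsrp(attenuator) < HANDOVER_THRESHOLD_DBM:
            return db
    raise RuntimeError("Handover threshold never reached")

att = StubAttenuator()
trigger_db = force_handover(att)
print(f"Handover triggered at {trigger_db} dB attenuation")  # 35 dB
```

The same loop, pointed at real equipment, lets a framework reproduce a drive test’s mobility scenarios without leaving the lab.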
5. Choosing the Wrong Time-to-Market Metrics
The strategies discussed above will go a long way towards keeping your time-to-market competitive, whether you’re rolling out a critical update to your core network or adding early 5G elements to your network. This will help you win subscribers and maximize your bottom line. At the same time, the cumulative operational benefits will depend on how accurately you measure time-to-market.
Given the complexities of a modern telco network, a simple “number of days from conception to go-live” doesn’t always cut it. If you shave your time-to-market for a new service offering by 20%, only to find in later regression tests that your network has a host of new issues that need to be addressed, you may suffer some subscriber attrition despite your speed.
Conversely, a short delay that improves the quality of your test scripts—and thus shortens all future regression testing cycles—might make your network more responsive in the future, keeping subscribers happier. We’re certainly not suggesting a “slow and steady wins the race” approach—but in a highly visible, intelligently automated environment, it’s possible to be fast and steady.