Ever since Gartner named hyperautomation as one of its top technology trends for the coming year, there’s been a flood of discussion about what, exactly, hyperautomation might look like in practice. Will it mean robotic processes that learn automatically and increase their efficiency over time? Or maybe it will speed up root cause analysis for testers by combining test reports with machine learning-based analytics? While there isn’t a consensus to speak of at the moment, the options presented are all intriguing. If your company is pursuing digital transformation, this is a trend that’s worth following.
However, for all of the speculation, something often gets lost in the shuffle: what’s the value proposition? After all, even the world’s coolest technology needs a practical application—some way of adding and demonstrating real value for users—before it makes sense for enterprises to adopt it. When it comes to hyperautomation in particular, stakeholders and decision makers need to know how it’s going to speed up testing, how it’s going to save money, and how it’s going to increase the stability and resilience of your systems.
Intelligent Device Automation
Of course, it’s tough to talk about the potential ROI calculations you might make when we don’t know what the technology itself is going to look like precisely. That said, we can certainly make some educated guesses. For instance, SEGRON uses artificial intelligence to automate new flagship devices for testing as they come on the market. Over a few hours, the AI program learns how to use the phone, and translates that learning into instructions that our RPA-based test automation framework can use to perform verification passes. In the era of hyperautomation, this kind of workflow might become faster and more sophisticated—potentially increasing the speed with which devices are ready for use by testers while powering more comprehensive communication between the AI and the RPA.
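To make the idea concrete, here is a minimal sketch of what "translating learned behavior into RPA instructions" could look like. This is purely illustrative: the action map, step format, and function names are invented for this example and are not SEGRON's actual API.

```python
# Hypothetical sketch: an AI "learner" emits a map of device actions it has
# discovered (e.g. screen coordinates for opening the dialer), and a small
# translator flattens that map into replayable steps for an RPA-style test
# framework. All names and coordinates here are placeholders.

LEARNED_ACTIONS = {
    "open_dialer": {"tap": (120, 980)},
    "enter_number": {"type_digits": True},
    "start_call": {"tap": (360, 1100)},
    "end_call": {"tap": (360, 1100)},
}

def to_rpa_instructions(learned: dict) -> list[str]:
    """Translate learned device actions into flat, replayable RPA steps."""
    steps = []
    for name, params in learned.items():
        # Each learned action becomes one textual instruction the
        # automation framework could execute in order.
        steps.append(f"STEP {name} {params}")
    return steps
```

The point of the sketch is the handoff: once the learning phase produces a structured action map, generating verification-pass instructions from it is mechanical, which is why faster learning translates directly into faster device onboarding.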
Though the specifics of how, exactly, this would play out in a practical setting will only come into focus over time, the ROI implications are relatively straightforward on their face. The more quickly you can automate flagship devices, the faster you can incorporate them into test flows, which means:
- Better network quality for early adopters using new phones on your network
- A reduction in expensive downtime while testers wait for new devices to be ready for testing
- A potential edge over competitors who don’t offer high service quality for users on the latest Android or iOS devices.
Thus, you save money through reduced testing time, and you potentially decrease subscriber churn. It’s well established that acquiring a new subscriber costs more than retaining an existing one, so the cost-benefit here is nothing to sneeze at. These ROI benefits are, of course, before we factor in some of the more intangible changes we alluded to above.
Moving Process Automation Forward
As we sketched out above, much of the hype around hyperautomation comes from its ability to combine process automation and RPA with AI and machine learning-powered technology. For processes that already benefited from RPA, this means that hyperautomation can amplify those benefits and improve ROI. If we take, for instance, live network regression testing or application testing, we can picture how process automation already makes life easier. It performs suites of test functions, from dialing and calling to using mobile data and making emergency calls, powering through individual actions on out-of-the-box mobile phones so that human testers don’t have to do so by hand. In this way, process automation speeds up regression tests, making them feasible every time you update your system or network. Thus, quality improves, and so do margins.
This can already be a big deal in terms of ROI. Still, the results can be even more striking when you reach the next level, i.e., cognitive process automation (also known as cognitive robotic process automation or just cognitive automation). This would, again, combine RPA and AI to create smarter workflows. This might mean that an automated process performs tests based on your specified test suite and can make simple decisions to begin part of the troubleshooting process if it fails. In a situation like this, you might expect to save time not just on testing itself but on test aftercare and root cause analysis.
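The decision logic described above can be sketched in a few lines. The test cases and the troubleshooting action below are hypothetical placeholders; the sketch only shows the shape of a "cognitive" loop that reacts to failures instead of waiting for a human.

```python
# Illustrative sketch of cognitive process automation: run each case in a
# test suite and, when one fails, automatically kick off a first
# troubleshooting step (e.g. collecting logs) rather than stopping and
# waiting for a tester. The cases and the troubleshoot callback are
# placeholders invented for this example.

def run_suite(cases, troubleshoot):
    """Run named test callables; on failure, trigger troubleshooting."""
    results = {}
    for name, test_fn in cases.items():
        passed = test_fn()
        results[name] = "PASS" if passed else "FAIL"
        if not passed:
            # Simple decision rule: any failure starts the troubleshooting
            # process for that case immediately.
            troubleshoot(name)
    return results
```

Even a rule this simple saves calendar time: by the time a human looks at the failed run, the first diagnostic artifacts have already been gathered.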
Okay, we’ve covered some of the ways that new technology powered by hyperautomation might improve upon specific AI- and RPA-based innovations. But what happens when you move past that? Is it possible that new technological paradigms will change the way testers interact with—and extract value from—more in-depth testing practices like application testing, CDR analysis, voice/audio analysis, and signaling trace analysis? There’s plenty of reason to believe it is. Right now, smart automation workflows make it possible to automate these testing techniques, but there’s no reason to think that the technology we have now is the final word on the subject.
If the hyperautomation trend picks up steam, this could be one of the next frontiers in value addition. Instead of merely capturing signaling traces from the system-under-test, a hyperautomation-enabled workflow might also be able to analyze some of those traces automatically. Thus, live regression tests might be able to alert you to a network quality issue before it results in a failed test. Your system might help you schedule downtime and maintenance proactively for various network elements to minimize cost and disruption. This is an emerging trend within industrial businesses (specifically relating to machine downtime) that hyperautomation could bring to all kinds of companies. If this were, hypothetically, to become a reality, it would mean a decrease in the costs associated with disruptions—meaning that you could put more of your resources into creative, analytical, and otherwise value-additive tasks.
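As a rough illustration of "alerting before a failed test," consider call-setup times extracted from signaling traces. The thresholds and window size below are invented for the example; the idea is simply to flag a drifting rolling average before any single measurement crosses the hard failure line.

```python
# Hedged sketch: scan call-setup times (in ms) pulled from signaling traces
# and raise an early warning when the rolling average drifts toward the
# failure threshold, before any individual test actually fails. Both
# thresholds and the window size are illustrative assumptions.

WARN_MS = 800.0    # rolling average past this triggers an early alert
FAIL_MS = 1000.0   # a single test past this would be a hard failure

def early_warnings(setup_times_ms, window=3):
    """Return (index, rolling_avg) pairs where the average is in the
    warning band: degraded, but not yet failing."""
    alerts = []
    for i in range(window, len(setup_times_ms) + 1):
        avg = sum(setup_times_ms[i - window:i]) / window
        if WARN_MS <= avg < FAIL_MS:
            alerts.append((i - 1, avg))
    return alerts
```

A real implementation would work on parsed protocol messages rather than a plain list, but the economic logic is the same: catching the trend early converts an unplanned outage into scheduled maintenance.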