What to Look for in Test Automation Reporting
Let’s say you’re in a management position at a prominent European network provider, and you’re trying to assess network quality. Some of your biggest questions are about testing: How quickly are your regression test suites running after network updates?
How frequently do your tests uncover bugs, and how do those bugs get resolved? To gain answers to some of these questions, you contact your test team, who send you the most recent test reports—but you can’t make heads or tails of them, and your questions remain unanswered.
Now, let’s imagine instead that you’re a test engineer, tasked with verifying service and finding root causes for any bugs that crop up. Your tests are all automated, so you run them through an interface and wait for the test results to come in. You can see that only two of more than a dozen use cases failed (which is encouraging), but when you try to figure out what might be happening on your network to cause the failures, you find that there’s almost no additional information beyond pass/fail notations for each test. As a result, it takes much longer than it should to find and address root causes.
Neither of these situations is ideal, but the contrast between them illustrates a very real challenge when it comes to reporting. What challenge is that, exactly? The challenge of providing value to both testers and management in a way that’s intuitive and readable.
The Importance of Readability
Now, when we used the word readable above, what were we talking about exactly? Essentially, for test reporting to achieve maximum value, even non-technical readers should be able to look over reports and understand what they mean.
Why? Because otherwise your testing operations will forever exist in a functional silo—with no one but the test engineers themselves able to gain any insight from test results.
This can make it difficult for testers’ colleagues to make ROI calculations for whatever test tools they might be using, just as it can make life more difficult for anyone who simply wants to understand what, exactly, is happening on their network.
Long term, this state of affairs makes it difficult or impossible to devote the right resources to the right tasks at the right time.
One of the most promising methods of generating readable reporting is to opt for keyword-based testing. With keyword-based tests, testers create test scripts using pre-defined keywords that can be swapped in and out as necessary, such that scripts can be reused ad infinitum.
In this sort of framework, you might define two devices for testing (say, “pstn_a” and “ms_b”), then use different keywords (“Dial and Call,” e.g.) to automatically execute different tests.
This obviously has the benefit of making test scripts easier to create and read, but this ease of use also extends to reports: when a test succeeds or fails, a keyword-based framework can provide documentation that describes the whole test case, e.g. “Volte_A calls Volte_B: Volte subscriber A registered in IMS network and under LTE coverage; Volte subscriber B registered in IMS network and under LTE coverage; Subscriber A calls Subscriber B, etc.”
This gives even non-technical users a modicum of insight into the test results, without requiring them to dig around endlessly for technical details.
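To make this concrete, here’s a minimal sketch of how a keyword-driven script and its report might look. It’s an illustrative example, not the API of any particular tool; the keyword names and helper functions are assumptions made for the sake of the sketch.

```python
# A minimal sketch of keyword-driven test execution (hypothetical framework,
# not any specific product's API). Keywords map to reusable actions, and each
# step contributes a human-readable line to the report.

from typing import Callable, Dict, List, Tuple


def register_in_ims(subscriber: str) -> str:
    # Placeholder for the real registration logic.
    return f"{subscriber} registered in IMS network and under LTE coverage"


def dial_and_call(caller: str, callee: str) -> str:
    # Placeholder for the real call setup logic.
    return f"{caller} calls {callee}"


# Keyword table: readable names that testers can swap in and out of scripts.
KEYWORDS: Dict[str, Callable[..., str]] = {
    "Register In IMS": register_in_ims,
    "Dial and Call": dial_and_call,
}


def run_test_case(name: str, steps: List[Tuple]) -> None:
    """Execute a keyword script and print a report any reader can follow."""
    print(f"Test case: {name}")
    for keyword, *args in steps:
        description = KEYWORDS[keyword](*args)
        print(f"  [PASS] {keyword}: {description}")


run_test_case(
    "Volte_A calls Volte_B",
    [
        ("Register In IMS", "Volte subscriber A"),
        ("Register In IMS", "Volte subscriber B"),
        ("Dial and Call", "Subscriber A", "Subscriber B"),
    ],
)
```

The point is that the keyword table doubles as documentation: the same readable names testers compose scripts from end up in the report.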
High-Level Test Results
As we said above, your test results need to offer value both to high-level readers and to more technical users looking for root causes. These needs have to be met simultaneously, but they involve two different sets of criteria. For high-level users, documentation should make a few things immediately obvious:
- Which test suite was running?
- What is the status of the test suite?
- How many (and which) tests passed/failed?
- How long did each test take?
- What was the total test time?
If you’re privileging readability to begin with, reports should simply display the appropriate keyword for each test case so that users can scan the list with ease. To increase that ease even further, it can be helpful to have “tags” by which related test cases can be grouped together.
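As a hedged illustration, the sketch below shows one way such a high-level summary could be modeled so that suite status, pass/fail counts, durations, total time, and tag-based grouping are all a lookup away. The field and method names are assumptions for this example, not a real report schema.

```python
# Illustrative data model for a high-level test report summary
# (names and structure are assumptions, not any tool's actual schema).

from dataclasses import dataclass, field
from typing import List


@dataclass
class TestResult:
    keyword: str                 # e.g. "Volte_A calls Volte_B"
    passed: bool
    duration_s: float
    tags: List[str] = field(default_factory=list)


@dataclass
class SuiteSummary:
    suite_name: str
    results: List[TestResult]

    @property
    def status(self) -> str:
        return "PASSED" if all(r.passed for r in self.results) else "FAILED"

    @property
    def total_time_s(self) -> float:
        return sum(r.duration_s for r in self.results)

    def failed_tests(self) -> List[TestResult]:
        return [r for r in self.results if not r.passed]

    def by_tag(self, tag: str) -> List[TestResult]:
        # Group related test cases, e.g. everything tagged "volte".
        return [r for r in self.results if tag in r.tags]


summary = SuiteSummary(
    suite_name="VoLTE regression",
    results=[
        TestResult("Volte_A calls Volte_B", True, 42.0, ["volte", "call"]),
        TestResult("Volte_A registers in IMS", False, 12.5, ["volte", "registration"]),
    ],
)
print(summary.status, f"{summary.total_time_s:.1f}s",
      [r.keyword for r in summary.failed_tests()])
```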
From a management perspective, accessing the information above quickly can be critical to evaluating the current state of your network and network resources.
A high failure rate for tests might suggest a deeper issue in the operation of your network, and the sooner you’re aware of a potential issue the sooner you can devote time and resources to uncovering it.
By the same token, if the tests are taking longer than expected (and thus eating into your ROI), you want to know about that right away—without having to dig laboriously through complex and confusing test documents to figure it out.
Individual Test Cases
Higher-ups need to be able to access and understand test results, but at the end of the day it’s the testers themselves who have the most to gain or lose from them. For this reason, your reporting should also offer much more detail than the high-level version with, at most, a few clicks’ worth of effort.
If, for instance, you were performing a suite of regression tests, and as a tester you needed some insight into a failed test case (say, a VoLTE subscriber trying to register to the IMS network), you’d want to see every step in the test flow in order to pinpoint the possible reason for the bug. Thus, your reporting would ideally show checks to verify the following (see the sketch after this list):
- The VoLTE subscriber is not already registered
- The subscriber is within CS coverage
- The subscriber is switched from radio coverage to 4G/LTE
- The user device confirms that the subscriber is now registered
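The sketch below illustrates, under the same assumptions as the earlier examples, how a detailed report might walk through those checks step by step and stop at the first failure; the check functions are stubs invented for illustration.

```python
# Illustrative step-by-step report for the failed VoLTE registration case
# (not a real framework API; checks are stubbed to show where a failure lands).

from typing import Callable, List, Tuple


def check_not_already_registered() -> bool:
    return True   # stubbed result for illustration


def check_cs_coverage() -> bool:
    return True   # stubbed result for illustration


def check_switch_to_lte() -> bool:
    return False  # stubbed failure: handover to 4G/LTE did not complete


def check_registration_confirmed() -> bool:
    return True   # would not normally be reached after the failure above


STEPS: List[Tuple[str, Callable[[], bool]]] = [
    ("Subscriber is not already registered", check_not_already_registered),
    ("Subscriber is within CS coverage", check_cs_coverage),
    ("Subscriber is switched from radio coverage to 4G/LTE", check_switch_to_lte),
    ("User device confirms the subscriber is registered", check_registration_confirmed),
]

for description, check in STEPS:
    if check():
        print(f"[PASS] {description}")
    else:
        print(f"[FAIL] {description}")
        break  # stop at the first failing step: this is the clue to the root cause
```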
Based on which of these elements failed, you would have some clues as to what might be happening within your network to cause the error. If, again, you were running this as part of a regression test suite, you might take a look at recent network changes as a starting point.
Based on the tags, you could also see if any related test cases were failing, in order to begin narrowing down potential issues. Once you had made a bug fix, you would run the test again, and a brand new report would (hopefully) move that test case into the “pass” column.