
What to Look for in Test Automation Reporting

Let’s say you’re in a management position at a prominent European network provider, and you’re trying to assess network quality. Some of your biggest questions are about testing: How quickly are your regression test suites running after network updates? How frequently do your tests uncover bugs, and how do those bugs get resolved?

To get answers, you contact your test team, who send you the most recent test reports—but you can’t make heads or tails of them, and your questions remain unanswered.

Now, let’s imagine instead that you’re a test engineer, tasked with verifying service and finding root causes for any bugs that crop up. Your tests are all automated, so you run them through an interface and wait for the test results to come in.

You can see that only two of more than a dozen use cases failed—which is encouraging—but when you try to figure out what might be happening on your network to cause the failures, you find that there’s almost no additional information beyond pass/fail notations for each test. As a result, it takes much longer than it should to find and address root causes.

Neither of these situations is ideal, but the contrast between them illustrates a very real challenge when it comes to reporting. What challenge is that, exactly? The challenge of providing value to both testers and management in a way that’s intuitive and readable.

The Importance of Readability

Now, when we used the word readable above, what were we talking about exactly? Essentially, for test reporting to achieve maximum value, even non-technical readers should be able to look over reports and understand what they mean.

Why? Because otherwise your testing operations will forever exist in a functional silo—with no one but the test engineers themselves able to gain any insight from test results.

This can make it difficult for testers’ colleagues to make ROI calculations for whatever test tools they might be using, just as it can make life more difficult for anyone who simply wants to understand what, exactly, is happening on their network.

Long term, this state of affairs makes it difficult or impossible to devote the right resources to the right tasks at the right time.

One of the most promising methods of generating readable reporting is to opt for keyword-based testing. With keyword-based tests, testers create test scripts using pre-defined keywords that can be swapped in and out as necessary, such that scripts can be reused ad infinitum.

In this sort of framework, you might define two devices for testing (say, “pstn_a” and “ms_b”), then use different keywords (“Dial and Call,” e.g.) to automatically execute different tests.

This obviously has the benefit of making test scripts easier to create and read, but this ease of use also extends to reports: when a test succeeds or fails, a keyword-based framework can provide documentation that describes the whole test case, e.g. “Volte_A calls Volte_B: Volte subscriber A registered in IMS network and under LTE coverage; Volte subscriber B registered in IMS network and under LTE coverage; Subscriber A calls Subscriber B, etc.”

This gives even non-technical users a modicum of insight into the test results, without requiring them to dig around endlessly for technical details.
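As a rough illustration of the idea, a keyword-driven framework maps human-readable keywords onto executable steps and can emit the kind of self-describing report quoted above. The sketch below is purely illustrative; the keyword names and device labels are hypothetical, not any particular product’s API.

```python
# Minimal sketch of a keyword-driven test framework (illustrative only).
KEYWORDS = {}

def keyword(name):
    """Register a function under a human-readable keyword."""
    def register(fn):
        KEYWORDS[name] = fn
        return fn
    return register

@keyword("Register In IMS")
def register_in_ims(device):
    return f"{device}: registered in IMS network"

@keyword("Dial And Call")
def dial_and_call(caller, callee):
    return f"{caller} calls {callee}: call established"

def run_test_case(name, steps):
    """Execute (keyword, args) steps and build a readable report."""
    report = [f"Test case: {name}"]
    for kw, args in steps:
        report.append("  " + KEYWORDS[kw](*args))
    return "\n".join(report)

print(run_test_case(
    "Volte_A calls Volte_B",
    [("Register In IMS", ("Volte_A",)),
     ("Register In IMS", ("Volte_B",)),
     ("Dial And Call", ("Volte_A", "Volte_B"))],
))
```

Because each keyword carries its own description, the report reads as a narrative of the test case rather than a bare pass/fail flag.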

High-Level Test Results

As we said above, your test results have to offer value both to high-level readers and to more technical users looking for root causes. While these needs will have to be met simultaneously, they’ll involve two different sets of criteria. For high-level users, documentation should make a few things immediately obvious:

  • Which test suite was running?
  • What is the status of the test suite?
  • How many (and which) tests passed/failed?
  • How long did each test take?
  • What was the total test time?
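The fields above amount to a small summary record per suite run. As a sketch, it might be modeled like this (the field names here are hypothetical, chosen only to mirror the list above):

```python
from dataclasses import dataclass, field

@dataclass
class SuiteSummary:
    """High-level view of one test suite run (illustrative field names)."""
    suite: str
    status: str                       # e.g. "finished", "running"
    passed: list = field(default_factory=list)   # names of passed tests
    failed: list = field(default_factory=list)   # names of failed tests
    durations: dict = field(default_factory=dict)  # test name -> seconds

    @property
    def total_time(self):
        """Total test time across all cases in the suite."""
        return sum(self.durations.values())

run = SuiteSummary(
    suite="VoLTE regression",
    status="finished",
    passed=["Volte_A calls Volte_B"],
    failed=["Volte_A registers in IMS"],
    durations={"Volte_A calls Volte_B": 42.0,
               "Volte_A registers in IMS": 17.5},
)
print(run.total_time)  # 59.5
```

A report front end only needs to render these few fields for a manager to answer every question in the list at a glance.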

If you’re privileging readability to begin with, your reports can simply display the appropriate keywords so that users can scan the list of test cases with ease. To make scanning even easier, it can be helpful to have “tags” by which related test cases can be grouped together.

From a management perspective, accessing the information above quickly can be critical to evaluating the current state of your network and network resources.

A high failure rate for tests might suggest a deeper issue in the operation of your network, and the sooner you’re aware of a potential issue the sooner you can devote time and resources to uncovering it.

By the same token, if the tests are taking longer than expected (and thus eating into your ROI), you want to know about that right away—without having to dig laboriously through complex and confusing test documents to figure it out.

Individual Test Cases

Higher-ups need to be able to access and understand test results, but at the end of the day it’s the testers themselves who have the most to gain or lose from them. For this reason, your reporting should also offer much more detail than the high-level version, at most a few clicks away.

If, for instance, you were performing a suite of regression tests, and as a tester you needed to get some insight into a failed test case (say, a VoLTE subscriber trying to register to the IMS network), you’d want to see every step in the test flow in order to pinpoint the possible reason for the bug. Thus, your reporting would ideally show checks to verify:

  • The VoLTE subscriber is not already registered
  • The subscriber is within CS coverage
  • The subscriber is switched from radio coverage to 4G/LTE
  • The user device confirms that the subscriber is now registered
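Rendered in a report, the checks above might look something like the following sketch (step descriptions and the pass/fail values are hypothetical):

```python
# Hypothetical step-level results for the VoLTE registration test case.
STEPS = [
    ("Subscriber not already registered",     True),
    ("Subscriber within CS coverage",         True),
    ("Switched from CS coverage to 4G/LTE",   True),
    ("Device confirms IMS registration",      False),
]

def step_report(case, steps):
    """Render a test case as an overall verdict plus per-step lines."""
    verdict = "PASS" if all(ok for _, ok in steps) else "FAIL"
    lines = [f"{case}: {verdict}"]
    for desc, ok in steps:
        lines.append(f"  [{'ok' if ok else 'FAIL'}] {desc}")
    return "\n".join(lines)

print(step_report("VoLTE subscriber registers to IMS", STEPS))
```

Seeing exactly which step failed is what turns a bare “FAIL” into a starting point for root-cause analysis.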

Based on which of these elements failed, you would have some clues as to what might be happening within your network to cause the error. If, again, you were running this as part of a regression test suite, you might take a look at recent network changes as a starting point.

Based on the tags, you could also see if any related test cases were failing, in order to begin narrowing down potential issues. Once you had made a bug fix, you would run the test again, and a brand new report would (hopefully) move that test case into the “pass” column.
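Tag-based narrowing of this kind is easy to picture: given per-case records carrying tags, a report can filter failures sharing a tag with the case under investigation. The records below are invented purely for illustration.

```python
# Hypothetical test-case records with tags, as described above.
results = [
    {"name": "Volte_A registers in IMS", "tags": {"volte", "registration"}, "passed": False},
    {"name": "Volte_A calls Volte_B",    "tags": {"volte", "call"},         "passed": True},
    {"name": "SMS_A sends to SMS_B",     "tags": {"sms"},                   "passed": True},
]

def failing_with_tag(results, tag):
    """Return names of failed test cases carrying the given tag."""
    return [r["name"] for r in results if tag in r["tags"] and not r["passed"]]

print(failing_with_tag(results, "volte"))  # ['Volte_A registers in IMS']
```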


Senior SaaS System Administrator

Technical Skills:
  • Oversee the sysadmin related tasks in our SaaS infrastructure (partially cloud based, partially bare metal)
  • Daily operation and maintenance of the system
  • Analysing and resolving incidents
  • Follow and help improve the incident and change management procedures
  • Design procedures for system troubleshooting and maintenance
  • Incorporating base OS updates and security patches
  • Ensure that systems are safe and secure against cybersecurity threats by raising change requests wherever a potential threat exists
  • Performing SW updates for the Segron SaaS SW stack (distributed architecture with clusters)
  • Configuring solutions like reverse proxies, firewalls, etc.
  • Building tools to automate procedures & reduce occurrences of errors and improve customer experience
  • Tutoring & coaching newcomers & less senior experts in the team
  • Interworking with the architects and IT admins of Segron to keep the SaaS procedures in line with the Segron processes
Non-technical skills:
  • We are looking for a self-motivated, self-improving individual with a highly independent mindset and open, straightforward technical communication to help us improve and maintain the cloud infrastructure of our powerful end-to-end testing solution, ATF (Automated Testing Framework)
  • 3+ years hands-on experience with operation and monitoring of cloud / linux systems
  • 3+ years of hands-on experience with network devops elements: configuring routers, switches, networks
  • Hands-on experience running live systems in an infrastructure-as-code mode of operation
  • Specific knowledge which brings direct advantage: Docker, Docker Compose, Grafana, Prometheus, Ansible, Debian Linux OS administration, Security
  • Experience in building and maintaining distributed systems (incl. redundancy, resiliency, load-balancing) is welcome
  • Excellent knowledge of English
Location:
  • Place of work: Bratislava (partially home office possible)
  • Rate: from 30 EUR/hour (higher rate possible, depending on experience)

CI/CD Senior Developer

Technical Skills:
  • A senior role with a proven expertise in software development, cloud computing, DevOps, and CI/CD
  • Experience in planning, designing, and overseeing the CI/CD strategy and architecture at the organizational level
  • Ability to tailor testing strategies which define and follow the best practices, standards, and policies for the software delivery process
  • Hands-on experience in creating and managing CI/CD pipelines and workflows (PaaC)
  • Ability to evaluate and recommend the best tools, technologies, and methodologies for the CI/CD implementation
  • Prior hands-on experience working with different CI/CD toolsets (Jenkins, Bitbucket, GitLab, Artifactory, Ansible, etc.)
  • Proficient with the API automation capabilities of DevOps tools
  • Proficient with Atlassian Tools (BitBucket, Jira, Confluence) and agile SW development methodologies
  • Familiar with cloud patterns and best practices
  • Familiar with web performance best practices
  • Comfortable working in cloud DevOps ecosystem
  • Comfortable working with Linux platforms
  • Initial working experience in SW development is an advantage.
Non-technical skills:
  • Effective communication with technical as well as business stakeholders
  • Self-motivating, self-improving mindset
  • Possession of relevant industry certifications is a plus
Location:
  • Location: Bratislava, Slovakia (with hybrid flexibility)
  • Rate: from 30 EUR/hour (higher rate possible, depending on experience)

Test Automation Engineer

Job description, responsibilities:

  • ATF system configuration, integration, operations & maintenance in customer environments.
  • Building tools to automate procedures & reduce occurrences of errors and improve customer experience.
  • Hardware Verification, Testing and Preparation within the Staging Process.
  • Contribution to customer and service partner technical support across multiple accounts by effectively managing priorities and deadlines for own work.
  • Segron Laboratory equipment configuration and maintenance support.
  • Hardware order and logistics support.
  • Problem analysis of ATF issues, troubleshooting and fault correction.
  • Interface towards SEGRON Development Team in case of product or software issues.
  • Interface towards the SEGRON Technical Sales Team to support planned activities.
  • System and Integration documentation and guidelines.
  • Perform root cause analysis for production errors.
  • Deployment of software updates and fixes.
  • Ability to work in a team environment serving multiple global customers.
  • Willingness to travel for 3-5 day onsite deployments
Requirements / Skills:
  • Excellent knowledge of English
  • Operating Systems: Linux, Windows, MacOS
  • Good Knowledge of Containers and Virtual Machines
  • Telco experience welcome
  • Python or other scripting experience or knowledge preferable
  • Educational Qualification: Computer Science/Engineering or work experience equivalent
  • Work Experience: 3-4 years preferred

Others:

  • Full time job (employment)
  • 3 days onsite, 2 days home office
  • Offered salary: from 1800 EUR (depending on seniority and skill level)
  • Variety of financial benefits
  • Place of work: Bratislava

Senior Python Developer

Technical Skills:

  • A solid, experienced SW developer with at least 10 years of experience in active SW development in different programming paradigms
  • Minimum 5 years of professional Python development experience
  • Master’s or bachelor’s degree in Computer Science, Mathematics, or another STEM field
  • Well educated in design and programming patterns that increase software’s efficiency and readability.
  • Very good analytical and problem solving skills.
  • At least three of the following four skills are required:
    • Microservices-based architectures (Docker containers)
    • Linux
    • Ansible
    • Robot Framework
  • Comfortable with sysadmin and DevOps skills: Ansible, YAML files, network programming, IP protocols, designing and developing proxy servers for different protocols (e.g. streaming), integrating and compiling third-party libraries on Linux (Debian)
  • Proficient with Atlassian Tools (BitBucket, Jira, Confluence); thorough understanding of Git and version-control best practices
  • Familiar with cloud patterns and best practices
  • Familiar with web performance best practices

Non-technical skills:

  • Ability to work under pressure
  • Ability to abstract and explain your work
  • Strong understanding of Agile development process and experience working in an agile team
  • Strong communication skills with both technical and non-technical stakeholders
Location:
  • Bratislava, Slovakia (with hybrid flexibility)
  • Rate: from 35 EUR/hour (higher rate possible, depending on experience)