What is the Future of Big Data in Telecom Testing?

Let’s state the obvious for a second: telecommunications networks produce a ton of data, especially when they’re being tested. For years, much of this data was of little value to telecom operators or testers, but with the emergence of Big Data (and related concepts like AI and machine learning), things are beginning to change. Now more than ever, telecom operators are being called on to support public entities in tracking and gathering valuable data during the pandemic. A telco testing operation with the right tools and the right skills can harness the power of Big Data to improve knowledge discovery, gain new insights into its networks, and bolster network quality.

This shouldn’t come as too much of a surprise. After all, Big Data has been making waves in industries from supply chain management to professional sports. The question is, how can network operators and other businesses in this space effectively collect data and leverage it to improve their testing, reduce their churn rates, and strengthen their networks? As with any new technological paradigm, the exact answer will only emerge over time. That said, there are more than a few things we can say with confidence about how the average telco network might change in the era of Big Data.

Data Analysis in (Near) Real-time

One potential challenge that comes with data analysis in the telco sphere is that the information you gather typically has a short shelf life. You might see a spike in latency or another QoS metric in a particular week, only to find that the problem has resolved itself before you’re able to correlate it with specific network issues. Ideally, you’d test every interface between each end-point (the user interface on devices, the air interface, a number of core network interfaces, etc.), but by the time you’ve done so there’s not much time left over to analyze the data itself. Simply put, it’s not possible for a human tester or analyst to wade through all of the data being produced by the system-under-test quickly enough to extract timely insights from it.

The solution here isn’t simply a matter of having enough computing power and storage capacity. On the contrary, one of the best ways to fight the complexity inherent in this use case is through streaming algorithms. These are often radically different from traditional statistical methods, using hash functions and suitable random variables to order otherwise chaotic and heterogeneous data for analysis. Such algorithms process the data on the fly, computing aggregates and KPIs that can be stored in lieu of the raw data, thus providing a legible audit trail for root-cause analysis. The same technique paves the way for things like cognitive network management and intent-based networking, creating an environment in which previously illegible network activity becomes not just readable and accessible but immediately value-additive. Rather than letting your data languish unused, you can turn it directly into network insights that will boost your ROI in the long run.
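To make that idea concrete, here is a minimal sketch of one classic streaming technique: a count-min sketch, which uses hash functions to keep approximate per-key counters in fixed memory instead of storing every raw event. The KPI being counted (dropped calls per cell) and all identifiers below are illustrative assumptions, not a reference to any particular vendor’s tooling.

```python
import hashlib

class CountMinSketch:
    """Approximate per-key counters in fixed memory using depth hash rows.

    A classic streaming structure: instead of storing every raw event,
    we keep depth x width counters and answer frequency queries with a
    small, one-sided overestimate.
    """

    def __init__(self, width=1024, depth=4):
        self.width = width
        self.depth = depth
        self.table = [[0] * width for _ in range(depth)]

    def _buckets(self, key):
        # One independent hash position per row, derived from a salted digest.
        for row in range(self.depth):
            digest = hashlib.sha256(f"{row}:{key}".encode()).hexdigest()
            yield row, int(digest, 16) % self.width

    def add(self, key, count=1):
        for row, col in self._buckets(key):
            self.table[row][col] += count

    def estimate(self, key):
        # True count <= estimate; hash collisions only inflate, never deflate.
        return min(self.table[row][col] for row, col in self._buckets(key))


# Illustrative usage: counting dropped-call events per (hypothetical) cell ID
# on the fly, so only the sketch needs to be stored, not the raw event log.
sketch = CountMinSketch()
for cell_id, kind in [("cell-17", "drop"), ("cell-17", "drop"), ("cell-42", "drop")]:
    if kind == "drop":
        sketch.add(cell_id)

print(sketch.estimate("cell-17"))  # approximately 2
```

The design choice worth noting is the trade: a bounded, quantifiable overestimate in exchange for memory that stays constant no matter how many events the system-under-test produces.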

Intelligent Device Automation

Many of the applications for Big Data that emerge in the coming years are likely to deal with complex intangibles like the link between quality of experience (QoE) and various other network factors. Some applications, however, will be decidedly straightforward. For instance, the same technological paradigms that power the rise of hyperautomation are now making it possible to automate new mobile devices in record time. At SEGRON, we offer our iDA (Intelligent Device Automation) product aimed at exactly that purpose. Using device learning logic, the system explores the ins and outs of any new phone as soon as it comes on the market, learning how to use it in a way that’s comparable to human users but faster and more efficient. The device learning system explores the hardware details, operating system, and menu structure of each new device, creating a configuration file with the settings necessary to include it in a test environment without a ton of extra manual effort.
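SEGRON doesn’t publish iDA’s internal file format, so the following is a purely hypothetical sketch of what a learned device profile could look like; every field name and value here is an assumption for illustration only.

```python
import json

# Purely hypothetical illustration of the kind of configuration file a
# device-learning step might emit; these field names are assumptions,
# not SEGRON's actual iDA format.
device_profile = {
    "model": "ExamplePhone X1",                 # hypothetical device name
    "os": {"name": "Android", "version": "13"},
    "radio": ["LTE", "5G NR"],
    "ui_paths": {
        # learned menu-navigation steps for common test actions
        "enable_airplane_mode": ["Settings", "Network", "Airplane mode"],
        "send_sms": ["Messages", "New message"],
    },
}

# Persist the profile so the test environment can pick up the new device.
with open("examplephone_x1.json", "w") as f:
    json.dump(device_profile, f, indent=2)
```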

While the amount of data generated by this process might not qualify for the “Big Data” label, the same principles apply. And what’s the value-add, you ask? Well, the faster you can automate new devices coming onto the market, the more quickly you can incorporate them into your test flows and make sure that any early adopters of those devices get the highest possible QoE on your network. As the 5G era brings an influx of new phones equipped with new radio equipment, the ability to automate devices quickly will likely become increasingly crucial.

Machine Learning

One of the technologies most often associated with Big Data is machine learning. Why? Because machine learning serves as something of a catch-all for algorithms that can analyze gigantic caches of data that would be impossible for human testers to grapple with. As such, the ITU (International Telecommunication Union) and other industry organizations, research houses, and advisory firms have identified a few potential applications, including:

  • QoS and QoE analytics (for both objective KPIs and more subjective ratings of services)
  • SLA (Service Level Agreement) monitoring and enforcement
  • Churn prevention (see the sketch after this list)
  • Fraud detection
  • Network usage prediction
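As a concrete illustration of one of these applications, churn prevention, here is a minimal prediction sketch using a scikit-learn decision tree. The KPI features and the labels are synthetic assumptions generated for the example, not operator data.

```python
# Minimal churn-prediction sketch with scikit-learn; feature names and the
# synthetic data are illustrative assumptions, not real operator data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 1000

# Hypothetical per-subscriber KPIs: mean latency (ms), dropped-call rate,
# and monthly data volume (GB).
X = np.column_stack([
    rng.normal(60, 20, n),
    rng.beta(2, 50, n),
    rng.gamma(2.0, 5.0, n),
])
# Toy label: subscribers with poor QoS are the ones assumed likely to churn.
y = ((X[:, 0] > 80) & (X[:, 1] > 0.04)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)
print(f"holdout accuracy: {model.score(X_test, y_test):.2f}")
```

A shallow tree like this is deliberately simple: it trains in milliseconds and its splits (e.g. "latency above 80 ms") are directly readable by a test engineer, which matters more in practice than squeezing out the last point of accuracy.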

Of course, capturing the data necessary to perform these kinds of workflows isn’t something that happens on its own. Rather, it’s enabled by streaming algorithms. From the extracted data, any of a number of machine learning techniques can be used for link state estimation, traffic aggregation and classification, and QoS prediction. The added value becomes even higher with the introduction of new network topologies defined by SDN/NFV (software-defined networking/network functions virtualization), cell densification, and network slicing. From there, the added analytical and processing power that comes with machine learning can help you improve your tests to localize faults more efficiently, all based on the interconnections and dependencies between different elements latent in the data.
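To show how those two stages fit together, here is a hedged sketch of the QoS-prediction piece: a regression model trained on the kind of per-cell aggregates a streaming stage might emit. The features, the target, and the data are all assumptions for illustration.

```python
# Hedged sketch: predicting a QoS metric (downlink throughput) from the
# kind of per-cell aggregates a streaming stage might emit. Features and
# data are assumptions for illustration only.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
n = 500

# Hypothetical per-cell aggregates: active users, mean signal quality (dB),
# and utilization ratio.
users = rng.integers(10, 200, n)
sinr = rng.normal(15, 5, n)
util = rng.uniform(0.1, 0.95, n)
X = np.column_stack([users, sinr, util])

# Toy target: throughput falls with load and rises with signal quality.
throughput = 100 * (1 - util) + 2 * sinr - 0.1 * users + rng.normal(0, 5, n)

model = RandomForestRegressor(n_estimators=50, random_state=1).fit(X, throughput)
print(model.predict([[80, 18.0, 0.6]]))  # predicted throughput for one cell
```

The point of the exercise: once the streaming stage has reduced raw traffic to compact aggregates, a model like this can flag cells whose predicted QoS diverges from what’s measured, which is exactly the fault-localization signal described above.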

The upshot here is that this new technology can help make your service verification faster, more efficient, and more comprehensive. Again, as we move toward the 5G era, this will become even more critical for network operators. As networks get denser and add radically more connections, there will be more data than ever with which to refine your efforts and delight your customers. At the end of the day, these new use cases might take any number of different forms, from neural nets to fast decision trees and Bayesian algorithms, each of which boasts different ideal applications. The only wrong way to handle these large caches of data is to let them gather dust on your servers.

Thomas Groissenberger

Thomas has more than 25 years of experience in software, testing, and management in Europe and the US. He is the Founder and CEO of SEGRON and a member of the Forbes Technology Council, an invitation-only community for world-class CIOs, CTOs, and technology executives.
