5 Keys to a Successful IoT Testbed

Up to this point, the most common use cases for the IoT have continued to be industrial: tracking pallets as they move through a warehouse, monitoring inventory levels, analyzing machine usage, etc. But with the advent of 5G, consumer applications for this technology are going to become much more prevalent.

This is especially true as the speed of over-the-air data transfers becomes comparable to sending the same information over a wire. To handle this inevitable influx, the 3GPP has been setting standards and protocols for NB-IoT, with a focus on low-power wide-area (LPWA) applications over licensed spectrum bands.

At present, a critical mass of telco operators and other IoT users have adopted the 3GPP’s standards, making them the closest thing to a consensus choice for operating this technology. These standards, however, are just the beginning.

Many telco operators and equipment manufacturers are going to have to develop and operate robust IoT testbeds in order to keep pace with the rapid rate of technological change. To that end, here are a few keys to running a successful testbed for this emerging technology.

1. Consider Mobility

One of the most significant challenges for telco testers working with IoT devices is also one of the technology’s most valuable features for consumers: mobility. Picture a Fitbit or a smart watch: These devices are constantly transmitting data as the users go about their daily business.

As they move about their city or town, they log distance travelled, locations, and other information that users can review later on their smartphones or home computers. This is fairly typical functionality—but it raises important questions for testers:

  • What happens if the device can’t connect to the network?
  • Can the device fall back on legacy networks, or does it have local storage (such that it can send the information again later)?
  • How quickly does the device battery drain when it’s away from a charging station?
  • What happens if the application loses connection to the application server?

These are critical questions for any IoT deployment, but because they’re based on mobility, it’s difficult to test them in a lab setting. Testers might consider using attenuators and RF matrices to simulate changes in LTE signal strength like the ones a subscriber on the move might experience.
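The local-storage question above can also be exercised in software before any RF hardware is involved. Here is a minimal sketch of a store-and-forward model a tester might script against: the class, its method names, and the reading formats are all illustrative, not a real device API.

```python
from collections import deque

class TrackerDevice:
    """Toy model of an IoT device that buffers readings locally while
    the network is unreachable and flushes them on reconnect.
    All names here are invented for illustration."""

    def __init__(self, buffer_limit=100):
        self.connected = True
        self.buffer = deque(maxlen=buffer_limit)  # oldest readings dropped first
        self.delivered = []                       # stands in for the server side

    def record(self, reading):
        if self.connected:
            self._send(reading)
        else:
            self.buffer.append(reading)           # store-and-forward path

    def set_connected(self, up):
        self.connected = up
        if up:
            while self.buffer:                    # flush the backlog in order
                self._send(self.buffer.popleft())

    def _send(self, reading):
        self.delivered.append(reading)

# A test case can script an outage and assert nothing is lost:
device = TrackerDevice()
device.record("step_count=1200")
device.set_connected(False)        # simulate leaving coverage
device.record("step_count=1350")
device.record("gps=59.33,18.07")
device.set_connected(True)         # back in coverage; backlog flushes
print(device.delivered)
```

The same scripted outage can then be replayed over real attenuated RF links to confirm the device behaves the way the model predicts.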

2. Prioritize Security

Often, it seems like IoT deployments treat security as an afterthought, prioritizing innovation first and then trying to append a layer of security on top of the existing infrastructure. This is, to be blunt, a recipe for disaster.

Whether you’re building out new IoT applications in your test lab or simply verifying acceptance of new devices on your network, you’ll need to include security-related KPIs and test cases as part of your standard test suites.

If you treat data security, encryption, and access control like afterthoughts, they’ll feel like afterthoughts to users—who may seek out devices, applications, and networks that pose less risk.

3. Analyze Results on a Protocol Level

As a tester working with new technologies, you have a slightly different set of concerns than you might have otherwise. If you’re rolling out a new HSS database to handle subscriber information, your goal is to make sure that the whole data collection and maintenance process functions from end to end.

If it does, you can be reasonably confident about the migration. If there are gaps or errors in the workflow, end-to-end tests will typically help you to pinpoint the issue and resolve it. With an IoT testbed, on the other hand, end-to-end tests aren’t always enough.

Instead, you might gain some value by going beyond end-to-end and examining the signalling and protocol data produced by the system-under-test. Doing this by hand can be overly complex and time-consuming, but within the context of an automated testing solution you can incorporate this data into your final test reports.

In this way, you can get a much earlier read on any issues that may be developing, and you can gain a clearer understanding of exactly what is happening on a protocol level as you develop new use cases for IoT technology.
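As a concrete sketch of this kind of protocol-level check, the snippet below scans a simplified, plain-text signalling trace for reject messages in an attach sequence. The trace format, message names, and cause code are invented for illustration; a real testbed would decode actual captures (e.g. pcap files) with a proper protocol analyzer.

```python
import re

# A simplified signalling trace; real traces need a proper decoder.
trace = """\
12:00:01 UE->MME AttachRequest
12:00:02 MME->UE AuthenticationRequest
12:00:03 UE->MME AuthenticationResponse
12:00:04 MME->UE AttachReject cause=15
"""

LINE = re.compile(r"(\S+) (\S+)->(\S+) (\w+)(?: cause=(\d+))?")

def check_attach(trace):
    """Return (passed, findings) for a toy attach sequence.
    An end-to-end test would only report 'attach failed'; scanning
    the trace surfaces which message failed, and why."""
    findings = []
    for line in trace.strip().splitlines():
        ts, src, dst, msg, cause = LINE.match(line).groups()
        if msg.endswith("Reject"):
            findings.append(f"{ts}: {msg} from {src} (cause={cause})")
    passed = not findings and "AttachAccept" in trace
    return passed, findings

passed, findings = check_attach(trace)
print(passed, findings)
```

Folding findings like these into the final test report is what turns a bare pass/fail verdict into an early diagnosis.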

4. Test 24/7

In the section above, we hinted at the potential value of an automated telecom test solution integrated into your IoT testbed’s infrastructure. Analyzing signal traces is a potentially value-additive use case for such a solution—but it’s far from the only one. To wit, automation can also power 24/7 testing within your testbed. Why is this valuable? For a number of reasons:

  • 24/7 testing increases test velocity, which in turn helps you to integrate cutting edge technology like the IoT into your network more quickly.
  • Tests that can be run 24/7 can also be limited to off hours when engineers aren’t working, saving testers the trouble and inconvenience of waiting for low traffic load in the system-under-test before testing.
  • Round-the-clock testing makes it easier to incorporate regression tests into your standard test suites. This is critical for testbeds in particular, because changes that could affect network connectivity and other factors are being made constantly.
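Gating long-running suites on an off-hours window is simple to automate. The sketch below shows one way a scheduler might decide whether the testbed is free; the 22:00–06:00 window is an assumed example, and handling a window that wraps past midnight is the one subtle part.

```python
from datetime import time

def in_off_hours(now, start=time(22, 0), end=time(6, 0)):
    """True if 'now' falls in the nightly window when engineers
    aren't using the testbed. The window may wrap past midnight;
    the 22:00-06:00 default is purely illustrative."""
    if start <= end:
        return start <= now < end
    return now >= start or now < end   # window wraps midnight

# A scheduler can use this check to hold regression suites until
# traffic load in the system-under-test is low:
print(in_off_hours(time(23, 30)))  # inside the nightly window
print(in_off_hours(time(9, 15)))   # working hours
```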

Since time-to-market is of particular importance with new technologies, the ability to test round-the-clock can be a major factor in giving you an edge over the competition.

5. Make Your Findings Readable and Accessible

This advice is true not just for IoT testbeds—or even testbeds in general—but it certainly bears mentioning in this context. No matter how innovative your technology is, or how thorough your verification passes are, your testbed’s value will be severely limited if users elsewhere in your organization can’t access and understand the results.

For issues of access, it’s helpful to make sure that your testing efforts aren’t conducted in a silo. Make sure that you’re storing documentation where it’s accessible to those who need it, and that it’s in a format that people can work with.

As for readability, it’s worth considering the value of something like keyword-based tests. In this kind of workflow, tests are executed via readable, pre-defined keywords—and the subsequent reporting automatically leverages those same keywords.

This makes the test results easy to understand for technical and non-technical users alike, meaning that the impact of any experiments conducted in your testbed can be grasped across your organization.
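To make the keyword-driven idea concrete, here is a minimal runner in which each test step is a readable keyword row, and the report reuses those same keywords. The keyword names and actions are invented for illustration; established frameworks provide far richer versions of this pattern.

```python
# Minimal keyword-driven test runner; all names are illustrative.
def attach_device(ctx, name):
    """Pretend to attach a device to the network."""
    ctx.setdefault("attached", []).append(name)
    return True

def verify_attached(ctx, name):
    """Check that a device was previously attached."""
    return name in ctx.get("attached", [])

# Each human-readable keyword maps to an executable action.
KEYWORDS = {
    "Attach Device": attach_device,
    "Verify Attached": verify_attached,
}

def run_test(steps):
    ctx, report = {}, []
    for keyword, arg in steps:
        ok = KEYWORDS[keyword](ctx, arg)
        # The report line reuses the same keyword the test was written in.
        report.append(f"{keyword} {arg}: {'PASS' if ok else 'FAIL'}")
    return report

report = run_test([
    ("Attach Device", "sensor-01"),
    ("Verify Attached", "sensor-01"),
])
print(report)
```

Because the report lines mirror the test steps verbatim, a non-technical reader can follow exactly which step passed or failed without reading any code.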