05 Jun 2015

Rethinking the Internet of Things

Ron Wilson, technical author at Altera, looks at how requirements capture and design should be approached for the Internet of Things.

As the Internet of Things (IoT) cements itself into place as the mandatory next big thing for 2015, more systems architects are taking a hard look at its underlying concepts.

As they look, these experts are asking some hard questions about simplistic views of the IoT structure: the clouds of sensors and actuators attached to simple, low-power wireless hubs, linked through the Internet to massive cloud data centres.

Almost every stage of this description is coming into question. Some experts challenge the notion that a swarm of simple sensors is the right way to measure the state of a system in the first place.

Sensible sensing


The obvious way to measure the state of a system is to identify the state variables, find points at which they are exposed so sensors can measure them, and put sensors there. Then pull the sensor data together at a hub. But the obvious way is not necessarily the best way. All those sensors and links make such an approach expensive to install and inherently unreliable.

Another way is to pick a few critical variables that can be sensed remotely and then used to estimate the state of the entire system. This process may be intuitively obvious, or it may involve some serious mathematics and use of a state estimator such as a Kalman filter. One example on the more intuitive side involves security cameras, traffic, parking, and the idea of the smart city.
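
Before turning to that intuitive example, the sketch below illustrates the mathematical route in Python: a one-dimensional Kalman filter that treats a single state variable as a slowly drifting quantity and fuses a stream of noisy remote measurements into a smoothed estimate. The random-walk model and the noise constants are assumptions chosen purely for illustration.

    import numpy as np

    def kalman_1d(measurements, q=1e-4, r=0.25, x0=0.0, p0=1.0):
        """Scalar Kalman filter for a random-walk state observed with noise variance r."""
        x, p = x0, p0                # state estimate and its variance
        estimates = []
        for z in measurements:
            p = p + q                # predict: state assumed nearly constant
            k = p / (p + r)          # Kalman gain
            x = x + k * (z - x)      # update with the new measurement
            p = (1.0 - k) * p
            estimates.append(x)
        return estimates

    # Example: recover a steady level of 5.0 from noisy sensor samples.
    rng = np.random.default_rng(0)
    samples = 5.0 + rng.normal(0.0, 0.5, size=100)
    print(kalman_1d(samples)[-1])    # settles close to 5.0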

A typical smart-city scenario might involve lighting management, parking management, traffic control, and security. A traditional IoT approach would put a light sensor on each street lamp, buried proximity sensors in traffic lanes near each intersection and each parking space, and security cameras at strategic locations well above ground level. Each of these sensors would have a wired connection to a local hub, which in turn would have a wireless link to an Internet access point—except for the light sensors, which would use wireless links from the tops of the lamp posts to their hubs.

There is another way. An intelligent observer, watching the video from a few of the security cameras, would easily see which street lamps were on, which parking spaces were occupied, and when traffic signals should change. If that observer is image-analysis software rather than a person, most of the dedicated sensors, their wiring, and their hubs simply disappear. The result is not only a huge saving in total cost of ownership, but also improved reliability and additional safety and security features that would not have been available from a swarm of simple sensors (Figure 1).


Figure 1. A single camera may be able to collect more data
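
The sketch below (Python with OpenCV, both assumed available) suggests how small the image-analysis task can be for the parking portion of the scenario: a space is marked occupied when its patch of the current frame differs enough from a reference frame of the empty lot. The space coordinates and the threshold are invented for illustration.

    import cv2
    import numpy as np

    # Hypothetical pixel regions for two parking spaces: (x, y, width, height).
    PARKING_SPACES = {
        "space_01": (120, 340, 60, 110),
        "space_02": (190, 340, 60, 110),
    }
    DIFF_THRESHOLD = 25.0   # mean absolute difference that counts as "occupied"

    def occupancy(frame, empty_reference):
        """Return a dict of space name -> occupied flag from one camera frame."""
        now = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        ref = cv2.cvtColor(empty_reference, cv2.COLOR_BGR2GRAY)
        status = {}
        for name, (x, y, w, h) in PARKING_SPACES.items():
            diff = cv2.absdiff(now[y:y+h, x:x+w], ref[y:y+h, x:x+w])
            status[name] = float(np.mean(diff)) > DIFF_THRESHOLD
        return status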

Similar concepts can work on other sorts of systems. State estimators using computable mathematical models of systems can compute the position of a motor shaft from readily-accessible motor winding currents and voltages, or the state of a chemical reaction from external observations. In general, there appears to be a growing trend to favour a small number of remote sensors—often cameras—supported by computing resources, rather than a swarm of simple sensors with their attendant power, connectivity, reliability, and security issues.
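
As a hedged illustration of the motor case, the fragment below estimates the speed and shaft angle of a brushed DC motor from terminal voltage and winding current alone, using the back-EMF relation speed ≈ (V - i·R) / Ke and integrating the result. The motor constants are assumed values, and winding inductance is ignored for simplicity.

    # Assumed motor constants for illustration only.
    R_WINDING = 1.2    # winding resistance, ohms
    K_E = 0.05         # back-EMF constant, volts per rad/s

    def estimate_shaft(v_samples, i_samples, dt, theta0=0.0):
        """Estimate (speed, angle) over time from voltage and current samples."""
        theta = theta0
        trajectory = []
        for v, i in zip(v_samples, i_samples):
            omega = (v - i * R_WINDING) / K_E   # speed from back-EMF, rad/s
            theta += omega * dt                 # integrate speed to get angle
            trajectory.append((omega, theta))
        return trajectory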

That changes everything

The idea of substituting heavy computing algorithms—such as convolutional neural networks or Kalman filters—for clouds of simple sensors has obvious advantages. But it creates problems, too. Designers seem to face a dilemma. Do they preserve the spirit of virtualization by moving the raw data—potentially multiple streams of 4K video—up to the cloud for analysis? Or do they design-in substantial computing power close to the sensors? Both approaches have their challenges and their advocates.

Putting the computing in the cloud has obvious arguments in its favour. You can have as much computing power as you want. If you wish to experiment with big-data algorithms, you can have almost infinite storage. And you only pay for roughly what you use. But there are three categories of challenges: security, latency, and bandwidth.

If your algorithm is highly intolerant of latency, you have no choice but to rely on local computing. But if you can tolerate some latency between sensor input and system response, the question becomes how much, and with how much variation. For example, some control algorithms can accommodate significant latency in the loop, but only if that latency is nearly constant. These issues are obviously not a concern when the amount of data moving to the cloud is small and time is not critical. But if a system design requires moving real-time 4K video from multiple cameras to the cloud, the limitations of the Internet become an issue.
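
A rough, back-of-the-envelope calculation shows why. With assumed figures for frame size, frame rate, and compression ratio, even a handful of 4K cameras can strain a typical uplink:

    CAMERAS = 4
    WIDTH, HEIGHT = 3840, 2160      # 4K UHD frame
    BITS_PER_PIXEL = 24
    FPS = 30
    H265_RATIO = 200                # assumed compression ratio

    raw_per_camera = WIDTH * HEIGHT * BITS_PER_PIXEL * FPS          # bits/s
    compressed_total = CAMERAS * raw_per_camera / H265_RATIO
    print(f"Raw, per camera:       {raw_per_camera / 1e9:.1f} Gbit/s")
    print(f"Compressed, {CAMERAS} cameras: {compressed_total / 1e6:.0f} Mbit/s")
    # Roughly 6 Gbit/s raw per camera, and about 120 Mbit/s in aggregate even
    # after heavy compression, which itself adds latency and local compute.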

Virtualization and its discontents

The requirements of our cloud-centric system extend through the network and into the data centre, where profound change is already under way. As more compute-intensive, event-triggered applications descend upon the data centre, server and storage virtualization become almost mandatory. The data centre must be able to run an application on whatever resources are available, and still meet the outside system’s service-level requirements.

There is another difficult point as well. Some algorithms resist being spread across multiple cores on multiple servers. They depend on single-thread performance, and the only way to make them go faster is to run them on faster hardware.

The endpoint of this thinking is a cloud data centre that appears entirely application-specific to the user, and entirely virtualized to the operator. To the user it offers access to processing, accelerator, and storage resources configured to serve her algorithms. To the operator, the data centre is a sea of identical, software-definable resources.

The fog

We have been discussing how to provision IoT applications that could do all their computing in the cloud. Now let’s look at applications that, for safety, bandwidth, latency, or determinism reasons, cannot. These applications will require significant amounts of local computing and storage resources: at the sensors (as in vision-processing surveillance cameras), in the hub, or in the Internet switches.

Today these resources are being designed into proprietary sensors and hubs as purely application-specific hardware, generally using lightweight CPUs supported by hardware accelerators. Imagine virtualization seeping through the walls of the data centre, spreading out to engulf all the diverse computing, storage, and connectivity resources of the IoT. You could locate an application object anywhere: in the cloud, in intelligent hubs or smart sensors, eventually even inside the network fabric (Figure 2). You could move it at will, based on performance metrics and available resources. The system would be robust, flexible, and continually moving toward optimal use of resources.


Figure 2. The fog of the Internet

Many steps must be taken to reach this vision. Applications must run in a portable container, such as a Java virtual machine or an Open Computing Language (OpenCL™) platform, that allows them to execute without alteration on a huge range of different hardware platforms. The notion of application-directed networking must extend beyond the data centre, into a version of the Internet that can support QoS guarantees on individual connections, and eventually computing tasks within nodes. And somehow, all of this must be made secure.
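
As a sketch of what such portability looks like in practice, the fragment below uses pyopencl (assuming it and an OpenCL runtime are installed) to build and run one tiny kernel on every OpenCL device a node happens to expose, whether that node is a cloud server, a hub SoC, or a smart camera:

    import numpy as np
    import pyopencl as cl

    KERNEL_SRC = """
    __kernel void scale(__global const float *src, __global float *dst, float k) {
        int i = get_global_id(0);
        dst[i] = k * src[i];
    }
    """

    x = np.arange(8, dtype=np.float32)
    for platform in cl.get_platforms():
        for device in platform.get_devices():
            ctx = cl.Context(devices=[device])
            queue = cl.CommandQueue(ctx)
            prog = cl.Program(ctx, KERNEL_SRC).build()   # compiled per device
            mf = cl.mem_flags
            src = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=x)
            dst = cl.Buffer(ctx, mf.WRITE_ONLY, x.nbytes)
            prog.scale(queue, x.shape, None, src, dst, np.float32(2.0))
            out = np.empty_like(x)
            cl.enqueue_copy(queue, out, dst)
            print(device.name, out)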

IoT security

The need for ironbound security is already recognized inside cloud data centres. No one is going to let you store their data if they think you might let a third party modify, read, or snoop it. But preventing those things from happening in a dynamic, virtualized environment, where no one really knows the state of the total system, is a daunting challenge.

As cloud computing becomes fog computing, these security requirements expand into hubs and, eventually, into the public network, adding another computing load to both hub SoCs and software-defined networking (SDN) data planes. But with everything from biometric data to autonomous-vehicle control messages traversing the Internet, today’s attitude toward security would be catastrophic.

We have seen how a close examination of the IoT undermines the simplistic picture of a myriad of simple things all connected to the Internet. But moving beyond this view brings profound changes to the Things, their hubs, the structure of data centres, and the Internet itself. There may be no stable waypoints between where we are today and a fog-computing, fully secure new realization of the network and its data centres.



About the author

Ron Wilson is editor-in-chief of the System Design Journal publication from Altera. He has close to 40 years of experience in the electronics industry. Prior to joining Altera he held a variety of editorial positions with EE Times, served as editorial director and publisher of ISD Magazine, and has written and edited for EDN Magazine, Computer Design, and Embedded Systems Design. Wilson holds a B.S. in Applied Science from Portland State University.

Altera Corporation is a pioneer in the field of programmable logic solutions. Altera offers a host of products including FPGAs, SoCs with embedded processor systems, CPLDs, and ASICs in combination with software tools, intellectual property, embedded processors and customer support to provide high-value programmable solutions to over 16,000 customers worldwide. Altera was founded in 1983 and is headquartered in San Jose, California, employing more than 3,000 people in over 20 countries.
