How to navigate the future of IIoT systems

It’s critical to understand which IIoT connectivity technology to use for each application.

By Stan Schneider, PhD May 11, 2017

The Industrial Internet Consortium (IIC) recently published the Industrial Internet Connectivity Framework (IICF) after two years of analyzing IIoT connectivity technologies. The IICF reflects the insights and strong opinions of many experts, including representatives of the top industry consortia, many companies, and the most important standards organizations.

The most surprising conclusion: the IIoT is big. Really big. It’s so big that the technologies don’t really overlap. The impression that there is overlap is mostly just confusion.

Designers may assume they can choose any standard, such as the Data Distribution Service (DDS, from the Object Management Group), OPC Unified Architecture (OPC UA), MQ Telemetry Transport (MQTT), or oneM2M, and succeed. But this implies the IIoT connectivity solution space overlaps, as in Figure 1.

The reality is very different. The IIoT covers many industries with very different use cases, including industrial control, robotics, autonomous cars, aerospace, manufacturing, process control, central power generation, and distributed renewable energy. These are only a few of the hundreds of industries and thousands of applications in the IIoT space.

In fact, the IIoT space is so big that the technology options rarely, if ever, overlap. Today’s architecture challenge in the IIoT space is therefore not one of choosing among overlapping technologies that may each be able to reasonably solve a problem. The challenge is understanding the technologies, comparing each one’s intended use to the application, and choosing the one that best addresses the particular challenge the application faces. Sure, stretching a technology all out of proportion may make anything work, but that will result in a lot of extra work and a suboptimal design. A more realistic map of the situation looks more like the sparse Venn diagram in Figure 2 than the overlapping one in Figure 1.

The lack of overlap in the IIoT space actually makes an architect’s task much simpler. The real problem isn’t choosing between similar options; it’s understanding the very different options and overcoming biases. The IICF directly addresses this. 

How to choose technology for the IIoT

Let’s take this process a bit further. It’s possible to ask a few questions for each technology option and quickly narrow the choices. These questions may somewhat oversimplify the problem, but they are a great starting point. The IICF identifies four potential "core connectivity standards": DDS, OPC UA, oneM2M, and RESTful HTTP. The first three are analyzed below; RESTful HTTP is well understood, so it’s not analyzed here. MQTT is also examined because of its wide awareness, even though it doesn’t qualify as an IICF "core connectivity standard"; it lacks the standard type system required for interoperability.

DDS

Here are five questions to answer to decide if you need DDS:

1. Is it a big problem if your system stops working for a short time?

2. Have you said either "millisecond" or "microsecond" in the last two weeks?

3. Do you have more than 10 software engineers?

4. Are you sending data to many places, as opposed to just one (like to the cloud or a database)?

5. Are you implementing a new IIoT architecture?

If the answer to three of the five questions is "yes," DDS is needed.

DDS is the standard that defines a databus. A databus is data-centric information flow control, a concept similar to a database, which is data-centric information storage. The key difference: a database saves old information that can be searched by relating properties of the stored data. A databus manages future information by filtering on properties of the incoming data. Both understand the data contents and let applications act directly on and through the data rather than with each other. Applications using a database or a databus do not have a direct relationship with peer applications.

With knowledge of the structure, contents, and demands on data, the databus can manage the dataflow. It can, for instance, resolve redundancy, eliminating repeat updates and managing multiple sources, sinks, and networks. The databus can control Quality of Service (QoS) like update rate, reliability, and guaranteed notification of data liveliness. It can look at the updates and optimize how to send them, or decide not to send them at all. It also can discover, control, and secure data flows, offering them to applications and generic tools alike. This accessible data greatly eases system integration and scaling.
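To make the databus concept concrete, here is a minimal publishing sketch using the Eclipse Cyclone DDS Python binding, one of several DDS implementations (the article does not endorse a specific one). The SensorReading type, topic name, and QoS values are illustrative assumptions, not taken from the standard:

    # A minimal databus sketch (pip install cyclonedds). The type, topic
    # name, and QoS values below are illustrative assumptions.
    from dataclasses import dataclass

    from cyclonedds.core import Qos, Policy
    from cyclonedds.domain import DomainParticipant
    from cyclonedds.idl import IdlStruct
    from cyclonedds.idl.annotations import key
    from cyclonedds.pub import DataWriter
    from cyclonedds.topic import Topic
    from cyclonedds.util import duration

    # The data model is the interface: every participant shares this type.
    @dataclass
    class SensorReading(IdlStruct, typename="SensorReading"):
        sensor_id: str
        key("sensor_id")  # instances are keyed per sensor
        value: float

    participant = DomainParticipant()

    # QoS is part of the contract: reliable delivery and a 100 ms deadline
    # that the middleware itself enforces and reports on at runtime.
    qos = Qos(
        Policy.Reliability.Reliable(duration(seconds=1)),
        Policy.Deadline(duration(milliseconds=100)),
    )

    topic = Topic(participant, "SensorReadings", SensorReading, qos=qos)
    writer = DataWriter(participant, topic)

    # Publish; every discovered reader with compatible QoS receives the
    # update directly, with no broker or server in the path.
    writer.write(SensorReading(sensor_id="pump-7", value=3.14))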

So how does this satisfy the five questions?

1. Since it’s directly controlling flow, a databus does not require servers, so there’s no single point of failure and no unexpected downtime while a server reboots and connections are remade. Without direct relationships with peers, redundancy is transparent, so you can easily have failover applications, sensors, and even networks. If the application is managing a thermostat or checking if there’s milk in the fridge, these things aren’t worth it. But if the software is responsible for someone’s breathing, the stability of the Western power grid, or a carbot traveling at 180 mph down a future freeway, even short interruptions can’t be tolerated.

2. Since the databus has full control over how data flows, it can send information directly between peers in times measured in milliseconds or microseconds. DDS can use multicast intelligently when available. It knows delivery deadline requirements and can measure if the system is meeting delivery times. So, it can warn applications if the network (or anything else) can’t handle the needed flow rates. Currently available performance on a LAN supports feedback loops with a 30-microsecond delivery window. These metrics will improve with new real-time networking, such as the evolving IEEE Time-Sensitive Networking (TSN) effort.

3. Teams of programmers must control interfaces between modules. The databus specifies a full data model. All connectivity frameworks do this to some extent, but the databus specification is much more expressive than others. It includes not only type information, but also QoS like deadlines, sensor availability and flow rates. So, the interfaces between teams are no longer just captured on paper or header files. The interfaces are defined carefully and then enforced at runtime. The databus can even manage the evolution of those interfaces, allowing modules, for instance, that use newer and older versions of an interface to interoperate. That’s critical for a practical large IIoT system. DDS is a powerful software integration framework.

4. A databus controls flow between many complex applications. It really shines where there’s a mix of fast and slow components, when careful filtering can make the overall flow manageable, when multiple field-based components need that data, and when those flows must be guaranteed. When only trying to capture sensor information to send it to the cloud for analysis, there are simpler solutions, like MQTT.

5. Finally, a databus requires that you build a truly new architecture. While most implementations have legacy subsystems to integrate, a databus design is not best used to optimize an existing design. A databus is best used to build a new generation or a new type of system. DDS is used in more than 1,000 designs; almost all are building something new rather than optimizing something old.

Most databus systems don’t need all five of these properties. But, three of the five is plenty to make a databus design truly compelling. 

OPC UA

OPC UA technology targets device interoperability. Before OPC UA (or its predecessor OPC), applications simply accessed devices directly through proprietary application programming interfaces (APIs) provided by device vendors. Unfortunately, this meant that applications became dependent on the particular device they controlled. Worse, higher-level applications such as human-machine interfaces (HMI) had no easy way to find, connect to or control the various devices in factories.

OPC UA divides system software into clients and servers. The servers usually reside on a device or higher-level Programmable Logic Controller (PLC). They provide a way to access the device through a standard "device model." There are standard device models for dozens of types of devices, from sensors to feedback controllers. Each manufacturer is responsible for providing the server that maps the generic device model to its particular device. The servers expose an object-oriented, remotely callable API that implements the device model.

Clients can connect to a device and call functions from the generic device model. Thus, client software is independent of the actual device, and factory integrators are free to switch manufacturers or models as needed. So, OPC UA provides the connectivity needed to drive the system. Note that the device model also provides a level of "semantic" interoperability, because the device model defines the generic object APIs in known units and specified reference points.
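As a concrete illustration, here is a minimal client sketch using the community Python "opcua" (freeopcua) package. The endpoint URL and node identifier are hypothetical placeholders, not part of the standard:

    # A minimal OPC UA client sketch (pip install opcua). The endpoint
    # URL and node id are hypothetical placeholders.
    from opcua import Client

    client = Client("opc.tcp://192.168.0.10:4840")  # device or PLC server
    client.connect()
    try:
        # Address the device through its standard model rather than a
        # vendor-specific API; the node id comes from the server's namespace.
        temperature = client.get_node("ns=2;s=Boiler1.Temperature")
        print("Current value:", temperature.get_value())
    finally:
        client.disconnect()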

Determine if OPC UA should be used by answering the following questions:

1. Are you in discrete manufacturing?

2. Are you building a device that will be integrated by industrial engineers or technicians, rather than software engineers?

3. Will your product be used in different applications in different systems, as opposed to one (type of) system where you control the architecture?

4. Have you said the word "workcell" in the last two weeks?

If the answer to the majority of these questions is "yes" then OPC UA is likely a good choice. Why? Let’s look at how the technology fits these use case indicators:

1. OPC UA is well-positioned for discrete manufacturing. The German initiative Industrie 4.0 recommends it. Industrie 4.0 is very focused on manufacturing, in contrast to the IIC, which works on IIoT technical system architecture across verticals. In a sound bite, Industrie 4.0 is about making things, while the IIC is about making things work. This is another good example of the bigness of the space. In the market, DDS has few applications in discrete manufacturing, while OPC UA has few outside it.

2. OPC UA is a software architecture. However, those using and choosing it usually target users who are not software engineers. In fact, one of the reasons for its popularity in manufacturing is that there are few software engineers in manufacturing settings. System integration in manufacturing is usually done between devices, not software modules. OPC UA has very helpful device models that aid in interoperability between device manufacturers. It is not designed as a powerful software integration environment for teams of programmers.

3. OPC UA has a powerful system discovery mechanism called an "address space." It builds an object hierarchy covering all the devices and subsystems at runtime (see the browsing sketch after this list). This can be rolled up to a site-wide server, for instance, and the system connected to a site HMI. This dynamic system building is very useful for providing similar functionality, such as historian storage or HMI viewing, to very different applications. It is appropriate where your users control the system design, not you. On the other hand, a software architect who needs to define the system architecture may feel constrained by the lack of system modeling and the simple "peek and poke" flavor of the data interchange.

4. Most OPC UA systems end up in workcells. A workcell is a stand-alone subsystem, usually incorporating 20 devices or so. OPC UA, and especially the industrial-integration software that supports it, targets workcell integration. The address model and object-oriented nature directly support a hierarchy of these workcells. Users of the other standards rarely characterize their use cases as "workcells." 
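Here is the browsing sketch referenced above: a hedged example of walking a server’s address space at runtime with the same community "opcua" package. The endpoint is again a hypothetical placeholder:

    # Discover the object hierarchy an OPC UA server exposes at runtime.
    from opcua import Client

    client = Client("opc.tcp://192.168.0.10:4840")  # hypothetical endpoint
    client.connect()
    try:
        # Walk the hierarchy under the standard Objects folder. Real code
        # would bound the depth and handle very large address spaces.
        def browse(node, depth=0):
            for child in node.get_children():
                print("  " * depth + child.get_browse_name().Name)
                browse(child, depth + 1)

        browse(client.get_objects_node())
    finally:
        client.disconnect()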

OneM2M

To determine if you should use oneM2M, consider these questions:

1. Do you know what "ICT" stands for, and does it describe what you do?

2. Is the cellular network your primary connection technology?

3. Are your target applications largely composed of moving parts?

4. Can the components of the system tolerate intermittent connections and loosely-controlled latencies?

5. Will the system leverage services provided by a communications provider such as a telephone company?

These questions differ in character from the questions about the previous technologies. OneM2M results from cooperation among many mobile wireless providers. It targets networks of mobile devices that communicate mostly or only through the base-station infrastructure.

The following points examine why oneM2M is implied by these questions:

1. Surprisingly, most target users of DDS and OPC UA cannot even correctly expand the industry’s name: information and communication technology (ICT). There is, of course, overlap. But if you consider yourself in the ICT industry, you need to consider oneM2M, since it was designed for that industry.

2. The core design of oneM2M is to define services that mobile devices can use to cooperate and integrate. If you are going to use those services, plainly, you need to connect to them. They will be running in the platform layer (the cloud), connected mostly through the cellular data infrastructure. Other technologies also use IP traffic over the cellular network, but they usually also heavily leverage LAN, local wireless, or WAN networking technologies in their designs.

3. There is a potential future market for 5G wireless integration of fixed assets such as manufacturing cells. However, this technology is still years away. OneM2M shines for mobile assets. One especially powerful aspect is that oneM2M abstracts away differences in the protocols used to reach those devices. Thus, it can integrate different ways of connecting to similar devices.

4. Wireless, mobile systems are not reliably connected. Thus, applications must not fail when communications are offline for a few seconds or minutes.

5. OneM2M system designers assume the cloud in their designs. They heavily leverage the "three tier" architecture. Again, the core of oneM2M is the standard services layer. You can’t use those services if you are not using a telephone company or its partners. 
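To give a flavor of those services, here is a hedged sketch of oneM2M’s HTTP protocol binding (the Mca reference point): an application posts a data sample as a contentInstance under a container on the service provider’s common services entity (CSE). The URL, originator credential, and resource names are all hypothetical:

    # A hedged sketch of oneM2M's HTTP binding. CSE endpoint, originator,
    # and container path are hypothetical placeholders.
    import requests

    CSE_URL = "http://cse.example.com:8080/~/in-cse/in-name/myApp/temperature"

    headers = {
        "X-M2M-Origin": "Cmyapp",                 # originator (AE credential)
        "X-M2M-RI": "req-0001",                   # request identifier
        "Content-Type": "application/json;ty=4",  # ty=4 marks a contentInstance
    }

    # The data sample itself, wrapped in oneM2M's resource representation.
    body = {"m2m:cin": {"con": "23.5"}}

    resp = requests.post(CSE_URL, json=body, headers=headers)
    print(resp.status_code, resp.text)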

MQTT

MQTT is a simple protocol designed mostly for the "data collection" use case. It does not qualify as a "core connectivity standard" per the IICF guidelines, because it has no standard type system. Thus, it can communicate only opaque data types, not typed data structures. Without a type system, it cannot offer a standard ability to interoperate at the "syntactic" data-structure level.

Nonetheless, MQTT enjoys significant awareness. Because of its simplicity, simple questions about your system will help determine if it’s appropriate:

1. Do you think of your application as data collection?

2. Is there little device-device communications?

3. Is interoperability not a consideration?

4. Do you have many small devices?

5. Is software a minor challenge?

Again, if the answer to three of these questions is yes, consider MQTT. The following are the reasons why:

1. The first "T" in MQTT stands for "telemetry," or data collection at a distance. This is its main use case.

2. MQTT uses a hub-and-spoke design. It doesn’t support direct inter-device communications, so it’s hard to develop code that must do that.

3. Without a type system, MQTT applications are not interoperable with any other applications. The only way to build interoperability is to somehow share types outside of the standard. Almost all MQTT applications are standalone systems.

4. MQTT is by far the simplest technology considered here. If you have many small devices that are simply connected, simple software can handle your challenge.

5. On the other hand, MQTT offers almost nothing to ease software development. There is only one QoS setting (reliability). It has no defined services. It offers no data or device modeling. All of the software will then have to be written from scratch, which is only practical for a simple software challenge. 
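The points above can be seen in a few lines of code. This is a minimal telemetry sketch with the Eclipse Paho Python client (paho-mqtt, 1.x constructor style); the broker address and topic are placeholders. Note that the payload is opaque bytes: the JSON layout here is a private agreement between publisher and subscriber, not something the standard defines:

    import json
    import paho.mqtt.client as mqtt

    client = mqtt.Client()                      # paho-mqtt 1.x style constructor
    client.connect("broker.example.com", 1883)  # placeholder broker

    # MQTT carries opaque bytes; this JSON layout is our own convention.
    payload = json.dumps({"sensor_id": "pump-7", "value": 3.14})

    # QoS here is just delivery reliability (0, 1, or 2); there are no
    # deadline, liveliness, or filtering controls.
    client.publish("plant/pump-7/telemetry", payload, qos=1)
    client.disconnect()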

Working together

Much of the IICF is dedicated to an architecture for integrating these technologies. This is critical for the emergence of the "Internet" part of the IIoT. The reference architecture requires standards-based core gateways between the core connectivity standards, as shown in Figure 3.

Someday, there will be such integrations: manufacturing systems tied into transportation and power, for example. Sophisticated autonomy software will reconfigure workcells, creating a bold new world for component device vendors. Wireless 5G systems will interoperate with freeway controllers and autonomous vehicles. Wireless 5G may even directly control factory devices, eliminating wiring in manufacturing.

However, designers should consider the vastness of the space. Today, there are few concrete needs to bridge the light years between connectivity systems. That doesn’t mean the industry isn’t responding to the obvious need. For instance, a recent demonstration at an IIC testbed shows a bridge between DDS and OPC UA. Long term, these integrations will allow bigger systems to combine technologies. For now, designers must understand the vast differences between technologies and choose the one that best fits their problem space.
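As a rough illustration of what such a gateway involves, here is a hedged sketch in the spirit of that DDS-to-OPC UA bridge demonstration: take samples off a DDS topic and mirror them into a node on an OPC UA server. It reuses the hypothetical SensorReading type, topic, and endpoints from the earlier sketches, and is a sketch of the pattern, not the testbed’s implementation:

    import time

    from cyclonedds.domain import DomainParticipant
    from cyclonedds.sub import DataReader
    from cyclonedds.topic import Topic
    from opcua import Client

    # DDS side: subscribe to the SensorReading topic defined in the
    # earlier databus sketch (assumed importable from that module).
    participant = DomainParticipant()
    topic = Topic(participant, "SensorReadings", SensorReading)
    reader = DataReader(participant, topic)

    # OPC UA side: connect to the (hypothetical) server and target node.
    ua_client = Client("opc.tcp://192.168.0.10:4840")
    ua_client.connect()
    node = ua_client.get_node("ns=2;s=Boiler1.Temperature")

    try:
        while True:
            for sample in reader.take(10):    # drain up to 10 DDS samples
                node.set_value(sample.value)  # mirror into the address space
            time.sleep(0.1)
    finally:
        ua_client.disconnect()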

Stan Schneider, PhD, is CEO of Real-Time Innovations Inc. and a member of the Industrial Internet Consortium Steering Committee. Edited by Emily Guenther, associate content manager, Control Engineering, CFE Media, eguenther@cfemedia.com.

MORE ADVICE

Key Concepts

Understanding the IIoT space and IIoT systems.

Determining which technology to use for each application/project.

How to choose the right technology for the IIoT space.

Consider this

Once the proper technology is chosen, what is the next step in implementing a true IIoT solution in the industrial space?