Technologies for the smartest machines in the IIoT era

Inside Machines: Essential technology developments within distributed, connected machines with real-time capabilities include heterogeneous processing architectures; hardware and software design and development; and readiness for the Industrial Internet of Things (IIoT). See five architecture examples and diagrams.
By Greg Brown August 1, 2016

Figure 1 illustrates a simplified version of the basic heterogeneous architectures. Courtesy: National Instruments

What does it mean to make something smart? Consider the scope of this industry catchphrase and what it means for a machine to be smart and to provide advantages for increased use of Industrial Internet of Things (IIoT) design strategies.

Perhaps it means a machine is smart enough to sense everything a developer can dream up. Maybe this machine has the most precision or multiple sensor types to feed some new control and/or predictive maintenance algorithm. How about vision-guided motion? Or multiple protocol communications and translation? Are the smarts local, or is control distributed? Where does machine learning fit? Does smart mean the machine works in the domains of Industrie 4.0, the IIoT, Made in China 2025, Make in India, and more? Must a machine make new business models possible to be labeled as smart? 

Heterogeneous processing architecture

In an advanced, smart machine, a fast and modern CPU is needed to process multi-axis motion and vision algorithms. This is a challenge today because the fastest processors are developed for server-type workloads, and their complex pipelines, caches, and related structures are optimized for throughput instead of deterministic, real-time response. Multicore (up to four cores) and many-core (more than four cores) designs are now the primary means of increasing processor performance.

To take advantage of these cores, tasks must be partitioned into parallel control operations. Real-time operating systems and software libraries should provide thread safety to lessen the programming complexity of the multithreaded applications associated with multicore and many-core development. At the lower to middle range of the CPU performance scale, there is still room to increase clock frequency, and hence single-thread performance, at the expense of die area and power consumption.
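The partitioning idea can be sketched in Python; the task names and loop bodies below are hypothetical stand-ins for real control operations, and a thread-safe queue stands in for the thread-safe libraries mentioned above:

```python
import queue
import threading

def run_partitioned(tasks):
    """Run each control task on its own worker thread and collect results
    through a thread-safe queue instead of shared mutable state."""
    results = queue.Queue()

    def worker(name, func):
        results.put((name, func()))

    threads = [threading.Thread(target=worker, args=(n, f)) for n, f in tasks]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    collected = {}
    while not results.empty():
        name, value = results.get_nowait()
        collected[name] = value
    return collected

# Hypothetical stand-ins for parallel control operations
tasks = [
    ("motion", lambda: sum(i * i for i in range(100))),  # stand-in for a motion loop
    ("vision", lambda: max(range(50))),                  # stand-in for a vision pass
]
out = run_partitioned(tasks)
```

The point of the sketch is the structure: independent loops communicate only through thread-safe channels, so adding cores does not add data races.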

For the most advanced, smartest machines with fast input/output (I/O) and the need for hard, deterministic real-time response in the sub-microsecond range, even the fastest processors cannot handle the entire range of performance requirements. The solution is to use a heterogeneous processing architecture. 

Five architecture examples

A heterogeneous architecture provides different processing engines for optimizing several aspects of smart machine control as well as bringing additional benefits to the machine builder. Five examples of heterogeneous architectures combine:

  • CPUs with digital signal processors (DSPs)
  • CPUs with a general-purpose graphics processing unit (GPGPU); a GPU can be used for more than rendering because it can be programmed by an end user to do algorithmic processing
  • CPUs, DSPs, and GPGPUs
  • CPUs with field-programmable gate arrays (FPGAs)
  • Application-specific IP blocks implemented on/in any of the above.

These basic architectures can exist as discrete components or be integrated into a system-on-chip (SoC). Some devices also implement additional IP blocks specific to an application use relevant to machine builders, such as a DSP device with a pulse-width modulation (PWM) module. CPUs with FPGAs, in particular, have become a popular heterogeneous architecture in recent years. This combination gives the user three processing elements, because FPGA vendors have built powerful DSP blocks into their devices. The FPGA is designed to provide microsecond- to nanosecond-level hardware determinism and reliability, as well as full customization, flexibility, and field upgradeability, so bug fixes can be made without hardware spins (much like upgrading software).
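As an illustration of how a machine builder might budget tasks across these processing elements, the sketch below maps each task's hard deadline to an engine. The thresholds and task names are hypothetical; real partitioning also weighs I/O locality, device resources, and development cost:

```python
def assign_engine(deadline_us):
    """Map a task's hard deadline (in microseconds) to a processing element.
    Threshold values are hypothetical, for illustration only."""
    if deadline_us < 1.0:    # sub-microsecond: FPGA fabric for hardware determinism
        return "FPGA"
    if deadline_us < 100.0:  # tight loops: DSP blocks in the FPGA or a DSP core
        return "DSP"
    return "CPU"             # supervisory logic, communications, HMI

# Hypothetical task deadlines in microseconds
tasks = {"pwm_commutation": 0.5, "current_loop": 20.0, "trajectory_update": 500.0}
plan = {name: assign_engine(us) for name, us in tasks.items()}
```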

Cost, performance, power consumption

Users can select a variety of performance, cost, size, power consumption, and I/O counts for the FPGA and CPUs to tailor the implementation to the needs of the machine. It is also possible to design a scalable hardware platform that uses common software on the CPUs and IP blocks in the FPGA. A few years ago, SoCs integrating CPUs with FPGAs (and DSP building blocks) were developed. Figure 1 illustrates a simplified version of the basic heterogeneous architectures.

A common question is, what is the best architecture? The answer is the one that best addresses the technology, customer, and business requirements, and that depends on the situation and the application. One guideline is to focus on the architecture that provides long-term benefits and can address multiple generations of control needs. Investments in an architecture can have substantial payoffs, but changing architectures often can result in wasted effort.

Very few performance benchmarks compare these various architectures because of the complexities involved. Most often, CPU-specific benchmarks (CoreMark and SPECint are examples) or FPGA feature-specific numbers are provided. What is needed is a workload-centric framework with a consistent methodology for comparing the relevant metrics in an end-application-centric manner. One of the few organizations to attempt this for heterogeneous architectures is the NSF Center for High-Performance Reconfigurable Computing (chrec.org). Researchers created a framework to analyze various heterogeneous processor architectures to try to create an "apples-to-apples" analysis of these very different implementations.

For the control application, researchers used several relevant workloads such as remote sensing, image processing, motion control, trajectory generation, and communications. 
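The workload-centric idea can be sketched as a small harness that times candidate implementations of the same kernel under one consistent metric, so results compare like for like. The kernels below are invented stand-ins, not CHREC's framework:

```python
import time

def benchmark(impls, workload, repeats=5):
    """Return best-of-N wall time per implementation, running the same
    workload through each candidate so the comparison is consistent."""
    report = {}
    for name, func in impls.items():
        times = []
        for _ in range(repeats):
            t0 = time.perf_counter()
            func(workload)
            times.append(time.perf_counter() - t0)
        report[name] = min(times)  # best-of-N reduces scheduler noise
    return report

# Hypothetical stand-ins for two implementations of one scaling kernel
impls = {
    "listcomp": lambda xs: [x * 0.5 for x in xs],
    "map_based": lambda xs: list(map(lambda x: x * 0.5, xs)),
}
report = benchmark(impls, list(range(10_000)))
```

A real cross-architecture study would replace wall time with application-level metrics (latency, jitter, throughput per watt), but the harness shape stays the same.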

Hardware, software design, development

A challenge to effectively using any heterogeneous architecture is the complexity of the hardware and software design and development. A collection of disparate, nonintegrated tools can complicate the workflow and design data management and create risk. Much attention is often given to the development side, which in this context means creating applications on top of working hardware and a run-time software stack, while the design side receives less attention.

Designing a deployable, custom hardware and software system for advanced industrial control requires many deep and broad capabilities, tools, processes, and methodologies. As technology has progressed and taken advantage of Moore's law, the complexities, challenges, and risks of custom embedded design for high-performance systems have increased, as has the expense of the tools and the required designer expertise and specialized knowledge.

Semiconductor device speeds have increased in the core and at the I/O to the point where signal integrity is a challenge at the board level, often requiring dedicated tools and understanding. Advanced packages with more than 1,000 pins, small pin pitches (0.5 to 1.0 mm), close-proximity decoupling capacitors, and other challenging design attributes require advanced 10- to 16-layer boards, creating design, manufacturing, and certification challenges. The number of power rails and the power management scheme also add to the complexity of power supply selection and distribution. 

Real-time Linux, a large-footprint operating system (OS), is becoming more popular with the increased processing capability, availability of megabytes to gigabytes of memory at low cost, the need for networking, and the desire to do more in software. Rolling your own version of the OS with the associated driver development and performing robust hardware/software validation on the heterogeneous architecture require an additional set of tools and expertise. For real-time Linux, there are organizations and projects such as the Linux Foundation and the Yocto Project working to standardize and ease its adoption. 

Robust, long-lived design

For a design to be robust and survive for a long time in the field, additional design, testing, and validation must be completed for power consumption, thermal management, shock and vibration, and other applications. For example, thermal simulations can point out hot spots that, if not properly managed, will create future quality problems. The component selection, board design and layout, and the thermal management, such as heat sinks and airflow, are interrelated and must be properly accounted for to ensure a design can withstand the rigors of industrial deployment. Various certifications, depending on geography and deployment environment, must also be performed.

Figure 2 illustrates how a deployable, custom hardware and software design and development requires many complex processes and sub-processes. Courtesy: National Instruments

Intertwined with all of this is a complex supply chain of device vendors and tool suppliers. For a large population of control design original equipment manufacturers (OEMs), machine volumes fall into the low (10s to 100s) or medium (100s to 1,000s) categories. Most vendors and suppliers do not provide direct support or service at these volumes. This leaves many of these OEMs mostly on their own, which makes it even more challenging to use the latest available technology, especially industrial-rated components with long-term (greater than 10 years) availability. 

A way to address this set of challenges is to use a deployable, rugged, industrial product from a proven, reputable supplier with a long-term industrial history and focus. These products can range from chassis-type products, such as programmable automation controllers (PACs), industrial PCs (IPCs), and industrial controllers, to single-board computers (SBCs) and system on modules (SOMs).

The provided software stack is just as important. Consider whether the stack is open, with source code, so appropriate customizations can be made. Know whether deployment-ready, run-time software is a core competency of the vendor. Find out whether the hardware and software have been tested, validated, and proven in thousands of deployments. There is a difference between vendor-supplied reference software and deployment-ready software; understanding this is important to properly account for the true total design effort and the associated testing and validation needed.

Development environments

On the development side, the industry has made great progress in developing higher level design languages and tools that abstract some or all of the underlying complexities inherent in heterogeneous architectures. The advantage is that the focus can be on differentiated application development rather than on low-level, detailed design. With advances in model-based design, higher level programming languages, high-level synthesis tools, and graphical system design, there are various tools and methodologies to choose from. Some are targeted at abstracting the complexities of multicore and many-core development, while others focus on hardware accelerators/IP cores for complete heterogeneous systems and subsystems, including I/O. The latter often are coupled to specific hardware architectures and implementations. The more complete the abstraction and tool flow, the more the tools need to understand the target.

For the tools that generate hardware, most will create HDL IP for the target hardware, usually an FPGA, requiring HDL development and implementation tool-flow expertise (detailed timing constraints, tool switches, and so on) to complete the design. Few can handle the entire workflow from design entry to full implementation. 

Figure 3 illustrates a simplified diagram of the various workflows and development methodologies. Courtesy: National Instruments

IIoT readiness

Advanced, smart machines also need to communicate with each other, other equipment, operations technology, IT, and potentially the cloud as the evolving IIoT, Industrie 4.0, Made in China 2025, Make in India, and other initiatives take shape. It is important to understand how these initiatives are evolving, to know the opportunities, and to see how the architecture, design methodologies, and tools can play a relevant role. For simplicity's sake, IIoT is used here to cover all of these.

Four essential elements for an advanced, smart machine that can be IIoT-capable or IIoT-ready are:

  • Multiple I/O types and expandability
  • Real-time compute capability
  • Real-time communications capability
  • System expandability and flexibility.

Multiple I/O types and expandability are required to add as many types of high-speed and low-speed analog and digital sensors as needed to increase machine capability and visibility into its operations and control. Sensors may need to be designed in or added later to increase knowledge about machine health, to perform even more precise, high-speed control, to increase throughput, and to add as-yet-undetermined capabilities for future services-based business models. 
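As a sketch of how an added health sensor might feed a predictive maintenance check, the following rolls an RMS over a window of samples and flags when it exceeds a limit. The window size, limit, and sample values are hypothetical:

```python
from collections import deque
import math

class HealthMonitor:
    """Rolling-window RMS check on one health signal (e.g., vibration)."""

    def __init__(self, window=50, limit=1.5):
        self.samples = deque(maxlen=window)  # fixed-size rolling window
        self.limit = limit                   # hypothetical alarm threshold

    def update(self, value):
        """Add a sample; return True when windowed RMS exceeds the limit."""
        self.samples.append(value)
        rms = math.sqrt(sum(v * v for v in self.samples) / len(self.samples))
        return rms > self.limit

mon = HealthMonitor()
healthy = [mon.update(1.0) for _ in range(50)]  # steady baseline, no alarms
faulty = mon.update(25.0)                       # large spike pushes RMS past the limit
```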

Fast, synchronized motion

Increased compute capability is required to handle the higher speed and precision of the I/O and, as more software is written, to provide more capabilities to address end-customer requirements. As more synchronization is required, with less jitter and nanosecond-level multi-axis control and triggers, hard real-time becomes a necessity. Some of the most advanced machines, in applications such as semiconductor laser dicers, require motion control synchronized to 100 ns or better because of the high precision needs. Motion trajectory paths must be rapidly calculated for multi-axis control to achieve high throughput, pushing the limits of today's technology. Selecting an appropriate heterogeneous architecture can greatly assist in meeting such demanding requirements.
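A minimal single-axis sketch of trajectory calculation is the classic trapezoidal velocity profile. The motion limits below are hypothetical, and real multi-axis generators also synchronize axes and respect jerk limits:

```python
def trapezoid_profile(distance, v_max, accel, dt):
    """Return sampled velocities for an accelerate/cruise/decelerate move.
    Falls back to a triangular profile when v_max cannot be reached."""
    t_ramp = v_max / accel
    d_ramp = 0.5 * accel * t_ramp ** 2
    if 2 * d_ramp > distance:            # too short to reach v_max: triangular profile
        t_ramp = (distance / accel) ** 0.5
        v_max = accel * t_ramp
        t_cruise = 0.0
    else:
        t_cruise = (distance - 2 * d_ramp) / v_max
    total = 2 * t_ramp + t_cruise
    vels = []
    t = 0.0
    while t <= total:
        if t < t_ramp:                                   # acceleration phase
            vels.append(accel * t)
        elif t < t_ramp + t_cruise:                      # cruise phase
            vels.append(v_max)
        else:                                            # deceleration phase
            vels.append(max(0.0, v_max - accel * (t - t_ramp - t_cruise)))
        t += dt
    return vels

# Hypothetical limits: 100 mm move, 10 mm/s peak, 5 mm/s^2 accel, 100 ms steps
profile = trapezoid_profile(distance=100.0, v_max=10.0, accel=5.0, dt=0.1)
```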

As compute capability at the individual controller level increases, systems become more distributed. To take full advantage of the potential of distributed control, high-speed communications also must support real time. To address the IIoT's future needs, a unified communications network based on the time-sensitive networking (TSN) updates to standard Ethernet (IEEE 802.1) should be a primary consideration in the near future.

The IIoT era will require a converged network that can carry media, control, measurement, and management traffic while guaranteeing timing synchronization, latency, priority/bandwidth, redundancy, and fault tolerance. Using standard Ethernet minimizes cost by eliminating proprietary hardware, and performance will continue to increase, as standard Ethernet today runs at rates from 1 Gb/s to hundreds of Gb/s. Support for existing protocols and proprietary hardware often will still be needed, so the control system also should be able to operate as a gateway that supports multiple protocols.
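The gateway role can be sketched as dispatch-and-normalize: decode each legacy frame by its protocol and emit one internal record. The two frame layouts below are invented for illustration and do not correspond to any real fieldbus protocol:

```python
import struct

def decode_proto_a(frame):
    """Hypothetical 'protocol A': big-endian node id (u16) + value (float32)."""
    node, value = struct.unpack(">Hf", frame)
    return {"node": node, "value": round(value, 3), "source": "A"}

def decode_proto_b(frame):
    """Hypothetical 'protocol B': little-endian node id (u8) + value (int16, x0.01 scale)."""
    node, raw = struct.unpack("<Bh", frame)
    return {"node": node, "value": raw * 0.01, "source": "B"}

# One decoder per supported legacy protocol, keyed by a one-byte tag
DECODERS = {0xA1: decode_proto_a, 0xB2: decode_proto_b}

def gateway(tagged_frame):
    """Dispatch on the protocol tag and return the normalized record."""
    tag, payload = tagged_frame[0], tagged_frame[1:]
    return DECODERS[tag](payload)

# Example: a 'protocol B' frame from node 7 carrying the raw value 2500 (25.00)
rec = gateway(bytes([0xB2]) + struct.pack("<Bh", 7, 2500))
```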

Lifecycle costs

Architectures that are flexible and can expand the system are important for controlling development and product life-cycle costs. A platform-based approach that supports multiple classes of machines from a single base architecture means hardware and software can be reused. Heterogeneous architectures can provide the widest range of possibilities. For example, when using an FPGA, the hardware can be updated after deployment to include new features, new standards, increased performance, and more without requiring field service or swap-outs. Tools for remote management, maintenance, and debugging of deployed systems need to be part of the software platform. Such capabilities expand the value of IIoT deployments, and such a platform based on heterogeneous architectures can reduce total product life-cycle costs.

It is an exciting time to be working in the IIoT era, and many factors should be considered when embarking on defining and building the latest smart machines. Approach the control and I/O architecture with a survey of the various solution spaces. Think about how much new development will be required to meet the demands of distributed, connected machines with real-time requirements, now and in the future. Understand what platform approach should be used and whether it will be developed in-house, acquired, or a combination of the two. This helps focus hardware and software resources on end-product customization and differentiation. Know what key capabilities and technologies are needed to be an effective participant in the wave of new IIoT opportunities.

Greg Brown, principal product marketing manager, National Instruments. Edited by Chris Vavra, production editor, Control Engineering, CFE Media, cvavra@cfemedia.com.

MORE ADVICE

Key Concepts

  • Embedded architectures can vary widely.
  • Cost, performance, and energy efficiency have improved.
  • Programming, design, and lifecycle considerations have advanced. 

Consider this

Modern methodologies, architectures, and tools have significantly advanced embedded design. Which decade are you in?

ONLINE extra

See related stories about embedded systems linked below.