System challenges for networked, future-proof controls
Sensors across the value chain will monitor the real-time status of the end-to-end operation to optimize flexible process controls, and location systems in automotive assembly will create process visibility and virtualized device controls. This vision is revolutionary because data sources far outside individual processes can influence real-time controls, but it is revolutionary in another sense, too: These distributed sensor and controls networks represent the "IT-ification" of industrial controls, a significant departure from the hardware-oriented, embedded-software reality of the last several decades.
In this future vision, a centralized decision-making engine will gather and assimilate data from distributed sensor networks and, in turn, disseminate decisions to industrial control systems. In this IT-centric world, it will be essential to maintain all the performance, reliability, and scalability that localized controls have made so straightforward. It won’t be enough to develop sensors and virtualized controls—that is the easy part. The hard part is to make them operate with better than five 9’s reliability, in harsh environments, with all the safeguards in place to make them truly production-critical systems. There is one technology being adopted in automotive assembly that nicely illustrates this challenge and the uphill path that Industry 4.0 technology providers face: Location systems are becoming increasingly popular as a way to gain visibility into assembly processes and also to create virtualized device controls.
Product complexity challenges assembly
A traditional moving assembly line allocates groups of processes to workstations of a fixed length and duration. Every group of processes must be completed within its allocated workstation and therefore within a fixed process time—the "takt" time. For quality reasons, great care is taken to ensure that the right tool executes the right process on the right product in the right workstation. This often is achieved through rigid, mechanical means: Fixed scanners identify the product, limit switches enable and disable tools, and tools may be physically tethered to specific workstations. The whole construct is proven, effective, widely adopted, and completely broken.
In 1913, when Henry Ford invented the modern assembly line, processes could be rigidly controlled: No flexibility was required when the line assembled identical, black Model T Fords. Over the years, engineers created highly elegant processes that allowed some variation in the individual cars assembled on the line, letting consumers choose from a number of option packages without violating the fixed takt-time principle of the assembly process.
Today, consumers demand a high degree of personalization, requiring full freedom to choose any combination of options; in parallel, manufacturers strive to control costs by building more cars in fewer factories. The result is that the modern assembly line has to manage trillions of potential variations of one model and also must be configured to simultaneously produce multiple, diverse vehicle types.
Each car on the line is different, and that high degree of variability requires a high degree of flexibility in the way processes are controlled. The fixed workstation is becoming a thing of the past.
Flexibility with virtualization
Quality is paramount, as is the need to ensure that the right tool performs the right process on the right product. Rather than physically constraining everything to a rigid workstation, however, a location system can track the relative position of tools and products and create virtual workstations to control devices. Instead of a tool being enabled and disabled as a car actuates a limit switch at either end of the workstation, a location system can create a virtual zone that triggers tools as the car is determined to enter and exit it.
The process is simple in concept—much like geo-fencing using global positioning system (GPS)—but consequences are far reaching. By virtualizing the control of critical devices, location systems introduce complete flexibility. Manufacturers can tailor workstations in real-time to suit the needs of the product under work: Takt time may vary from one model to the next; workers may temporarily be given more process time to avoid a line stoppage; and processes may overlap or span multiple traditional workstations.
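The enter/exit triggering described above can be sketched in a few lines. This is a minimal illustration, not a real location-system API: the zone boundaries, the `VirtualZone` and `ZoneTrigger` names, and the one-dimensional line position are all assumptions made for the example.

```python
from dataclasses import dataclass

@dataclass
class VirtualZone:
    """A virtual workstation defined as a span along the line axis (meters)."""
    name: str
    start_m: float
    end_m: float

    def contains(self, position_m: float) -> bool:
        return self.start_m <= position_m < self.end_m

class ZoneTrigger:
    """Enables a tool while the tracked product is inside the virtual zone."""
    def __init__(self, zone: VirtualZone):
        self.zone = zone
        self.tool_enabled = False

    def update(self, position_m: float) -> bool:
        # Entering the zone enables the tool; exiting disables it,
        # replacing the limit switches at either end of a fixed workstation.
        self.tool_enabled = self.zone.contains(position_m)
        return self.tool_enabled

zone = VirtualZone("torque-station-3", start_m=12.0, end_m=18.0)
trigger = ZoneTrigger(zone)
print(trigger.update(10.0))  # False: car still upstream of the zone
print(trigger.update(13.5))  # True: car inside the zone, tool enabled
print(trigger.update(19.0))  # False: car has exited, tool disabled
```

Because the zone is just data, a supervisory system can move or resize it per vehicle, which is exactly the per-model takt-time flexibility described above.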
This is a glimpse of the power of Industry 4.0 and an important reminder of what it takes to create distributed, real-time control systems. What is needed to create a production-critical system?
It’s easy to think that a simple concept requires a simple implementation. The main components are a tracking system, which most people assume can be put together from active radio frequency identification (RFID) components, and some kind of messaging interface to talk to devices.
Taking a tracking system that works well in the lab and achieving five 9’s reliability in an industrial environment is no small feat. The carefully balanced filtering that rejects outliers while maintaining agile performance can only be optimized through time spent operating in place, and even that is only the start of creating the necessary reliability. Additional intelligence must be built into the filtering algorithms to account for the unique RFID propagation challenges of a highly mobile, complex environment.
Given time and experience a tracking system can be optimized for a particular environment and made to operate with adequate reliability in small deployments, such as one workstation. The next challenge is to enable scalability sufficient to manage hundreds of simultaneous processes without compromise in performance. This is not simply a task of applying enough processing power to handle the computing task, but of managing the additional tracking challenges created by large populations of RFID tags. Intelligent filtering applied in layers ultimately can result in a scalable, robust tracking system that answers the requirements of an industrial sensor.
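The layered-filtering idea can be illustrated with a toy two-layer position filter: a median-based outlier gate followed by exponential smoothing. The window size, smoothing factor, and jump threshold here are illustrative assumptions, not values from any real deployment.

```python
from collections import deque
from statistics import median

class LayeredFilter:
    """Toy two-layer position filter: median outlier rejection, then smoothing."""

    def __init__(self, window: int = 5, alpha: float = 0.4, max_jump_m: float = 2.0):
        self.window = deque(maxlen=window)  # recent raw readings
        self.alpha = alpha                  # smoothing factor (higher = more agile)
        self.max_jump_m = max_jump_m        # implausible-jump threshold
        self.estimate = None

    def update(self, raw_m: float) -> float:
        self.window.append(raw_m)
        med = median(self.window)
        # Layer 1: a reading far from the local median (e.g., an RFID
        # multipath reflection) is replaced by the median.
        sample = med if abs(raw_m - med) > self.max_jump_m else raw_m
        # Layer 2: exponential smoothing keeps the track stable yet responsive.
        if self.estimate is None:
            self.estimate = sample
        else:
            self.estimate = self.alpha * sample + (1 - self.alpha) * self.estimate
        return self.estimate

f = LayeredFilter()
for reading in [10.0, 10.2, 10.4, 50.0, 10.6]:  # 50.0 is a spurious reading
    est = f.update(reading)
print(f"{est:.2f}")  # stays near the true path despite the outlier
```

A production system would layer far more intelligence (motion models, per-tag history, environment maps), but the principle of stacking filters, each handling one failure mode, is the same.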
Connecting with devices is a prerequisite for control. An adage comes to mind: The great thing about standards is that there are so many to choose from! When entering the arena of assembly line controls it’s tempting but naïve to think that implementing one or two of the available standards will yield a turnkey solution ready to deploy.
Interface protocol implementations are as fractured and diverse as device control strategies. Control systems certainly must have the capability to interface with programmable logic controllers (PLCs) (often of dated vintage) and with a large variety of manufacturing execution systems (MES).
The MES is the software that orchestrates plant operations, and many MES implementations are custom or internally developed by each individual manufacturer. The industrial controls interface strategy is therefore clear: Flexibility is the key, with the capability to be rapidly configured to connect with many unique interfaces.
Get the architecture right
At this point in the design, we have a sensing system and a device/MES interface, but where should the logic reside that connects the two?
In a manufacturing facility there are two choices: line-side computing devices managed on the shop floor or centralized servers managed by IT. In an automotive assembly plant, it’s rare to see any substantial computing capability on the shop floor. PLCs and tool controllers run embedded code, and each workstation may contain a thin MES client for instructions to workers, but most of the computing activity managing processes is hosted in a secure, high-uptime server facility managed by IT, remote from the harsh line-side environment.
This is the configuration of choice for most manufacturers. In just the same manner as the MES, any software managing inputs from sensors distributed throughout the line, and controlling devices along the line, must be contained within a production-critical IT server environment. This adds another layer of challenge for the location-enabled workspace flexibility system: It must support server virtualization and clustering; it must run on a wide range of operating systems; it must support redundancy either in hardware (mirrored discs, etc.) or software (a hot-backup server); it must have the ability to recover from outage in less than one product cycle (typically less than 60s).
Beyond this, the system must provide management tools to proactively warn of system errors, give visibility to IT managers, and create logs and analytics for troubleshooting. A control system is no system if it cannot be adequately managed.
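The redundancy and recovery requirements above can be sketched as a minimal heartbeat-based failover monitor. This is a simplified illustration under stated assumptions: the `FailoverMonitor` name, the heartbeat timeout, and the promote-on-stale logic are invented for the example, and a real hot-backup scheme would also replicate state and fence the failed primary.

```python
import time

class FailoverMonitor:
    """Promotes a hot-backup server when the primary's heartbeat goes stale.

    RECOVERY_BUDGET_S reflects the requirement to recover from an outage
    within one product cycle (typically under 60 seconds).
    """
    RECOVERY_BUDGET_S = 60.0

    def __init__(self, heartbeat_timeout_s: float = 5.0):
        # The detection timeout must leave ample room inside the 60 s budget
        # for the backup to actually take over.
        assert heartbeat_timeout_s < self.RECOVERY_BUDGET_S
        self.timeout = heartbeat_timeout_s
        self.last_beat = time.monotonic()
        self.active = "primary"

    def heartbeat(self):
        """Called whenever the primary reports it is alive."""
        self.last_beat = time.monotonic()

    def check(self) -> str:
        """Returns which server should currently be serving control traffic."""
        if self.active == "primary" and time.monotonic() - self.last_beat > self.timeout:
            self.active = "backup"  # promote the hot backup, well inside the budget
        return self.active
```

The design choice worth noting is the use of a monotonic clock: wall-clock adjustments on the server must never be able to trigger, or mask, a failover.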
The final industrial control system requirement is that the controls must operate with low latency so as not to create process issues. It’s important to remember that the tools being controlled are operated by people, and for a manual task to be unaffected by latency, any actuation delays must be kept under 0.5 seconds.
Numerous challenges might break that threshold: filters that take too long to converge, smart outlier rejection that’s computationally intensive, network delays back to the host server, computational delays in the control logic, and high-latency third-party interfaces, among others. Every challenge overcome must also be optimized for latency, or all the work is for nothing: a system that causes device actuation delays will quickly be rejected for introducing more problems than it solves.
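The end-to-end budget can be made concrete as a simple sum over the stages just listed. The individual stage values below are assumed for illustration only; the point is that the 0.5 s operator-tolerance budget is consumed by the whole chain, not by any single stage.

```python
LATENCY_BUDGET_S = 0.5  # manual tasks tolerate roughly half a second of actuation delay

# Illustrative per-stage latency allocations (assumed values, not measurements),
# matching the failure modes listed in the text.
stage_latencies_s = {
    "filter_convergence": 0.12,
    "outlier_rejection": 0.05,
    "network_to_server": 0.08,
    "control_logic": 0.06,
    "third_party_interface": 0.10,
}

total = sum(stage_latencies_s.values())
headroom = LATENCY_BUDGET_S - total
print(f"total={total:.2f}s headroom={headroom:.2f}s")  # total=0.41s headroom=0.09s
assert total <= LATENCY_BUDGET_S, "actuation delay exceeds operator tolerance"
```

Framed this way, latency optimization becomes an engineering budget to be tracked stage by stage, rather than a property checked once at the end.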
Keeping an eye on the future
The location-enabled control systems described here are one of many examples of the monitoring and control function migrating from discrete line-side devices to networked intelligence. As the scale of monitoring and control increases, the control functions are migrating from embedded code to server-hosted systems.
This is particularly true as manufacturers prepare for future adoption of the full Industry 4.0 vision—the end-to-end visibility and networked control system of the future. As the scale increases and control logic migrates farther from the point of application, controls engineers face new challenges of scale and reliability. It’s no longer enough to create islands of sensing and control: The full scope of production-critical, distributed IT systems managing high numbers of low-latency controls must be embraced. Moving forward, control systems must be evaluated not on how they perform in a small, isolated pilot, but how they scale to very large systems in real production deployments.
– Adrian Jennings is vice president of manufacturing industry strategy at Ubisense. Edited by Eric R. Eissler, editor-in-chief, Oil & Gas Engineering, firstname.lastname@example.org.
- Controls must operate with low latency so as not to create process issues.
- Every challenge that is overcome must also be highly optimized for latency, or all the work is for nothing.
- The scale increases, and control logic migrates further from the point of application.
Scalability and customization are at the core of assembly lines.