Decentralized Control

By Stephen Scheiber April 1, 2004

AT A GLANCE

Centralized vs. distributed control

Applications should govern design

Match networks to the need

The history of centralized control in manufacturing and similar environments is, as with most technological developments, a double-edged sword. The situation is reminiscent of the two-headed animal from the Dr. Doolittle children’s stories, the “Pushmepullyou.” In this case, should the design shape the process, or should the process dictate the design?

Centralization often means that one person or a small group can access an entire operation (or production line) from a single point. Such complete surrender to one or a few pieces of hardware amounts to putting all your eggs in one basket; if that’s your choice, then, as Mark Twain advised, you’d better watch that basket! In a pure centralized design without redundancy, if the server goes down because of a power outage, system failure, or any other cause, the satellite processes go down with it. Centralization also may not permit a quick response to changing conditions or to equipment differences between one geographic location and another. As a result, many manufacturers have moved in the other direction, distributing control “to the provinces.”

Because microprocessors have become more powerful and less expensive in recent years, companies now embed them in remote I/O devices, pushbuttons, sensors, and other components. These “smart parts” can execute control functions on a communications network close to the processes that they control. Recent advances, such as improved human-machine interfaces (HMIs), allow engineers to carry control-room-style displays directly into the field on hand-held devices such as personal digital assistants (PDAs).

Look at the application

Yet questions remain: How much decision-making capability belongs at a central point and how much belongs directly in the production line on the manufacturing floor? Where and when do you still need centralized control? Can distributed intelligence benefit from any functions that must remain centralized? In the resulting continuum of solutions, each approach has advantages and drawbacks, and each provides the best answer under certain circumstances. How do you draw the lines?

According to Sam Herb, an automation platform manager for Invensys Foxboro, manufacturers must consider the number of required control loops, process complexity, the need for advanced process control and optimization, the cost of downtime and the resulting need for redundancy, and the need for security. He contends that large, elaborate applications involving sophisticated fail-safe designs, fault tolerance, or advanced control strategies benefit from centralized control. Centralized systems also facilitate process validation and documentation in response to regulatory demands, which makes them well suited to applications such as pharmaceutical manufacturing and pollution control.

At the same time, centralized control requires enormous communication bandwidth to transmit process parameters and other data to the control point, and it suffers an inevitable lag between a data point’s generation and the system’s response to it. Tracy Lenz, senior product support engineer for Wago Corp., notes that engineers must carefully calculate anticipated data traffic and design the fieldbus to accommodate it. Distributing control permits monitoring inputs and encoders locally. A decentralized controller can react much more quickly to high-speed inputs than its centralized counterpart can, communicating with the main processor only to report that a routine is complete.
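As a rough illustration of the traffic calculation Lenz recommends, the following Python sketch compares the raw data rate a fully centralized design would have to carry against the nominal throughput of a single fieldbus segment. The point counts, scan rate, overhead factor, and bus capacity are invented for the example, not figures from any particular installation.

```python
# Back-of-the-envelope fieldbus traffic estimate (all numbers hypothetical).
# Compares the raw data rate a centralized design must carry against the
# nominal throughput of one bus segment, with an allowance for protocol overhead.

analog_points = 200          # analog inputs, 2 bytes each
digital_points = 800         # discrete inputs, 1 bit each, packed into bytes
scan_rate_hz = 100           # scans per second the process requires
overhead_factor = 2.0        # framing, addressing, acknowledgements (assumed)

bytes_per_scan = analog_points * 2 + digital_points // 8
bits_per_second = bytes_per_scan * 8 * scan_rate_hz * overhead_factor

bus_capacity_bps = 500_000   # e.g., a 500 kbit/s fieldbus segment
utilization = bits_per_second / bus_capacity_bps

print(f"Payload per scan: {bytes_per_scan} bytes")
print(f"Estimated traffic: {bits_per_second / 1000:.0f} kbit/s "
      f"({utilization:.0%} of a {bus_capacity_bps / 1000:.0f} kbit/s segment)")
```

With these assumed numbers the segment would be oversubscribed, which is exactly the condition this kind of estimate is meant to catch before the wiring goes in.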

Consider, for example, the power-on self-test of a typical control system. If the central facility runs the test directly, then test instructions, measurements, and responses must all travel across the network, which can bog down the test and may introduce bus contention and other errors. A built-in self-test accomplishes the same task much more quickly and easily. A single-bit instruction from the central location triggers it (or certain boundary conditions can trigger it locally), and the only necessary response is a single pass/fail bit. To be more useful, the self-test could return one of a number of error codes on failure to permit troubleshooting the control system, but even those codes represent only a small fraction of the data volume that the centralized approach requires.
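A minimal sketch of that idea follows, written in Python rather than controller firmware. The node class, the checks, and the status codes are hypothetical; the point is simply that the test runs locally and only a short code crosses the network.

```python
from enum import IntEnum

class SelfTestResult(IntEnum):
    """Compact status codes a node might return instead of raw measurements."""
    PASS = 0
    SENSOR_FAULT = 1
    ACTUATOR_FAULT = 2
    COMM_FAULT = 3

class RemoteNode:
    """Hypothetical smart I/O node with a built-in self-test."""

    def __init__(self, sensor_ok=True, actuator_ok=True, comm_ok=True):
        self.sensor_ok = sensor_ok
        self.actuator_ok = actuator_ok
        self.comm_ok = comm_ok

    def built_in_self_test(self):
        # All measurement and checking happens locally; only a short code
        # has to travel back across the network.
        if not self.sensor_ok:
            return SelfTestResult.SENSOR_FAULT
        if not self.actuator_ok:
            return SelfTestResult.ACTUATOR_FAULT
        if not self.comm_ok:
            return SelfTestResult.COMM_FAULT
        return SelfTestResult.PASS

# The "single-bit instruction" from the central system reduces here to one call,
# and the reply is a single small code rather than a stream of raw readings.
node = RemoteNode(actuator_ok=False)
print(node.built_in_self_test().name)   # -> ACTUATOR_FAULT
```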

Interruption intolerance

Lenz suggests that decentralization also permits a programmed response to a failure of the main processor or of network communication. Some processes, in pharmaceutical manufacturing, for example, cannot tolerate interruptions. Depending on circumstances, a decentralized architecture can continue a process unabated, initiate a sequential shutdown, or execute a complete but controlled shutdown.
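Here is a minimal sketch of such a programmed response, assuming a simple heartbeat from the main processor. The policy names and the timeout value are invented for illustration, not taken from any particular product.

```python
import time

# Hypothetical watchdog: if the local controller stops hearing from the main
# processor, it carries out one of the pre-programmed responses described above.

COMM_TIMEOUT_S = 5.0
POLICY = "continue"   # or "sequential_shutdown" or "controlled_shutdown"

def on_lost_heartbeat(last_heartbeat, now):
    """Decide locally what to do when the main processor goes silent."""
    if now - last_heartbeat < COMM_TIMEOUT_S:
        return "normal operation"
    if POLICY == "continue":
        return "continue the process unabated; buffer data until the network returns"
    if POLICY == "sequential_shutdown":
        return "step downstream equipment through a sequential shutdown"
    return "bring the process to a safe state, then stop (controlled shutdown)"

print(on_lost_heartbeat(last_heartbeat=time.time() - 10.0, now=time.time()))
```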

Manufacturers can apply decentralization to older systems as well. Not needing to replace existing hardware and software can lower costs considerably. Lenz describes a multiple-batching arrangement where the main HMI downloads instructions to the decentralized controller, which controls local product batches while reporting data to the network. In this case the main controller serves only as a global-network interface. The local node communicates with the main controller over the network, saving the cost of replacing the existing infrastructure.
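The sketch below imagines that arrangement in Python: a recipe arrives from the main HMI, the local controller executes it, and only a compact report travels back. The recipe format and step names are made up for the example.

```python
# Hypothetical sketch of the batching arrangement Lenz describes: the main HMI
# downloads a recipe to the local controller, which runs the batch on its own
# and reports only summary data back over the existing network.

recipe = {                     # "downloaded" from the main HMI; format is assumed
    "product": "BATCH-42",
    "steps": [("fill", 30), ("mix", 120), ("heat", 300), ("drain", 45)],
}

def run_batch(recipe):
    """Execute the batch locally, then return a compact report to the network."""
    log = []
    for step, planned_s in recipe["steps"]:
        # Real code would drive valves, motors, and heaters here;
        # this sketch only records that each step ran.
        log.append({"step": step, "planned_s": planned_s, "status": "complete"})
    return {"product": recipe["product"], "steps_completed": len(log), "log": log}

report = run_batch(recipe)
print(f"{report['product']}: {report['steps_completed']} steps complete")
```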

Early control systems exhibited primarily master/slave relationships, recalls Gary Marrs, field application engineer for Lantronix Inc. The mainframe, programmable logic controller (PLC), or other controller executed programs and managed all I/O points and communication with remote nodes. Marrs points to three major drawbacks to this approach:

Remote communications were very inefficient. The master sent commands to each remote node and polled it for a response. If one slave node needed to communicate with another slave, the message went through the master in a “star” configuration. Slow command and response times limited the approach’s applicability to real-time operations. Although networks are now much faster, the density of data traveling across them has expanded at least as rapidly.

Centralized systems were fault-intolerant. Maintenance and unscheduled downtime consumed both time and money.

Some central designs can be expensive to maintain. Engineers designed centralized systems to house as much control and I/O function as possible, wiring every sensor, motor, switch, and actuator back to a control room, which resulted in high installation costs. The enormous “rat’s nest” of wires then made troubleshooting difficult and invited intermittent problems with electromagnetic interference.

Distributed control addresses these issues. Peer-to-peer (multi-master) architectures permit locating controllers and their I/O points close to the devices being controlled. The system can process real-time control loops locally without burdening the data hub, communicating directly with other controllers (peer to peer) to send or receive data. If one controller fails, the rest of the system can still function.
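A toy sketch of that peer-to-peer exchange follows, assuming each controller simply keeps a list of its peers; the controller names and data tags are invented. Note that losing one node does not keep the others from trading data.

```python
# Hypothetical peer-to-peer sketch: each controller owns its local loop and
# exchanges values directly with its peers, with no central master in the path.

class Controller:
    def __init__(self, name):
        self.name = name
        self.peers = []      # other Controller objects this node talks to
        self.online = True
        self.inbox = {}      # tag -> latest value received from a peer

    def publish(self, tag, value):
        """Send a value directly to every reachable peer (no central hub)."""
        for peer in self.peers:
            if peer.online:
                peer.inbox[f"{self.name}.{tag}"] = value

mixer, heater, filler = Controller("mixer"), Controller("heater"), Controller("filler")
for node in (mixer, heater, filler):
    node.peers = [p for p in (mixer, heater, filler) if p is not node]

heater.online = False                  # one controller fails...
mixer.publish("temperature_sp", 72.5)  # ...the others keep exchanging data
print(filler.inbox)                    # {'mixer.temperature_sp': 72.5}
```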

According to Marrs, the PC revolution has helped to expand distributed control. Before PCs, most PLC and distributed-control system (DCS) vendors chose proprietary protocols to tie users to their products. PC open architectures have helped promote standardization and ease of use while lowering costs.

One limitation that hinders the proliferation of true distributed control is the lack of a single network standard. Fortunately, the situation is improving. Standards like OLE offer powerful tools to encourage interoperability. Web servers allow information and data access over the Internet to any computer with a browser. XML and Simple Object Access Protocol (SOAP) permit sharing data over a distributed environment.

‘To Ethernet or not to Ethernet’

With apologies to the “bard,” many will ask at some point: “To Ethernet or Not to Ethernet?”

To address the issue of non-standard industrial protocols, many manufacturers are turning to architectures built around Ethernet—especially as newer versions of the standard permit higher communication speeds. Wago’s Lenz points out that most industrial buildings already have Ethernet capability. Connecting decentralized controllers via Ethernet-based communications eliminates customized fieldbus wiring. Connections can spread out on a large scale within a single building or to other buildings, yet still permit communication with the main controller. If the Ethernet carries too heavy a load and bogs down, the decentralized controllers still function.

Ethernet adds other dimensions to the equation as well. High-powered radio systems can carry communications from the main controller to local devices, as with a municipality’s pump station or water works. Implementations of IEEE standard 802.11 permit quick and easy wireless interfacing, and PDAs allow engineers to hook up to systems directly, even in the field. HMI software can be loaded onto the PDA for control-system access. When automating control of a building, for example, a supervisor can check the status of lights on any floor or change temperature and other environmental conditions from these common palm-sized tools.

Ethernet is a hot topic, addressed in previous articles, but it shouldn’t be considered the best or only solution for industrial networking. Doug McEldowney, a manager at the Netlinx division of Rockwell Automation, says that “tweaking” Ethernet communications for the factory floor may not provide the best results.

The architecture of industrial networks such as DeviceNet, Interbus, and Profibus specifically addresses factory applications and may therefore represent a better choice. McEldowney also suggests that managers’ perception that Ethernet is less expensive than other solutions may spring from a misunderstanding of what it takes to make the system work. Ethernet requires active components (switches). Although not difficult to set up, a switch-based architecture differs somewhat from what most managers experience when implementing Ethernet in an office environment. Many network architectures designed specifically for the factory floor, on the other hand, permit simpler installation with considerable flexibility. Also, most Ethernet installations use a tree/star topology for point-to-point communication. Although adequate in office applications, this approach may slow operations on the factory floor. The bus (trunkline/dropline) topology of the other solutions can be more efficient.

He notes that Ethernet, like industrial networks, varies in protocols. By the same token, he believes that Ethernet advances will expand its capabilities to accommodate additional applications. For example, the IEEE 1588 Time Sync Protocol standard will permit Ethernet to synchronize distant nodes precisely. Such synchronization, in turn, will allow applications to be even more widely distributed, yet maintain tight integration across the network between the central location and the field.
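For a sense of what that synchronization involves, the sketch below works through the basic IEEE 1588 offset and path-delay arithmetic from the four timestamps exchanged between a master and a slave clock. The timestamp values themselves are invented for illustration.

```python
# The basic IEEE 1588 offset/delay exchange in miniature, with invented timestamps.
# t1: master sends Sync; t2: slave receives it.
# t3: slave sends Delay_Req; t4: master receives it.

t1, t2 = 1000.000000, 1000.000150   # seconds (hypothetical values)
t3, t4 = 1000.001000, 1000.001050

offset = ((t2 - t1) - (t4 - t3)) / 2   # slave clock error relative to the master
delay = ((t2 - t1) + (t4 - t3)) / 2    # one-way path delay, assumed symmetric

print(f"offset = {offset * 1e6:+.1f} microseconds, "
      f"path delay = {delay * 1e6:.1f} microseconds")
```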

What’s next?

Many people believe that the trend toward decentralization will continue unabated until small autonomous and intelligent nodes collaborate seamlessly. They will collectively constitute the “control system,” with only a supervisory unit overseeing the whole process. If that’s the future, we aren’t there yet. You can push me, and I can pull you, while the technology continues to encourage more efficient process designs.

Online extra

“Ethernet Hits Real-Time…Really”

“The Wireless I/O World”

“Pouring Thought into the Process”

Author Information

Steve Scheiber, ConsuLogic Consulting Services, sscheiber@aol.com, is a consulting editor with Control Engineering.

