Decentralizing process control and information management

The “D” in DCS has always stood for distributed. While the technological drivers behind this networking strategy may have changed, do the basic concepts of distributed intelligence still apply?

By Peter Welander, Control Engineering December 9, 2012

In a new training video from the Fieldbus Foundation, Chuck Carter, the instructor, explains how control in the field operates, running a flow loop using a valve and flowmeter. Once he gets the system set up and operating, he cheerfully announces, “I’ll demonstrate one more item, which is one of the advantages; just for fun, I’m going to kill your host system.”

Carter acknowledges that to an operator, losing the host would be a huge problem, but it vividly illustrates a key advantage of distributed intelligence: fewer parts of the process depend on a single central point of failure. Even with the host system down, the valve and flowmeter continue to function properly, keeping the flow at the desired setpoint. When the host is restored, all is as it was. The system works because the control valve and flow transmitter have sophisticated data processing and control capabilities built into them. The user can take advantage of this or not, as the situation requires.
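For readers who want to picture what control in the field looks like, the sketch below shows a flow loop that closes locally and simply keeps its last setpoint when the host stops answering. It is a minimal illustration in Python, not Fieldbus function blocks; read_flow, move_valve, and host_setpoint are hypothetical stand-ins for the device I/O and host link.

```python
# Minimal sketch of device-level control: a PID loop that keeps running
# whether or not the host is reachable. The names (read_flow, move_valve,
# host_setpoint) are hypothetical stand-ins, not Fieldbus function blocks.
import time

class PID:
    def __init__(self, kp, ki, kd, setpoint):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, measurement, dt):
        error = self.setpoint - measurement
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

def run_flow_loop(read_flow, move_valve, host_setpoint, dt=0.1):
    """Control loop resident in the valve/transmitter pair."""
    pid = PID(kp=2.0, ki=0.5, kd=0.0, setpoint=50.0)  # setpoint in engineering units
    while True:
        try:
            pid.setpoint = host_setpoint()   # accept a new setpoint if the host answers
        except ConnectionError:
            pass                             # host down: keep the last known setpoint
        output = pid.update(read_flow(), dt)
        move_valve(output)                   # loop closes locally, with or without the host
        time.sleep(dt)
```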

Having this type of redundancy is certainly an advantage, but it is only one aspect of distributed control. Ever since the earliest DCS implementations, distribution has been a key element of networking and control strategy. Early on it was largely defensive: Networks were slow and centralized processing capacity was limited, so systems of any size had to spread the work around to get it done in a timely manner. Over time the situation has changed. Processor capacity is unlimited for all practical purposes, and networks are faster. Still, distributed intelligence remains a major element of control strategy.

Much smarter field devices

It’s probably not far-fetched to suggest that the computing power in the transmitter of a current Coriolis flowmeter rivals the main processor of an early DCS. Similarly, PLCs, PACs, temperature loop controllers, and all sorts of small smart devices have grown in sophistication by leaps and bounds. When these devices are combined cleverly, the need for central processing falls off quickly.

In some situations, such as the example just discussed, the intelligence resides in the field device’s transmitter. Such capabilities have grown over the years from basic functions, such as linearizing sensor output or applying secondary corrective measurements, to deriving additional process variables from the available data. Of course such additions complicate device configuration requirements, so vendors are working on approaches that minimize the overhead without losing the new functionality.
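To make that concrete, here is a rough illustration of the kind of arithmetic a smart transmitter performs on board: linearizing a raw 4-20 mA signal against a calibration table and deriving a secondary variable from it. The calibration points and fluid properties are made-up example values, not any vendor’s firmware.

```python
# Illustrative only: the kind of on-board arithmetic a smart transmitter performs.
# The calibration table and density below are made-up example values.
from bisect import bisect_left

# Raw sensor signal (mA) mapped to volume flow (m3/h) from a calibration run
CAL_MA   = [4.0, 8.0, 12.0, 16.0, 20.0]
CAL_FLOW = [0.0, 21.5, 47.0, 74.5, 100.0]   # deliberately non-linear

def linearize(ma):
    """Piecewise-linear interpolation of the raw 4-20 mA signal."""
    i = max(1, min(bisect_left(CAL_MA, ma), len(CAL_MA) - 1))
    x0, x1 = CAL_MA[i - 1], CAL_MA[i]
    y0, y1 = CAL_FLOW[i - 1], CAL_FLOW[i]
    return y0 + (y1 - y0) * (ma - x0) / (x1 - x0)

def mass_flow(ma, density_kg_m3):
    """Derive a secondary variable (mass flow) from the linearized volume flow."""
    return linearize(ma) * density_kg_m3  # kg/h

print(mass_flow(10.0, 998.0))  # linearized and converted entirely within the device
```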

In other situations, field devices may work with small-scale local processors that also serve as an I/O interface. These units can perform functions that would rival a full-scale DCS of an earlier decade. “A brain, or I/O processor, on the I/O rack handles most signal-related operations,” says Ben Orchard, Opto 22 systems engineer. “These can include latching and high-speed counting, watchdog timers, thermocouple linearization, and PID loop control. This frees the controller to handle programming, communications, math, and other functions. Running tasks like PID loops at the I/O level also provides fault tolerance; if communication with the controller is lost, for example, the PID loops continue to operate independently and maintain setpoints.”
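The watchdog behavior Orchard describes can be pictured with a simple sketch. The timeout value and hold behavior below are assumptions for illustration, not Opto 22’s implementation.

```python
# A sketch of the watchdog pattern described above: the I/O processor tracks
# when it last heard from the central controller and decides locally what to do.
# The 5-second timeout and the "hold last output" policy are assumed for illustration.
import time

WATCHDOG_TIMEOUT_S = 5.0

class IOBrain:
    def __init__(self):
        self.last_controller_msg = time.monotonic()
        self.output_hold = {}          # channel -> last commanded value
        self.counter = 0               # e.g., high-speed pulse counting stays local

    def on_controller_message(self, channel, value):
        self.last_controller_msg = time.monotonic()
        self.output_hold[channel] = value

    def on_pulse(self):
        self.counter += 1              # counted at the rack, reported when polled

    def scan(self):
        """Called every I/O scan, with or without the controller."""
        if time.monotonic() - self.last_controller_msg > WATCHDOG_TIMEOUT_S:
            # Communication lost: hold last outputs; local PID loops keep their setpoints
            return {"mode": "hold", "outputs": dict(self.output_hold)}
        return {"mode": "normal", "outputs": dict(self.output_hold)}

brain = IOBrain()
brain.on_controller_message("AO-3", 42.0)
print(brain.scan())   # "normal" now; "hold" once the controller goes quiet too long
```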

Smarter networks

Distributed intelligence only works when networks can coordinate everything. While the actual control functionality can be spread out, analysis and reporting still need to be centralized. Use of Ethernet is growing rapidly in industrial applications for many reasons, one of which is its ability to support distributed infrastructure. As a result, networking providers more closely identified with enterprise systems and the conventional IT space are moving into industrial networks.

This is not without some concerns since many of those responsible for keeping a process unit running 24/7 feel that IT administrators simply don’t understand how serious the mandate is for such availability. Paul Taylor, global alliance manager at Cisco, thinks industrial users don’t realize that they’re not unique in this respect. “What we find, and people don’t always like this, is that networks in industrial environments are pretty much the same as other places,” he says. “Everyone likes to think their industry is different, but the availability of a plant network is pretty much the same as the availability of a financial network on Wall Street. If that goes down for a couple of seconds, how much money is lost there?”

He acknowledges that the extent of that requirement varies across enterprises. In most companies, an e-mail outage of 5 or 10 minutes is considered normal, and that is probably where the perception comes from. Taylor adds, “What we’re seeing is that the availability and the network designs for all of these other types of environments can translate over into the industrial environment quite easily. Their requirements are quite similar when it comes to that level of networking reliability.”

Sophisticated safety

While process control systems have become more distributed, safety systems remain essentially monolithic, depending on large triple-redundant safety PLCs. While it is critical to avoid single points of failure, some companies have realized that the intelligence used for control can also be applied to safety without losing the desired functional separation. Those responsible for maintaining safety systems appreciate having some of the same diagnostic capabilities associated with the control system.

A number of companies are talking about some concept of integrated safety, which seems to suggest that the safety system and basic process control system are somehow blended. That is not how it works, since such blending would violate the basic principle that a safety system must be able to function fully independently of the control system. A more accurate characterization is that the control system can use information from the safety system, so long as that information flows in only one direction.
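The one-way principle can be illustrated with a minimal sketch: the control system gets a read-only view of safety-system status, and there is simply no write path back. The class and tag names are invented for the example, not any vendor’s interface.

```python
# A sketch of the one-way information flow described above. The control system
# can display and log safety data, but no command path back to the safety system exists.
# Names and tags are illustrative only.

class SafetySystem:
    def __init__(self):
        self._status = {"SIS-TRIP-01": False, "ESD-VALVE-101": "open"}

    def snapshot(self):
        return dict(self._status)        # copy out; the originals stay untouchable

class SafetyStatusView:
    """What the basic process control system is allowed to see."""
    def __init__(self, safety: SafetySystem):
        self._safety = safety

    def read(self, tag):
        return self._safety.snapshot()[tag]
    # No write or command methods exist here, so the control system can monitor
    # safety information but cannot influence the safety function.

view = SafetyStatusView(SafetySystem())
print(view.read("SIS-TRIP-01"))
```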

“What we did with electronic I/O is that we moved the ‘smarts’ for I/O, even including some of the control capabilities, farther into the field,” says Mark Nixon, director of research in R&D for Emerson Process Management. “The same thing is happening with safety systems. We introduced a controller that runs in the field and sits at the head end with the safety CHARMs that can do monitoring and shutdown control.”

Nixon believes that true distributed intelligence is a recent phenomenon. Technologies now exist that allow for fast control through integrated but highly distributed networks. The result of moving safety intelligence more to the device level is similar to the example discussed at the beginning of this article: Safety functions do not have to depend on centralized control. Sensors and actuators that are close to the problem can act independently when necessary.

What should still be centralized?

While the number of functions that can be dispersed is growing, there are still some areas that are best left centralized, or at least they should appear to be centralized even though they aren’t. A few that should probably not be distributed include:

  • Device configuration information,
  • Patch management, and
  • Data collection and historians.

One of the areas that benefits from aggregation is data collection. For most analytical purposes, the more information, the better, but that doesn’t mean it actually has to be in one place.

Jack Gregg, director of Experion product marketing for Honeywell Process Solutions, says that turning data into information is a key element of effective control architecture. “Our historians reside in the server, and the server represents a cluster and a certain number of controllers that can be associated with that,” he explains. “These clusters of servers are designed to work as one large system. The way they communicate with each other is what we call distributed server architecture (DSA). Say I build a point in one unit and I call it ‘FIC-101,’ and I load it into that unit in that controller. Now the operator is using it and trending it, the server is historizing it, alarms are being generated for that cluster, and all of that information is going into one server and presented to the operator. Now, if I’m sitting in a corporate office and I want to know what’s going on in that unit, it’s all transparent. The server will go out and find whatever information I’m asking for. I won’t necessarily know where it’s coming from, nor does it matter.”
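The sketch below is not Honeywell’s DSA implementation; it only illustrates the idea of transparent, federated tag access, where a request to any server is resolved by whichever cluster owns the point.

```python
# Not Honeywell's implementation -- just the idea behind transparent, federated
# tag access: ask any server for "FIC-101" and it finds the owning cluster for you.

class ClusterServer:
    def __init__(self, name, tags):
        self.name = name
        self.tags = tags                 # tag -> current value for points built in this unit
        self.peers = []                  # other servers in the distributed architecture

    def read(self, tag):
        if tag in self.tags:             # the point is historized and alarmed locally
            return self.tags[tag]
        for peer in self.peers:          # otherwise resolve it from whichever cluster owns it
            if tag in peer.tags:
                return peer.tags[tag]
        raise KeyError(f"{tag} not found in any cluster")

unit_a = ClusterServer("unit_a", {"FIC-101": 42.7})
corporate = ClusterServer("corporate", {})
corporate.peers = [unit_a]
print(corporate.read("FIC-101"))  # the caller never needs to know which unit owns the point
```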

The scale of such deployments can become truly massive if large companies begin to aggregate data from multiple plants around the world. Gathering historical data from production units totaling hundreds of thousands of tags can be done, but systems spanning multiple sites can be challenging to manage. Invensys Operations Management calls each of these a galaxy: a global namespace that manages all of the collected data. Depending on the extent of the project, a galaxy can cover a large geographic area.

Rob Kambach, product manager of app servers and system platforms for Invensys, explains how it works: “When you create a project with our system, you create a galaxy, which is a repository of your project, and within that you can have objects, and those automation objects can talk to devices. Automation objects can have scripts, symbols, security, historization, and basically whatever you need to build your system. Automation objects can then be deployed to platforms which can be a host PC, and there can be hundreds of these host machines. You might have 10 in one plant, 10 in another plant, and so on, to gather information and display it on an HMI. It all has the capability to talk to InTouch. The trend that we see is that instead of growing larger and larger, people want to have smaller galaxies that they can manage more easily, but they still want the ability to talk to other sites and exchange information securely.”
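A rough sketch of that shape, not the actual Invensys object model, might look like the following: automation objects carry their own historization and security settings and are deployed to platforms (host PCs), with tens of platforms per site. All names and attributes here are invented for illustration.

```python
# Purely illustrative -- not the Invensys object model itself -- showing the shape
# of the idea: automation objects with their own settings, deployed to platforms.
from dataclasses import dataclass, field

@dataclass
class AutomationObject:
    name: str
    device_address: str
    historize: bool = True
    security_group: str = "operators"
    scripts: list = field(default_factory=list)

@dataclass
class Platform:                       # one host PC; a plant might run ten of these
    host: str
    objects: list = field(default_factory=list)

    def deploy(self, obj: AutomationObject):
        self.objects.append(obj)      # the object now runs on this platform

plant_a = Platform("plant-a-host-01")
plant_a.deploy(AutomationObject("FT-220", device_address="plc1/db5.dbw0"))
```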

Kambach says that companies with widespread assets can use a system like this to facilitate centralization of specialized domain expertise. Internal consultants can serve multiple locations from a central office via this system, minimizing the need to spend time on the road.

Don’t forget security

One of the biggest challenges and fears for users when networks become this complex is security. In this context, that involves two main considerations: traditional cyber security (protection from outside invaders) and internal regulation of who can get where on such large networks.

The ability to control a process needs to be in the right hands. As Honeywell’s Gregg puts it, “The data is there, and you need to secure it and make it available only to the people that need it. You certainly don’t want someone at the enterprise level adjusting setpoints. That’s the wrong level and the wrong user to be making changes to the process. When managers do planning and scheduling and want to do a change of grade to a process, they will send instructions down to the operators in the control room, and the operators will make the change there as opposed to the businesspeople doing it. They set the targets, but the changes are executed at the control room. You do have to lock down the information in a process control system, and only make it available to the people that need it. Don’t make mission-critical information available to someone who is not controlling the mission-critical process.”

Spreading intelligent devices throughout the process creates cyber security challenges as well. Many PLCs, PACs, I/O collectors, and the like have Ethernet access but don’t necessarily have much internal cyber security protection. That means the network itself must be the primary line of defense. IT network administrators can help because they’ve had to deal with the same problem in a different form.

“There’s not a lot of difference between that and an enterprise environment where we have bring-your-own-device,” says Cisco’s Taylor. “BYOD means anybody can walk into their company with their iPad, iPhone, Android device, or whatever, and hook it up to the company’s network. It will work. You can get your e-mail on your iPad, but it’s your iPad, and you take it home at night. Maybe your kids download games from the Internet, and all the rest of it. When you start thinking about intelligent industrial end devices with that approach, you realize that the security has to be done by the network, because you don’t know what’s being attached to the network. You don’t know where those devices have been or what they’re carrying. You must put security in the network.”

He says that one major advantage of industrial networks is that when you get down to the device level, network traffic should be predictable in volume, in the nature of the data moving around, and what talks to what. If a video camera begins sending e-mails, you know there is a problem.
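That predictability can be expressed very simply: list the conversations that should exist and treat anything else as an alarm. The addresses and ports below are made up for illustration.

```python
# A sketch of what "predictable traffic" buys you: allowlist the flows that
# should exist and flag anything else. Addresses are examples, not a real plant.

EXPECTED_FLOWS = {
    ("10.1.1.20", "10.1.1.5", 502),    # PLC -> controller, Modbus/TCP
    ("10.1.2.30", "10.1.2.8", 554),    # camera -> video recorder, RTSP
}

def check_flow(src, dst, port):
    if (src, dst, port) not in EXPECTED_FLOWS:
        return f"ALERT: unexpected flow {src} -> {dst}:{port}"
    return "ok"

# The camera should never speak SMTP; on a plant network that is an alarm, not noise.
print(check_flow("10.1.2.30", "203.0.113.9", 25))
```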

Successful network design is a balance of creating appropriate protections without unnecessary barriers to desirable network traffic. Taylor suggests, “A good example of that is if you have your laptop computer and you take it down on the plant floor because you need to reprogram a PLC. When you connect it to the Ethernet switch at the PLC to access it, the network will let you do it. But if you try to access that same PLC from your office, it won’t let you do it because you’re plugged into the wrong place. Those are the types of security aspects we have in the network. That type of security policy is one you can apply across the entire network. That’s exactly what the IT department does when you walk in with your iPad. You don’t get access to everything, but you get access to some things that you need to keep you efficient and productive.”
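Expressed as a policy table rather than actual switch configuration, the rule Taylor describes might look like the sketch below; the zone and service names are invented for the example.

```python
# A sketch of location-aware access policy: what you may reach depends on where
# you plug in. Zone and service names are hypothetical, not Cisco configuration.

POLICY = {
    # (ingress zone, destination service): allowed?
    ("plant_floor_switch", "plc_programming"): True,
    ("office_lan",         "plc_programming"): False,
    ("office_lan",         "historian_read"):  True,
}

def is_allowed(ingress_zone, service):
    return POLICY.get((ingress_zone, service), False)   # default deny

print(is_allowed("plant_floor_switch", "plc_programming"))  # True: laptop at the switch
print(is_allowed("office_lan", "plc_programming"))          # False: same PLC, wrong place
```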

The capabilities of distributed intelligence have grown enormously in recent years. When applied appropriately, they can improve the performance of process units and the availability of information for decision making.

A contrary opinion: The case for modern monolithic control 

There are companies that see distributed intelligence as a step in the wrong direction. Joe Ottenhof, a regional sales manager for Beckhoff Automation, sees the change to distributed intelligence essentially as a defensive tactic that reflects users’ dissatisfaction with typical network performance in a process manufacturing context. He contends that approach does not fix the root problem.

“The reason there’s so much intelligence in field devices is that the bandwidth of the networks is so low,” he explains. “You can’t get feedback from other devices in a timely manner. The issue comes with trying to maximize the overall efficiency and the overall performance of the facility. In order to do that, you really need a centralized picture of what’s going on throughout the whole operation. The limiting factor that most control system vendors have is the speed of the network and the amount of the data that flows over the network from those field devices. We would rather have dumb sensor devices in the field, meaning devices that just communicate parameters. To connect all those devices together, hundreds and even thousands of them, we use a high-bandwidth network rather than a low-bandwidth network, and connect all those through a simple but rugged infrastructure back to a central processing unit that performs the control. This is Beckhoff’s model in discrete applications.”

The mechanics of this approach can work because the power of CPUs is now basically unlimited, and high-speed Ethernet-based networks (100 Mbit/sec) such as EtherCAT, combined with simple field devices, allow sub-millisecond cycle times in most applications. Ottenhof admits that this approach isn’t for everybody. The strategy would not be practical for a refinery or large-scale chemical plant, but there are many process applications that are more discrete in nature where it could work well.
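A back-of-envelope calculation, using assumed numbers and ignoring per-device forwarding delay and protocol details, shows why sub-millisecond cycles are plausible on a 100 Mbit/sec link even with many simple devices.

```python
# Rough estimate only (assumed numbers; ignores per-device forwarding delay and
# protocol details): time to move one cycle of process data for many simple
# devices over a 100 Mbit/s link.

LINK_BITS_PER_S = 100e6        # Fast Ethernet rate, as used by EtherCAT
DEVICES = 1000
BYTES_PER_DEVICE = 4           # a "dumb" device exchanging only raw parameters
FRAME_OVERHEAD_BYTES = 100     # rough allowance for headers and framing

payload_bits = (DEVICES * BYTES_PER_DEVICE + FRAME_OVERHEAD_BYTES) * 8
wire_time_ms = payload_bits / LINK_BITS_PER_S * 1e3
print(f"~{wire_time_ms:.2f} ms on the wire per cycle")   # about 0.33 ms for these assumptions
```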

One critical underlying concept is that Beckhoff sees problems with dependence on hardware. “When you have centralized control, you are implementing in software what others implement in hardware,” Ottenhof says. “Our approach is to implement as much as possible in software, and eliminate hardware as much as possible. The idea is that hardware becomes obsolete quickly, especially on the computing side. Software is much more easily maintainable at much more reasonable cost.” 

Peter Welander is a content manager for Control Engineering. Reach him at pwelander@cfemedia.com.

Key concepts:

  • Intelligent devices distributed through a network can carry out processing tasks individually while remaining integrated.
  • Making this work depends on effective network design and implementation, often borrowing ideas from IT networks.
  • Process control, data compilation, and even safety systems can benefit from this approach.

Go online:

Visit the companies mentioned in this article:

www.beckhoffautomation.com

www.cisco.com

www.emersonprocess.com

www.honeywellprocess.com

https://iom.invensys.com

www.opto22.com