Buy or build your process control system
One DCS supplier makes the case for buying a comprehensive control system to run your process unit rather than a do-it-yourself PLC-based approach.
The industry debate over the virtues of distributed control systems (DCSs) versus programmable logic controllers (PLCs) has been going on for at least the past four decades. However, as the technologies have evolved, so has the discussion. The choice used to be more clear-cut, but as the functionality differences narrow and price points align, the arguments for and against each system have become increasingly murky.
Central to understanding the argument is grasping the fundamental differences between the two platforms. For instance, DCS architecture originated from an overall system approach with a focus on distributing control on a network so that operators could monitor and interact with the entire scope of the plant. Coordination, synchronization, and integrity of process data over a high-performance and deterministic network are at the core of DCS architecture.
PLC architectures, on the other hand, focus on very flexible and fast local control, and recent advancements in PLC technology have added process control features. When PLCs and HMI software packages are integrated, the result looks a lot like a DCS but is still very much a do-it-yourself (DIY) approach, meaning engineers must oversee the assembly of their system from the ground up. While this is a flexible approach to control, the DIY option usually comes with increased technical risks in networking and performance as well as added costs that are not always immediately apparent.
In the past, a DCS was typically more expensive to purchase than PLC-based systems, and many plants had lower demands in terms of production rates, yield, waste, safety, and regulatory compliance than what they experience today. Thus, PLC-based systems were appealing because they offered a lower capital investment and, from a functional point of view, performed adequately. But times have changed. Across the global marketplace, the demands on manufacturing companies have risen while the purchase price of a DCS has decreased. As a result, many control system engineers, maintenance managers, and plant managers are taking a fresh look at the trade-offs between a DCS and a PLC-based control system architecture as they plan their automation capital expenditures.
With all that in mind, there are several issues to consider when evaluating a DCS versus building a DIY distributed control system using a PLC-based architecture.
Good network performance starts with proper network planning, which can only be done with an intimate knowledge of the communication behavior of each network node and the protocol used to carry network messages. Major process automation suppliers have taken care of this requirement. They provide best practice information so the user starts with a sound network design for the control system. Contrast this to the DIY world where the application engineer might well be the first to put a particular network topology together.
Once the network planning and installation are complete, the next step is gauging how the network performs. The same network topology can be subjected to a wide variation in communication traffic based upon the amount of data acquisition, alarm reporting, history compilation, peer-to-peer messages, and backup tasks that go on at any given time, which can be taken care of through comprehensive maximum topology testing.
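To make the idea of maximum-topology planning concrete, here is a minimal sketch in Python of a worst-case traffic budget check. The node list, message rates, and message sizes are illustrative assumptions, not figures from any real network:

```python
def utilization(nodes, link_capacity_bps):
    """Rough worst-case traffic budget: sum each node's peak publish
    rate and compare it against the link capacity."""
    total_bps = sum(n["msgs_per_s"] * n["bytes_per_msg"] * 8 for n in nodes)
    return total_bps / link_capacity_bps

# Hypothetical nodes with assumed peak rates and message sizes.
nodes = [
    {"name": "controller", "msgs_per_s": 500, "bytes_per_msg": 256},
    {"name": "historian",  "msgs_per_s": 100, "bytes_per_msg": 1024},
    {"name": "hmi",        "msgs_per_s": 50,  "bytes_per_msg": 512},
]
u = utilization(nodes, link_capacity_bps=100_000_000)  # 100 Mbit/s link
# keep worst-case utilization well under 1.0, leaving headroom for
# alarm bursts, history recovery, and backup traffic
```

In practice a supplier's maximum-topology testing covers far more than bandwidth, including latency, redundancy switchover, and broadcast behavior, but a budget like this is the starting point.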
Assuming that the user has planned and installed his or her network, the plant has reached its maximum production capacity, and everything is working as expected, a common challenge is maintaining that smooth network operation.
One solution is to implement fault-tolerant Ethernet (FTE) at the outset, a redundant industrial Ethernet networking technology utilizing inexpensive off-the-shelf components to provide a high-availability solution. FTE continuously cares for the process control network by providing ample network diagnostics that are tracked and reported as a part of the base DCS.
Additionally, the plant must qualify the functionality and performance of service packs and hot fixes before they are loaded into the production system. Seasoned network engineers know that every single device on the network needs to behave properly as a part of a functioning network community, as one bad actor can spoil the performance of the entire network.
Good process control is built upon reliable and repeatable execution of the control strategy. The process controllers that are part of classic DCS architecture have a fundamentally different operating philosophy than that found in a PLC. While the PLC favors raw scan speed, the process controller favors repeatability. This means the control strategy runs on fixed clock cycles: running faster or running slower is not tolerated. Repeatable control on every cycle is designed to support repeatable quality, repeatable yield, and repeatable results for the plant.
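The difference between a free-running scan and fixed-cycle execution can be sketched in a few lines of Python. This illustrates the scheduling idea only; it is not how any real controller firmware is written:

```python
import time

def run_fixed_cycle(control_step, period_s, cycles):
    """Run control_step on a fixed clock: each cycle starts on an
    absolute deadline, so execution spacing stays constant even when
    the step itself finishes early."""
    next_deadline = time.monotonic()
    for _ in range(cycles):
        control_step()
        next_deadline += period_s
        sleep_for = next_deadline - time.monotonic()
        if sleep_for > 0:
            time.sleep(sleep_for)   # idle until the next tick
        # a negative sleep_for means the step overran its cycle,
        # which a process controller would flag as a fault

timestamps = []
run_fixed_cycle(lambda: timestamps.append(time.monotonic()),
                period_s=0.05, cycles=5)
intervals = [b - a for a, b in zip(timestamps, timestamps[1:])]
# each interval is ~0.05 s regardless of how fast the step ran
```

A free-running PLC scan, by contrast, simply starts the next scan as soon as the previous one ends, so loop timing drifts with logic complexity and communication load.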
Clock cycles are not the only secret. Other system services are also designed to give priority to solving the controller configuration. For instance, controller-generated alarms can be throttled if they interfere with control, then recovered later when process disturbances subside. This can only be managed effectively by tightly coordinating the control logic generating the alarms with the alarm and event subsystems that collect, store, and report them. Again, a system approach from the outset is central to the operation of a DCS.
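The throttle-and-recover idea can be sketched as a simple queue that defers alarm reporting while the controller is busy. The class, its limits, and the tag names are hypothetical, for illustration only:

```python
from collections import deque

class AlarmThrottle:
    """Illustrative sketch: defer alarm reporting while control is
    busy, then drain the deferred alarms once load drops. No alarm
    is ever lost; reporting is merely postponed."""
    def __init__(self, max_per_cycle):
        self.max_per_cycle = max_per_cycle
        self.pending = deque()

    def raise_alarm(self, alarm):
        self.pending.append(alarm)

    def report(self, controller_busy):
        """Called once per control cycle, after the strategy has run."""
        if controller_busy:
            return []               # control execution takes priority
        reported = []
        while self.pending and len(reported) < self.max_per_cycle:
            reported.append(self.pending.popleft())
        return reported

t = AlarmThrottle(max_per_cycle=2)
for name in ("LVL_HI", "TEMP_HI", "FLOW_LO"):
    t.raise_alarm(name)
print(t.report(controller_busy=True))   # [] -- deferred this cycle
print(t.report(controller_busy=False))  # ['LVL_HI', 'TEMP_HI']
print(t.report(controller_busy=False))  # ['FLOW_LO']
```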
Suppliers of HMI software packages typically boast about how easy it is to design graphics for the operator. But designing graphics, no matter how impressive, is not how a plant makes money. Imagine a process control environment where one doesn’t need to build graphics because they’re provided pre-built.
In a system where the control and operator environments are designed and built together, often 90% of what is needed to run a process plant can be made standard. Some DCS platforms can provide hundreds of standard faceplates, group displays, and status displays that are vital to safe and efficient plant operation and are provided out of the box.
Object-oriented function blocks are used primarily to specify the properties of any given user function. By creating function blocks with a complete set of parameter-based functions, the user can develop and fine-tune control strategies without designing control functions, while ensuring that all necessary functions are available and documented as configurable selections. The application engineer simply assembles the blocks into the desired control configuration with minimal effort. A self-documenting, programming-free controller configuration is what makes the DCS architecture efficient to engineer and troubleshoot.
As an example, consider a commonly used process control function: the PID block. Using a DCS-style global data model, all aspects of the PID function can usually be accessed via a single configuration screen, where algorithms that have stood the test of time are available for easy selection. Parameters used for alarming, trending, and history in the HMI are easy to find and configure in one location. It is no longer necessary to configure these parameters a second time to populate the HMI.
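The single-configuration idea can be illustrated with a toy function block in Python, where tuning, alarm limits, and history settings live on one object that the controller, HMI, and historian would all read. The names and the simplified control math are illustrative, not any vendor's block definition:

```python
from dataclasses import dataclass, field

@dataclass
class PIDBlock:
    """Toy function block: tuning, alarm limits, and history settings
    are all properties of one object, so every subsystem reads the
    same configuration the control engineer entered once."""
    tag: str
    kp: float = 1.0
    ti_s: float = 60.0          # integral time, seconds
    pv_hi_alarm: float = 100.0
    pv_lo_alarm: float = 0.0
    history_period_s: int = 10  # how often the historian samples PV
    _integral: float = field(default=0.0, repr=False)

    def execute(self, pv, sp, dt):
        """Simplified PI calculation plus alarm checks (derivative
        action omitted for brevity)."""
        err = sp - pv
        self._integral += err * dt / self.ti_s
        out = self.kp * (err + self._integral)
        alarms = []
        if pv > self.pv_hi_alarm:
            alarms.append(f"{self.tag}.PV_HI")
        if pv < self.pv_lo_alarm:
            alarms.append(f"{self.tag}.PV_LO")
        return out, alarms

tic101 = PIDBlock(tag="TIC101", kp=2.0, pv_hi_alarm=80.0)
out, alarms = tic101.execute(pv=85.0, sp=75.0, dt=1.0)
# alarms == ['TIC101.PV_HI']: the limit came from the same block the
# engineer tuned, with no separate HMI alarm entry to keep in sync
```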
In the 20- to 30-year lifespan of an automation system, it’s important to consider how often typical users will need to expand or modify their systems and how often they will want to add a new control technology to them.
In the world of DIY, it's possible to find all the applications needed to run a plant merely by looking through catalogs from PLC and HMI vendors and placing an order. Licenses, DVDs, and downloads will begin to arrive shortly thereafter. It is easier, however, to order a single model number and receive everything needed at once. One license can supply all the controlware, a data historian, trend objects, business integration software, and graphics needed to run a process plant. With the capabilities of DCS architecture, all control applications load correctly, are guaranteed to be the correct version, and are tested to work together.
When the DIY DCS is pieced together, multiple data models can spawn multiple data elements representing the same piece of information. And when piece parts are brought together to form a system, those data models must be synchronized and maintained, a burden that falls on application engineers and system administrators.
In the world of the DCS architecture, the entire data model has been conceived to cover all parts of the system. Hence, one data owner can provide a piece of information to any application or service anywhere in the system. The issue here isn’t the number of databases. The key is having a single data model so that no matter where a data element resides, it can be used by any element of the architecture and that particular data element is never duplicated. A comprehensive data model doesn’t necessarily mean one database, but it does mean only one location for any given element of data.
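The single-owner rule can be sketched as a small registry that rejects a second registration of the same data element. This is a conceptual illustration, not any vendor's data model API:

```python
class DataModel:
    """Sketch of the single-owner rule: each data element is
    registered exactly once, and every application resolves it
    through the same model, so no element is ever duplicated."""
    def __init__(self):
        self._owners = {}

    def register(self, tag, owner):
        if tag in self._owners:
            raise ValueError(
                f"{tag} is already owned by {self._owners[tag]}")
        self._owners[tag] = owner

    def owner_of(self, tag):
        return self._owners[tag]

model = DataModel()
model.register("FIC101.PV", owner="Controller-01")
print(model.owner_of("FIC101.PV"))          # Controller-01
try:
    model.register("FIC101.PV", owner="Historian")  # duplicate: rejected
except ValueError as e:
    print(e)
```

In the DIY case there is no such registry spanning the PLC, HMI, and historian packages, so the same measurement can exist as three independently maintained definitions.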
The comprehensive nature of DCS architecture has long been a favorite for batch automation projects. More than anywhere else, batch requires careful coordination between phases, units, recipes, formulas, and so on. Even classic DCS architecture has been challenged to provide a complete packaged solution because of all the various and diverse elements in a batch environment. For this reason, many batch automation projects have resorted to a myriad of packages brought together to form the solution.
However, the batch-data model is no longer as daunting as it once was, and the various aspects of a batch automation solution can now be captured in a single DCS data model. For instance, all elements needed for batch management and execution are run in the process controller, or a redundant pair of controllers when robustness is desired. This means that there is no longer a need for a PC operating as a batch server. Because all batch elements are handled in the controller, batch execution is faster, cycle times are reduced, and throughput is increased. Further, operators learn one consistent environment for alarms, security, and displays so that fewer errors are made. From an engineering and maintenance perspective, the advantage is in learning and supporting one tool with no duplication.
Rarely are today’s process plants run by a single brand of controller. That’s why classic DCS architecture also serves to bring third-party devices into the same data model employed by the DCS. This incorporation of existing controllers means that operators can view information from various brand controllers in a consistent fashion.
It is also important to choose the control solution that will allow a seamless addition of enterprise solutions onto the control layer. Because information-rich applications will most likely be expected right around the corner, it is important to consider elements like manufacturing execution systems (MES), asset management, reporting packages, statistical process control (SPC), downtime tracking, or a variety of other enterprise layer solutions.
Control strategies need a thorough “ringing out” before they are deployed to control the actual process. Because process control is so focused on repeatability, it is necessary for a simulation environment to run the control strategy without alteration. Timing is essential in process control, and a simulator must replicate the process execution timing in a faithful manner.
With that in mind, DCS suppliers offer advanced simulator technology to support improved performance throughout the lifecycle of a plant. This ranges from off-line use in steady-state design simulation, control check-out, and operator training, to online use in control and optimization, performance monitoring, and business planning.
Thorough process improvement relies on good process data, which means history collection must be coordinated with the rest of the plant automation system so it does not interfere with more urgent control requirements. Yet if it becomes necessary to suspend history collection temporarily, the missed data must be recovered afterward, since incomplete history is unacceptable. Plants need a reliable solution for archiving history data, and also for retrieving it for use in trending and quality analysis.
With this in mind, most current DCS platforms now include robust process history functionality built in directly, enabling engineers and plant management to analyze performance of the entire operation from a single location. Redundant data collection mechanisms also ensure speedy fail-over to a secondary collector upon loss of a primary.
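The fail-over behavior can be sketched as a primary/secondary collector pair, where a failed write to the primary is redirected to the secondary so no sample is lost. The classes and tag names are illustrative:

```python
class HistoryCollector:
    """Toy collector that accepts samples while healthy and raises
    ConnectionError when it is down."""
    def __init__(self, name, healthy=True):
        self.name = name
        self.healthy = healthy
        self.samples = []

    def collect(self, tag, value):
        if not self.healthy:
            raise ConnectionError(self.name)
        self.samples.append((tag, value))

def record(primary, secondary, tag, value):
    """Redundant collection: try the primary collector first, then
    fail over to the secondary so the sample is never dropped."""
    try:
        primary.collect(tag, value)
    except ConnectionError:
        secondary.collect(tag, value)

p = HistoryCollector("primary")
s = HistoryCollector("secondary")
record(p, s, "TIC101.PV", 73.2)
p.healthy = False                 # primary collector drops out
record(p, s, "TIC101.PV", 73.4)   # silently lands on the secondary
# p.samples == [('TIC101.PV', 73.2)]
# s.samples == [('TIC101.PV', 73.4)]
```

A production system would also backfill the primary once it recovers and merge the two streams, so the archive stays complete.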
Making the decision
Every plant, of course, will have unique requirements when it comes to automation and control, and neither a DCS nor PLC will be a catch-all solution for every facility. Ultimately, specific applications and operational needs must be considered carefully when determining which technology is most appropriate for process control. There is a growing case to be made, though, for the value of a DCS, even in smaller applications. Taking into account the possible issues examined above can give operators and engineers a blueprint of DCS and PLC capabilities and provide deeper understanding of what to consider when choosing between the two.
Tim Sweet is solutions manager for small systems for Honeywell Process Solutions.
Read the story of a water utility that built a DIY system and liked it: Water Treatment Plant Upgrades Automation, Nov. 2011.