Select the Best Process Control
When designing a new process, it is important to exercise some decision-making restraint. One critical area to hold back is selecting an advanced process control technology. The choice of PID, model predictive control, or something else should not be decided until the project has been qualified and reasonable objectives have been established. Overlooking this basic principle has caused companies to waste a lot of time and money.
When selecting an advanced process control technology, three perspectives are important:
Control technology capabilities;
Process operating objectives; and
System and application security.
Comparing these perspectives to the major process control technologies covered in this series provides a fitting conclusion.
First, all basic instrumentation has to be in good shape. Faulty transmitters and final actuators are at the root of most process control problems. Given good basic instrumentation, the major obstacles to higher control performance are variable process characteristics and interacting process variables.
Variable process gains and dynamics challenge every controller. Both typically vary with production rate and product quality set points. Often, these problems can be solved with the basic compensation techniques discussed in the earliest articles in this series. If interaction is not also an issue, these basic techniques are probably the best approach.
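One of the simplest of those basic compensation techniques is gain scheduling: tune the controller at several production rates and interpolate between the resulting settings. A minimal sketch, with an entirely illustrative schedule:

```python
import bisect

# Hypothetical gain schedule: the controller is re-tuned at several
# production rates, and the gain is linearly interpolated in between.
# The numbers are illustrative, not from any real process.
rates = [40.0, 70.0, 100.0]   # production rate, % of capacity
gains = [2.5, 1.6, 1.0]       # proportional gain found at each rate

def scheduled_gain(rate):
    """Linearly interpolate the controller gain for the current rate."""
    if rate <= rates[0]:
        return gains[0]
    if rate >= rates[-1]:
        return gains[-1]
    i = bisect.bisect_right(rates, rate)
    frac = (rate - rates[i - 1]) / (rates[i] - rates[i - 1])
    return gains[i - 1] + frac * (gains[i] - gains[i - 1])
```

In practice the schedule is built from tuning tests at each operating point, and could equally well key on a quality set point instead of throughput.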
Structures for dead time compensation can be integrated into traditional control schemes, and some PID algorithms offered by DCS vendors have dead time control modes. However, the performance of a system using dead time compensation degrades rapidly as the difference between the dead time adjustment and actual dead time in the process increases.
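The best-known such structure is the Smith predictor, which feeds back the output of an undelayed process model plus the mismatch between the measurement and a delayed copy of that model output. The sketch below simulates a PI controller with a Smith predictor on a first-order-plus-dead-time process; all tuning values and time constants are assumed for illustration, and the model dead time can be varied against the plant dead time to explore the sensitivity described above:

```python
from collections import deque

def simulate_smith(dead_time_model, dead_time_plant, steps=200):
    """PI control of a first-order-plus-dead-time process using a
    Smith predictor. Dead times are whole samples (>= 1). Returns the
    final controlled-variable value for a unit set point step."""
    dt, tau, gain = 1.0, 10.0, 1.0   # sample time, process lag, gain (assumed)
    kp, ki = 1.2, 0.12               # PI tuning (assumed)
    sp = 1.0
    y = ym = integ = 0.0             # plant output, model output, integral term
    plant_delay = deque([0.0] * dead_time_plant, maxlen=dead_time_plant)
    model_delay = deque([0.0] * dead_time_model, maxlen=dead_time_model)
    for _ in range(steps):
        # Predictor feedback: undelayed model output plus the mismatch
        # between the measurement and the delayed model output.
        feedback = ym + (y - model_delay[0])
        err = sp - feedback
        integ += ki * err * dt
        u = kp * err + integ
        # Plant: first-order lag driven by the dead-time-delayed output.
        y += dt / tau * (gain * plant_delay[0] - y)
        plant_delay.append(u)
        # Model: same lag driven by the undelayed controller output;
        # store the pre-update value to build the delayed copy.
        model_delay.append(ym)
        ym += dt / tau * (gain * u - ym)
    return y
```

With the model dead time equal to the plant dead time, the loop behaves as if the dead time were absent; as the two diverge, the response degrades and can eventually become unstable.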
When interaction among the process variables is also part of the problem, the advanced techniques of feedforward and decoupling control become a likely choice. However, these techniques can quickly be overwhelmed by larger problems, processes with complex dynamic responses, long dead times, or constraint limitations. In these situations, model predictive controls will provide superior control performance.
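The core of a decoupling design can be as simple as routing the controllers' demands through the inverse of the steady-state gain matrix, so that a demand on one controlled variable produces (ideally) no steady-state move in the other. A sketch for a hypothetical 2x2 process, with made-up gains:

```python
# Static decoupler for a hypothetical 2x2 process. K[i][j] is the
# steady-state effect of manipulated variable j on controlled
# variable i; the values are illustrative only.
K = [[2.0, 0.5],
     [0.8, 1.5]]

def invert_2x2(m):
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    return [[ m[1][1] / det, -m[0][1] / det],
            [-m[1][0] / det,  m[0][0] / det]]

D = invert_2x2(K)   # decoupling matrix

def decouple(v1, v2):
    """Map the two controllers' demands (desired output changes)
    into manipulated-variable moves."""
    u1 = D[0][0] * v1 + D[0][1] * v2
    u2 = D[1][0] * v1 + D[1][1] * v2
    return u1, u2
```

Dynamic decoupling adds lead/lag or dead time elements to each path, but the static version shown is often a useful first step.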
Model predictive control (MPC) does ultimately depend on an accurate process model to obtain good control, and as with all controllers, varying process characteristics degrade its performance. But MPC performance limitations can also be compensated for with the same techniques that apply to basic regulatory controls. Further, MPC packages typically can switch among multiple models or adapt their models during operation.
Online adaptation, however, has to be applied with care. In the absence of sufficient, data-rich excitation, adaptation can easily degrade the model rather than improve it.
For controls in process industries, good response to load disturbances is more important than good response to set point changes. Feedforward structures can do a good job of rejecting measured disturbances, but unmeasured disturbances are the real problem. Being unmeasured, their effects are only evident when they begin to drive the controlled variables away from their control points.
Every control technology must rely on feedback to restore the desired operating point after an unmeasured disturbance, but the autoregressive terms in MPC algorithms are better at rejecting unmeasured disturbances than traditional controls. Also, prediction errors can provide additional feedforward inputs to improve dynamic response.
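In many MPC packages, this feedback reduces to a simple bias update: the current prediction error is treated as a persistent unmeasured disturbance, and the whole prediction horizon is shifted by it before the next move calculation. A minimal sketch:

```python
# Bias-update feedback as used in many MPC algorithms (a sketch, not
# any particular vendor's implementation): the difference between the
# measured output and the model's current prediction is assumed to
# persist, so the entire predicted trajectory is shifted by it.
def corrected_prediction(model_prediction, measured_now, predicted_now):
    """model_prediction: list of future outputs from the process model.
    Returns the prediction shifted by the current prediction error."""
    bias = measured_now - predicted_now   # unmeasured-disturbance estimate
    return [p + bias for p in model_prediction]
```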
In general, rule-based controls do not provide control performance comparable to more advanced technologies, but can be the best choice when measurements are too sparse or other unusual conditions prevent the use of more mathematical algorithms.
Process operating objectives
Traditionally, operating objectives for control systems have been expressed in terms of stable operation at operator-entered set points. Advanced control requires a broader view, expressed in parameters that relate to maximum economic performance. Constraints and optimization become dominant concepts.
Process constraints add extra controlled variables to the control problem. Traditional PID control designs accommodate these requirements by using proven techniques such as auto-selector and valve position controllers.
The performance of these schemes is limited because they do not take action until the constraint is encountered, so the process can overshoot and/or cycle around the constraint limit. The ability of traditional PID systems to control at constraints is relatively poor.
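As a concrete illustration, an auto-selector reduces to a low (or high) select between the normal controller and a constraint controller sharing the same valve. The sketch below uses proportional-only stand-ins with made-up numbers; a real implementation also needs anti-reset-windup handling for the deselected controller:

```python
# Auto-selector (override) sketch: two controllers compute outputs
# for the same valve, and a low select passes the more conservative
# demand. All tuning values and biases are illustrative.
def flow_controller(sp, pv, kp=0.8, bias=50.0):
    """Normal controller: holds flow at its set point."""
    return bias + kp * (sp - pv)

def pressure_override(limit, pv, kp=2.0, bias=100.0):
    """Constraint controller: drives the valve closed as the
    pressure approaches its high limit."""
    return bias - kp * (pv - limit)

def selected_output(flow_sp, flow_pv, press_limit, press_pv):
    """Low select: the override takes over only when its output
    drops below the flow controller's output."""
    return min(flow_controller(flow_sp, flow_pv),
               pressure_override(press_limit, press_pv))
```

Far from the pressure limit the override output is high and the flow controller holds the valve; as pressure rises toward the limit the override output falls and eventually wins the selection.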
MPC controllers do a much better job at controlling around constraints. Because MPC can predict future behavior, collisions with constraints can be anticipated. In response, control moves can be applied that will bring the process to rest against the constraint limits, like a boat approaching a pier.
MPC controls also accommodate variables that only need to be held within a defined range rather than at a specific set point. Using predictions of future values, MPC can decide whether or not to apply control effort in the current execution to avoid future violations of these limits.
Traditional PID controls regulate a process at a given operating point, but do not determine its operating point. There are economic incentives for almost all processes to operate at an optimum point that depends on the value of the product compared against costs of feedstocks and utilities. Furthermore, for linear systems, this optimum operating point always lies at the intersection of two or more process constraints.
Because traditional controls do not perform well at constraints, they are typically operated at some distance on the safe side of process constraints, and therefore away from the optimum operating point. This can be an expensive limitation.
An implementation of an MPC controller typically (but not necessarily) includes an optimizer, which uses a model of the process and relevant cost factors to determine an optimized operating point for the process. The optimizer then communicates this target to the controller, which can drive toward it through the term in the control law that responds to deviations from the target. Such action can provide significant economic benefits to the user.
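Because the problem is linear, the optimum the optimizer finds sits at an intersection of constraints, which a toy example can show by brute force. The economics and limits below are invented purely for illustration:

```python
from itertools import combinations

# Toy steady-state economic optimizer: maximize 3*x1 + 2*x2 (product
# value minus feed cost for two feed rates) subject to invented limits:
#   x1 + x2   <= 10   (total feed capacity)
#   2*x1 + x2 <= 15   (heat duty limit)
#   x1, x2    >= 0
# Each constraint is stored as (a1, a2, b) meaning a1*x1 + a2*x2 <= b.
cons = [(1, 1, 10), (2, 1, 15), (-1, 0, 0), (0, -1, 0)]

def feasible(x1, x2, tol=1e-9):
    return all(a1 * x1 + a2 * x2 <= b + tol for a1, a2, b in cons)

def optimize():
    """Enumerate constraint intersections (vertices) and keep the best.
    For a linear problem the optimum always lies at such a vertex."""
    best, best_val = None, float("-inf")
    for (a1, a2, b), (c1, c2, d) in combinations(cons, 2):
        det = a1 * c2 - a2 * c1
        if abs(det) < 1e-12:
            continue                      # parallel constraints
        x1 = (b * c2 - a2 * d) / det      # Cramer's rule
        x2 = (a1 * d - b * c1) / det
        if feasible(x1, x2):
            val = 3 * x1 + 2 * x2
            if val > best_val:
                best, best_val = (x1, x2), val
    return best, best_val
```

Here the optimum lands where the feed capacity and heat duty constraints intersect, mirroring the point made above: the most profitable operation is at the constraints, not comfortably inside them.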
System and application security
A control system that seeks maximum economic performance will be skating on thinner ice than a traditional control system. Still, safety and security are always essential and critical elements of the design.
Putting a process under the control of an advanced system is not permanent. In any real application, something will go wrong—usually sooner rather than later. Process variables go out of range. Sensors go bad. Transmitters go out of calibration. Critical equipment fails. Conditions arise that the advanced system cannot handle. At these times, the advanced control system must safely take itself offline until conditions are normal again and the advanced system can go back into service.
The independently operating bits and pieces of traditional systems have standard features for initialization, back-calculation, and bumpless transfer. In applying traditional PID controls, the engineer must design and configure these features with as much care as he or she applied to designing for the control problem itself. In more intricate applications that make use of advanced designs for feedforward, auto-selectors, multiple outputs, switching, and the like, proper design of these transitions can be as complicated as the overall control strategy, if not more so. Errors in these functions are difficult to isolate and debug because they are often infrequent and transitory. The gradual, piecemeal transition between offline and fully online can be difficult to manage bumplessly.
MPC implementations also require these features. But the highly integrated nature of MPC provides an advantage. Back-calculation and handshaking signals can be standardized and automatically configured. The all-variables-at-once nature of the transition is more easily made bumpless.
MPC implementations offer an additional advantage gained from the availability of a process model within the controller. Because the controller maintains a history of the input and output variables, it can use this model to provide a predicted measurement value if a transmitter should go offline. This prediction can’t be trusted for a great length of time, but it is especially useful during temporary conditions, such as calibration or cleaning, allowing the controller to function while a field measurement is temporarily unavailable.
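The idea can be sketched as rolling the model forward from the last good measurement, driven by the controller outputs recorded since the transmitter went offline. The model structure and numbers here are illustrative only:

```python
# Measurement replacement sketch: while a transmitter is offline,
# propagate the last good value through a simple first-order process
# model using the recorded controller outputs. Time constant and
# gain are assumed for illustration.
def predict_offline(last_good, u_history, dt=1.0, tau=10.0, gain=1.0):
    """Roll the model forward from the last good measurement.
    u_history: controller outputs recorded since the outage began."""
    y = last_good
    for u in u_history:
        y += dt / tau * (gain * u - y)   # first-order lag update
    return y
```

As the text notes, such a prediction drifts from reality over time, so it is only suitable for bridging short outages such as calibration or cleaning.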
The software and hardware for traditional controls are well developed and have a very high level of security. The operating systems and application software, including configuration tools, are very robust. This hardware and software security has advanced to the point that platforms and communications are fault tolerant, including redundant processors and networks that provide uninterrupted performance even if one fails. Periodically generated backup files that survive reboots and enable automatic restarts are the norm.
The software for MPC is not quite so advanced. Until recently, MPC has required a level of computing power unavailable in distributed control systems, requiring intricate design and engineering of the necessary communications between two systems. Network communications standards and packaged integration designs are rapidly closing this gap. As the application of model-based control becomes the norm for high performance systems, this technology's fault tolerant capabilities will come to match those of traditional approaches.
Implementing an advanced control system is quite different from implementing a traditional control system, in several ways.
Up-front vs. back-end engineering —A traditional control system is assembled by linking a large number of block functions that exist independently of each other, but work interactively. Bits and pieces of its design are often changed during the course of the project through commissioning. While these changes can cause confusion, documentation errors, and project delays, the flexibility provided by an environment of small independent elements has advantages. When control requirements are not well defined at the beginning of the project, and solutions to unanticipated constraints and other unrecognized problems need to be worked out, such flexibility can be very helpful.
By contrast, a model-based controller is much more integrated and difficult to change late in the project. For this reason, MPCs require a good deal more discipline in the initial project phases. Their design should be delayed until as much information as possible about process interactions has been gathered. System design should be carried to a high level of detail before beginning system implementation.
Modeling vs. tuning —PID controllers are generic functions and not specific to any process behavior. Tuning a controller is the activity that tailors it to a specific application. In essence, the final settings of a well-tuned controller reflect the steady state and dynamic characteristics of the process it controls. Typically, this tuning is the last step in commissioning a control system.
The situation is exactly reversed for MPCs. The first step in applying the controller is to obtain data on process characteristics—by performing a PRBS (pseudo-random binary sequence) test—and develop a dynamic process model. Success largely depends on the quality of this model. A controller using a bad model will perform badly, but this may not be evident until system commissioning.
Every effort should be made to avoid this situation, since retesting is very expensive and time consuming. The project should not proceed if there is a strong possibility of significant physical or operating changes in the process between testing and commissioning.
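The mechanics of such a test can be sketched end to end: generate a PRBS input, collect the response (here, a simulated first-order process stands in for the plant), and fit a finite impulse response model by least squares. Everything numeric below is illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def prbs(n):
    """Pseudo-random binary sequence of +/-1 input moves."""
    return rng.choice([-1.0, 1.0], size=n)

def simulate(u, dt=1.0, tau=5.0, gain=2.0):
    """Stand-in plant: first-order lag (values assumed). In a real
    test, this data would come from the process itself."""
    y = np.zeros(len(u))
    for k in range(1, len(u)):
        y[k] = y[k - 1] + dt / tau * (gain * u[k - 1] - y[k - 1])
    return y

def fit_fir(u, y, n_coef=30):
    """Least-squares finite impulse response model:
    y[k] ~ sum_i h[i] * u[k-1-i]."""
    rows = [u[k - n_coef:k][::-1] for k in range(n_coef, len(u))]
    X = np.array(rows)
    h, *_ = np.linalg.lstsq(X, y[n_coef:], rcond=None)
    return h
```

With clean data the fitted coefficients sum to roughly the process gain; with real plant data, noise and unmeasured disturbances make the quality of the excitation and the data screening decisive, which is why retesting is so expensive.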
Cost and schedule —Traditional PID control systems are often less expensive and quicker to implement than advanced MPC designs for several reasons:
Formal process testing is not required;
There are fewer platform and communication issues; and,
A simple design can be more quickly implemented using standard established tools.
MPC applications are often more expensive to implement for complementary reasons:
Additional upfront testing is required;
There may be significant communication issues to solve, which could require custom programming; and,
Users may be generally unfamiliar with the implementation process and tools.
However, these differences can quickly vanish for larger applications where a PID control system has to include complex decoupling and constraint handling, which is standard with MPC systems. Moreover, in appropriate applications the benefits of process optimization will quickly recover any additional costs associated with advanced control.
Training, documentation, and maintainability —This issue is discussed last because training, documentation, and maintainability will ultimately determine the system’s long-term success. Too often, training and documentation do not get the time and money necessary to do the job properly. They are put off until the end of a project, when resources are exhausted and the priority is to get the system online, close the project, and move on.
PID control is a mature technology familiar to operators and control engineers alike. Generally speaking, operators understand operating modes—manual, automatic, remote and local—and know how to use them. Tools for implementing and maintaining PID controls are also familiar, and a well-designed and maintained system will have a high in-service factor.
MPC is also a proven technology, although less mature than PID. However, outside of the oil and gas industries where it is commonly used, it is not as well known or understood. Consequently, the decision to apply this technology often means additional training for engineers and operators.
In applying model-based controls, engineers and operators will encounter many new concepts, while much of what they know about control systems will have to be unlearned and relearned. The overlap in technical jargon can be especially confusing. New tools and techniques for implementing and maintaining the controller have to be learned as well.
More importantly, the technology changes the relationship between operator and control system. Under PID control, the operator is more involved on an ongoing basis—entering set points, putting sections of the control system in and out of automatic control, helping manually when necessary or desirable. The performance of these controls depends on how well the operator uses them.
MPCs and optimizers are much more on their own. The operator does not work with individual loops. Everything is in one controller and that controller is either controlling or not. The operator does not typically enter set points as that role is assumed by the optimizer. Instead, the operator takes the more important role of a process manager, entering high and low process constraints which circumscribe the operation of the controller and optimizer while leaving the controller to take necessary actions. At any time, of course, the operator can shed the MPC and return to traditional control or manual. While the MPC is running, however, it requires much more of a “hands-off” operation.
In this situation, training and documentation are critical. Training provides familiarity and confidence for operators. Most experienced operators justifiably take pride in the ability to control the process. Without training, some will consider any new system a threat. Others may not be eager to learn new tools and techniques. Many are primarily concerned with safe and stable operation, and may not be eager to accept a new strategy that takes the process into riskier conditions. Training can provide the bridge to this new style of operation.
One advantage of having a process model developed from test data is its ability to provide a simulation that can be used to train operators prior to commissioning. Operators can be trained well in advance and can make useful contributions to the design and its implementation, especially to its human interface.
Documentation provides maintainability. Over time, some conditions will inevitably change, and modifications to the configuration or even the basic design may become necessary. Without good design and configuration documentation, making these changes can be difficult or even impossible. New operators and engineers may not be able to understand the system well enough to make necessary changes. The result will be an expensive system that falls into disuse and is abandoned, and the loss of the benefits it could provide.
Help along the way
This series has spanned the scope of technology and management for advanced control projects. Its objective has been to aid control engineers and managers in effectively executing a project in a way that minimizes costs and maximizes benefits over the long term. This objective is realized by efficiently implementing a system that uses the most cost effective technology and remains in operation, delivering benefits, long after the project is complete.
Achieving this goal requires a good understanding of the capabilities, advantages, and limitations of various control technologies. At the same time, success also depends on understanding the human factor in process control and how implementation of an advanced control system will change roles and responsibilities of operators and control engineers.
Testing a commissioned system for economic performance is a relatively new requirement in the application of advanced control systems. Done properly, it can benefit both user and vendor.
Finally, beginning the application of an advanced control system with effective project management greatly increases probabilities that the project will end well. Picking the right project and properly managing its execution while providing quality training and documentation will provide the foundation for long-term success and a stream of economic benefits continuing indefinitely that will justify the cost of the project many times over.
Lew Gordon is a principal application engineer at Invensys.