Model-based Reactor Control

By Lew Gordon November 1, 2005

Prior articles in the Control Methods series:

Basic Regulatory Control

Advanced Regulatory Control: Adaption and Feedforward

Advanced Regulatory Control: Decoupling

Advanced Process Control: Fuzzy Logic and Expert Systems

AT A GLANCE

Process modeling

PID

Model predictive control

Control objectives

Manipulated variables

The future trajectory of a controlled variable is the consequence of the recent and current values of the variables that affect it. The concepts of feed-forward and decoupling control, presented earlier in this series, are steps toward using this idea, but they are limited to steady-state relationships. The dynamic histories of the variables are not included in the design of feed-forward systems. Process dynamics are addressed only by simple lead/lag functions. The advantage gained in predicting the future path of a controlled variable is striking. If the trajectory of a controlled variable can be predicted with reasonable accuracy for some distance into the future, then the control problem is reduced to a basic question:

Given the recent histories of the manipulated and disturbance variables, what current control actions will cause the desired future behavior in the controlled variable?

At each execution, the algorithm uses the predictions of future values to identify the current actions that will push the controlled variable along the desired path. This algorithm repeats at some control interval, typically every one to five minutes, depending on the process dynamics and the number of variables in the process.

Although the idea is simple, implementation is not. Determining the best current moves for a set of manipulated variables requires three basic components:

A dynamic model of the process that predicts future process behavior from recent and current inputs;

An algorithm to predict and evaluate possible trajectories and to select the best one; and

A control platform with sufficient computing power to make these predictions and evaluations at short control intervals.

Model predictive control (MPC) is not a new technology. It has been available and applied since the early 1990s, initially in the oil and gas industry. Early on, MPC could only be accomplished on mainframe computers, not easily integrated with DCS platforms, thereby limiting MPC to relatively few applications. More recently, the power of smaller inexpensive platforms has exploded, and networking is now almost a transparent issue. In this environment, MPC is fast becoming a tool available for the most commonplace control applications, even single-loop controllers.

Present, past, future

The starting point for this discussion of MPC is the same as for the previous article in this series, the PID algorithm:

output = K_P (e + K_I ∫e dt + K_D dc/dt)

where:

output = control output signal value;

e = controller error (measurement – set point);

c = controller measurement signal value;

K_P = proportional mode gain;

K_I = integral mode gain; and

K_D = derivative mode gain.
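
To make the time roles of the three terms concrete, here is a minimal discrete-time sketch of this algorithm in Python. The gains, scan interval, and temperature values are hypothetical, and the sign convention (direct versus reverse acting) is an assumption rather than anything specified in the article.

```python
# A minimal discrete-time sketch of the PID algorithm above; the gains,
# scan interval, and signal values are hypothetical, not from the article.
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0   # accumulates error from the transfer to automatic
        self.prev_c = None    # previous measurement, used to estimate dc/dt

    def update(self, c, setpoint):
        e = c - setpoint                      # present: error as defined in the text
        self.integral += e * self.dt          # past: integral of error over time
        dc_dt = 0.0 if self.prev_c is None else (c - self.prev_c) / self.dt
        self.prev_c = c                       # future: slope shows where c is heading
        return self.kp * (e + self.ki * self.integral + self.kd * dc_dt)

# One scan of a hypothetical temperature loop
pid = PID(kp=2.0, ki=0.05, kd=0.5, dt=1.0)
print(pid.update(c=148.2, setpoint=150.0))
```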

This article considers this algorithm from another point of view: its relationship to the present, the past, and the future.

The three elements of this algorithm examine these periods of time. The proportional term responds to the value of the error at the present time.

The integral term responds to the past conditions of the controlled variable. The value of the integral term is the integral of the error since the controller transferred to automatic control. Its contribution is critical, since integral action eliminates error in the steady state under varying load conditions.


The derivative term looks toward the future. The derivative of the measurement, dc/dt, is the slope of the measurement change. Its sign and magnitude indicate whether the measurement is increasing or decreasing and how fast it is changing; in short, where the measurement is heading. The controller responds accordingly.

Derivative’s peek into the future can be helpful to a controller, but its benefit is limited in two important ways. First, probably less than 5% of all PID controllers in service apply derivative action. It’s not commonly understood and is more difficult to tune than proportional or integral action. It can be very sensitive to noise in the measurement signal. Almost all loops can get along without it, and so it is often ignored. Second, derivative only responds to the instantaneous direction of the measurement at the moment of algorithm execution.

But a consideration of the future is often the most useful component of the control decisions we make in everyday life.

Modeling a multivariable process

There are many ways to ‘model’ a process. The term says nothing about a model’s form, complexity, or accuracy. In general, the concept only implies that there is some defined relationship between process inputs and outputs.

Relationships can be either steady-state equations, as in feed-forward control, or dynamic functions. Dynamic models predict transient and steady-state values over some period of time.

A model can be based on ‘first principles’ or be ’empirical’ in nature. First principles models use fundamental physical laws, usually expressed as differential equations describing mass and energy balances integrated across the operating regions. By contrast, empirical models are developed from process operating data. Empirical models can be mathematical, developed by regression and curve-fitting techniques, or structured in some non-mathematical form.

Empirical models typically are better for process control. They can be more accurate because they evolve directly from actual process behavior. Non-mathematical models are usually superior because fitting data to a pre-defined mathematical form, such as dead time plus a first-order lag, always yields an approximation.

Models used in MPC packages are typically empirical and non-mathematical. They usually take the form of a set of coefficients applied to recent values of the inputs to predict future values of the outputs. Such models are either finite impulse response (FIR) models or auto-regressive with exogenous inputs (ARX) models. FIR models use only the independent variables (the process inputs) to predict the outputs; ARX models also use past values of the outputs themselves.

‘Reactor temperature model’ graphic shows one of the input/output models for the target reactor. Specifically, it shows the model for the response of product temperature to steam flow.


There are 120 coefficients in this model, represented by 120 vertical bars. Each coefficient defines the gain to be applied to a particular historical sample input value. In this model, the interval between samples is four seconds. The 120 coefficients represent 480 seconds, or eight minutes of time. In other words, the model uses the influence of the last eight minutes of steam flow to predict the temperature four seconds into the future.

Twelve coefficients for the most recent values of steam flow (reading from left to right) are essentially zero; they have no influence at all. This reflects the dead time in the process response. Older samples carry larger coefficients, since those changes in steam flow are now being felt in the temperature. Still older values of steam flow have less impact on current temperature changes, so the coefficients again approach zero. Samples more than eight minutes old have no effect at all.
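
For readers who prefer code, the following Python fragment is a minimal sketch of how a model of this form produces a prediction. The coefficient values, the steam-flow history, and the use of deviation variables are hypothetical; only the dimensions (120 coefficients at a four-second sample interval) come from the description above.

```python
import numpy as np

# Sketch of an FIR prediction: 120 coefficients applied to the last eight
# minutes of 4-second steam-flow samples. All numbers are hypothetical, and
# variables are treated as deviations from a steady operating point.
n_coeff = 120

# Hypothetical coefficients: ~12 near-zero values (dead time), a region of
# significant gains, then a decay back toward zero for the oldest samples.
coeffs = np.concatenate([
    np.zeros(12),
    np.linspace(0.001, 0.004, 60),
    np.linspace(0.004, 0.0, 48),
])

# Deviation of steam flow from its steady value over the last 120 samples,
# newest first: here, a step increase of 50 kg/h made two minutes ago.
steam_dev = np.zeros(n_coeff)
steam_dev[:30] = 50.0

# The predicted temperature deviation one interval (4 s) ahead is the
# weighted sum of the recent input history.
temp_dev_prediction = float(coeffs @ steam_dev)
print(f"Predicted temperature deviation: {temp_dev_prediction:.2f} deg")
```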

Linear and non-linear

A process model can be either linear or non-linear. Saying a model is linear implies two things. First, if it is a mathematical model, none of the variables in the model are raised to any power other than one or multiplied by one another. Second, the principle of superposition applies: the effect of several input changes acting together is simply the sum of their individual effects, and the process gains do not change with operating point.

Non-linear empirical models are usually neural net models. Such a model uses a network of summing junctions and node functions arranged in one or more ‘layers’ to combine the effects of multiple input variables into multiple output variables. For a dynamic neural net model, each historical sample time has its own input and each future prediction instant has its own output. Depending on the number of variables, the amount of history for each, and the number of prediction intervals into the future, these models can become large and complex.
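
As a structural illustration only, the sketch below builds such a network in Python with numpy. The dimensions are hypothetical, and the weights are random stand-ins for values that would normally come from training on process data.

```python
import numpy as np

# Structural sketch of a dynamic neural-net model: one hidden layer mapping a
# window of historical samples of several inputs to predictions at several
# future instants. Dimensions and weights are hypothetical (untrained).
n_inputs, n_history, n_future, n_hidden = 3, 60, 10, 20

rng = np.random.default_rng(0)
W1 = rng.normal(size=(n_hidden, n_inputs * n_history))  # input-to-hidden weights
b1 = np.zeros(n_hidden)
W2 = rng.normal(size=(n_future, n_hidden))              # hidden-to-output weights
b2 = np.zeros(n_future)

def predict(history):
    """history: (n_inputs, n_history) array of recent samples, newest first."""
    x = history.reshape(-1)            # one input node per historical sample
    hidden = np.tanh(W1 @ x + b1)      # summing junctions and node functions
    return W2 @ hidden + b2            # one output per future prediction instant

future_trajectory = predict(rng.normal(size=(n_inputs, n_history)))
print(future_trajectory)
```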

The MPC concept does not require that the model it uses be either linear or non-linear. Commercial packages use a range of linear and non-linear, mathematical and non-mathematical model forms. Historically, the technique has typically used linear combinations of individual input/output relationships, mostly because of the complexity of non-linear dynamic models and the computing power they required. But as computing power has increased, the application of non-linear models is becoming more common, and this trend will certainly continue.

There is much discussion in print about the relative merits of linear vs. non-linear models. Much of this discussion implies that if a process is at all non-linear, the controller must also be non-linear or control will be unacceptable. This is seldom true.

Most often, the operating point of a process does not vary enough for the non-linearities in the process to be a critical issue. Applying linear techniques around an operating point is usually satisfactory. Only a few processes, such as pH control, are so non-linear that a linear controller is inadequate even when aided by the linearization techniques discussed in the second article in this series (March 2005). All MPC packages can implement these techniques.

It is always more difficult, expensive, and time-consuming to develop a non-linear model. Neural net models typically require much more model development data because the accuracy of their predictions is poor for conditions beyond those represented by the test data. Neural net models often demonstrate what is known as ‘over-fitting’: they learn the noise in the development data as well as the underlying process behavior, so they match the test data closely but predict new conditions poorly.

The accuracy of self-teaching models depends on the quality of the input data. Unless the data are rich in input variation and process response, the system may try to model the random variations that are always present due to process noise. This will yield poor models for control.

By contrast, linear models are simpler and less expensive to develop, and they are usually more robust in service. This is the approach used for predictive control of the target reactor in this series.

Control objectives

Performance of a multivariable controller is strictly limited by the concept of degrees of freedom. This concept dictates that the number of achievable control objectives for a system is limited to the number of available manipulated variables.

A control objective can be either:

Holding a controlled variable at a specific setpoint or within a defined range, or

Driving a manipulated variable to a final target value.

For example, the target reactor in this series has three manipulated variables. Thus it can only achieve three control objectives. In this controller, the objectives are the setpoints for product flow rate, product composition, and product temperature.

The structure of a basic PID control loop forces compliance with this principle. A single loop controller has one measurement and manipulates one output. But a multi-variable controller may have many measurements and many outputs. There is no requirement that the numbers be equal. In fact, almost invariably there are more measurements than outputs.

Many model predictive controllers use a priority structure to decide which objectives to satisfy and which to ignore at any moment. Similarly, many predictive controllers ignore the specific value of a controlled variable whose objective is to remain within a defined range when it is not projected to violate these limits.

Alternatively, many controllers use a quadratic programming algorithm to compromise among all the objectives when they cannot all be satisfied. These algorithms may include an importance factor to affect how the compromise is distributed among objectives.
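
A minimal, unconstrained stand-in for that kind of compromise is sketched below in Python: more controlled-variable objectives than manipulated variables, reconciled by weighted least squares. The gain matrix, errors, and importance weights are hypothetical, and real packages solve a constrained quadratic program rather than this bare least-squares step.

```python
import numpy as np

# Sketch of compromising among objectives with importance weights when they
# cannot all be met exactly: four controlled-variable targets, only two
# manipulated variables. Gains, errors, and weights are hypothetical.
gains = np.array([[1.2, 0.3],
                  [0.4, 0.9],
                  [0.7, 0.5],
                  [0.2, 1.1]])                   # CV response per unit MV move
cv_errors = np.array([2.0, -1.0, 0.5, 1.5])      # distance of each CV from target
weights = np.array([10.0, 1.0, 1.0, 5.0])        # importance factor per objective

# Minimize the weighted sum of squared remaining CV errors over the MV moves.
W = np.sqrt(weights)[:, None]
moves, *_ = np.linalg.lstsq(W * gains, W.flatten() * cv_errors, rcond=None)
print("Compromise MV moves:", moves)
```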

Choosing control response

Finally, the controller’s task is to move the process from its current conditions toward a desired operating point. But there are many paths to get from here to there.

‘Three controller trajectories’ graphic shows the general problem for a measurement away from its setpoint, following either a set point change or a process upset. There are an infinite number of paths that can be followed to eliminate the error over time, each one associated with a different manipulated variable path.

Case 1 indicates a more aggressive control action, which drives the measurement to setpoint sooner with more overshoot and valve action. Case 3 indicates less aggressive control that drives the measurement to setpoint more slowly with less valve action. Case 2 represents a compromise between quickly reducing error and minimizing valve movement.

Within a model predictive controller, at each execution the algorithm uses a set of user-specified weighting factors, applied to controlled variable (CV) error, manipulated variable (MV) movement, and MV target error, to evaluate an index along each predicted path. The set of moves found to have the lowest index value, or cost, is the one applied at that execution. By changing the relative weights, the user can influence the contribution from each factor and the path the controller selects. This is analogous to controller tuning in conventional controls.
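
As a rough illustration of that evaluation, the Python sketch below scores three candidate move plans (loosely echoing the three cases above) with a weighted index and picks the cheapest. The first-order process model, the move plans, and the weight values are invented for the example; commercial packages search over many more candidate paths than three.

```python
import numpy as np

# Sketch of selecting a control path by a weighted cost index. The simple
# first-order process model, candidate move plans, and weights are hypothetical.
w_cv, w_move, w_target = 1.0, 0.2, 0.05       # weights: CV error, MV movement, MV target error
setpoint, mv_target = 100.0, 40.0
cv_now, mv_now = 90.0, 30.0
lag = 0.7                                     # hypothetical first-order dynamics

candidates = {                                # candidate MV move plans over ten intervals
    "case 1 (aggressive)": np.array([10.0] + [0.0] * 9),
    "case 2 (compromise)": np.array([5.0, 5.0] + [0.0] * 8),
    "case 3 (gentle)":     np.full(10, 1.0),
}

def cost(moves):
    cv, mv, total = cv_now, mv_now, 0.0
    for dm in moves:
        mv += dm
        cv += (1.0 - lag) * ((mv + 60.0) - cv)        # predicted CV response to the new MV value
        total += (w_cv * (setpoint - cv) ** 2         # CV error term
                  + w_move * dm ** 2                  # MV movement term
                  + w_target * (mv_target - mv) ** 2) # MV target error term
    return total

best = min(candidates, key=lambda name: cost(candidates[name]))
print("Lowest-cost plan:", best)
```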

The next installment in this series will apply MPC to the target reactor, comparing its performance to basic and advanced regulatory control, and rule-based control.

Lew Gordon is a principal application engineer at Invensys; www.invensys.com