Adaptive Controllers Work Smarter, Not Harder

By Vance J. VanDoren, October 1, 2002

KEY WORDS

Process and advanced control

Artificial intelligence

Control software

Loop controller

Model-predictive control

Neural network

Every process controller is ‘adaptive’ in the sense that it changes its output in response to a change in the error between the setpoint and the process variable. However, an ‘adaptive controller’ can adapt not only its output, but its underlying control strategy as well. It can tune its own parameters or otherwise modify its own control law so as to accommodate fundamental changes in the behavior of the process. Hundreds of techniques for adaptive control have been developed for a wide variety of academic, defense, and industrial applications. However, very few adaptive controllers available as commercial products are capable of updating their control strategies while the process is running. These include:

QuickStudy from Adaptive Resources (Pittsburgh, PA);

Exact and Connoisseur from the Foxboro Co. (Foxboro, MA);

BrainWave from Universal Dynamics Technologies (Vancouver, BC, Canada);

CyboCon from CyboSoft (Rancho Cordova, CA);

Intune from ControlSoft (Highland Heights, OH); and

KnowledgeScape from KnowledgeScape Systems (Salt Lake City, UT).

Problems with PID

Traditional non-adaptive controllers are generally ‘good enough’ for most industrial process control applications. The ubiquitous PID controller is especially cheap and easy to implement, and its simple control strategy makes it fairly easy to understand and to diagnose when it fails to perform as desired. However, a PID controller leaves considerable room for improvement. Once tuned, it can only control the process as it behaved when the tuning was performed. If the behavior of the process changes appreciably after start-up, the controller may no longer be able to counteract disturbances. If the mismatch between process behavior and the controller’s original tuning becomes particularly severe, the closed-loop system may even become unstable.

The traditional fix for time-varying process behavior is to start over and manually retune the loop whenever its performance degrades. That may not be particularly difficult, but repeatedly tuning and retuning a loop can be tedious and time consuming, especially if the process takes hours to respond to a tuning test. Manual retuning may not even be possible if the behavior of the process changes too frequently, too rapidly, or too much.

Pros and cons of adaptive control

Convenience is one of the most compelling reasons to replace PID loops with adaptive controllers. A controller able to continuously adapt itself to the current behavior of the process relieves the need for manual tuning both at start-up and thereafter.

Adaptive controllers also can outperform their fixed-parameter counterparts in terms of efficiency. They can often eliminate errors faster and with fewer fluctuations, allowing the process to operate closer to its constraints, where profitability is highest. This is particularly advantageous in industries such as petrochemicals and aerospace, where every ounce of performance counts for reasons of profit and safety.

On the other hand, adaptive controllers are much more complex than traditional PID loops. Considerable technical expertise is required to understand how they work and how to fix them when they fail. Fortunately, commercial adaptive controllers are generally designed to make technical operating details transparent to the user. It really isn’t necessary to fully understand them to use them. Still, some basic features of current adaptive control technology merit closer examination, especially by a would-be user deciding which approach to take.

Model-based techniques

Arguably the most obvious approach to adaptive control is to employ the same model-based control theories used for decades to design traditional fixed-parameter controllers. The basic idea is to use the process’ historical behavior to predict its future. Historical behavior is represented by a mathematical model that describes how inputs to the process have affected its outputs in the past. Assuming the same relationship will continue to apply in the future, the controller can then use the model to select future control actions that will most effectively drive the process in the right direction.
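The basic idea can be sketched in a few lines of code. The snippet below is an illustration only, not any vendor's algorithm; it assumes a discrete first-order model y[k+1] = a*y[k] + b*u[k] with parameters a and b already known, and simply inverts that one-step prediction to pick the next control action.

def model_based_control(y, setpoint, a=0.9, b=0.1, u_min=0.0, u_max=100.0):
    # Pick u so the model predicts y[k+1] == setpoint, then clamp it
    # to the actuator's limits. Model parameters a and b are assumed
    # known here; an adaptive controller would estimate them on-line.
    u = (setpoint - a * y) / b
    return max(u_min, min(u, u_max))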

Adaptive model-based controllers like Exact, BrainWave, QuickStudy, and Connoisseur take that concept one step further. They generate their models automatically while the controller is on-line, using historical process data recorded previously. This is not only convenient when compared to designing controllers by hand, it permits on-going updates to the model so that, in theory, the controller can continue to predict the future of the process accurately even if its behavior changes over time.

There are three basic approaches to generating or identifying a process model: first principles, pattern recognition, and numerical curve fitting.

First principles were once the basis on which all model-based controllers were designed. An engineer would configure a first- or second-order differential equation to represent the behavior of the process according to whichever laws of physics, chemistry, or thermodynamics happened to apply to that process.
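A typical result is a first-order-plus-dead-time model:

\tau \frac{dy(t)}{dt} + y(t) = K \, u(t - \theta)

where K is the process gain, \tau the time constant, and \theta the dead time, each derived from the physical properties of the process.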

First principles models are still used extensively today, but since they require an analysis of the process’ governing principles, they cannot be generated automatically. Furthermore, some modern processes (especially in the petrochemical and food industries) are so large and complex that their governing principles are too convoluted to sort out analytically. It may be clear from the general behavior of the process that it is governed by a differential equation of some sort, but the specific parameters of the model may be difficult to derive from first principles.

Alternative ways to identify

Pattern recognition was one of the first alternatives proposed to handle this situation. By comparing patterns in the process data with similar patterns characteristic of known differential equations, the controller could deduce suitable parameters for the unknown process model. Such patterns might include the frequency at which the process output oscillates as the controller attempts to counteract a disturbance, or the rate at which the process output decays when the setpoint is lowered.

Pattern recognition techniques have succeeded in reducing the model identification problem to a matter of mathematics rather than physical principles, but they have their limitations as well. There’s no guarantee that the process will demonstrate the patterns that the controller is programmed to recognize. For example, the Exact controller looks for decaying oscillations in the process output after a disturbance. It deduces the process model by analyzing the size and interval between successive peaks and troughs. But if that response is not oscillatory or if the oscillations do not decay, it has to resort to an alternative set of expert rules to compute the model parameters.
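A generic sketch of that kind of peak analysis appears below. It is illustrative only, not Foxboro's proprietary algorithm: it locates successive peaks in a recorded disturbance response and reports the oscillation period and decay ratio, the two patterns described above.

import numpy as np
from scipy.signal import find_peaks

def oscillation_features(y, dt):
    # Locate local maxima in the recorded response (y is assumed to be
    # measured as a deviation from its final steady-state value).
    peaks, _ = find_peaks(y)
    if len(peaks) < 2:
        # Response is not oscillatory; an adaptive controller would
        # fall back on alternative rules here, as Exact does.
        return None
    period = (peaks[1] - peaks[0]) * dt       # time between peaks
    decay_ratio = y[peaks[1]] / y[peaks[0]]   # how fast the peaks shrink
    return period, decay_ratio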

Another alternative to first principles modeling is to compute the parameters of a generic equation that best fits the process data in a strictly numerical sense. Such empirical models are convenient in that they require no particular technical expertise to develop, and they can be updated on-line for adaptive control purposes.
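For example, a first-order empirical model can be fitted to recorded data by ordinary least squares, as in the textbook sketch below (again an illustration, not any particular product's identifier).

import numpy as np

def fit_arx(u, y):
    # Fit y[k] = a*y[k-1] + b*u[k-1] to the recorded input/output data
    # in the least-squares sense.
    phi = np.column_stack([y[:-1], u[:-1]])            # regressor matrix
    theta, *_ = np.linalg.lstsq(phi, y[1:], rcond=None)
    a, b = theta
    return a, b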

Curve-fitting challenges

However, such numerical curve-fitting may not be able to capture the behavior of the process as accurately as first principles, especially in the presence of measurement noise, frequent disturbances, or nonlinear behavior. There’s also a more insidious risk in relying on an empirical model for adaptive control. It can fit the data perfectly, yet still be wrong!

This problem is easy to spot when the input/output data are all zeros while the process is inactive. Any equation would fit that data equally well, so the modeling operation can simply be suspended until more interesting or persistently exciting data become available. The real trouble starts when the process becomes active, but not quite active enough. Under those conditions, the mathematical problem that must be solved to determine the model parameters can have multiple solutions. Worse still, it’s generally not obvious whether the controller has picked the right solution or not.
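In least-squares terms, the model parameters are computed from the recorded data as

\hat{\theta} = (\Phi^{\mathsf{T}} \Phi)^{-1} \Phi^{\mathsf{T}} Y

where \Phi is the matrix of past inputs and outputs and Y the vector of subsequent outputs. Without persistently exciting data, \Phi^{\mathsf{T}} \Phi becomes singular or nearly so, and infinitely many parameter vectors fit the data equally well.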

Fortunately, there are ways to work around the persistent excitation problem. Some adaptive controllers will simply generate their own artificial disturbances (typically a temporary setpoint change) to probe the process for useful input/output data. QuickStudy, on the other hand, uses a statistical modeling technique designed to eliminate the need for such artificial process stimulation. It makes do with just historical data available from normal closed-loop operations.

BrainWave and the Exact controller can be configured to apply user-defined step tests to the process or simply wait for naturally occurring disturbances to come along. BrainWave can also be configured to add a pseudo random binary sequence (an approximation of white noise) to the existing setpoint. This approach attempts to elicit useful data from the process without disturbing normal operations ‘too much.’ Connoisseur offers all three of these options.
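A pseudo random binary sequence is commonly generated with a linear-feedback shift register, as in the generic sketch below. This is not BrainWave's implementation; the register length and taps are just one standard maximal-length choice.

def prbs(n_samples, amplitude=1.0, seed=0x5A):
    # 7-bit Fibonacci LFSR with taps at bits 7 and 6 (x^7 + x^6 + 1),
    # giving a repeating +/- amplitude sequence of period 127.
    state = (seed & 0x7F) or 1            # register must start nonzero
    sequence = []
    for _ in range(n_samples):
        bit = ((state >> 6) ^ (state >> 5)) & 1
        state = ((state << 1) | bit) & 0x7F
        sequence.append(amplitude if bit else -amplitude)
    return sequence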

More challenges

A related problem can occur when the input/output data are flat because the controller has been successful at matching the process output to the setpoint. Should something happen to alter the process during that period of inactivity, its subsequent behavior may well differ from what the controller expects. Unless the controller first manages to collect new process data somehow, it could be caught completely off guard when the next natural disturbance or setpoint change comes along. It would most likely have to spend time identifying a new model before it would be able to retake control of the process. In the interim, errors would continue to accumulate and the control system’s performance would degrade. It could even become unstable, much like a PID controller with tuning that no longer matches the process.

On the other hand, this tends to be a self-limiting problem. Any fluctuations in the input/output data that result from this period of poor control would be rich with data for the modeling operation. The resulting model may even be more accurate than the one it replaces, leading ultimately to better control than before.

Modeling a process while the controller is operating poses yet another subtle problem. The mathematical relationship between the process’ input and output data will be governed not only by the behavior of the process, but by the behavior of the controller as well. That’s because the controller feeds the process output measurements back into the process as inputs (after subtracting the setpoint), which gives the control law a chance to impose a mathematical relationship on the input/output data that has nothing to do with the behavior of the process itself.

Filtering helps

As a result, an adaptive controller will get inaccurate results if it tries to identify the process model from just the raw input/output data. It has to take into account the input/output relationship imposed by the controller as well as the relationship imposed by the process. Otherwise, the resulting process model could turn out to be the negative inverse of the controller. Connoisseur and QuickStudy get around this problem by statistically filtering the input/output data so as to distinguish the effects of the controller from the effects of the process.
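The mechanism is easy to state. With the setpoint r held constant, the control law imposes

u(t) = C \, [\, r(t) - y(t) \,]

so the input and output are related through -C just as surely as they are related through the process. A naive fit of the raw data can therefore converge on that controller-imposed relationship, yielding the negative inverse of the controller instead of a process model.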

Then there’s the problem of noise and disturbances imposing fictitious patterns on the process outputs. A load on the process can cause a sudden change in the output measurements, even if the process inputs haven’t changed. Sensor noise can cause an apparent change in the process variable simply by corrupting the sensors’ measurements. Either way, an adaptive controller collecting input/output data at the time of the noise or the disturbance will get an inaccurate picture of how the process is behaving.

Most adaptive controllers work around the disturbance problem the way BrainWave does, by initiating identification tests only while the process is in a steady state, that is, after it has finished responding to the last disturbance or setpoint change. The effects of measurement noise can also be mitigated by applying statistical filters to raw measurements, à la Connoisseur, or by employing a modeling procedure unaffected by noise.
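A steady-state check of the kind that approach implies can be as simple as the generic sketch below (illustrative only, not any product's actual criterion): declare steady state when the recent process-variable window stays inside a small band.

import numpy as np

def at_steady_state(y_window, tolerance=0.5):
    # Steady if the peak-to-peak spread of the recent window is within
    # the tolerance band; the tolerance would be set per application.
    return np.ptp(np.asarray(y_window)) < tolerance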

Model-free technique

Although popular, model-based techniques are not the only means of implementing adaptive controllers. After all, creating a model does not actually add any new information to the input/output data that every controller collects anyway. It does organize the raw data into a convenient form from which a control law can be derived, but it should theoretically be possible to translate the input/output data directly into control actions without first creating any process model at all.

CyboCon skips the modeling step and all of the problems that go with it. Instead, CyboCon looks for patterns in the recent errors. This learning algorithm produces a set of gains or weighting factors that are then used as parameters for the control law. It increases the weighting factors that have proven most effective at minimizing the error while decreasing the others. The weighting factors are updated at each sampling interval to include the effects of the last control action and recent changes in the process behavior.
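CyboCon's actual algorithm is proprietary, but a generic weighted-error control law conveys the flavor of that description. The class below is only that kind of illustration; the names and the update rule are invented for the example.

import numpy as np

class ModelFreeController:
    def __init__(self, n=5, rate=0.01):
        self.w = np.zeros(n)      # weighting factors (the 'parameters')
        self.e = np.zeros(n)      # recent error history
        self.rate = rate          # learning rate

    def control(self, error):
        # Shift the newest error into the history window.
        self.e = np.roll(self.e, 1)
        self.e[0] = error
        u = float(self.w @ self.e)   # control effort: weighted sum of errors
        # Nudge the weights at each sampling interval; when the error
        # is zero, the weights stop changing and the control effort is
        # zero as well, mirroring the behavior described in the text.
        self.w += self.rate * error * self.e
        return u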

It could be argued that weighting factors implicitly constitute just another form of process model, but they generally do not converge to values with any particular physical significance. They change when the behavior of the process changes, yet their individual values mean nothing otherwise. Furthermore, weighting factors in the control law can legitimately converge to values of zero. In fact, they do so every time the process becomes inactive. That in turn produces a zero control effort, which is exactly what is needed when the error is already zero, when there are no disturbances to counteract and no setpoint changes to implement.

Pros and cons of model-free

Arguably the most significant advantage of this strategy is that it avoids the tradeoff between good modeling vs. good control that plagues most model-based techniques. When the process is inactive, CyboCon doesn’t continue to look for meaning in the flatline data. It simply attempts no corrective actions and continues waiting for something interesting to happen.

Academics will also appreciate CyboCon’s closed-loop stability conditions, which turn out to be fairly easy to meet. Under these conditions, CyboCon will always be able to reduce the error without causing the closed-loop system to become unstable.

That’s a hard promise for an adaptive controller to make. For most model-based techniques, it is possible to specify conditions under which an accurate model will eventually be found and how the closed-loop system will behave once the model is in hand. However, it is not generally possible to determine exactly how the closed-loop system will behave in the interim while the model is still developing (though BrainWave is a notable exception). The developers of Connoisseur recognize this fact and strongly recommend that modeling be conducted off-line if at all possible or for short periods on-line under close operator supervision.

Perhaps the biggest drawback to CyboCon is its virtually unintelligible control strategy. Even CyboCon’s developers can’t explain exactly what it’s doing minute by minute as it generates each successive control effort. Only the end results are predictable. Furthermore, CyboCon’s technology departs so dramatically from classical and even modern control theory that there are just a handful of academics, and even fewer practicing engineers, who actually understand why and how it works. Most users will simply have to assume that it does.

Rule-based techniques

Although model-based and model-free techniques differ in their use of process models, they are similar in the sense that both use mathematical relationships to compute their control actions. Rule-based controllers, on the other hand, use qualitative rather than quantitative data to capture past experience and process history.

There are essentially two ways to use expert rules for adaptive control, both of which are more heuristic than analytical. An ‘expert operator’ controller like KnowledgeScape manipulates the actuators directly. It acts like an experienced operator who knows just which valves to open and by how much. The rules, rather than a mathematical equation, serve as the control law.

An ‘expert engineer’ controller like Intune uses a traditional control equation, but tunes its parameters according to a set of expert rules. This could be as simple as applying the closed-loop Ziegler-Nichols tuning rules to a PID controller, or as complicated as a home-grown tuning regimen developed over many years of trial and error with a specific process. The rules incorporate the expert engineer’s tuning abilities rather than the expert operator’s skill at manually controlling the process.
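The closed-loop Ziegler-Nichols rules mentioned above are simple enough to state directly: given the ultimate gain Ku at which the loop oscillates steadily and the ultimate period Pu of that oscillation, the classic PID settings are computed as follows.

def ziegler_nichols_pid(Ku, Pu):
    Kp = 0.6 * Ku      # proportional gain
    Ti = Pu / 2.0      # integral time
    Td = Pu / 8.0      # derivative time
    return Kp, Ti, Td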

Variations

Formats for such rules can vary widely, though they usually take the form of logical cause-and-effect relationships, such as IF-THEN-ELSE statements. For example, expert operator rules for a cooling process might include ‘IF the process temperature is above 100 degrees THEN open the cooling water valve by an additional 20%.’ An expert engineer rule might be ‘IF the closed loop system is oscillating continuously THEN reduce the controller gain by 50%.’
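Those two quoted rules could be encoded as condition/action functions over a process state, as in the generic sketch below (the state keys are invented for the example).

def rule_cool(state):
    # IF the process temperature is above 100 degrees
    # THEN open the cooling water valve by an additional 20%.
    if state['temperature'] > 100.0:
        state['valve_position'] += 20.0

def rule_detune(state):
    # IF the closed-loop system is oscillating continuously
    # THEN reduce the controller gain by 50%.
    if state['oscillating']:
        state['controller_gain'] *= 0.5

def apply_rules(state, rules=(rule_cool, rule_detune)):
    for rule in rules:
        rule(state)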

Most expert system controllers are not adaptive since the experts who programmed them fix their control rules. It would theoretically be possible to modify or add to those rules on-line, but that would require the on-going involvement of an expert, and that’s not the point of adaptive control.

KnowledgeScape, on the other hand, can adapt without changing its rule set. It uses a predictive process model so that the rules can be applied to the future as well as the present conditions of the process. And since that model can be updated on-line with recent process data, KnowledgeScape can adapt to changes in the behavior of the process. Intune also makes do with a fixed set of expert rules by using them not to manipulate the controller’s output directly, but to adjust the tuning parameters of a traditional controller. The tuning rules themselves are the generic elements that allow the controller to adapt itself to the current behavior of the process.

Pros and cons of rule-based

Rule-based controllers are particularly easy to expand and enhance. Individual rules can be added or modified without revising the rest of the current set. This generally can’t be done automatically, but it does make a rule-based controller flexible. Furthermore, if every new rule makes sense by itself and does not directly contradict any existing rule, the overall control strategy can be much easier to validate than an equally complex equation-based control strategy.

Expanding a model-based controller is generally not as easy since changing to a new model format generally requires starting again from scratch (though once again BrainWave is a notable exception). Rule-based controllers also have the advantage of being unaffected by the persistent excitation problem, since they don’t require process models in the first place. In fact, Intune’s developers evolved from a model-based to a rule-based adaptive control strategy in large part to avoid problems inherent with on-line process modeling, and also to achieve more efficient and reliable tuning in general.

Still, the inexact nature of rule-based control is a double-edged sword. It frees the controller from some mathematical limitations suffered by model-based techniques, but it also makes stability and convergence difficult to assess. There are no mature mathematical principles available to determine when, how, or even if the controller will be able to counteract a particular disturbance, either through direct manipulation of the actuators or indirectly through loop tuning.

Pick one!

So which adaptive controller works best? Some have longer track records as commercial products, some have attractive ancillary functions, some are more effective for particular control problems, and some are simply easier to use than others. Unfortunately, this field is still so young that a clearly superior technology has yet to emerge. The best adaptive controller for a particular job may simply be whatever works.

Comments? E-mail Vance VanDoren: controleng@msn.com