Fixing PID, Part 2

Proportional-integral-derivative controllers may be ubiquitous, but they’re not perfect.
By Vance VanDoren, PhD, PE April 28, 2014

Part 1 of this series (Control Engineering, Nov. 2012) looked at several issues that limit the performance of the theoretical PID algorithm in real-world applications of feedback control.

All of these problems are exacerbated by uncertainty. Sometimes the controller lacks sufficient information about the controlled process to know how much and how long to apply a control effort. Sometimes the controller can’t even tell if it’s done a good job or how to do a better job in the future when sensor placement, physical limitations of the sensing technology, or measurement noise make the process variable hard to measure.

Measurement noise is particularly troublesome for a PID controller’s derivative action. To compute the "D" component of its next control effort, the controller computes the latest change in the error (the difference between the process variable and the setpoint) and multiplies that by the derivative gain or rate parameter.

When random electrical interference or other glitches in the sensor’s output cause the controller to see fictitious changes in the process variable, the controller’s derivative action increases or decreases unnecessarily. If the noise is particularly severe or the derivative gain is particularly high, the controller’s subsequent chaotic control efforts may be not only unnecessary, but also damaging to the actuator and perhaps even to the controlled process itself.
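To see how a small noise blip becomes a large derivative move, consider a minimal sketch. The function name, gain, and sampling interval below are hypothetical, chosen only for illustration:

```python
def derivative_term(prev_error, error, kd, dt):
    # "D" action: derivative gain times the latest change in the error,
    # divided by the sampling interval dt
    return kd * (error - prev_error) / dt

kd, dt = 2.0, 0.1
# The process variable sits exactly at the setpoint, so the true error is
# steady, but a 0.5-unit noise blip on the sensor looks like a real change:
kick = derivative_term(0.0, 0.5, kd, dt)
print(kick)  # 10.0 -- a large, entirely fictitious control move
```

Dividing by a short sampling interval is what makes the derivative action so sensitive: the smaller dt is, the larger the spurious move produced by the same blip.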

Filtering

The simplest solution to this problem is to reduce the derivative gain when measurement noise is high, but doing so limits the derivative action’s effectiveness. The measurement noise itself can sometimes be reduced by fixing the sensor or by filtering the process variable measurement mathematically. A process variable filter essentially averages the sensor’s most recent outputs to produce a better estimate of the process variable’s actual value.

However, process variable filters have their limitations. They only work if the measurement noise is truly random, sometimes increasing and sometimes decreasing the sensor’s output in equal measure. If those positive and negative blips also occur with equal frequency, then the filter’s averaging operation will tend to cancel them out. But if the measurement noise tends to skew the sensor’s output consistently in one direction or the other, the filtered process variable will tend to run consistently too high or too low, thereby deceiving the controller into working too hard or too little.
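That limitation is easy to demonstrate numerically. In this sketch (the sample counts and noise ranges are arbitrary assumptions), symmetric noise averages away while skewed noise leaves a persistent bias:

```python
import random

random.seed(0)
true_pv = 20.0

# Symmetric noise adds and subtracts in equal measure; skewed noise
# pushes the reading in one direction more often than the other.
symmetric = [true_pv + random.uniform(-1.0, 1.0) for _ in range(10000)]
skewed = [true_pv + random.uniform(-0.5, 1.5) for _ in range(10000)]

def average(readings):
    return sum(readings) / len(readings)

print(round(average(symmetric), 1))  # close to 20.0 -- noise cancels out
print(round(average(skewed), 1))     # close to 20.5 -- reads consistently high
```

No amount of averaging removes the half-unit offset in the skewed case; the filter faithfully reports a process variable that is simply wrong.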

A process variable filter also slows the controller’s reaction time. If the filter is configured to average a particularly long sequence of sensor outputs, it will do a better job of cancelling out random blips, but it will also tend to miss the most recent changes in the actual process variable. The filter needs to see a sustained change in the sensor’s output before it can report a new value of the process variable to the controller. The controller can’t even see, let alone react to, rapid short-term changes in the process variable, as discussed in the "Filtering" section below.

As a compromise, some PID controllers can be configured to filter the process variable to differing degrees when computing the proportional, integral, and derivative actions. The derivative action requires the most filtering since that’s where measurement noise causes the most problems. The proportional action may benefit from less filtering (that is, a filter incorporating a shorter sequence of sensor outputs) in order to remain responsive to short-term changes in the process variable. And since the integral action itself serves as a filter, it may require no process variable filtering at all.
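One way to structure such a controller is to keep separately filtered copies of the process variable for the proportional and derivative calculations. The class below is a hypothetical sketch, not any vendor’s implementation; `alpha_p` and `alpha_d` are assumed first-order smoothing factors (0 means no filtering, values near 1 mean heavy filtering):

```python
class FilteredPID:
    """Hypothetical PID step with per-term filtering of the process variable.

    The D term uses a heavily filtered copy of the measurement, the P term a
    lightly filtered copy, and the I term the raw measurement, since
    integration already smooths out random noise.
    """

    def __init__(self, kp, ki, kd, dt, alpha_p=0.2, alpha_d=0.8):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.alpha_p, self.alpha_d = alpha_p, alpha_d
        self.pv_p = None        # lightly filtered PV state (for P)
        self.pv_d = None        # heavily filtered PV state (for D)
        self.integral = 0.0
        self.prev_err_d = 0.0

    def step(self, setpoint, pv):
        if self.pv_p is None:   # initialize filter states on the first call
            self.pv_p = self.pv_d = pv
            self.prev_err_d = setpoint - pv
        # First-order filters: out = alpha*out + (1 - alpha)*in
        self.pv_p = self.alpha_p * self.pv_p + (1 - self.alpha_p) * pv
        self.pv_d = self.alpha_d * self.pv_d + (1 - self.alpha_d) * pv

        p = self.kp * (setpoint - self.pv_p)                  # light filtering
        self.integral += self.ki * (setpoint - pv) * self.dt  # no filtering
        err_d = setpoint - self.pv_d                          # heavy filtering
        d = self.kd * (err_d - self.prev_err_d) / self.dt
        self.prev_err_d = err_d
        return p + self.integral + d
```

The design choice is the one described above: the noisiest calculation gets the slowest, smoothest view of the measurement, while the terms that can tolerate noise keep their responsiveness.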

Alternately, a filter can be applied to the control effort instead of the process variable. Doing so permits the measurement noise to enter into the PID calculations (especially the "D"), but the noisy control effort that results is smoothed by the filter before reaching the actuator. A filter can also help slow down the control effort to prevent overly dramatic fluctuations in the process’s behavior in cases where the process is particularly sensitive to the actuator’s movements.

On the other hand, a filter on the control effort can make the process appear more sluggish than it really is. An operator looking for faster closed-loop performance might try re-tuning the controller to be more aggressive without realizing that the problem is the damping effect of the filter, not the process. The controller tuning and the control effort filter sometimes end up battling each other unnecessarily when different operators have implemented one without checking for the other.

Deadband

The effects of measurement noise can also be mitigated by simply ignoring insignificant changes in the sensor’s output under the assumption that they’re probably just artifacts of the measurement noise and are too small to make a difference in the controller’s choice of control efforts anyway. So long as the error between the process variable and the setpoint remains within a range known as the deadband, the controller simply does nothing.

The trick is determining how much of a change in the error is small enough to ignore. If the deadband is set too large, significant changes in the behavior of the process may be overlooked. But if it is set too small, the controller will react unnecessarily to every fictitious blip in the sensor’s output, even if the actual process variable has already reached the setpoint.

Unfortunately, a deadband also glosses over small changes in the setpoint. If an operator tries to move the process into a higher or lower operating range that falls within the current deadband, the resulting change in the error will be ignored and the controller will do nothing. If the deadband is too large, the controller’s precision will suffer. That is, it may be able to make a refrigerated space five degrees colder, but not just one.
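In code, a deadband is little more than a comparison wrapped around the normal PID calculation. The helper below is a hypothetical sketch; `pid_step` stands in for whatever routine computes a fresh control effort:

```python
def deadband_pid_step(pid_step, setpoint, pv, deadband, last_output):
    """Skip the PID calculation while |error| stays inside the deadband.

    `pid_step` is any callable computing a fresh control effort; this wrapper
    simply holds the last output when the error is too small to act on.
    """
    if abs(setpoint - pv) <= deadband:
        return last_output          # do nothing: hold the previous effort
    return pid_step(setpoint, pv)

# A noisy reading 0.3 units from the setpoint is ignored with a 0.5 deadband:
out = deadband_pid_step(lambda sp, pv: sp - pv, 50.0, 49.7, 0.5, 12.0)
print(out)  # 12.0 -- controller holds its last move
```

Note that the tradeoff described above is visible here: with a 0.5-unit deadband, a genuine 0.3-unit setpoint move would be ignored just as readily as a noise blip.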

Suppressing the derivative kick

On the other hand, there are occasions when a PID controller’s performance can be improved by deliberately ignoring setpoint changes. Once again, the controller’s derivative action is at the center of this issue.

Recall that the derivative action tends to add a dramatic spike or "kick" to the overall control effort when the error changes abruptly during a setpoint change. This forces the controller to act immediately without waiting for the proportional or integral action to take effect. Compared to a two-term PI controller, a full three-term PID controller can even appear to anticipate the level of effort that will ultimately be required to maintain the process variable at the new setpoint, especially when the controlled process is particularly fast.

But overly dramatic spikes in the control effort can do more harm than good in applications, such as room temperature control, that require slow and steady changes in the process variable. A blast of hot air following every adjustment to the thermostat would not only be uncomfortable for the occupants of the room but hard on the furnace as well.

In those cases, it is advantageous to forgo derivative action altogether or to calculate the derivative from the negative of the process variable rather than directly from the error between the setpoint and the process variable. If the setpoint is constant, the two calculations are identical anyway. If the setpoint changes only in a stepwise manner, they remain identical except at the instant each step change is initiated. Using the negative derivative of the process variable in the calculation of the derivative action eliminates the spike present in the derivative of the error. See the "Derivative Kick" section below for more information.
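The difference between the two calculations is easiest to see at the instant of a step change. In this minimal sketch (the gain, timing, and values are arbitrary assumptions), the process variable holds steady while the setpoint steps from 20 to 25:

```python
def d_on_error(kd, error, prev_error, dt):
    # Conventional derivative action: rate of change of the error
    return kd * (error - prev_error) / dt

def d_on_pv(kd, pv, prev_pv, dt):
    # Derivative action on the negative of the process variable instead
    return kd * (-pv - (-prev_pv)) / dt

kd, dt = 1.0, 1.0
pv = prev_pv = 20.0              # process variable holding steady
sp_old, sp_new = 20.0, 25.0      # stepwise setpoint change

kick = d_on_error(kd, sp_new - pv, sp_old - prev_pv, dt)
calm = d_on_pv(kd, pv, prev_pv, dt)
print(kick, calm)  # 5.0 0.0 -- the error form spikes, the PV form does not
```

Once the setpoint stops changing, the two forms produce identical values, which is why the substitution costs nothing during steady operation.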

The spike induced by a setpoint change can also be mitigated by filtering the setpoint just like the process variable or the control effort. However, a setpoint filter doesn’t remove noise so much as it takes advantage of the averaging operation’s tendency to make abrupt changes appear much slower. When filtered, stepwise changes in the setpoint appear to the controller to be more gradual, thereby eliminating the abrupt changes in the error that would otherwise lead to spikes in the control effort.

Setpoint filters can also be useful in cascade control applications. Filtering the setpoint for the inner loop also filters the control effort for the outer loop with all the advantages detailed above. For more details on how such filters are implemented, see "The Basics of Numerical Filtering," Control Engineering, September 2008.

Filtering

A numerical filter averages its input history Fin(0) through Fin(k) to generate its next output Fout(k+1). The most basic filter algorithm, known as "first order," adds a fraction of the most recent input to a fraction of the most recent output to create a running total:

Fout(k+1) = α·Fout(k) + (1 − α)·Fin(k)

That fraction α between 0 and 1 determines how much of the input history figures into the averaging operation according to this equivalent formula:

Fout(k+1) = (1 − α)·[Fin(k) + α·Fin(k−1) + α^2·Fin(k−2) + … + α^k·Fin(0)]

Choosing a value of α close to 0 as in Filter A above makes

Fout(k+1) ≈ Fin(k)

so that the output sequence looks very much like the input sequence with limited filtering. Choosing a value of α close to 1 as in Filter B makes

Fout(k+1) ≈ Fout(k)

so the output changes particularly slowly but incorporates a very long sequence of inputs into its average for maximum filtering. The fraction α is known as the filter’s time constant when describing how long it takes for the output to reach a steady state after the input has stopped changing. Alternately, α is known as the smoothing factor when describing how much filtering is being accomplished along the way. When applied to a PID controller’s process variable, setpoint, or control effort, a numerical filter can reduce the effects of measurement noise at the expense of closed-loop responsiveness or vice-versa, depending on the value chosen for α.
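The running-total calculation described in this sidebar takes only a few lines of code. This sketch (the function name and test signal are illustrative assumptions) applies the same step input to a lightly and a heavily filtered channel:

```python
def first_order_filter(inputs, alpha):
    """Fout(k+1) = alpha*Fout(k) + (1 - alpha)*Fin(k)."""
    out = inputs[0]     # seed the running total with the first input
    outputs = []
    for fin in inputs:
        out = alpha * out + (1 - alpha) * fin
        outputs.append(out)
    return outputs

step_input = [0.0] * 3 + [1.0] * 7           # a step change in the input
light = first_order_filter(step_input, 0.1)  # alpha near 0: minimal filtering
heavy = first_order_filter(step_input, 0.9)  # alpha near 1: maximal filtering
print(round(light[-1], 3), round(heavy[-1], 3))  # 1.0 0.522
```

After seven samples the lightly filtered output has essentially reached the new input value, while the heavily filtered output is still only about halfway there, the responsiveness-versus-smoothing tradeoff in miniature.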

Derivative kick

Here two PID controllers are applied to the same process driven by the same sequence of setpoint values. At each sampling interval, Controller A calculates the derivative component of its control effort (the derivative action) by computing the latest change in the error between the setpoint and the process variable. Controller B computes its derivative action by subtracting the process variable from zero rather than the setpoint. If the process is relatively slow, as in this example, both methods will drive the process variable toward the setpoint along more-or-less the same trajectory. However, Controller A will tend to "kick" the actuator toward 0% or 100% capacity for an instant following each setpoint change as shown by the green spikes at times 1, 2, and 3. Controller B avoids those potentially damaging spikes but otherwise generates roughly the same control efforts as Controller A.

And for more ways to fix PID problems, including time delays, modeling, integrating processes, multi-variable control, and loop tuning, watch for the next installment of this series.

Vance VanDoren, PhD, PE, is a Control Engineering contributing content specialist. Reach him at controleng@msn.com.

Key concepts:

  • PID controllers are deployed in many process applications, but not always thoroughly understood.
  • Manufacturers of PID controllers adjust the mechanism at various times to solve specific control challenges.
