Sampled Control, Better Results With Less Data
Control Engineering classic: In this article, the consequences of performing feedback control with sampled rather than continuous data are examined.
This is the third in a series of four tutorials on the fundamentals of process control. Part 1, in February, examined PID control. Part 2, in May, discussed the Smith Predictor. Part 4, in December, will look at multivariable control.
Continuous control applications involve process variables that can change at any instant: flow, temperature, and pressure are prime examples. The process industries, particularly petrochemicals, were once heavily dependent on continuous control systems that could manipulate continuous process variables at all times.
True continuous control systems are virtually extinct now. Gone are the electrical, pneumatic, and mechanical devices that could apply corrective forces directly to a process by mechanically magnifying the forces generated by the process sensors. The most common continuous controllers left are found in the bathrooms of private homes. A toilet’s level control system measures the level in the tank and slowly closes the inflow valve as the level rises (see Figure 1).
|Figure 1: Pictured is a familiar continuous control system. As the water level rises, the valve closes and prevents any further flow. Source: Control Engineering|
In contrast, discrete variables that change only at specified intervals are subject to discrete control. An assembly line is a classic example. The count-of-completed-assemblies is a discrete variable that changes only at the instant when the line moves forward and a finished product rolls off the line. A discrete control system can manipulate a discrete variable only when the schedule calls for the next operation.
Continuous and discrete control systems behave very differently and are generally designed according to different mathematical principles. However, the two come together in sampled control applications where the process variables change continuously, but can only be measured at discrete intervals (see Figure 2).
|Figure 2: This is a sampled control system. A switch in the sampler closes periodically and sends an electronic measurement to the piston. Source: Control Engineering|
All computer-based controllers perform sampled control. Whether it is part of a distributed control system, a single-loop controller, or a PC-based controller, a control computer must wait to measure the process variables until its program calls for the next round of sensor readings.
Once the measurements have been read into memory, the computer must take time to analyze the data and compute the appropriate control actions. Only then can the computer return to reading the sensors. It may be just a matter of milliseconds between readings, but during that sampling interval, the computer cannot ‘see’ what’s going on in the process.
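The read-measure-compute cycle described above can be sketched as a short simulation. Everything numerical here (the first-order process model, the proportional gain, the sampling interval) is an illustrative assumption, not a value from the article:

```python
# Minimal sketch of a sampled control loop: the controller reads the
# process variable only once per sampling interval and is 'blind' to
# the process in between. Process model and gain are assumed values.

def simulate(sample_interval=0.1, duration=10.0, setpoint=1.0):
    dt = 0.01                                  # fine step emulating the continuous process
    steps_per_sample = int(round(sample_interval / dt))
    kp = 2.0                                   # proportional gain (illustrative)
    level = 0.0                                # process variable
    effort = 0.0                               # held constant between samples
    for step in range(int(round(duration / dt))):
        if step % steps_per_sample == 0:       # the 'read sensors' moment
            effort = kp * (setpoint - level)   # compute the control action
        level += dt * (effort - level)         # process keeps evolving anyway
    return level

final_level = simulate()
```

With these assumed numbers the loop settles near kp/(1 + kp) = 2/3 of the setpoint, the familiar steady-state offset of proportional-only control.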
On the other hand, a true continuous controller never stops measuring the process variables. It receives a continuous signal from its sensors and applies a continuous stream of control efforts to the process. The sampling interval is effectively zero. In the level control example mentioned earlier, the controller reacts to every infinitesimal change in the tank level, with no time out for any computations.
The design of any computer-based control system must address the scarcity of data that results from sampling. The simplest and most popular design strategy is to run the computer so fast that the sampling interval approaches zero and the sampled signal appears continuous. This allows the computer to use control algorithms based on the more traditional principles of continuous control.
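When the sampling interval is short, for example, the integral and derivative terms of a continuous PID controller can be approximated with sums and differences of the sampled data. The discretization below is a standard textbook sketch, not the article's own algorithm; the gains and interval are assumed:

```python
# Hedged sketch: a positional PID discretized with rectangular
# integration and a backward difference, as is commonly done when the
# sampling interval dt is short relative to the process dynamics.
# Gains and dt are illustrative assumptions.

class SampledPID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt                  # rectangular integration
        derivative = (error - self.prev_error) / self.dt  # backward difference
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

pid = SampledPID(kp=1.0, ki=0.5, kd=0.1, dt=0.05)
out = pid.update(setpoint=1.0, measurement=0.0)
```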
Shortening the sampling interval also seems like a good way to prevent fluctuations in the process variables from going entirely unnoticed between samples. However, excessively fast sampling rates waste computing resources that might be better used for other purposes, such as interfacing with the operator or logging historical data.
Furthermore, it is possible to achieve comparable results with slower sampling rates, provided the original continuous signal can be reconstructed from the sampled data. Figure 3 shows how this can be accomplished for a particularly simple application where the original signal is known to be a low-frequency sine wave. The original sine wave (Figure 3a) has been sampled by a control computer, resulting in a sample set (Figure 3b). In Figure 3c, the computer has found the lowest frequency sine wave that fits the sampled data. For this application, the computer needed only two samples from each cycle of the sine wave to completely reconstruct the original signal. Any additional samples would have been superfluous.
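The reconstruction idea behind Figure 3 can be sketched numerically with the Whittaker-Shannon interpolation formula, which rebuilds a band-limited signal from its samples. The signal and sampling frequencies below are illustrative assumptions:

```python
import math

# Sketch of ideal reconstruction: samples taken above twice the signal
# frequency are interpolated back to the continuous waveform using the
# Whittaker-Shannon formula. Signal and sampling rates are assumptions.

def sinc(x):
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

f = 1.0                    # signal frequency, Hz
fs = 8.0                   # sampling rate, Hz (comfortably above 2*f)
T = 1.0 / fs
samples = [math.sin(2 * math.pi * f * n * T) for n in range(2000)]

def reconstruct(t):
    # sum of shifted sinc pulses weighted by the sample values
    return sum(s * sinc((t - n * T) / T) for n, s in enumerate(samples))

t = 125.03                 # a point between sample instants, far from the edges
approx = reconstruct(t)
exact = math.sin(2 * math.pi * f * t)
```

Away from the ends of the sample record, the interpolated value agrees with the original continuous sine wave to within a small truncation error.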
Not so simple
Alas, signal reconstruction is never this simple. A square wave or even a higher frequency sine wave could have generated the data samples shown in Figure 3b. The control computer could not have distinguished one from the other.
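That ambiguity can be demonstrated directly: two sine waves whose frequencies differ by exactly the sampling rate produce identical sample values, so no algorithm could tell them apart from the samples alone. The rates below are illustrative:

```python
import math

# Sketch of the aliasing ambiguity: a 1 Hz sine and an 11 Hz sine,
# both sampled at 10 Hz, yield exactly the same sample values.
# Frequencies are assumed for illustration.

fs = 10.0                # sampling rate, Hz
f_low = 1.0
f_high = f_low + fs      # 11 Hz aliases onto 1 Hz

low = [math.sin(2 * math.pi * f_low * n / fs) for n in range(50)]
high = [math.sin(2 * math.pi * f_high * n / fs) for n in range(50)]

worst_difference = max(abs(a - b) for a, b in zip(low, high))
```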
It may also seem unrealistic to have the control computer look for a sine wave that best fits the data samples. After all, most real process variables don’t oscillate sinusoidally unless forced to do so. However, any signal that is not a sine wave can be expressed as a sum of sine waves, a theorem first proven by mathematician Joseph Fourier in 1822. Fourier also showed how to compute the frequency and amplitude of each sine wave in that sum, using only the sampled data. It is this algorithm, known as the Fourier Transform, that allows a control computer to reconstruct a continuous signal from sampled data.
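A direct (and deliberately unoptimized) discrete Fourier transform takes only a few lines and recovers the frequency and amplitude of each sinusoidal component from the sampled data alone. The test signal here is an assumed sum of two sines:

```python
import cmath
import math

# An O(N^2) discrete Fourier transform, sketched in pure Python to
# illustrate Fourier's idea: express sampled data as a sum of sine
# waves and recover each component's frequency and amplitude.

def dft(samples):
    n = len(samples)
    return [sum(x * cmath.exp(-2j * math.pi * k * m / n)
                for m, x in enumerate(samples))
            for k in range(n)]

fs = 64                  # samples per second (illustrative)
n = 64                   # one second of data
# Assumed test signal: a 3 Hz sine plus a half-amplitude 10 Hz sine
signal = [math.sin(2 * math.pi * 3 * m / fs)
          + 0.5 * math.sin(2 * math.pi * 10 * m / fs)
          for m in range(n)]

spectrum = dft(signal)
# Amplitude of each component: 2*|X[k]|/n for k < n/2
amps = [2 * abs(spectrum[k]) / n for k in range(n // 2)]
```

The amplitude list shows 1.0 at the 3 Hz bin and 0.5 at the 10 Hz bin, with essentially zero everywhere else, which is exactly the decomposition Fourier's theorem promises.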
Unfortunately, the Fourier Transform cannot guarantee the accuracy of the reconstructed signal. If enough of the original signal is lost between samples, the signal cannot be completely reconstructed by any means. The Fourier Transform can be used to determine if any particular signal will survive sampling and reconstruction intact. Consider each of the sine waves that the Fourier Transform identifies as a component of the original signal. If the sampling rate is fast enough to ‘capture’ each of those components as in Figure 3, then the entire signal will be captured in the sampled data as well. Otherwise, the higher frequency components will be lost and the reconstructed signal will not match the original.
So how fast is fast enough for sampling? The answer depends on the process to be controlled. Most industrial processes involve some combination of friction, inertia, and system stiffness that prevents a process variable from changing rapidly. The temperature in an annealing oven, for example, may change by only a few degrees over the course of several hours. For processes like these, high-frequency fluctuations in the process variable simply aren’t possible. Fast sampling is not required to capture the entire signal. Motion control applications, on the other hand, do involve rapid changes in the position and velocity variables that the control computer won’t see unless it samples as fast as it can.
The required sampling rate can be determined experimentally. The simplest test involves stimulating the process with a sinusoidal control effort of ever increasing frequency. The process variable will eventually start oscillating at the same frequency as the control effort, but with ever decreasing amplitude. The process will refuse to oscillate at all once the stimulation reaches a high enough frequency.
Since the process cannot oscillate any faster than this frequency, there is no point in having the control computer look for any higher frequency components when reconstructing the process variable's signal. Note, however, that capturing any one of the signal's sinusoidal components requires two samples from every cycle of the sine wave (as in Figure 3b). Thus, if the maximum frequency to be captured is determined to be ω_max, the controller must be set to sample at a frequency of 2ω_max. Conversely, if the sampling frequency is fixed at ω_s, then the highest frequency component that can be successfully reconstructed from the sampled data is ω_s/2. This is the Nyquist frequency, first published by Bell Labs physicist Harry Nyquist in 1928.
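The sweep test and the factor-of-two rule can be combined in a short numerical sketch. The process here is an assumed first-order lag, whose steady-state response amplitude falls off as 1/sqrt(1 + (ωτ)²); only the doubling rule itself comes from the article:

```python
import math

# Sketch of the frequency-sweep test on a simulated first-order
# process (time constant tau is an assumed value). The stimulation
# frequency is raised until the response is negligible; twice that
# frequency is then the required sampling rate.

tau = 0.5                             # process time constant, s (assumed)

def response_amplitude(w):
    # steady-state gain of a first-order lag driven at w rad/s
    return 1.0 / math.sqrt(1.0 + (w * tau) ** 2)

w = 0.1
while response_amplitude(w) > 0.01:   # 'refuses to oscillate' threshold
    w *= 1.1                          # ever-increasing stimulation frequency

w_max = w                             # highest frequency worth capturing
w_s = 2.0 * w_max                     # required sampling frequency
```

With these assumed numbers the process stops responding near 200 rad/s, so the controller would need to sample at roughly 400 rad/s to capture everything the process can do.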
Once the required sampling rate has been determined, there are literally hundreds of techniques for analyzing the sampled data and generating the appropriate control effort. More on some of those methods will be presented in future installments of this series.
For more information, contact Dr. Vance J. VanDoren, VanDoren Industries, 3220 State Road 26W, West Lafayette, IN 47906; Tel: 317/497-3367, ext. 8262; Fax: 317/497-4875.