Proving Control System Performance

It used to be straightforward. When new instrumentation or automatic control applications were needed, the necessary equipment was purchased, installed, and commissioned. When it was running smoothly, everyone moved on to the next project. It's not that simple anymore—performance usually has to be proven.

By Lew Gordon July 1, 2006

AT A GLANCE

Control performance

Key performance indicators

Performance-based contracts

Statistically valid testing

Condition monitoring


In an advanced control system project, performance testing picks up where basic commissioning leaves off. It tests the operation of the control system as an integrated entity against its functional objectives. Identifying indicators and methods of measuring and testing system performance against project goals can be as important as identifying the goals themselves.

A statistically valid test plan provides a rigorous mechanism for testing system performance. It provides results that allow a user and vendor to share project risks and rewards, to their mutual benefit.

Why test performance?

Achieving basic operation is virtually never an issue—all reputable vendors will ensure that transmitters will measure and transmit signals, that a DCS will communicate and function according to its configuration and programming, and that final actuators will respond to their control signals. So, aside from a healthy curiosity, what are the reasons for performance testing?

The most obvious and necessary reason is to debug and tune the application as part of basic system commissioning. A closely related reason is to establish operator confidence in the system. Operating a process plant is a responsibility that can veer unexpectedly and abruptly from quiet boredom into dangerous and overwhelming confusion. At the first sign of trouble, operators will quickly disable a system they do not understand and switch to manual control, thereby losing all benefits the system can provide.

Performance testing may also be required for internal budgeting and accounting. It’s common for engineering resources to be concentrated at a corporate level and loaned out to operating plants for specific projects. Testing may be the activity that marks the transfer of operating and engineering costs between departments.

Testing by explanation and demonstration is sufficient for any of these reasons. Increasingly, however, vendor contracts require a formal performance test. Advanced control projects are often justified on the expectation that the system will deliver specific economic benefits—increased production, better product quality, or higher operating efficiency. Contracts are often structured so that achieved benefits affect the payments made to the vendor. Therefore, performance has to be proven before the contract can be closed. This is not a simple task.

What to measure

Deciding how to measure performance is challenging because the variety of possible metrics has expanded with the increasing power of digital systems to collect and process data. Statistical techniques are essential for reducing this mountain of data to meaningful indicators.

The two basic issues are:

Will the testing measure control performance or economic performance?

Will the performance be measured in absolute or relative terms?

Control performance measures system behavior in non-economic terms. Over the last decade, the art of measuring control performance has evolved from simple statistics calculated from data collected by basic historians into the discipline of “condition monitoring.” Many vendors offer packages to monitor and evaluate the health and performance of control systems in real time.

Data collection

PlantTriage by Expertune (www.expertune.com) is one example. It collects data necessary to evaluate up to 30 metrics for each control loop it monitors. Metrics range from familiar indicators such as integrated squared error (ISE) to diagnostic indicators such as the total amount of valve travel and number of reversals. Improvements in these metrics can quantify the performance improvement an advanced control system can deliver.
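To make these metrics concrete, here is a brief sketch, in plain Python with NumPy, of how two of them can be computed from logged loop data. This is illustrative only, not the PlantTriage package or its API; the function names, array arguments, and sample interval are assumptions.

```python
# Illustrative loop-metric calculations (not the PlantTriage API).
import numpy as np

def integrated_squared_error(pv, sp, dt):
    """Approximate ISE: the integral of (SP - PV)^2 over the data window,
    using a fixed sample interval dt."""
    err = np.asarray(sp, dtype=float) - np.asarray(pv, dtype=float)
    return float(np.sum(err ** 2) * dt)

def valve_travel_and_reversals(op):
    """Total valve travel (sum of absolute moves) and the number of
    direction reversals in the controller output."""
    moves = np.diff(np.asarray(op, dtype=float))
    moves = moves[moves != 0]                    # ignore samples with no movement
    travel = float(np.sum(np.abs(moves)))
    reversals = int(np.sum(np.sign(moves[1:]) != np.sign(moves[:-1])))
    return travel, reversals
```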

Start and end times for data collection influence the comparison of advanced process control and basic regulatory control.

Process capability is another concept that quantifies control performance using statistics. It examines the frequency distribution of a variable around its set point, measured in standard deviations, in relation to the variable’s high and low specification limits. Capability indices of several forms have been defined based on these parameters. Their form is such that the index value increases when tighter control reduces the standard deviation of the measurement variation, making the control system “more capable.”

The simplest form of the capability index (Cp) compares the variation of a variable around its mean to the difference between its upper (USL) and lower (LSL) specification limits, where σ is the standard deviation of the measurement variation:

Cp = (USL – LSL) / 6σ

This form can be applied to a variable whose set point is midway between the high and low limits and whose distribution around its mean is normal.
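As a simple illustration, the following sketch computes Cp from a set of readings. The data, specification limits, and function name are assumptions for the example, not values from the article.

```python
# A minimal sketch of the Cp calculation (illustrative values and names).
import numpy as np

def process_capability(values, lsl, usl):
    """Cp = (USL - LSL) / (6 * sigma), where sigma is the standard
    deviation of the measured variation."""
    sigma = np.std(values, ddof=1)               # sample standard deviation
    return (usl - lsl) / (6.0 * sigma)

# Example: a variable held near a mid-range set point of 50,
# with specification limits of 45 and 55.
rng = np.random.default_rng(0)
readings = rng.normal(loc=50.0, scale=1.2, size=500)
print(process_capability(readings, lsl=45.0, usl=55.0))   # roughly 1.4 here
```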

The concept of a “minimum variance controller” provides an absolute reference for quantifying control performance. Under this concept, there is a theoretical minimum closed-loop variance of the controlled variable (CV), achievable only by a controller that uses perfect models of the process response to manipulated and disturbance variables. This and other assumptions make it impossible to implement such a controller, but the size of this minimum variance is a useful absolute reference for any real controller, whose variance will be larger.

The Harris Index quantifies a performance ratio based on this variance. It is defined as:

H.I. = σ²MVC / σ²ACT

σ²MVC is the theoretical minimum possible variance, calculated from the process dead time, and σ²ACT is the variance achieved by a real controller. Normalizing this ratio creates an index that varies in the range of 0 to 1.0:

η = 1 – (σ²MVC / σ²ACT)

As control improves, the variance achieved by the real controller approaches the theoretical minimum and this index approaches 0.0. Condition monitoring packages commonly include calculation of this index.
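For illustration, the sketch below estimates both indices from routine operating data using the regression idea behind the Harris Index: anything in the measurement that can still be predicted from data older than the process dead time could, in principle, have been removed by a minimum variance controller. The function name, default lag count, and NumPy implementation are assumptions, not any vendor's condition-monitoring algorithm.

```python
# A minimal sketch of estimating the Harris Index from operating data.
import numpy as np

def harris_index(y, dead_time, n_lags=10):
    """Return (H.I., eta), where H.I. = var_MVC / var_ACT and eta = 1 - H.I.
    'dead_time' is the process dead time in sample intervals."""
    y = np.asarray(y, dtype=float)
    y = y - y.mean()                      # work with deviations from the mean
    var_act = y.var()                     # variance achieved by the real controller

    # Regress y(t) on y(t - dead_time), ..., y(t - dead_time - n_lags + 1).
    rows, targets = [], []
    for t in range(dead_time + n_lags - 1, len(y)):
        rows.append(y[t - dead_time - n_lags + 1 : t - dead_time + 1][::-1])
        targets.append(y[t])
    X, target = np.array(rows), np.array(targets)
    coeffs, *_ = np.linalg.lstsq(X, target, rcond=None)

    # The residual variance is the part that cannot be predicted across the
    # dead time, which estimates the minimum achievable variance.
    var_mvc = (target - X @ coeffs).var()

    hi = var_mvc / var_act
    return hi, 1.0 - hi
```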

Perhaps the biggest limitation of this concept is that it only considers the variance of the controller measurement in assessing control performance. The concept does not penalize the amount of valve movement required to reduce CV variance. In an industrial plant, where loops can interact and pass disturbances to other loops and units, large MV (manipulated variable) variations can be unacceptable. This can be an important consideration in evaluating control performance. Overall stability is often more important than the performance of an individual control loop.

Economic performance

If performance testing is being done for internal purposes, then testing control performance may be the proper choice. The various metrics for control performance give the most practical indications of how a system is behaving now and how to make it behave better.

But testing control performance does not answer the question asked by every performance-based contract: Does the new control system actually make money for the plant?

As before, the first step is identifying indicators for economic performance. Key performance indicators (KPIs) are most often used for this purpose.

A KPI can be any indicator that reflects successful performance by any organization, with any purpose. It is applicable to economic and control performance. The only requirements are that the indicator be specific and quantifiable, and that it measures achievement of an organizational goal.

Typically, KPIs evaluate long-term measures of performance, so they reflect a wide range of operating conditions. This characteristic distinguishes the concept from dynamic performance measurement (DPM), which presents indicators of instantaneous performance measures in real time.

In measuring the economic performance of control systems, a KPI (and DPM) will typically put a value on one of the economic categories discussed in the prior article in this series (CE, March 2006, p. IP1):

Higher production rate and/or product value;

Lower raw material feed rate (higher yield); and

Lower energy consumption per unit of production (higher efficiency).

Measurements of these benefits in economic terms are the ultimate proof of performance for an advanced control system. Invariably, the comparison is relative—measuring improvement provided by the operation of the control system, compared to the performance under the prior control system.

Moving target: Improvement

An advanced control project typically begins with a study and evaluation phase to identify and estimate potential benefits that justify project cost. This will be followed by a contract negotiation phase, a design and engineering phase, and an installation and commissioning phase before the application is fully functional. This may be followed by a training and integration phase before finally getting to performance testing.

Time required for an advanced control project will vary with system size and complexity. It will also be affected by issues such as production and maintenance schedules and other commitments for user and vendor personnel. It can be anywhere from a few weeks to more than a year.

In that time, many changes can impact the original justification and the control system’s ability to meet or exceed improvement goals. These include:

Process equipment, instrumentation, or actuators that affect operating efficiencies and capacities;

Product specifications, market demand, or market price;

Feed and/or fuel characteristics, availability, or costs;

Upstream and/or downstream operations or equipment; and

Ambient (seasonal) conditions.

Often, comparing performance of a new system under new conditions to the historical performance of the old system under old conditions is not a valid comparison. Changes in plant and market conditions will favor one system or the other and distort the comparison of their economic performances.

Valid testing

The only rigorous way to demonstrate benefits provided by a new control system is an on-off test under current plant and market conditions. This means that the final configuration has to allow for operating both the old and the new control schemes. Then a planned test can switch back and forth between the systems frequently enough and long enough to provide data for a statistically valid comparison of their economic performance metrics.

The prior article in this series showed how the economic benefits provided by an advanced control system can be calculated from changes in production rate, product value, and operating costs. During an on/off test, data will be collected and averaged to measure changes in these parameters. Proof of performance is based on the benefit gained from the differences between these metrics for the old and new systems.

For the test to be statistically valid, two questions must be answered:

How should the process be tested to ensure that the metrics calculated for the operation of each system accurately represent the performance of the old and new systems?

Given the difference between the test metrics, what is the certainty that this difference accurately represents operation over all time?

These questions arise because the values of statistical parameters calculated from a finite set of data can’t be assumed to be equal to the values for operation over all time. This is a classic question in statistical analysis. How can you be sure that an average calculated from a finite set of data is a valid estimate of the average for all possible samples?

There is no certain answer. The best that can be done is to take enough independent samples, such as conducting a long-enough test, to ensure that the error is within a specified limit to a specified probability.

Central limit theorem

A basic principle of statistics, the central limit theorem, makes this possible. If data is collected from a number of different test runs, each run constitutes a set of samples from the set of all possible samples. If each run is long enough, distribution of their calculated averages will center on, and spread around, the (unknown) true mean in a normal form. This is true even if the individual data sets do not have normal distributions.

Further, the standard deviation of this distribution is inversely related to the number of samples in each run:

σM = σ / √n

where

σM = standard deviation of the distribution of run means;

σ = standard deviation of the distribution of all sample values; and

n = number of samples in each test run.
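A quick numerical check of this relationship can be run in a few lines of Python. The population, number of runs, and random seed below are illustrative assumptions; the point is that the spread of the run means comes out close to σ/√n even when the population is skewed rather than normal.

```python
# Illustrative check of sigma_M = sigma / sqrt(n) with non-normal data.
import numpy as np

rng = np.random.default_rng(0)
sigma_population = 5.0
n = 385                                          # samples per test run
# A skewed (exponential) population shows the result does not require normality.
runs = rng.exponential(scale=sigma_population, size=(2000, n))
run_means = runs.mean(axis=1)
print(run_means.std())                           # close to the predicted value
print(sigma_population / np.sqrt(n))             # 5 / sqrt(385), about 0.25
```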

From this information and the area properties of normal distributions, it is possible to calculate the number of samples necessary to estimate a performance mean to a specified probability.

Assume, for example, that a control system has been holding a variable to a variation whose standard deviation is five units. For normal distributions, there is a 95% probability that any individual value will be within 1.96 standard deviations of the population mean. If it is desired to collect enough data so that there is a 95% probability that a calculated performance mean will be within 0.5 units of its true value, then:

0.5 / 1.96 = 5 / √n

where

n = 385

If the interval between independent samples is 5 minutes, a minimum of 32 hours of test data will be required for this level of certainty.
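Written as a small helper, the same calculation looks like the sketch below. The function name and arguments are assumptions, but with the numbers above it reproduces the 385-sample, 32-hour result.

```python
# A minimal sketch of the sample-size calculation described above.
import math

def required_samples(sigma, max_error, z=1.96):
    """Smallest n such that z * sigma / sqrt(n) <= max_error
    (z = 1.96 for a 95% confidence level)."""
    return math.ceil((z * sigma / max_error) ** 2)

n = required_samples(sigma=5.0, max_error=0.5)   # 385, as in the example
hours = n * 5.0 / 60.0                           # 5-minute sampling interval
print(n, round(hours, 1))                        # about 32 hours of test data
```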

With this information, a test plan can be developed for a series of on/off states during operations when process disturbances are active. Operating data is collected in each state until the necessary data have been obtained. Time in each state should be short enough so that each scheme sees the normal variation in operating conditions. However, it must also be long enough for the control systems to respond to the disturbances and restore stability.

A transition time must be allowed following each state change to achieve a stable beginning point for resuming data collection. Otherwise, the system that performs more poorly will steal credit from the better performing one.

The figure “State transition performance penalties” shows this concept for comparison between advanced process control (APC) and basic regulatory control (BRC) systems.

If data collection begins at the transition boundaries, the average across the APC on-state will be drawn down by the lower readings, while APC controls move the process to its optimum operating point. Similarly, averages across BRC state operations will be raised by higher readings as the process returns to its operating point under the previous control system.

Guidelines for designing a statistically valid test have been developed (a simple schedule generator based on them is sketched after the list):

Use a sampling interval large enough so that successive readings indicate real process change, rather than simple noise or redundant reading of the same conditions; this ensures that the samples are independent of each other;

Set the minimum state time = 3x (longest disturbance settling time);

Set the maximum state time = minimum time + longest interval between process disturbances;

Set the transition time = longest disturbance settling time or set-point change settling time, whichever is greater;

Set the length of each state = a random variation between the minimum and maximum state times; and

Generate a random sequence of on/off states, to avoid biasing the results toward either.
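As one simple way of applying these guidelines, the sketch below generates a randomized schedule of on/off states ahead of the test. All names and example numbers are assumptions, and a single settling time stands in for both the disturbance and set-point settling times.

```python
# A minimal sketch of a randomized on/off test schedule generator.
import random

def build_schedule(settling_time_h, max_disturbance_interval_h, total_test_h, seed=None):
    """Return a list of (state, data_collection_h, transition_h) tuples."""
    rng = random.Random(seed)
    min_state = 3.0 * settling_time_h                    # minimum state time
    max_state = min_state + max_disturbance_interval_h   # maximum state time
    transition = settling_time_h                         # settle before collecting data

    schedule, elapsed = [], 0.0
    while elapsed < total_test_h:
        state = rng.choice(["APC on", "APC off"])        # random sequence of states
        duration = round(rng.uniform(min_state, max_state), 1)
        schedule.append((state, duration, transition))
        elapsed += duration + transition
    return schedule

for state, duration, transition in build_schedule(2.0, 8.0, 72.0, seed=1):
    print(f"{state}: {transition} h transition, then {duration} h of data collection")
```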

After the test is complete, statistically valid averages will be available for the operation of each system.

The second question can be addressed using the same principles. Since the distribution of each performance mean will be normal, the statistical techniques of interval estimation can examine area properties of these normal distributions and generate a probability estimate for the validity of their difference.

For example, assume the results shown in the “Example test data” table for a test of an APC system.

In this example, a combination of better control and optimization has reduced the standard deviation of a controlled variable from 3 to 2 units and raised its average from 70 to 80 units over the test period.

Although the test data showed a difference of 10 between the operating means during testing, this improvement can’t be guaranteed for all operations. But the number of samples is sufficient to ensure a 95% probability that the difference between the actual means is at least 9.76 units, across the range of operating conditions covered by the test.
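One common way to compute such a bound is a two-sample z-based lower confidence limit on the difference in means, sketched below. The function name and the pooling convention are assumptions, so the exact values it returns may differ slightly from those in the accompanying table.

```python
# A minimal sketch of a lower confidence bound on the difference in means.
import math

# One-sided z values for common confidence levels.
Z = {0.90: 1.282, 0.95: 1.645, 0.98: 2.054, 0.99: 2.326}

def min_difference(mean_on, mean_off, s_on, s_off, n_on, n_off, confidence=0.95):
    """Lower bound on (mean_on - mean_off) at the given confidence level."""
    std_err = math.sqrt(s_on ** 2 / n_on + s_off ** 2 / n_off)
    return (mean_on - mean_off) - Z[confidence] * std_err

# Using the example test data: means 80 vs. 70, standard deviations 2 and 3,
# 300 samples in each state.
print(round(min_difference(80, 70, 2, 3, 300, 300, confidence=0.95), 2))
```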

When this level of discipline has been applied to a performance test, the vendor and the user can reasonably agree that the performance benefit achieved during the test proves the ongoing performance that can be expected from the advanced control system.

Schedule, budget

Done properly, performance testing is a significant effort in itself. If the contract for an advanced control project includes performance testing, a realistic project schedule and cost estimate must include the necessary time and resources. Otherwise, any testing will be incomplete and the results will be statistically inconclusive. The importance of statistical validity vs. the cost of the required testing should be addressed in contract negotiation.

Including the ability to switch back and forth between the old and new control systems adds cost and complexity to the final system configuration. To avoid disturbing operations, the transfer has to be bumpless. This may not always be possible, especially if the project involves significant hardware changes like switching from manual or analog controls to digital control.

Formal testing requires a plan defining the schedule and duration of each state, including transition periods, and requirements for operating periods when valid data can be collected. It should include a means for adding operator notes that can identify and clarify conditions that affect collected data. The switching and testing procedures will require operator training.

Example test data

Metric                      APC off    APC on
Test mean                        70        80
Test standard deviation           3         2
Number of samples               300       300

Statistical confidence in the difference in means

% Confidence level    Minimum difference in actual means
99                    9.66
98                    9.70
95                    9.76
90                    9.81

Author Information

Lew Gordon is a principal application engineer at Invensys. This is the ninth article in his process control series. To read them all, search on “Lew” and look under “articles.”