Mathematical models help find process spill values, material savings

Designing controls: In one application, sharper attention to spill values created 20% additional production capacity and saved an estimated $500,000 annually in material costs.

By Timothy S. Matheny and Phillip W. Michel February 2, 2016

Control engineers use ever more capable automation controllers to provide better process control, including replacing simpler past approaches to batch process material deliveries with more sophisticated mathematical algorithms.

More accurate material deliveries are obtained when the control system compensates for lag in the control system and other effects by instructing a delivery to shut down just prior to reaching the target amount. The difficult question is, "How early?" This value is sometimes called a "pre-act value" or a "spill value."
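As a minimal sketch of the idea (the names below are illustrative, not from the article), the early shutoff amounts to stopping the delivery once the measured amount reaches the target less the predicted spill:

def should_stop(delivered: float, target: float, predicted_spill: float) -> bool:
    """Shut the delivery down early; the spill carries the amount to the target."""
    return delivered >= target - predicted_spill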

Controlling spill

The best way to control spill, the amount of material delivered to the process after the delivery was told to stop, is through good design of the process system. Components used for delivering material, especially the instruments chosen to measure the delivery, must be applied properly. Unfortunately, design decisions are often made before the process automation supplier becomes involved in the project, or are made for reasons other than improving process control.

Material delivery accuracy often can be improved through "dribbling," or slowing the rate of delivery as the amount delivered approaches the target. Dribbling controls spill by reducing it, and when to start dribbling can be calculated with approaches similar to the spill-prediction methods described below. However, dribbling increases batch times, lowering production capacity, and it can adversely affect the accuracy of flow instruments, so it is generally best used with scale instruments.
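A hedged sketch of a two-stage delivery combining dribbling with early shutoff follows; the thresholds and names are assumptions for illustration, not values from the article:

def delivery_command(delivered: float, target: float,
                     predicted_spill: float, dribble_margin: float) -> str:
    """Return the rate command for a fast/dribble/stop delivery sequence."""
    if delivered >= target - predicted_spill:
        return "stop"      # shut off early; spill carries the delivery to the target
    if delivered >= target - dribble_margin:
        return "dribble"   # slow rate improves accuracy but lengthens the batch
    return "fast"          # full rate for the bulk of the delivery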

When spill can’t be controlled, it must be managed. Spill is managed by predicting and accounting for it. When spill can be accurately predicted, material delivery accuracy can be improved. 

Simple spill prediction

Perhaps the simplest approach is to use a set value for the predicted spill. Set values can be determined based on experience with operating the system and do not change unless manually changed. This approach is immune to anomalies but not at all reactive to changing process or material conditions.

A similarly simple approach is to use the measured spill from a delivery as the predicted spill for the next delivery, without analysis or calculation. Such a simple approach is very much like using only proportional control for a loop—very reactive, but erratic and prone to upset from anomalies. However, it can work well when spill is very consistent.

Averaging the last n measured spills, instead of simply using the immediately previous value, helps smooth the effect of delivery anomalies but does not resolve all issues; anomalies still have a reduced effect on the predicted spill value. When n is large, anomalies affect the result less, but the model is less reactive to real process variations. The programmer's choice of n must balance these two issues.

Variations on simple averaging include weighting more recently measured values more heavily. Weighting values is another way to make the mathematical model more reactive.
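A minimal sketch of both averaging variations follows; the weighting scheme is one illustrative choice, not prescribed by the article:

from collections import deque

# Keep only the last n = 10 measured spills; newest values are appended last.
recent_spills = deque(maxlen=10)

def simple_average(spills) -> float:
    """Predicted spill as the plain average of the last n measured spills."""
    return sum(spills) / len(spills)

def weighted_average(spills) -> float:
    """Weight newer measurements more heavily (weights 1..n, oldest to newest)."""
    weights = range(1, len(spills) + 1)
    return sum(w * s for w, s in zip(weights, spills)) / sum(weights)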

Eliminating data anomalies

In fact, no mathematical model that uses all data will be unaffected by anomalies. An effective model must use some method to eliminate anomalies from consideration.

A simple approach is to treat any value that varies significantly from the average of a previous set of values as an anomaly. The obvious question becomes, "How much variation is to be considered significant?" One answer is to define any out-of-spec delivery as an anomaly. Another is to define values that fall outside some other fixed range, or outside a range defined around the value set average, as anomalies. Or, the range that defines anomalies might be based on the range of the set rather than on the average.
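One of these criteria, a band defined around the set average, might look like the following sketch; the tolerance fraction is an assumption for illustration:

def is_anomaly(value: float, history: list, tolerance: float = 0.25) -> bool:
    """Flag a measured spill that falls outside a band around the set average.

    The tolerance is a fraction of the average; 0.25 is only a placeholder.
    """
    average = sum(history) / len(history)
    return abs(value - average) > tolerance * abs(average)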

Any approach to eliminating anomalies is an improvement over only using the simple approaches described previously.

In a custom programmed solution, a programmer may use any number of these approaches—or none of them—as might be required to meet the customer specification. Individual formulas may benefit from any number of empirically determined factors.

Relying on statistical properties of the previous measured spill value data set, not on arbitrary or empirically determined factors, makes for a more reusable mathematical model.

More useful calculations

A more useful approach to calculating a predicted spill value starts with collecting a data set of the most recent measured spills, usually about 10. After sorting the values, this approach determines the range of values between the first and third quartiles, the third and eighth value in a set of 10. The quartile values are offset by the range to determine the high and low "fences" outside of which values are considered anomalies. All values within the "fences" are averaged to calculate the new predicted spill value. 

For a set of 10 sorted values, V1 through V10, where G is a gain factor that is less than 1 to tighten the fences and greater than 1 to open the fences, the low (Fl) and high (Fh) fences are calculated as follows:

Fl = V3 - [V8 - V3] * G
Fh = V8 + [V8 - V3] * G
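A sketch of this calculation, following the formula above, is shown below; the function and variable names are illustrative, and the boundary-condition handling discussed next is omitted:

def predicted_spill(measured_spills: list, gain: float = 1.0) -> float:
    """Average the measured spills that fall inside the quartile-based fences.

    Assumes a full set of 10 recent measured spills; gain < 1 tightens the
    fences and gain > 1 opens them. Anomalies stay in the data set; they are
    only excluded from the average.
    """
    values = sorted(measured_spills)
    v3, v8 = values[2], values[7]            # third and eighth sorted values
    spread = (v8 - v3) * gain
    low_fence, high_fence = v3 - spread, v8 + spread
    inside = [v for v in values if low_fence <= v <= high_fence]
    return sum(inside) / len(inside)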

Deliveries that are less than some multiple of the predicted spill value are ignored because the delivery system does not achieve a steady state prior to shutdown; no recalculation of the predicted spill value is done for them. Additional logic in the model handles boundary conditions, such as when there is only one value, or very few values, in the data set.

It is important that anomalies are not discarded from the data set; the fences simply prevent them from being included in the calculation of the predicted spill. If what presents initially as an anomaly is really a process change, subsequent, similar measured spills will widen the interquartile range (V3 to V8) and the fences, and the new values will start being included in the average. The predicted spill value will shift in the direction of the newer points, reacting to the real process change as it should. Spill values are often calculated per route; however, some routes deliver multiple materials. A route that originates at a bulk sack dispenser, for example, can deliver any number of materials, so spill may need to be tracked per route and material.

Improved deliveries, real money

Applying these methods in a bakery control system involved only software replacement; the control system and all instrumentation were retained. The old control software relied on dribbling combined with a set predicted spill value to achieve its material delivery accuracy. The bakery was tracking material delivery accuracy closely, particularly on relatively expensive flavoring ingredients.

More accurate material delivery always improves quality and saves money, but in few applications is the savings actually measured. A software-only upgrade offered the opportunity to measure the results of this mathematical model in isolation.

By eliminating dribbling, thereby reducing batch times, the bakery gained 20% additional production capacity. After running the new system for several weeks, the user estimated more accurate material deliveries would save $500,000 annually in material costs, based on actual deliveries remaining in excess of, but closer to, their minimum specifications. System integration and software charges were approximately $100,000.

Math, beyond accuracy, creates value

Variations in material delivery amounts and times can indicate hidden process problems. A total process automation solution might collect material delivery accuracy and material delivery durations for additional analysis. Contextualizing the data by route and material is critical to drawing valid conclusions. A change over time in the measured spill values, in their standard deviation, or in both may indicate a need for maintenance, preventing a catastrophic failure or an out-of-spec batch.
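As a hedged sketch of this kind of trending (the data layout and names are assumptions, not from the article), measured spills could be grouped by route and material, with the recent mean and standard deviation watched for drift:

import statistics
from collections import defaultdict

# (route, material) -> measured spills, newest appended last; layout is illustrative.
spill_history = defaultdict(list)

def spill_trend(route: str, material: str, window: int = 10):
    """Return (mean, standard deviation) of the most recent measured spills.

    Needs at least two values in the window to compute a standard deviation.
    """
    recent = spill_history[(route, material)][-window:]
    return statistics.mean(recent), statistics.stdev(recent)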

A mathematical model that determines and handles anomalies will produce better results than one that doesn’t. Relying on statistical properties of the previous measured spill value data set, not on arbitrary or empirically determined factors, makes for a more reusable mathematical model.

Powerful controllers can deliver significant additional lifecycle value when that new power is used appropriately. More sophisticated mathematical models and algorithms are one way of harnessing that power to produce more value.

Timothy S. Matheny, PE, is president of ECS Solutions Inc., a Control Systems Integrators Association certified member and 2016 Control Engineering and Plant Engineering System Integrator of the Year.

Phillip W. Michel is a senior system engineer for ECS Solutions and lead developer of S88 Builder, a model-based basic process control software from ECS Solutions. Edited by Mark T. Hoske, content manager, CFE Media, Control Engineering, mhoske@cfemedia.com.

More advice

Key concepts

  • Mathematical modeling helps with spill control.
  • Avoiding spills can save significant resources.
  • Return on investment can be 3 months or less in some applications. 

Consider this

Control software that uses smarter process algorithms can add significantly to the bottom line. 

ONLINE extra

Learn more from ECS Solutions, a 2016 System Integrator of the Year

ECS Solutions provides more information about S88 Builder.

See related tutorial article on mathematical modeling for process control linked below.