Merging Mom’s Perceptive Power with Technology Creates Startling Results

By Dave Harrold, CONTROL ENGINEERING November 2, 2018

KEYWORDS

Process and advanced control

Neural networks

Multivariable sensors

Sensing/measurement

Analyzers

What do the FBI (Federal Bureau of Investigation) and your mother have in common? Both use real measurements to reach an inferred outcome: are you telling the truth?

Think about it: your mother would conduct a subtle interrogation, all the while observing your body language—pupil dilation, dry mouth, fidgety body actions, voice tone, etc.—in an attempt to determine whether you were telling the truth. Each time the two of you had one of these “truth-seeking moments,” it seemed as though mom could reach the correct conclusion faster and faster.

Today we realize that what went on between you and your truth-seeking mother was the development, training, and retraining of a sophisticated system in mom’s brain designed especially for you. She learned your body-language cues and used them as inputs, determined the most important few, applied appropriate weighting to each, and reduced the inputs to a single output—truth or lie? As you matured, you refined your body language. To compensate, mom retrained her “truth/lie” model and again determined the correct output; it just took a bit longer.

FBI agents use similar techniques when they “hook” a person to a polygraph (lie detector), conduct an interview that begins with baseline questions followed by questions about a particular topic or incident, and conclude with a polygraph expert interpreting the results.

Inside our own bodies, similar inferential measurements are developed. For example, very early in life we learn fire is hot just by seeing fire. We don’t have to feel the fire’s heat or smell the smoke; we “instinctively” know fire is hot.

Several decades ago, scientists and mathematicians developed sophisticated mathematical equations capable of mimicking portions of our brains, but it wasn’t until computer price/performance ratios reached an affordable level that practical applications became feasible.

Where is soft sensing?

Six or eight years ago, control and automation suppliers jumped on the soft-sensor bandwagon, and, as is often the case with new technology, soft sensors were misapplied and/or failed to produce the expected results. That caused soft-sensor technology to fall out of favor among much of the control and automation user community.

Fortunately, enough soft-, virtual-, or inferential-sensor successes were achieved to prevent the technology from disappearing altogether. Today, companies like General Mills (Minneapolis, Minn.) and Owens Corning Glass (Toledo, Ohio) are benefiting from deployment of soft-sensor technology.

Not long ago General Mills began complementing off-line analyzers with soft sensors to infer a food product’s moisture content—a key flavor indicator. Too little drying leaves cereal sticky; over-drying wastes money (in the form of unnecessary production time and energy usage) and makes the cereal more fragile and prone to crumbling during transportation.

Applying soft-sensor technology from Aspen Technology (Cambridge, Mass.), General Mills reduced run cycles, lowered energy usage, reduced product waste, and improved product consistency and quality. General Mills reportedly implemented the soft sensor in about two weeks and realized a three-month return on investment.

Owens Corning Glass had an even trickier problem to solve. The R-value, a measurement of a material’s ability to resist the flow of heat, is the attribute of interest when producing building insulation materials. The challenge was that the materials’ R-value was unavailable for at least six weeks after production.

Owens Corning operators had learned that slowing the production line and increasing glass fiber mass augmented a product’s R-value; but the ideal line speed and fiber mass remained a mystery, thus Owens Corning was incurring unnecessary raw material expense and less-than-optimum production rates.

Then, the company’s engineers investigated and determined an R-value soft sensor could be developed using existing production line measurements.

Moisture and R-value are only two soft-sensor application examples. Soft sensors are also being used successfully to estimate critical-to-performance properties such as polymer melt index, NOx, CO, pulp freeness, and naphtha 95% cut point.

Easier isn’t less capable

A consistent complaint of early soft-sensor adopters was the “crudeness” of off-line analysis tools. Numerous process engineers threw up their hands in surrender; those who persevered found themselves learning far more about soft-sensor theory and data analysis techniques than they really wanted to know.

Here’s the good news: several suppliers in the soft-sensor market have expended considerable resources to improve the usability of off-line development tools without sacrificing capability. In fact, several suppliers admit the underlying soft-sensor algorithms haven’t changed significantly in the past several years, but they have made huge improvements in making soft-sensor development easier, and the tools are producing better-quality models than “experts” developed just a few years ago.

What makes soft sensors useful is the ability of the underlying model to be trained to infer a physical property measurement otherwise available only from analyzers or laboratory testing. But keep in mind, most soft sensors use artificial neural network (ANN) technology, which is reliable only when the application remains in the region used to train the model.

That means the soft sensor needs to be trained with data that accurately represent what the process is doing when things are performing well. When the model is trained with a mix of steady-state and upset condition data, the model’s ability to accurately predict an output is seriously compromised.
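
Because an ANN-based soft sensor is trustworthy only inside its training region, one simple safeguard is a runtime check that flags predictions whose inputs fall outside the range seen during training. The sketch below, in Python with hypothetical variable names, is one minimal way to do this; it is an illustration, not any supplier’s implementation.

```python
# Minimal sketch (assumed variable names): flag predictions whose inputs fall
# outside the region covered by the training data, since an ANN-based soft
# sensor is reliable only inside that region.
import numpy as np

def training_envelope(X_train):
    """Record the per-variable range seen during training."""
    return X_train.min(axis=0), X_train.max(axis=0)

def in_training_region(x_new, lo, hi, margin=0.05):
    """Return True if every input lies within the training range (plus a small margin)."""
    span = hi - lo
    return bool(np.all((x_new >= lo - margin * span) & (x_new <= hi + margin * span)))

# Example: 500 historical samples of 4 process inputs
X_train = np.random.rand(500, 4)
lo, hi = training_envelope(X_train)
print(in_training_region(np.array([0.5, 0.4, 0.9, 0.1]), lo, hi))   # inside -> True
print(in_training_region(np.array([0.5, 0.4, 1.8, 0.1]), lo, hi))   # outside -> False
```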

Soft-sensor basics

Most soft-sensor applications end up being nonlinear and thus use ANN technology to predict the parameter of interest.

Linear applications, and those where significant correlations exist between measured variables, require models that avoid violating those correlations. “Partial least squares” and “principal component analysis” techniques are frequently used to satisfy such application needs and are included in some soft-sensor product offerings.
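
For readers who want to see the two linear techniques side by side, the short Python sketch below uses the open-source scikit-learn library (not any soft-sensor vendor’s tool) on synthetic data: PLS builds the inferential model directly, while PCA reduces correlated inputs to a few uncorrelated components that could then feed a simpler model.

```python
# Illustrative sketch of the two linear techniques named above, using
# scikit-learn on synthetic data; not a vendor implementation.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 6))
X[:, 3] = X[:, 0] + 0.05 * rng.normal(size=300)      # deliberately correlated inputs
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + 0.1 * rng.normal(size=300)

pls = PLSRegression(n_components=3).fit(X, y)        # partial least squares model
print("PLS R^2:", pls.score(X, y))

scores = PCA(n_components=3).fit_transform(X)        # principal components as candidate inputs
print("PCA scores shape:", scores.shape)
```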

Regardless of the underlying soft-sensor technology required, development of a soft sensor requires following five distinct steps: data collection, data preprocessing, variable and time delay selection, network training, and network verification.

1. Get the info

Data collection is essential since quality data are the only basis for building a quality soft-sensor model.

Too frequently, first-time soft-sensor developers believe they can “feed” the analysis software mountains of historical data and let it figure out what’s important, but it’s a bit more involved than that.

It’s not so much the volume as the information content of the data. For example, problems in soft-sensor training include failure to consider time lags caused by different measurement points, varying flow rates, and data collected using different time bases. Also, because laboratory data are frequently used to compare the accuracy of the models’ output, it’s important to know when a sample is taken, how long it takes before the sample is analyzed, and when results are recorded. Why? Because materials, whose properties can change from “sitting around,” require accurate time tracking to ensure data integrity.
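
To make the time-alignment issue concrete, here is a minimal sketch in Python using the pandas library, with hypothetical tag names. The lab value is matched to the process conditions at the time the sample was drawn, not when the result was recorded; this is an illustration of the idea, not a particular historian’s alignment function.

```python
# Minimal sketch (hypothetical tag names) of aligning lab results with process
# data recorded on a different time base, using pandas.
import pandas as pd

process = pd.DataFrame({
    "time": pd.date_range("2001-01-01 00:00", periods=8, freq="15min"),
    "dryer_temp": [180, 182, 181, 185, 190, 188, 186, 184],
    "line_speed": [55, 55, 56, 54, 52, 53, 55, 55],
})

lab = pd.DataFrame({
    "sample_time": pd.to_datetime(["2001-01-01 00:20", "2001-01-01 01:25"]),
    "moisture_pct": [3.1, 2.7],
})

# Match each lab sample to the most recent process record at or before sample_time.
aligned = pd.merge_asof(
    lab.sort_values("sample_time"),
    process.sort_values("time"),
    left_on="sample_time", right_on="time",
    direction="backward",
    tolerance=pd.Timedelta("15min"),
)
print(aligned)
```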

Collecting data is critical to model quality, but how is “usable” data defined and what constitutes a sufficient amount of data?

Usable data span the normal, intended prediction range of the soft sensor; data collected during periods of upsets, grade changes, etc., are not usable data.

The general rule-of-thumb for the quantity of data needed is a few hundred samples of usable data for each input variable to be considered.

Fisher-Rosemount’s DeltaV-based neural network module provides a good example of how suppliers are making soft-sensor development easier.

A process engineer could sort through multiple databases for input and output data and then reconcile and correlate the data to identify when the process was running well. Instead, with Fisher-Rosemount’s approach, the engineer defines the neural network module, assigns a block of up to 20 variables from anywhere in the system as inputs, assigns a manual laboratory or online-analyzer data collection function, and downloads everything into the controller.

The presence of the module in the controller automatically initiates variable collection in the DeltaV historian for inputs and outputs. As lab analysis results become available, they are entered into the system and are automatically aligned with corresponding input data, making it easier to identify usable data.

2. Clean the input

Data preprocessing identifies the variables to include in soft-sensor training and is another place where tool improvements combined with process knowledge often reap surprising results and benefits.

Occasionally one or two variables will have more or less influence than anyone realizes. That’s when the quality and effectiveness of off-line data analysis tools really show their stuff—not only in the sophistication of the data analysis algorithms, but in how understandable the results are in identifying outliers and bad or questionable data.

Different tools use different techniques to achieve this all-important data-preprocessing step. The most common data analysis technique is to submit data to multiple passes of an algorithm designed to identify the most influential variables. Usually these algorithms can identify three to six key variables that most influence the soft sensor’s ability to accurately predict an output.

As data pass through the algorithm, the software identifies missing values, outliers (data values outside established control limits), and/or undesired data from different sources. Several tools provide user-friendly means for process engineers to peruse data: highlighting sections of data to include, exclude, interpolate, or replace with the mean, and then generating input and output files containing only time-synchronized, usable data.
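
As a rough illustration of those preprocessing activities, the Python sketch below (pandas, hypothetical column names) screens one variable against control limits, interpolates short gaps, replaces what remains with the mean, and reports what was changed. It mirrors the general idea, not any particular product’s algorithm.

```python
# Hedged preprocessing sketch (hypothetical column name): screen outliers
# against control limits, interpolate short gaps, and report what was changed.
import numpy as np
import pandas as pd

df = pd.DataFrame({"flow": [10.1, 10.3, 250.0, 10.2, np.nan, 10.4, 10.0]})

lo, hi = 5.0, 20.0                                    # established control limits
outliers = (df["flow"] < lo) | (df["flow"] > hi)
df.loc[outliers, "flow"] = np.nan                     # mark outliers as missing

missing_before = int(df["flow"].isna().sum())
df["flow"] = df["flow"].interpolate(limit=2)          # fill short gaps
df["flow"] = df["flow"].fillna(df["flow"].mean())     # replace anything left with the mean

print(f"outliers removed: {int(outliers.sum())}, values filled: {missing_before}")
```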

When evaluating soft-sensor tools, pay special attention to preprocessing activities and the results produced. Look for easily understood indications, including the number of outliers excluded near defined limits, and the quantity of data analyzed, removed, interpolated, replaced, and remaining.

3. Refine the data

Variable and time delay selection identifies the inputs that most influence the accuracy and robustness of the predicted output. But soft-sensor developers seldom know in advance how much each input influences the predicted output; therefore, a good toolkit permits setting a time-delay range and includes algorithms to automatically test the sensitivity of each input variable to each output variable.

Once critical-to-performance variables are identified, the second part of this step is to approximate deadtimes and further evaluate each variable’s input-to-output sensitivity. Most off-line tools provide the capability to conduct the analysis automatically and/or permit users to manually apply process knowledge. Either way, the intent is to evaluate the sensitivity over time of each input variable to each output variable. When an input at a given time delay contributes significant sensitivity in predicting the output, the input at that delay is important and should remain part of the model.

Sensitivity and time delay graphical results should indicate how much each input influences the predicted output over the specified time-delay range. However, model training should use input data at specific delays; therefore, sensitivity and delay testing is an iterative process.

Each pass removes irrelevant variables and narrows the time-delay range; repeat the test until the most influential input variables and their corresponding delays are identified.
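
A very simple way to picture this screening (and no more than that; commercial tools use more sophisticated algorithms) is to shift each input over a range of delays, correlate it with the output, and keep the delay with the strongest effect. The Python sketch below uses synthetic data and hypothetical tag names.

```python
# Simple illustration of delay-sensitivity screening: shift each input,
# correlate it with the output, keep the delay with the strongest effect.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 500
inputs = pd.DataFrame({
    "feed_rate": rng.normal(size=n),
    "steam_flow": rng.normal(size=n),
})
# The output responds to feed_rate delayed by 5 samples (plus noise).
output = pd.Series(np.r_[np.zeros(5), inputs["feed_rate"].to_numpy()[:-5]]
                   + 0.2 * rng.normal(size=n))

for name in inputs:
    corrs = {d: output.corr(inputs[name].shift(d)) for d in range(0, 21)}
    best = max(corrs, key=lambda d: abs(corrs[d]))
    print(f"{name}: best delay = {best} samples, correlation = {corrs[best]:.2f}")
```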

4. Weighting and training

Network training determines the number of hidden neurons and automatically adjusts their weighting based on well-conditioned training data.

The training tool presents information to the model, compares the output to a target value, automatically modifies an input’s weighting, and repeats the training cycle until an acceptable output is achieved.

Model training tools sometimes tend to over-train the model, thus bypassing the “best-fit” solution. To avoid over-training, soft-sensor tools provide dynamic graphical presentations of modeling progress and allow the user to cancel training upon achieving an acceptable best-fit solution.
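
As a hedged sketch of that training loop, the Python example below uses scikit-learn’s MLPRegressor rather than a commercial soft-sensor package; stopping early when a held-back validation set stops improving is one common way to avoid over-training past the best-fit point.

```python
# Minimal training sketch with early stopping, using scikit-learn
# (illustrative only; not a vendor's training tool).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
X = rng.uniform(size=(600, 4))
y = np.sin(3 * X[:, 0]) + 0.5 * X[:, 1] + 0.05 * rng.normal(size=600)

model = MLPRegressor(
    hidden_layer_sizes=(6,),       # a handful of hidden neurons
    early_stopping=True,           # stop when the validation score stops improving
    validation_fraction=0.2,
    max_iter=5000,
    random_state=0,
).fit(X, y)

print("training iterations:", model.n_iter_)
print("training R^2:", round(model.score(X, y), 3))
```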

5. Test the model

Network verification is where the developer’s process knowledge and use of the development tools really show up.

Using a completely different data set from that used to train the model, verify the model’s ability to predict the critical-to-performance output variable. A good verification indicates the soft sensor is ready to go online. A less than satisfactory verification may require starting over, but at a minimum requires a close examination of each development step.
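
Continuing the illustrative scikit-learn sketch from the training step, verification amounts to scoring the model on data it has never seen and judging whether the fit is good enough to go online; R² and RMSE are typical yardsticks. The thresholds that count as “good” depend on the application and are not specified here.

```python
# Verification sketch: score the trained model on a completely separate data set.
import numpy as np
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)

def make_data(n):
    X = rng.uniform(size=(n, 4))
    y = np.sin(3 * X[:, 0]) + 0.5 * X[:, 1] + 0.05 * rng.normal(size=n)
    return X, y

X_train, y_train = make_data(600)
X_verify, y_verify = make_data(200)          # never shown to the model during training

model = MLPRegressor(hidden_layer_sizes=(6,), early_stopping=True,
                     max_iter=5000, random_state=0).fit(X_train, y_train)

rmse = mean_squared_error(y_verify, model.predict(X_verify)) ** 0.5
print("verification R^2:", round(r2_score(y_verify, model.predict(X_verify)), 3))
print("verification RMSE:", round(rmse, 3))
```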

Admittedly, early soft-sensor development and maintenance was difficult and required users to learn too much about the underlying technology. Some failed to address things like compensating for changes in unmeasured inputs, but that’s all changed.

Today soft sensors are mainly found on continuous processes, but as soon as process engineers realize how much soft-sensor analysis and development tools have improved, the number of installed and working soft sensors will expand into other process types.

Perhaps it’s time to take another look where soft sensors could reduce product variability, add to the company’s bottom line, and get coverage in the company newsletter. After all, mom would be proud to hang news of your achievements on the refrigerator door.

For more soft-sensor suppliers, go to www.controleng.com/buyersguide; for more info, use the following circle numbers, online at www.controleng.com/freeinfo:

Aspen Technology www.aspentech.com…200

Dot-Products www.dot-products.com…201

Fisher-Rosemount www.easydeltav.com…202

Honeywell www.iac.honeywell.com…203

MathWorks www.mathworks.com…204

Matrikon www.matrikon.com…205

Pavilion www.pavtech.com…206

PSE Optima www.pseoptima.com…207

What to look for in soft-sensor development tools

Numerous tools for developing artificial neural network models exist in the marketplace. Some are generic; others are designed for plant/shop floor processes. A good soft-sensor tool should:

Easily map inputs and outputs to real-time process variables;

Combine robust data manipulation and analysis capabilities with easy-to-understand results;

Include generalization techniques and quantitative results to ensure model quality and avoid model over-training;

Create accurate, robust models that predict reliably and repeatably over the full range of potential data inputs; and

Yield models in a reasonably short development time.

A ProActive solution approach

Occasionally a class of process analysis and improvement technology comes along that doesn’t fit neatly into articles about soft sensors or advanced process control (see CE, Feb. ’01, p. 63), mainly because the technology doesn’t use models.

Stochos’ (Schenectady, N.Y.) ProActive Process Improvement (PPI) system is a good example, because PPI achieves real-time, online process improvements using design of experiment (DOE) principles.

Similar to other optimization applications, PPI is a hardware/software solution that sits over and integrates with traditional process control systems.

Beginning a PPI application is similar to developing a soft sensor. Critical-to-performance variables are identified, usable data are collected and cleaned, and critical control-variable correlations are identified. But that’s where the similarities end.

Instead of training a model, adjustment limits are established for each manipulated variable. The PPI system begins monitoring the defined critical-to-performance variables and outputs, evaluating the factors influencing the outputs, and making online changes to the process, following DOE rules in an orderly attempt to locate and maintain the process’s operational “sweet spot.”
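
To give a generic flavor of the DOE idea (this is not Stochos’ PPI algorithm, and the variable names and limits are hypothetical), the short Python sketch below builds a two-level factorial set of small moves within each manipulated variable’s adjustment limits; each row would then be run and evaluated against the critical-to-performance outputs.

```python
# Generic two-level factorial illustration of DOE-style process moves
# (hypothetical manipulated variables and adjustment limits).
from itertools import product

limits = {"line_speed": (54.0, 56.0), "fiber_mass": (0.95, 1.05)}

trials = [dict(zip(limits, combo)) for combo in product(*limits.values())]
for trial in trials:
    print(trial)   # each row is one planned process move to be run and evaluated
```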

Like its model-based competitor products, PPI includes an extensive library of statistical process control charts including Xbar and range, Xbar and sigma, min/max, and several more.

For more information about PPI, circle 208 or visit www.stochos.com.

Natural neurons inspire artificial neural networks

In the simplest scenario, natural neurons conduct impulses from receptor organs (eyes, ears, nose, etc.), eventually reaching effector organs, such as muscles. Impulses arrive at the neuron’s input locations (dendrites) and are weighted based on learned importance. The weighted inputs are summed, and the result is compared to the neuron’s threshold value. When the sum is equal to or greater than the threshold, an output is produced. The state and condition of the nervous system influences output quality. For example, fatigue, oxygen deficiency, alcohol, and medications can influence the normal passage of impulses and modify output strength.
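
The behavior just described reduces to a simple threshold neuron, sketched below in Python purely as an illustration of the weighted-sum-and-threshold idea (the weights and threshold are arbitrary).

```python
# Minimal threshold-neuron sketch: weighted inputs are summed and compared
# with the neuron's threshold; the neuron "fires" when the sum reaches it.
def neuron_output(inputs, weights, threshold):
    """Return 1 when the weighted sum reaches the threshold, else 0."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

print(neuron_output([1, 0, 1], [0.6, 0.4, 0.3], threshold=0.8))  # 0.9 >= 0.8 -> fires
```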

This biology of natural neurons inspired computer-based artificial neural network (ANN) technology. Fortunately (or unfortunately), present computer-based ANN technology bears only a modest resemblance to our natural-neuron inspiration, where each neuron can interact directly with 10,000 other neurons, yielding a total of 10¹⁵ connections.

