Diagnosing faults in engineering models
The method of minimal evidence helps identify invalid modeling assumption variables.
An effective model-based control strategy depends on an accurate model of the process. Without this most basic capability, the process will not run properly. So when faults arise, it is important to determine whether they are actual operational problems or incorrect modeling assumptions. This overview describes a general methodology, called MOME (method of minimal evidence), for creating optimal model-based process fault analyzers. MOME is a diagnostic strategy based upon evaluating engineering models that describe normal operation of the target process system with sensor data. It uses the minimum amount of diagnostic evidence necessary to discriminate uniquely between an invalid modeling assumption variable (e.g., an assumption that a particular process fault situation is absent) and all other valid modeling assumption variables.
Moreover, it ensures that the resulting fault analyzer always performs competently, and it optimizes the diagnostic sensitivity and resolution of its diagnoses. Diagnostic knowledge bases created with this methodology are also well suited to diagnosing many multiple-fault situations, to determining the strategic placement of process sensors to facilitate fault analysis, and to determining the shrewd distribution of fault analyzers within large processing plants. The methodology has been demonstrated to be competent in both an adipic acid plant formerly owned and operated by DuPont in Victoria, Texas, and an electrolytic persulfate plant owned and operated by FMC in Tonawanda, New York.
Effective operation of these persulfate processes requires extremely strict control of the solution chemistry. Maintaining tight windows on the strengths and pH of the various cation and anion species throughout the process is thus critical. This in turn requires highly accurate and reliable measurements of these variables, along with methods for verifying those accuracies. Automated model-based process fault analysis consequently provides a useful tool for confirming which of these measurements are currently correct and immediately flagging those that are not. This automated detection and alerting capability allows FMC to operate its DCSs in full auto-pilot mode more frequently.
While MOME can be implemented in a home-grown system, it has been converted into a patented fuzzy logic reasoning algorithm and is now fully automated in a program called FALCONEER IV. Once this platform became available, FMC found that converting its original hand-compiled ESP fault analyzer into the format necessary for automatically generating the diagnostic logic required only coding the existing 30 primary models and performance equations describing that process. Creating, coding, and analyzing more than 30 primary models and five performance equations for the FMC LAP process required approximately two person-weeks of effort to derive a fully functional and validated process fault analyzer. FMC has independently documented some of the benefits derived from these two applications. All FALCONEER IV applications can diagnose all single-fault situations at all possible levels of diagnostic resolution, all non-interactive multiple-fault situations, and almost all possible pairs of interactive multiple faults.
MOME diagnostic strategy
Model-based reasoning is a highly systematic and powerful means for deriving plausible hypotheses as to the causes of abnormal process behavior. The first step in developing competent model-based fault analyzers is to derive a set of as many linearly independent models of normal process operation as possible. These should accurately describe the behavior of the target process system during its malfunction-free (i.e., normal) operation. These models include the normal operating characteristics of the process system components, the functional relationships between those components, the process control strategy, and the underlying fundamental conservation, thermodynamic, and physicochemical principles. The set of modeling assumptions required to derive these models defines the domain in which they predict normal process behavior.
The diagnostic evidence generated by evaluating these models with actual process data is then compared with the expected patterns of model behavior during various possible fault situations, i.e., the SV&PFA (sensor validation and proactive fault analysis) diagnostic rules, which can logically discriminate between the various possible process operating events within the fault analyzer's intended scope. The specific patterns of diagnostic evidence used for this discrimination depend entirely upon the specific model-based diagnostic strategy actually employed. Model-based fault analyzers are thus, in essence, computer programs that determine which of their SV&PFA diagnostic rules most closely match currently observed process behavior. Their understanding of process fault situations is completely determined by their underlying models of normal operation and the consequent SV&PFA diagnostic rules for identifying those fault situations.
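The core loop described above can be sketched in a few lines of Python. This is a hypothetical, simplified illustration, not the FALCONEER IV implementation: the two models (a mass balance and an energy balance), the tolerance value, the sensor tags, and the fault signatures are all invented for the example. Each model of normal operation produces a residual that should be near zero; the pattern of satisfied/violated residuals is then matched against SV&PFA-style diagnostic rules.

```python
# Hypothetical sketch of residual evaluation and SV&PFA-style rule matching.
# Model names, sensor tags, tolerance, and fault signatures are assumptions.

TOLERANCE = 0.5  # assumed residual threshold for "satisfied"

def mass_balance(d):
    # During normal operation, flow in should equal flow out.
    return d["flow_in"] - d["flow_out"]

def energy_balance(d):
    # Heat added should match the sensible-heat rise of the stream.
    return d["q_in"] - d["flow_out"] * d["cp"] * (d["t_out"] - d["t_in"])

MODELS = {"mass_balance": mass_balance, "energy_balance": energy_balance}

def evaluate_residuals(data):
    """Map each model name to True (satisfied) or False (violated)."""
    return {name: abs(model(data)) <= TOLERANCE
            for name, model in MODELS.items()}

# Each fault hypothesis predicts a specific satisfied/violated pattern.
FAULT_SIGNATURES = {
    "flow_out sensor fault": {"mass_balance": False, "energy_balance": False},
    "heater fault":          {"mass_balance": True,  "energy_balance": False},
}

def plausible_faults(status):
    """Return every fault whose expected pattern matches the evidence."""
    return [fault for fault, pattern in FAULT_SIGNATURES.items()
            if all(status[m] == v for m, v in pattern.items())]
```

A mass balance that holds while the energy balance is violated would, under these assumed signatures, implicate the heater rather than the flow sensor.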
The SV&PFA diagnostic rules we utilize are automatically generated by our MOME fuzzy logic algorithm compiler and are thus always logically correct. Since the diagnostic evidence used by those rules comes directly from evaluating the underlying process models, the fault analyzer's performance is circumscribed entirely by the understanding of normal process behavior represented within those models.
The main advantage
The MOME diagnostic strategy’s chief advantage for developing optimal diagnostic rules arises from its choice of patterns of possible satisfied and violated process model residuals used to identify plausible fault hypotheses. Only those models considered relevant for each potential process fault are contained in the associated diagnostic rules. MOME consequently identifies all possible assumption variable deviations consistent with the current pattern of diagnostic evidence generated by the most recently sampled process sensor data. Perfect resolution between different process fault hypotheses is thus not always possible, or is possible only at larger fault magnitudes (i.e., at a magnitude sufficient to violate all affected relevant model residuals).
This is the classic trade-off between timely fault detection and correctly identifying the underlying fault(s). Trading lower diagnostic resolution for higher diagnostic sensitivity allows the fault analyzer to narrow the potential process faults down to a reasonable number of plausible explanations for the current process state. The process operator can then check these further to determine the actual fault present. This flags potential incipient fault situations sooner, rather than waiting until the fault's magnitude is severe enough to allow unique classification. This is why the methodology is called the method of minimal evidence: all plausible fault situations are diagnosed whenever even one of the linearly independent set's model residuals indicates abnormal process operation.
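The sensitivity/resolution trade-off above can be made concrete with a small sketch. Here each fault is associated with the set of model residuals it is relevant to (the fault names and model names are hypothetical); a fault stays plausible as long as every currently violated residual is one it can explain. A small fault that has so far violated only one residual yields several plausible hypotheses (early detection, lower resolution), while a larger fault that violates more residuals narrows the diagnosis.

```python
# Hypothetical illustration of the sensitivity vs. resolution trade-off.
# Fault names and their relevant model sets are invented for the example.

RELEVANT_MODELS = {
    "level sensor fault": {"mass_balance", "level_rate"},
    "outlet valve fault": {"mass_balance"},
}

def plausible(violated):
    """Keep every fault that can account for all violated residuals."""
    return [fault for fault, models in RELEVANT_MODELS.items()
            if violated <= models]  # set inclusion: violated is explained

# Incipient fault: only the mass balance has drifted beyond tolerance,
# so both hypotheses remain plausible and the operator is alerted early.
early = plausible({"mass_balance"})

# Grown fault: the level-rate model is now also violated, which uniquely
# implicates the level sensor.
late = plausible({"mass_balance", "level_rate"})
```

The early alarm lists more candidates; waiting for unique classification would have delayed detection until the second residual was violated.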
MOME employs model-based reasoning to deduce the cause or causes of abnormal process behavior. It does so with the least amount of diagnostic evidence necessary to diagnose the various possible fault situations uniquely. Moreover, the resulting fault analyzer always makes competent diagnoses at the best resolution and highest sensitivity possible for the given magnitude of the fault(s) occurring.
A key feature of this method is the way in which the various patterns of diagnostic evidence are selected. This selection fully utilizes all the information contained within the available diagnostic evidence, especially the estimates of the fault magnitudes of linear assumption variable deviations inherent in the violated model residuals. The selection relies upon default reasoning: all but one of the fault hypotheses (when perfect resolution is possible) supported by some of the evidence within a given pattern are systematically shown to be implausible by other evidence also contained within that pattern.
Thus, by default, the remaining fault hypothesis is the only plausible explanation of the full pattern of relevant diagnostic evidence. Using default reasoning in this manner allows this diagnostic strategy to base each fault diagnosis upon the least amount of diagnostic evidence necessary for that proper diagnosis. This directly allows many other potential multiple fault situations to be properly diagnosed.
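The default-reasoning step can be sketched as an elimination over a fixed pattern of evidence. In this assumed encoding (the hypothesis names, model names, and expected patterns are invented), each hypothesis states which residuals it expects to be violated and which satisfied; any hypothesis contradicted by some piece of the observed pattern is discarded, and whatever survives is, by default, the diagnosis.

```python
# Sketch of default reasoning over a pattern of diagnostic evidence.
# Hypotheses, models, and expected patterns are hypothetical.

HYPOTHESES = {
    "fault A": {"violated": {"m1", "m2"}, "satisfied": {"m3"}},
    "fault B": {"violated": {"m1"},       "satisfied": {"m2", "m3"}},
    "fault C": {"violated": {"m3"},       "satisfied": {"m1", "m2"}},
}

def surviving(evidence):
    """evidence maps model name -> True if its residual is satisfied.
    A hypothesis is refuted if any residual it expects violated is
    actually satisfied, or vice versa; the rest remain plausible."""
    out = []
    for name, req in HYPOTHESES.items():
        refuted = (any(evidence[m] for m in req["violated"]) or
                   any(not evidence[m] for m in req["satisfied"]))
        if not refuted:
            out.append(name)
    return out
```

For the pattern "m1 and m2 violated, m3 satisfied", faults B and C are each contradicted by part of the evidence, leaving fault A as the default conclusion from a minimal amount of evidence.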
The various issues described throughout this article are discussed in much greater detail in Optimal Automated Process Fault Analysis, Richard J. Fickelscherer and Daniel L. Chester, © John Wiley & Sons, Inc., 2013. This material is reproduced with permission of John Wiley & Sons, Inc.
Dr. Richard J. Fickelscherer, PE, is a partner at Falconeer Technologies, LLC, Williamsville, N.Y. Reach him at falconeertech(at)verizon.com. Dr. Daniel L. Chester is currently the associate chair of the department of computer and information sciences at the University of Delaware. He is also a co-founder of Falconeer Technologies, LLC. Reach him at chester(at)cis.udel.edu.
- An effective model-based process control strategy depends on having an accurate process model.
- Analyzing faults to the root cause using this strategy can yield a more reliable result with less information required.
- Additional information on this technique is available online.