Appropriate automation: Human system dynamics for control systems
Technology Update: Distilling a basic control problem to its logical essence can shed light on what cannot be automated. The answer, in a word, is induction. And while induction can be facilitated by appropriate automation, it cannot (yet) be automated.
How automation relates to process control
Process control is usually construed to include the architectures and algorithms for maintaining the output of a specific industrial process within a desired range. Automation, in this context, concerns the implementation of the control process inside a machine. Examining the control theory of large, complex processes can reveal what cannot be automated in process control applications. A canonical control model can be applied at any useful level of abstraction of a process control system, and the question of "correctness" illustrates why this approach is essential.
Paul Fitts, a well-known U.S. Air Force experimental psychologist, observed in 1953 that humans are better than machines at induction, and machines are better at deduction. Machines, in fact, can deduce circles around humans. Humans, on the other hand, can induce circles around machines, which can't induce at all (at least not yet). John Boyd, a well-known fighter pilot, echoed the sentiment when he introduced the OODA loop in 1975. OODA stands for observe, orient, decide, and act [which is very close to the control loop: sense, decide, actuate]. And physicist David Deutsch has rekindled the discussion in a recent article in Aeon magazine. So, perhaps the most fundamental questions to be addressed in the course of engineering a control system are:
- Where are deduction and induction in the control problem?
- How can the machine's powers of deduction be harnessed to facilitate human induction?
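The division of labor behind these questions can be sketched in code. In this minimal, hypothetical example (the rules, setpoints, and plant model are all invented for illustration), the machine runs the deductive part of the loop (sense, decide, actuate) by applying explicit rules, while the inductive part, choosing and revising the rules themselves, remains outside the loop with the human:

```python
# Sketch of a sense-decide-actuate loop. The machine deduces actions
# from fixed rules; the rules themselves come from human induction.

def sense(plant_state):
    """Machine: observe the measurable output."""
    return plant_state["temperature"]

def decide(reading, rules):
    """Machine: deduce an action from explicit rules (pure deduction)."""
    if reading > rules["high_limit"]:
        return "cool"
    if reading < rules["low_limit"]:
        return "heat"
    return "hold"

def actuate(plant_state, action):
    """Machine: apply the chosen action to the plant (toy dynamics)."""
    delta = {"cool": -1.0, "heat": +1.0, "hold": 0.0}[action]
    plant_state["temperature"] += delta
    return plant_state

# Human induction: these limits have no derivation inside the loop --
# they express the operator's generalization about the system's purpose.
rules = {"low_limit": 18.0, "high_limit": 24.0}  # hypothetical setpoints

state = {"temperature": 27.0}
for _ in range(5):
    action = decide(sense(state), rules)
    state = actuate(state, action)

print(state["temperature"])  # settles at 24.0, the human-chosen limit
```

Everything inside the loop is mechanizable; the one thing the loop cannot supply for itself is where `low_limit` and `high_limit` come from.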
The concept of hybrid intelligence, used here in a sense that differs from the term's meaning in computer science, provides a perspective on complex human-machine systems that addresses these questions explicitly.
In large complex systems, such as national infrastructure and defense systems, the human component accounts for more than half of the lifecycle cost of the system (GAO Report, 2003), and those costs are increasing. Yet, even though human-system integration (HSI) is an increasingly important component in systems engineering, few systems engineers are exposed to the issues of human-system integration in academic training.
The International Council on Systems Engineering (INCOSE) defines system engineering as "an interdisciplinary approach and means to enable the realization of successful systems. Successful systems must satisfy the needs of its [sic] customers, users, and other stakeholders ... the systems engineer often serves to elicit and translate customer needs into specifications that can be realized by the system development team ... and supports a set of lifecycle processes beginning early in conceptual design and continuing throughout the life cycle of the system through its manufacture, deployment, use, and disposal" (Haskins, 2011).
In case anyone slept through Psychology 101, here's the missing part: There is no such thing as a fully automated system. All engineered systems serve some human purpose, and they all have a human-system interface. In every definition of process control, you will find the term "limit" or a synonym. Where does the limit come from? It arises from the intended purpose of the process control system.
That purpose and human-system interface are often implicit and unspecified, yet they are crucial to the success of any process control system, even those with very little actual human contact.
Fundamental cognition research has interesting (perhaps even disturbing) implications for the future of control engineering that have only recently begun to be recognized (cf Helbing, 2013).
Consider a quintessential process control problem: the national electric power grid infrastructure. The Smart Grid is a U.S. government program designed to incentivize conversion of the electric power grid control infrastructure from primarily electromechanical devices to so-called intelligent systems, which are software-controlled components designed to protect components of a national network. These new components are capable of communicating among themselves and with the supervisory control and data acquisition (SCADA) systems and their human operators at major control centers.
The emerging Smart Grid system may, in fact, be much more vulnerable to catastrophic failure than the obsolescent electromechanical system that it will replace. Two insights from this research help illustrate the point:
1. The Smart Grid will not behave according to the laws of physics. The old grid behaves largely in accordance with Kirchhoff's and Ohm's (established, physical) laws. The Smart Grid, in contrast, will exhibit dynamical properties that are largely unknown. Its failure modes will follow (as-yet unknown) logical structures and will not be bound by physical structures.
2. When it fails, the Smart Grid will implode at the speed of light. Failures in the old grid propagate at an estimated 300 miles per second, limited by the time required for the electromechanical devices to detect and react to the environment. This usually provides sufficient time for SCADA operators to recognize, diagnose, and react to contain the outage and minimize damage to the grid. Failures in the Smart Grid will propagate at near the speed of light across a largely random logical network, becoming part of the operating environment of other systems connected logically rather than physically.
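The timescale gap is worth making concrete. Using the article's ~300 miles-per-second figure for electromechanical propagation and an assumed continental span of 3,000 miles (a round illustrative number, not from the article), the arithmetic looks like this:

```python
# Back-of-the-envelope comparison of failure propagation times.
# The 300 mi/s figure is the article's estimate; the 3,000-mile span
# is an assumed continental-scale distance for illustration.

ELECTROMECHANICAL_MPS = 300.0    # miles per second (article's estimate)
SPEED_OF_LIGHT_MPS = 186_282.0   # miles per second
SPAN_MILES = 3_000.0             # assumed coast-to-coast span

t_old = SPAN_MILES / ELECTROMECHANICAL_MPS  # seconds
t_new = SPAN_MILES / SPEED_OF_LIGHT_MPS     # seconds

print(f"old grid:   {t_old:.1f} s")         # ~10 s: operators can react
print(f"smart grid: {t_new * 1000:.1f} ms") # ~16 ms: below human reaction time
```

Roughly ten seconds versus roughly sixteen milliseconds: the first leaves room for a SCADA operator to recognize and contain a fault; the second is over before a human can even perceive it.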
Complex systems, design defects
The prospect of a catastrophic failure in the Smart Grid can be glimpsed in the infamous lights-out incident in the Superdome in New Orleans on Feb. 4, 2013 (Palmer, 2013). The failure occurred in a newly refurbished and upgraded power distribution system in the Superdome, in which a "design defect" was identified as the proximal cause, in a device that had been in production and use since the "early 1990s." The control components were apparently configured to function as specified in the documentation, and there were seemingly no conventional human operator errors. This "design defect" was undocumented and was observed in all similar devices installed there.
In short, the new system did exactly what was expected of it, and the lights went out. This is an example of what human factors specialists call latent human error: error in the system that remains after all the other sources of error have been removed. Error that was, it turns out, designed into the system. Latent human error emerges in situations in which the dynamical properties of the system are not sufficiently understood by the operators, often because they were not understood even by the system's designers.
Emergence in complex systems is not a new phenomenon. It reveals itself in numerous ways, but one of the most challenging is in the realm of latent human error. For example, HSI specialists talk about design-induced human error: error that occurs because some property of the system didn't become evident until humans interacted with it in its natural (or simulated) real-world context.
Latent human errors are examples of emergent properties: those unexpected interactions (behaviors) among components and their environments. Cognition is an emergent property.
Model-based system engineering
In effect, the Smart Grid and other complex grids currently in development represent a new kind of cognitive system, one for which there are few established design principles. Model-based system engineering (MBSE), an initiative of INCOSE, is our best hope for identifying and predicting the behavior of the massive infrastructure systems under construction.
MBSE is the only approach appropriate to the issues at hand. The mathematical models and statistical data needed to predict the behavior of these new cognitive systems do not exist. The emergent properties that arise from interactions among components can only be exposed to observation and analysis through some sort of dynamical generation.
MBSE is not new. There are many research and test systems, some illustrated in Figure 1, being used in research programs looking at exactly these issues (Smullen and Harris, 1988; Jones, Plott, Jones, Olthoff, and Harris, 2011). But this approach is not yet used extensively in the design and development of components of the next generation of the Smart Grid infrastructure.
MBSE: Predictive limits
Recent research (Aleo, Martinez, and Valverde, 2013, and Doyle and Csete, 2011) has raised the specter that even the biggest and best computer models have fundamental flaws that limit the ability to predict behavior. Research by several labs (cf Mortveit and Reidys, 2008) illustrates that even a very simple network can exhibit disjoint phase spaces. The mathematics needed to analyze and predict the behavior of even simple systems does not yet exist, let alone that for large, complex systems.
In other words, the only way forward is to generate the dynamics and observe what happens. More elegantly put: simulate and test the systems with human cognition in the loop.
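A toy example (invented here for illustration, not taken from the cited papers) shows why generation is necessary: even a three-node Boolean network with a one-line update rule splits its phase space into disjoint basins of attraction, a structure that is far from obvious from the rule itself but falls out immediately once the dynamics are enumerated.

```python
# Enumerate the phase space of a tiny synchronous Boolean network.
# Update rule (hypothetical): each node becomes the XOR of the other two.

from itertools import product

def step(state):
    """Advance the whole network one synchronous update."""
    a, b, c = state
    return (b ^ c, a ^ c, a ^ b)

def attractor(state):
    """Iterate until the trajectory enters a cycle; return that cycle."""
    seen = []
    while state not in seen:
        seen.append(state)
        state = step(state)
    i = seen.index(state)
    return tuple(sorted(seen[i:]))

# Group all 2^3 initial states by the attractor they eventually reach.
basins = {}
for s in product((0, 1), repeat=3):
    basins.setdefault(attractor(s), []).append(s)

for cycle, states in basins.items():
    print(f"attractor {cycle}: basin of {len(states)} states")
# Four disjoint basins of two states each -- the phase space fragments.
```

Nothing in the three-line update rule announces that there are four separate fixed points; the structure only becomes visible by running the dynamics, which is the MBSE argument in miniature.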