Artificial intelligence for control engineering

Robotics, cars, and wheelchairs are among the beneficiaries of artificial intelligence, which is making control loops smarter, more adaptive, and able to change behavior, hopefully for the better. University of Portsmouth researchers in the U.K. discuss how AI can help control engineering, summarized here. Below, see 7 AI-boosting breakthroughs; online, see more examples, trends, explanations, and references in a 15-page article. See a link to a 2013 article explaining how "Artificial intelligence tools can aid sensor systems."

02/19/2015


This is an architecture for a fuzzy logic-based controller. Courtesy: University of Portsmouth

As has happened at many other times in our engineering careers, members of the research team at Portsmouth found themselves struggling yesterday with two common problems in control engineering. As usual, some inner control loops had to behave in a predictable and desired manner with strict timing, but the outer loops had to decide what those predictable inner loops were to do; they had to provide setpoints (or reference points) or reference profiles for the controllers in the inner loop to use as inputs. Can artificial intelligence (AI) be useful in this sort of control engineering problem?

- - -
Editor's note: This online version greatly expands the summary print version, adding sections on Artificial intelligence tools; Analog vs. digital; unbounded vs. defined; Intelligent machines, so far; 12 key AI applications; 4 ways AI advances computer ability; Could AI turn against us?; Improving decision quality; More for AI before it is scary; Smarter, with benefits; Right tools for the right task; and Virtual world. Also see references and related links.
- - -

Inner control loops

Control engineering is all about getting a diverse range of dynamic systems (for example, mechanical systems) to do what you want them to do. That involves designing controllers. The inner loop problems we faced yesterday were about creating a suitable controller for a system or plant to be controlled. That means interfacing systems to sensors and actuators (in this case, wheelchair motors and ultrasonic sensors for obstacle avoidance), and the problems could be solved using computer interrupts, timer circuits, additional microcontrollers, or simple open-loop control.

Inner control loops are similar to autonomic nervous systems in animals. They tend to be control systems that act largely unconsciously to do things like regulate heart and respiratory rate, pupillary response, and other natural functions.

This system is also the primary mechanism controlling the fight-or-flight responses first described by Walter Cannon (1929 & 1932); it primes an animal to fight or flee (Jansen et al., 1995; Schmidt & Thews, 1989). The autonomic nervous system has two branches: the sympathetic nervous system and the parasympathetic nervous system (Pocock, 2006). The sympathetic nervous system is a quick-response mobilizing system, and the parasympathetic is a more slowly activated dampening system. In our case, that was similar to quickly controlling the powered wheelchair motors while more slowly monitoring sensor systems for urgent reasons to change a motor input.
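
To make the analogy concrete, here is a minimal sketch (in Python) of that two-rate structure: a fast loop driving the wheelchair motors and a slower loop monitoring the ultrasonic sensors for urgent reasons to override the motor command. The periods, obstacle threshold, and function names are illustrative assumptions, not details of the Portsmouth system.

```python
# Hypothetical two-rate control sketch: a fast "sympathetic" motor loop and a
# slower "parasympathetic" sensor-monitoring loop. One tick = 1 ms.
FAST_PERIOD_MS = 10    # drive the motors every 10 ms
SLOW_PERIOD_MS = 100   # poll the ultrasonic sensors every 100 ms

def run(ticks, read_distance_cm, drive_motor, setpoint):
    command = setpoint
    for t in range(ticks):
        if t % SLOW_PERIOD_MS == 0:
            # Slow loop: look for an urgent reason to change the motor input.
            command = 0.0 if read_distance_cm() < 30.0 else setpoint
        if t % FAST_PERIOD_MS == 0:
            # Fast loop: apply the (possibly overridden) command.
            drive_motor(command)

# Example with stubbed-in hardware:
run(500, read_distance_cm=lambda: 120.0, drive_motor=lambda u: None, setpoint=1.0)
```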

Automatic control

When a device is designed to perform without the need for conscious human inputs for correction, it is called automatic control. Automatic control systems were first developed more than 2,000 years ago, an example being the water clock of Ktesibios in ancient Egypt (third century BC). Since then, many automatic control devices have been used over the centuries, older ones often being open-loop and more recent ones often being closed-loop.

Examples of relatively early closed-loop automatic control devices that used sensor feedback in an inner loop include the temperature regulator of a furnace, attributed to Drebbel in about 1620, and the centrifugal fly-ball governor used by James Watt in 1788 for regulating the speed of steam engines. Most control systems of that time used governor mechanisms, and Maxwell (1868) used differential equations to investigate the dynamics of such systems. Routh (1874) and Hurwitz (1895) then investigated their stability conditions. The idea was to use sensors to measure the output performance of a device being controlled so that those measurements could be fed back to input actuators that could make corrections toward some desired performance.
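
As a minimal illustration of that idea (sensor measurements fed back to correct toward a desired performance), the Python sketch below closes a proportional feedback loop around a toy first-order plant. The plant model, gain, and numbers are assumptions for illustration only.

```python
# Closed-loop feedback sketch: measure the output, compare it to the desired
# value, and feed a proportional correction back to the actuator.
def simulate(setpoint=100.0, kp=0.5, dt=0.1, steps=200):
    speed = 0.0                             # sensor measurement of the output
    for _ in range(steps):
        error = setpoint - speed            # feedback comparison
        u = kp * error                      # corrective actuator input
        speed += dt * (-0.2 * speed + u)    # toy first-order plant response
    return speed

# Settles toward the setpoint, with the steady-state offset that is
# characteristic of purely proportional control:
print(round(simulate(), 1))
```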

Feedback controllers

Feedback controllers began to be created as separate multipurpose devices, and Minorsky (1922) invented the three-term, or PID, controller at the General Electric Research Laboratory while helping to install and test automatic steering on board a ship. PID controllers have regularly been used for inner loops ever since, and these feedback controllers were used to develop ideas about optimal control in the 1950s and '60s. The maximum principle, developed in 1956 (Pontryagin et al., 1962), and dynamic programming (Bellman, 1952 & 1957) laid the foundations of optimal control theory. That was followed by progress in stochastic and robust control techniques in the 1970s. The design methodologies of that time were for linear single-input, single-output systems and tended to be based on frequency-response techniques or the Laplace transform solution of differential equations. The advent of computers and the need to control ballistic objects for which physical models could be constructed led to the state-space approach, which tended to replace the general differential equation with a system of first-order differential equations. That led to the development of modern systems and control theory, with an emphasis on mathematical formulation.
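
A minimal discrete-time version of that three-term (PID) idea is sketched below in Python; the gains, sample time, and toy plant are illustrative assumptions rather than any historical implementation.

```python
# Three-term (PID) controller sketch: proportional, integral, and derivative
# corrections computed from the error at each sample instant.
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt                  # I: accumulated error
        derivative = (error - self.prev_error) / self.dt  # D: error trend
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Regulating the same toy first-order plant as above; the integral term
# removes the proportional-only steady-state offset.
pid = PID(kp=0.8, ki=0.4, kd=0.05, dt=0.1)
speed = 0.0
for _ in range(400):
    speed += 0.1 * (-0.2 * speed + pid.update(100.0, speed))
print(round(speed, 1))
```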

Adaptive control

Controllers were traditionally electrical, or at least electromechanical, but in 1969-1970 the Intel microprocessor was invented by Ted Hoff, and since then the price of microprocessors (and memory) has fallen roughly in line with Moore's Law, which stated that the number of transistors on an integrated circuit would double roughly every two years. That made implementing a basic feedback control system in an inner loop almost trivial, and more recent systems have used robust control and then adaptive control. Adaptive control does not need complete a priori information; it copes with parameters that vary or are initially uncertain. For example, as an aircraft flies, its mass will slowly decrease as a result of fuel consumption, or, in our case, a powered wheelchair user may become more tired as the day goes on. In these cases a low-level control law is needed that adapts itself as conditions change.
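
The Python sketch below gives a minimal flavor of such a self-adjusting control law, using an MIT-rule-style gradient update to re-tune a feedforward gain while a plant parameter slowly drifts (as fuel burn or user fatigue might cause). The plant, reference model, and adaptation rate are all illustrative assumptions.

```python
# Adaptive control sketch (MIT-rule-style): the controller gain theta is
# adjusted online so the plant output tracks a reference model, even as the
# plant parameter b drifts slowly.
import math

gamma, dt = 0.01, 0.01
theta = 1.0          # adjustable gain, initially uncertain
y = y_m = 0.0        # plant output and reference-model output

for step in range(50_000):
    t = step * dt
    r = math.sin(0.5 * t)                  # reference input
    b = 2.0 + 0.5 * math.sin(0.01 * t)     # slowly drifting plant parameter
    u = theta * r                          # control law with adjustable gain
    y += dt * (-y + b * u)                 # plant:  y'  = -y  + b*u
    y_m += dt * (-y_m + 2.0 * r)           # model:  ym' = -ym + 2*r
    theta -= dt * gamma * (y - y_m) * y_m  # gradient step on tracking error

print(round(theta, 2))  # theta drifts to keep b*theta near the model gain 2.0
```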

Outer control loops

Inner control loops need inputs. They need reference points or profiles. Originally these were just set values. For example, the input to Ktesibios's water clock was a desired level of water, and the input to Drebbel's temperature regulator was a specific temperature value. As the inner loops became more reliable and could largely be left unattended (although often monitored), attention shifted to the control loops that sit outside them.
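
Here is a minimal sketch of an outer loop supplying a reference profile rather than a single set value: a trapezoidal speed profile that an inner wheelchair-motor loop could track. The acceleration, cruise speed, and timings are made-up illustrative numbers.

```python
# Reference-profile sketch: ramp up, cruise, ramp down.
def reference_speed(t, accel=0.5, cruise=1.0, cruise_end=6.0, stop=8.0):
    """Desired speed (m/s) at time t (s) for a ramp-cruise-ramp move."""
    if t < cruise / accel:
        return accel * t                                  # ramp up
    if t < cruise_end:
        return cruise                                     # cruise
    if t < stop:
        return cruise * (stop - t) / (stop - cruise_end)  # ramp down
    return 0.0

# Sampled at 10 Hz, this becomes the stream of setpoints the inner loop tracks:
setpoints = [reference_speed(0.1 * k) for k in range(100)]
print(max(setpoints), setpoints[-1])
```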

Where inner control loops are similar to autonomic nervous systems in animals, outer control loops are similar to their brains (Kandel, 2012). That is, they tend to be more conscious and less automatically predictable. Even flatworms have a simple and clearly defined nervous system with a central system and a peripheral system that includes a simple autonomic nervous system (Cleveland et al., 2008).

The brain is the higher control center for functions such as walking, talking, and swallowing. It controls our thinking functions, how we behave, and all our intellectual (cognitive) activities, such as how we attend to things, how we perceive and understand our world and its physical surroundings, how we learn and remember, and so on. In our case that was similar to deciding where a wheelchair user wanted to go and what the user might want to do, and whether or not control parameters should be adjusted because of them. In both cases, the more complicated higher level of control in the outer loops can only work if the lower level control in the inner loops acts in a reasonably predictable and repeatable fashion.

Originally, control engineering was all about continuous systems. The development of computers and microcontrollers led to discrete control system engineering because communications between the computer-based digital controllers and the physical systems were governed by clocks. Many control systems are now computer controlled and consist of both digital and analog components, and key to advancing their success is unsupervised and adaptive learning.
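
A minimal sketch of that clocked, discrete view: a continuous first-order lag seen by a digital controller sampling with period T becomes a simple difference equation (assuming a zero-order hold between ticks). The time constant and sample period below are illustrative.

```python
# Sampled-data sketch: the continuous lag y' = (u - y)/tau, advanced only at
# clock ticks of period T with the input held constant in between.
import math

tau, T = 1.0, 0.1
a = math.exp(-T / tau)          # exact one-sample discretization of the lag

y = 0.0
for k in range(50):
    u = 1.0                     # input held by the digital controller
    y = a * y + (1.0 - a) * u   # state advances one clock tick at a time
print(round(y, 3))              # approaches 1.0 after 5 time constants
```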

But a computer can do many things over and above controlling an outer loop to produce a desired input or inputs for some inner loops. Many people believe that the brain can be simulated by machines, and because brains are intelligent, simulated brains must also be intelligent; thus machines can be intelligent. It may be technologically feasible to copy the brain directly into hardware and software, and such a simulation could be essentially identical to the original (Russell & Norvig, 2003; Crevier, 1993).

Computer programs have plenty of speed and memory, but their abilities only correspond to the intellectual mechanisms that program designers understood well enough to put into them. Some abilities that children normally don't develop until they are teenagers may already be built in, and some abilities possessed by two-year-olds are still out of reach (Basic Questions, 2015). The matter is further complicated because the cognitive sciences still have not succeeded in determining exactly what human abilities are. The organization of the intellectual mechanisms for intelligent control can be different from that in people. Whenever people do better than computers on some task, or computers need a lot of computation to do as well as people, it demonstrates that the program designers lack understanding of the intellectual mechanisms required to do the task efficiently. Or perhaps the task can be done better in a different way.

While control engineers have been migrating from traditional electromechanical and analog electronic control technologies to digital mechatronic control systems incorporating computerized analysis and decision-making algorithms, novel computer technologies have appeared on the horizon that may change things even more (Masi, 2007). Outer loops have become more complicated and less predictable as microcontrollers and computers have developed over the decades, and they have begun to be regarded as AI systems since John McCarthy coined the term in 1955 (Skillings, 2006). 

Artificial intelligence tools

AI is the intelligence exhibited by machines or software. AI in control engineering is often not about simulating human intelligence. We can learn something about how to make machines solve problems by observing other people, but most work in intelligent control involves studying real problems in the world rather than studying people or animals.

AI can be technical and specialized, and it is often deeply divided into subfields that don't communicate with each other at all (McCorduck, 2004), as different subfields focus on solutions to specific problems. But general intelligence is still among the long-term goals (Kurzweil, 2005), and the central problems (or goals) of AI include reasoning, knowledge, planning, learning, communication, and perception. Currently popular approaches in control engineering to achieve them include statistical methods, computational intelligence, and traditional symbolic AI. The whole field is interdisciplinary and includes control engineers, computer scientists, mathematicians, psychologists, linguists, philosophers, and neuroscientists.

There are a large number of tools used in AI, including search and mathematical optimization, logic, methods based on probability, and many others. In a 2013 Control Engineering article, I reviewed seven AI tools that have proved useful with control and sensor systems (Sanders, 2013): knowledge-based systems, fuzzy logic, automatic knowledge acquisition, neural networks, genetic algorithms, case-based reasoning, and ambient intelligence. Applications of these tools have become more widespread because of the power and affordability of present-day computers, and greater use may be made of hybrid tools that combine the strengths of two or more of them. Control engineering tools and methods tend to have less computational complexity than some other AI applications, and they can often be implemented with low-capability microcontrollers. The appropriate deployment of new AI tools will contribute to the creation of more capable control systems and applications. Other technological developments in AI that will affect control engineering include data mining techniques, multi-agent systems, and distributed self-organizing systems.
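
To give a flavor of one of those tools, here is a minimal fuzzy-logic sketch in the spirit of the controller architecture pictured at the top of this article: two membership functions for obstacle distance, two rules, and a weighted-average (centroid-style) defuzzification into a wheelchair speed. The membership shapes and rule outputs are illustrative assumptions.

```python
# Fuzzy-logic controller sketch: fuzzify the distance reading, fire two rules,
# and defuzzify into a crisp motor speed.
def near(d_cm):   # full membership at 0 cm, fading out by 60 cm
    return max(0.0, min(1.0, (60.0 - d_cm) / 60.0))

def far(d_cm):    # zero below 30 cm, full membership beyond 100 cm
    return max(0.0, min(1.0, (d_cm - 30.0) / 70.0))

def fuzzy_speed(d_cm):
    # Rules: IF near THEN slow (0.1 m/s); IF far THEN fast (1.0 m/s).
    w_near, w_far = near(d_cm), far(d_cm)
    total = w_near + w_far
    return (w_near * 0.1 + w_far * 1.0) / total if total else 0.0

# Creeps near obstacles, blends smoothly, speeds up when the path is clear:
print([round(fuzzy_speed(d), 2) for d in (20.0, 50.0, 90.0)])
```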

AI assumes that at least some aspects of human intelligence can be so precisely described that a machine can be made to simulate them. That raises philosophical issues about the nature of the mind and the ethics of creating AI endowed with some human-like intelligence, issues which have been addressed by myth, fiction, and philosophy since antiquity (McCorduck, 2004). But how do we do it? Well, mechanical or formal reasoning has been developed by philosophers and mathematicians since antiquity, and the study of logic led directly to the invention of the programmable digital electronic computer. Turing's theory of computation suggested that a machine, by shuffling symbols as simple as "0" and "1", could simulate any conceivable act of mathematical deduction (Berlinski, 2000).

This, along with discoveries in neurology, information theory, and cybernetics, inspired researchers to consider the possibility of building an electronic brain (McCorduck, 2004).

Computers can now win at checkers and chess, solve some word problems in algebra, prove logical theorems, and speak, but the difficulty of some of the problems that have been faced has surprised everyone in the community. In 1974, in response to criticism from Lighthill (1973) and ongoing pressure from the U.S. Congress, both the U.S. and British governments cut off all undirected exploratory research in AI, leaving only specific research in areas like control engineering.

Analog vs. digital; unbounded vs. defined

The brain (of a human or of a fruit fly) is an analog computer (Dyson, 2014). It is not a digital computer, and intelligence may not be any sort of algorithm. Is there any evidence of a programmable digital computer evolving the ability to take initiative or make choices that are not on a list of options programmed in by a human? Is there any reason to think that a digital computer is a good model for what goes on in the brain? We are not digital machines. Turing machines are discrete-state, discrete-time machines, while we are continuous-state, continuous-time organisms.

We have made advances with continuous models of neural systems as nonlinear dynamical systems, but in all these cases the present state of the system tends to determine the next state, so that the next state is entailed by the laws programmed into the computer. There is nothing for consciousness to do in a digital control system, as the current state of the system suffices entirely to determine the next state.
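
A tiny illustration of that determinism, stepping a simple nonlinear dynamical system forward in discrete time (the logistic map, used here purely as an illustrative stand-in): nothing beyond the current state and the programmed rule decides the next state.

```python
# Deterministic state update: the programmed law alone entails each next state.
x = 0.2                      # current state
for _ in range(10):
    x = 3.7 * x * (1.0 - x)  # next state follows entirely from this rule
    print(round(x, 4))
```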

In the coming decades, humanity may create a powerful AI, but back in 1999 I suggested that machine intelligence was just around the corner (Sanders, 1999). It has all taken longer than I thought it would, and there has been frustration along the way (Sanders, 2008), but what is the story so far?

Intelligent machines, so far

It was probably the idea of making a "child machine" that could improve itself by reading and learning from experience that began the study of machine intelligence. That was first proposed in the 1940s, and after World War II a number of people independently started to work on intelligent machines. Alan Turing was one of the first, and after his 1947 lecture he predicted that there would be intelligent computers by the end of the century. Zadeh (1950) published a paper entitled "Thinking Machines-A New Field in Electrical Engineering," and Turing (1950) discussed the conditions for considering a machine to be intelligent that same year. He made his now-famous argument that if a machine could successfully pretend to be human to a knowledgeable observer, then it should be considered intelligent.

Later that decade, in 1956, a group of computer scientists gathered at Dartmouth College in New Hampshire to consider a brand-new topic: artificial intelligence. It was John McCarthy (later a professor at Stanford) who coined the name "artificial intelligence" just ahead of that meeting. The meeting served as a springboard for further discussion about ways that machines could simulate aspects of human cognition. An underlying assumption in those early discussions was that learning (and other aspects of human intelligence) could be precisely described. McCarthy defined AI there as "the science and engineering of making intelligent machines" (McCarthy, 2007; Russell & Norvig, 2003). The attendees at Dartmouth, including John McCarthy, Marvin Minsky, Allen Newell, and Herbert Simon, became the leaders of AI research for decades.

By the late 1950s, there were many researchers in the area, and most of them were basing their work on programming computers. Minsky (head of the MIT AI Laboratory) predicted in 1967 that "within a generation the problem of creating 'artificial intelligence' will be substantially solved" (Dreyfus, 2008). Then the field ran into unexpected difficulties around 1970 with the failure of any machine to understand even the most basic children's story. Machine intelligence programs lacked the intuitive common sense of a four-year-old, and Dreyfus still believes that no one knows what to do about it.

Now (nearly 60 years after that first conference), we still have not managed to create a "child machine." Programs still can't learn much of what a child learns naturally from physical experience.

But we do appear to be at a point in history when our human biology appears too frail, slow, and over-complicated for many industrial situations (Sanders, 2008). We are turning to powerful new control technologies to overcome those weaknesses, and the longer we use that technology, the more we are getting out of it. Our machines are exceeding human performance in more and more tasks. As they merge with us more intimately, and we combine our brain power with computer capacity to deliberate, analyze, deduce, communicate, and invent, many scientists predict a period when the pace of technological change will be so fast and far-reaching that our lives will be irreversibly altered.

A fundamental problem, though, is that nobody appears to know what intelligence is. Varying kinds and degrees of intelligence occur in people, many animals, and now some machines. Part of the difficulty is that we cannot agree on which kinds of computation we want to call intelligent. Some people appear to think that human-level intelligence can be achieved by writing large numbers of programs of the kind people are writing now, or by assembling vast knowledge bases of facts in the languages now used for expressing knowledge. However, most AI researchers now appear to believe that new fundamental ideas are required, and therefore it cannot be predicted when human-level intelligence will be achieved (McCarthy, 2008).

12 key AI applications

Machine intelligence combines a wide variety of advanced technologies to give machines the ability to learn, adapt, make decisions, and display new behaviors. This is achieved using technologies such as neural networks (Sanders et al., 1996), expert systems (Hudson, 1997), self-organizing maps (Burn, 2008), fuzzy logic (Dingle, 2011), and genetic algorithms (Manikas, 2007), and we have applied that machine intelligence technology to many areas (a minimal genetic-algorithm sketch follows the list below). Twelve AI applications include:

  1. Assembly (Gupta et al., 2001; Schraft and Ledermann, 2003; Guru et al., 2004)
  2. Building modeling (Gegov, 2004; Wong, 2008)
  3. Computer vision (Bertozzi, 2008; Bouganis, 2007)
  4. Environmental engineering (Sanders, 2000; Patra, 2008)
  5. Human-computer interaction (Sanders, 2005; Zhao, 2008)
  6. Internet use (Bergasa-Suso, 2005; Kress, 2008)
  7. Medical systems (Pransky, 2001; Cardoso, 2007)
  8. Robotic manipulation (Tegin, 2005; Sreekumar, 2007)
  9. Robotic programming (Tewkesbury, 1999; Kim, 2008)
  10. Sensing (Sanders, 2007; Trivedi, 2007)
  11. Walking robots (Capi et al., 2001; Urwin-Wright, 2003)
  12. Wheelchair assistance (Stott, 2000; Pei, 2007).
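
As promised above, here is a stripped-down genetic-algorithm sketch (selection and Gaussian mutation only, no crossover), shown tuning a single controller gain against a toy cost function. The population size, mutation spread, and the "ideal" gain are illustrative assumptions.

```python
# Genetic-algorithm sketch: evolve a population of candidate gains toward
# lower cost by keeping the fittest and mutating copies of them.
import random

def cost(kp):
    return (kp - 0.73) ** 2          # toy objective with an assumed ideal gain

population = [random.uniform(0.0, 2.0) for _ in range(20)]
for generation in range(50):
    population.sort(key=cost)                     # rank by fitness
    survivors = population[:10]                   # selection
    children = [random.choice(survivors) + random.gauss(0.0, 0.05)
                for _ in range(10)]               # mutation
    population = survivors + children

print(round(min(population, key=cost), 2))        # lands near the ideal gain
```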

4 ways AI advances computer ability

There appear to be some technologies that could significantly increase the practical ability of computers in these areas (Brackenbury, 2002; Sanders, 2008). Here are four ways AI helps computing; a minimal path-planning sketch follows the list.

  1. Natural language understanding to improve communication.
  2. Machine reasoning to provide inference, theorem-proving, cooperation, and relevant solutions.
  3. Knowledge representation for perception, path planning, modeling, and problem solving.
  4. Knowledge acquisition using sensors to learn automatically for navigation and problem solving.
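
As promised, here is a minimal sketch of the path planning mentioned in item 3: breadth-first search over a small occupancy grid of the sort a wheelchair navigator might hold. The grid, start, and goal are illustrative assumptions.

```python
# Path-planning sketch: breadth-first search over an occupancy grid.
from collections import deque

def plan(grid, start, goal):
    """Return a shortest path of (row, col) cells, or None. 1 = obstacle."""
    queue, came_from = deque([start]), {start: None}
    while queue:
        cell = queue.popleft()
        if cell == goal:                    # reconstruct path back to start
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < len(grid) and 0 <= nc < len(grid[0])
                    and grid[nr][nc] == 0 and (nr, nc) not in came_from):
                came_from[(nr, nc)] = cell
                queue.append((nr, nc))
    return None

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(plan(grid, (0, 0), (2, 0)))  # routes around the obstacle row
```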

Where are we going with machine intelligence? At one end of the spectrum of research there are handy robotic devices, such as iRobot's Roomba vacuum cleaners, and more personal robots, such as the conversational character robots and the Zeno robot boy from Hanson Robotics, and Pleo from Ugobe. These new "toy" robots could be the beginning of a new generation of ever-present, cheap robots with new capabilities. At the other end of the spectrum, direct brain-computer interfaces and biological augmentation of the brain are being considered in research laboratories (along with ultra-high-resolution scans of the brain followed by computer emulation).

Some of these investigations are suggesting the possibility of smarter-than-human intelligence within some specific application areas. However, smarter minds are much harder to describe and discuss than faster brains or bigger brains, and what does "smarter-than-human" actually mean? We may not be smart enough to know (at least not yet). 


