Autonomous systems need a degree of trust to work

The Defense Advanced Research Projects Agency (DARPA) unveiled a research program called Assured Autonomy that aims to advance the ways computing systems can learn and evolve, and that emphasizes the need for trust in autonomous systems as they become more common.

By Gregory Hale, ISSSource September 24, 2017

Building on breakthroughs in autonomous cyber systems and formal methods, the Defense Advanced Research Projects Agency (DARPA) unveiled a research program called Assured Autonomy that aims to advance the ways computing systems can learn and evolve. Goals of the program are to better manage variations in the environment and enhance the predictability of autonomous systems like driverless vehicles and unmanned aerial vehicles (UAVs).

"Tremendous advances have been made in the last decade in constructing autonomy systems, as evidenced by the proliferation of a variety of unmanned vehicles," said Sandeep Neema, program manager at DARPA. "These advances have been driven by innovations in several areas, including sensing and actuation, computing, control theory, design methods, and modeling and simulation. In spite of these advances, deployment and broader adoption of such systems in safety-critical DoD applications remains challenging and controversial."

The Defense Science Board Report on Autonomy, released in 2016, emphasizes the need for a strong degree of trust in autonomous systems. Assuring systems operate safely and perform as expected is integral to trust, especially in a military context, the report said. But systems must also be designed so operators can determine whether a system, once deployed, is operating reliably and, if not, can take appropriate action.

Assured Autonomy aims to establish trustworthiness at the design stage and incorporate sufficient capabilities so inevitable variations in operational trustworthiness can be measured and addressed appropriately.

"Historically, assurance has been approached through design processes following rigorous safety standards in development, and demonstrated compliance through system testing," Neema said. "However, these standards have been developed primarily for human-in-the-loop systems, and don’t extend to learning-enabled systems with advanced levels of autonomy. The assurance approaches today are predicated on the assumption that the systems, once deployed, do not learn and evolve."

One approach to assurance of autonomous systems that recently has garnered attention, particularly in the context of self-driving vehicles, is based on the idea of "equivalent levels of safety," i.e., the autonomous system must be at least as safe as a comparable human-in-the-loop system that it replaces.

The approach compares known rates of safety incidents for manned systems, such as the number of accidents per thousand miles driven, and conducts physical trials to determine the corresponding incident rate for autonomous systems. Studies and analyses indicate, however, that assuring the safety of autonomous systems in this manner alone is prohibitively costly, requiring millions of physical trials and perhaps spanning decades.
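To illustrate why physical trials alone scale so poorly, the sketch below runs a back-of-envelope calculation (not from the article; the baseline fatality rate and confidence level are assumed purely for illustration): with zero observed failures, a simple Poisson bound gives the number of failure-free test miles needed to show an autonomous vehicle is at least as safe as a human-driver baseline.

```python
# Back-of-envelope sketch (illustrative assumptions, not from the article):
# with zero observed failures over m miles, a Poisson model bounds the
# failure rate below r at confidence c once m >= -ln(1 - c) / r.
import math

def miles_required(baseline_rate_per_mile: float, confidence: float = 0.95) -> float:
    """Failure-free miles needed to bound the failure rate below the
    human-driver baseline at the given confidence level."""
    return -math.log(1.0 - confidence) / baseline_rate_per_mile

# Assumed baseline: roughly one fatal crash per 100 million vehicle miles
# (an approximate figure, used here only for illustration).
baseline = 1.0 / 100_000_000

print(f"{miles_required(baseline):,.0f} failure-free miles needed")
# Prints roughly 300,000,000 -- hundreds of millions of miles, which at
# realistic fleet sizes translates into years or decades of test driving.
```

Under these assumed numbers, the required mileage runs into the hundreds of millions, which is the same order-of-magnitude reasoning behind the observation that physical trials alone could span decades.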

Simulation techniques have been developed to reduce the number of physical trials needed, but they offer little confidence, particularly with respect to low-probability, high-consequence events.
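As a rough illustration of that limitation (a hypothetical sketch, with the failure probability and run counts chosen only for illustration), naive Monte Carlo simulation of a very rare failure typically never observes the event at practical run counts, so the resulting estimate says almost nothing about it.

```python
# Hypothetical sketch: naive Monte Carlo rarely observes a very-low-probability
# event, so its estimate of that probability carries little confidence.
import random

def estimate_event_probability(p_true: float, n_runs: int, seed: int = 1) -> float:
    """Estimate the probability of an event that truly occurs with p_true
    by counting how often it appears over n_runs simulated trials."""
    rng = random.Random(seed)
    hits = sum(rng.random() < p_true for _ in range(n_runs))
    return hits / n_runs

p_true = 1e-6  # assumed per-run probability of a high-consequence failure
for n_runs in (10_000, 100_000, 1_000_000):
    print(n_runs, estimate_event_probability(p_true, n_runs))
# At 10,000 or 100,000 runs the estimate is almost always 0.0: the simulation
# simply never sees the failure, so it provides little evidence either way.
```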

In contrast to prescriptive, process-oriented standards for safety and assurance, a goal-oriented approach, such as the one espoused by Neema, arguably is more suitable for systems that learn, evolve, and encounter operational variations.

In the course of the Assured Autonomy program, researchers will aim to develop tools that provide foundational evidence that a system can satisfy explicitly stated functional and safety goals, resulting in a measure of assurance that can also evolve with the system.

Gregory Hale is the editor and founder of Industrial Safety and Security Source (ISSSource.com), a news and information Website covering safety and security issues in the manufacturing automation sector. This content originally appeared on ISSSource.com. ISSSource is a CFE Media content partner. Edited by Chris Vavra, production editor, CFE Media, cvavra@cfemedia.com.

Key concepts 

  • The Defense Advanced Research Projects Agency (DARPA) is trying to advance the ways computing systems can learn and evolve.
  • Assuring systems operate safely and perform as expected is integral to trust.
  • A goal-oriented approach to safety and assurance, rather than prescriptive, process-oriented standards, is better suited to systems that learn, evolve, and encounter operational variations.

Consider this

What else needs to be done to ensure a certain level of trust?

Original content can be found at www.isssource.com.