Making AI ready for safety-critical applications

Artificial intelligence (AI) should make production more flexible while also automating logistics and quality control processes, but safety concerns remain.

By Fraunhofer IPA June 1, 2021

The expectations could hardly be higher: Artificial intelligence (AI) should make production more flexible, plan maintenance predictively and optimize the flow of goods, while also automating logistics and quality control processes. “In fact, numerous promising AI algorithms and architectures have been developed over recent years – including at Fraunhofer IPA – such as computer vision, human-machine interfaces and networked robotics,” said Xinyang Wu from the Center for Cyber Cognitive Intelligence at Fraunhofer IPA.

Practical application is the only thing missing. “There is a chasm between research and application. Industry has proven to be quite sluggish in implementing new AI applications. They are regarded as not reliable enough for safety-critical applications.”

Wu is aware of user reservations: “When we speak with our industrial partners, it quickly becomes clear that companies only really want to use autonomous and self-learning robots, for example, if they function absolutely reliably and if it can be said beyond any doubt that the machines pose zero risk to humans.”

It is precisely this that has been impossible to validate so far: there are no applicable standards or standardized tests, and Wu said these are urgently needed.

“The target has to be to make decisions taken by algorithms certifiable and transparent. For example, traceability must be guaranteed: When a machine independently makes decisions, then I have to be able – in retrospect at least – to work out why it made an error in a certain situation. Only in this way can we make sure that such a mistake is not repeated. Black-box models, which do not allow humans to trace algorithm-based decision paths, are, from our perspective, not directly suited to safety-critical applications – unless the model has been certified using a suitable method.”
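
To make the traceability requirement concrete: one common way to reconstruct, after the fact, why a black-box classifier made a particular wrong decision is to fit an interpretable surrogate model to its predictions and read off the rules it applies to the sample in question. The sketch below is an illustrative assumption, not the method described in the white paper; the dataset, the random-forest “black box” and the tree depth are arbitrary placeholders.

```python
# Minimal sketch: retrospectively explaining one wrong decision of a black-box
# classifier via an interpretable surrogate tree. All model and data choices
# here are illustrative assumptions, not Fraunhofer IPA's certified approach.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(data.data, data.target, random_state=0)

# The "black box" whose decisions we want to trace after the fact.
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# A shallow surrogate tree fitted to the black box's *predictions* gives a
# human-readable approximation of its decision logic.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X_train, black_box.predict(X_train))

# Pick a test sample the black box got wrong and print the surrogate's rules,
# a rough, after-the-fact account of how such an error can come about.
pred = black_box.predict(X_test)
wrong = np.where(pred != y_test)[0]
if wrong.size:
    i = wrong[0]
    print(f"Sample {i}: predicted {pred[i]}, true label {y_test[i]}")
    print(export_text(surrogate, feature_names=list(data.feature_names)))
```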

But how can humans ensure the safety of artificial intelligence? The Fraunhofer IPA team for Cyber Cognitive Intelligence has now proposed a strategy aimed at resolving this issue and reported on the current state of the relevant technology in its white paper “Dependable AI – Using AI in safety-critical industrial applications.” The strategy is based on certification and transparency.

Criteria catalogue to improve safety

“Generally speaking, the focus is first and foremost on finding rules that help us evaluate the reliability of machine learning and the AI processes associated with it,” according to Wu. This research resulted in five criteria that AI systems should meet in order to be regarded as safe:

  • All algorithm-based decisions must be explainable to humans.
  • The functionality of the algorithms must be reviewed using formal verification methods prior to deployment.
  • Statistical validation is also required, particularly where formal verification does not scale to the application scenario. This can be done through test runs with large volumes of data or parts.
  • The uncertainties underlying the decisions of neural networks must also be determined and quantified.
  • During operation, the systems must be monitored, for example with online monitoring processes. The important thing here is recording inputs and outputs – i.e., the sensor data and the decisions derived from evaluating it. (A minimal sketch covering these last two criteria follows this list.)
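
By way of illustration, the sketch below combines the last two criteria: a small ensemble of differently seeded models, whose disagreement serves as a simple uncertainty estimate, plus logging of every input/output pair during operation. It is an assumed, minimal example of how such a check might look, not IPA's tooling; the dataset, models and threshold are placeholders.

```python
# Minimal sketch, not Fraunhofer IPA's tooling: ensemble-based uncertainty
# quantification plus online monitoring via an input/output log.
import logging
import numpy as np
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

logging.basicConfig(filename="decisions.log", level=logging.INFO)

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Five differently seeded models; their disagreement yields a simple
# (illustrative) uncertainty estimate.
ensemble = [
    RandomForestClassifier(n_estimators=50, random_state=seed).fit(X_train, y_train)
    for seed in range(5)
]

UNCERTAINTY_THRESHOLD = 0.2  # assumed value; would have to be validated per application


def predict_with_monitoring(x):
    """Predict one sample, quantify uncertainty and log the input/output pair."""
    probs = np.mean([m.predict_proba(x.reshape(1, -1)) for m in ensemble], axis=0)[0]
    label = int(np.argmax(probs))
    # Normalized entropy of the averaged class probabilities (0 = certain, 1 = maximal doubt).
    uncertainty = float(-np.sum(probs * np.log(probs + 1e-12)) / np.log(len(probs)))
    logging.info("input=%s prediction=%d uncertainty=%.3f", x.tolist(), label, uncertainty)
    if uncertainty > UNCERTAINTY_THRESHOLD:
        logging.warning("Low-confidence decision flagged for human review")
    return label, uncertainty


print(predict_with_monitoring(X_test[0]))
```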

Wu pointed out that the five criteria could form the basis for standardized checks in the future: “At IPA, we have already compiled various algorithms and methods for each of these points, which allow us to empirically review the reliability of AI systems. We have already carried out checks of this kind for some of our customers.”
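
One simple way such an empirical review can be framed for the statistical-validation criterion is as a confidence bound: run the model on a large held-out test set and compute an upper bound on its true error rate. The sketch below uses a Clopper-Pearson bound as a generic, assumed example; the dataset, model and target error rate are not taken from the white paper.

```python
# Minimal sketch of statistical validation: an upper confidence bound on the
# error rate from a held-out test run. All concrete choices are assumptions.
from scipy.stats import beta
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
errors = int((model.predict(X_test) != y_test).sum())
n = len(y_test)

# Clopper-Pearson bound: with 95% confidence, the true error rate lies below `upper`.
upper = beta.ppf(0.95, errors + 1, n - errors)

TARGET_ERROR_RATE = 0.05  # assumed acceptable error rate for the application
print(f"{errors}/{n} errors observed; 95% upper bound on error rate: {upper:.4f}")
print("passes" if upper < TARGET_ERROR_RATE else "needs more evidence or a better model")
```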

Transparency creates trust

The second basic prerequisite for the safe use of AI systems is that they are transparent. In line with the ethical guidelines of the European Commission's High-Level Expert Group on Artificial Intelligence (HLEG AI), this is one of the key elements for realizing trustworthy AI. In contrast to the criteria used to check reliability at the algorithmic level, transparency relates exclusively to human interaction at the system level. Based on the HLEG AI guidelines, there are three points that transparent AI must fulfill:

  1. The decisions made by the algorithms must be traceable.
  2. It must be possible to explain these decisions in terms a human can fully understand.
  3. AI systems must communicate to the people using them what the algorithms are capable of and which tasks lie beyond their capabilities (a brief sketch of how these points could surface in practice follows this list).
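
Purely as an illustration (an assumption, not part of the HLEG AI guidelines or the white paper), these three points could surface in a system's interface by returning and storing every decision together with a plain-language explanation and an explicit statement of the system's scope:

```python
# Minimal sketch: each decision is kept traceable, explained in plain language
# and accompanied by a statement of what the system can and cannot do.
from dataclasses import dataclass
from datetime import datetime, timezone

# Assumed, illustrative scope statement - in a real system this would be
# written and validated by the people responsible for the application.
CAPABILITY_NOTE = ("Classifies surface defects on aluminum parts; it cannot assess "
                   "other materials or defect types it was not trained on.")


@dataclass
class Decision:
    timestamp: str
    inputs: dict
    output: str
    explanation: str       # point 2: a reason a human can understand
    capability_note: str   # point 3: what the system can and cannot do


audit_trail: list[Decision] = []  # point 1: every decision remains traceable


def decide(inputs: dict) -> Decision:
    # Placeholder rule standing in for a real model; the explanation mirrors
    # the actual decision logic so it can be followed after the fact.
    depth = inputs.get("scratch_depth_mm", 0.0)
    reject = depth > 0.1
    decision = Decision(
        timestamp=datetime.now(timezone.utc).isoformat(),
        inputs=inputs,
        output="reject" if reject else "accept",
        explanation=f"scratch depth {depth} mm {'exceeds' if reject else 'is within'} the 0.1 mm tolerance",
        capability_note=CAPABILITY_NOTE,
    )
    audit_trail.append(decision)
    return decision


print(decide({"scratch_depth_mm": 0.25}))
```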

“Users will only trust AI – no matter whether it is used in road traffic or in manufacturing plants – once it is possible to test the reliability of self-learning, autonomous AI systems with standardized processes that also take ethical considerations into account,” Wu said. “When this trust is in place, the chasm between research and application will narrow.”

– Edited from a Fraunhofer press release by CFE Media.