SAiEX - Safe Artificial Intelligence with integrated explainable integrity level

Exploring ways to better understand decisions made by artificial intelligence

Background

The increased use of deep learning techniques in recent years has led to major breakthroughs in computer vision, pattern recognition and machine learning. The complex structure of artificial neural networks, which stems from the enormous number of autonomously learned parameters (often more than 10 million in image-processing models), makes it difficult to gain detailed insight into the underlying learning processes. A more complex model usually makes more accurate classifications and predictions, but it also complicates efforts to pin down what was learned and why a particular decision was made. However, transparency in decision-making is especially important for safety-critical applications such as autonomous driving.

Project Content and Goals

The goal of this project is to make the decision processes of object recognition methods used in autonomous driver assistance systems transparent and comprehensible. For this purpose, we investigate the suitability of methods from the field of Explainable AI (XAI), adapt them, and integrate them into state-of-the-art object recognition methods for the first time. The findings of the project contribute to the development of more trustworthy and reliable assistance systems for autonomous driving.

Methods and Results

In this project, we focus on explainability methods that provide insight into the learning processes of AI. We investigate how these methods can be integrated into complex object detectors and derive measures that not only allow a qualitative check and visualization of the detection results but also help to spot misclassifications in near real time. This facilitates the identification of functional errors and the development of measures to avoid them, which in turn leads to more reliable detection of objects in road traffic.
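
To make this concrete, the sketch below shows how a gradient-based attribution method of the kind investigated here can be attached to a neural network to visualize which image regions drove a prediction. It uses Grad-CAM on a standard torchvision classifier backbone; the project does not prescribe this particular technique or model, so all concrete choices (Grad-CAM, ResNet-18 as a stand-in for a detector backbone, the hooked layer, PyTorch) are illustrative assumptions, not the project's actual method.

import torch
import torch.nn.functional as F
from torchvision import models

# Pretrained classifier as a stand-in for a detector backbone (assumption).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

activations, gradients = {}, {}

def save_activation(module, inputs, output):
    activations["maps"] = output.detach()

def save_gradient(module, grad_input, grad_output):
    gradients["maps"] = grad_output[0].detach()

# Hook the last convolutional stage: its feature maps keep a spatial layout,
# so gradients there can be mapped back onto image regions.
model.layer4.register_forward_hook(save_activation)
model.layer4.register_full_backward_hook(save_gradient)

def grad_cam(image):
    """Heatmap (H x W, in [0, 1]) showing which regions drove the top class."""
    scores = model(image.unsqueeze(0))        # (1, num_classes)
    top = scores.argmax(dim=1).item()
    model.zero_grad()
    scores[0, top].backward()                 # gradients w.r.t. feature maps
    # Weight each feature map by its spatially averaged gradient, combine,
    # and keep only positive evidence for the predicted class.
    weights = gradients["maps"].mean(dim=(2, 3), keepdim=True)   # (1, C, 1, 1)
    cam = F.relu((weights * activations["maps"]).sum(dim=1))     # (1, h, w)
    cam = F.interpolate(cam.unsqueeze(1), size=image.shape[1:],
                        mode="bilinear", align_corners=False)[0, 0]
    return (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)

# Usage: pass a normalized (3, H, W) image tensor and overlay the returned
# heatmap on the input image to inspect what the network attended to.

Such heatmaps support the qualitative checks described above; because they require only one extra backward pass per prediction, they can also feed the near-real-time misclassification checks the project derives.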

Want to know more? Feel free to ask!

Contact
Head of the Media Computing Research Group
Institute of Creative\Media/Technologies
Department of Media and Digital Technologies
Location: A - Campus-Platz 1
M: +43/676/847 228 652
Partners
  • Eyyes GmbH
Funding
FFG Basisprogramm
Runtime
08/01/2019 – 11/30/2021
Status
finished
Involved Institutes, Groups and Centers
Institute of Creative\Media/Technologies
Research Group Media Computing