Learning Transversal Project in the AI Axis at LaBRI – Université de Bordeaux
Head: Jenny Benois-Pineau (IS)
Members: Pascal Desbarats (IS), Marie Beurton-Aimar (IS), Jean-Pierre Salmon (IS), Henri Nicolas (IS), Akka Zemmari (CombAlgo), Nathanaël Fijalkow (MF), Romain Bourqui (BKB), Romain Giot (BKB), Meghyn Bienvenu (MF, associate member), Laurent Simon (MF)
The recent focus of the AI and Pattern Recognition communities on supervised learning approaches, and particularly on Deep Learning / AI, has resulted in a considerable increase in the performance of AI systems, but has also raised the question of the trustworthiness and explainability of their predictions for decision-making.
Instead of developing and using deep neural networks as black boxes and adapting known architectures to a variety of problems, the goal of explainable Deep Learning / AI is to propose methods to “understand” and “explain” how these systems produce their decisions. AI systems may produce errors, can exhibit overt or subtle bias, may be sensitive to noise in the data, and often lack technical and legal transparency and explainability. These shortcomings raise many ethical and policy concerns that impede the wider adoption of this potentially very beneficial technology. In AI application domains such as health, ecology, autonomous driving, security, and culture, it is mandatory to understand how predictions are correlated with the way experts perceive information and make decisions.
This research subject is quite recent, but several axes can already be distinguished:
- “Sensing” or “salient features” of neural networks and AI systems: explaining which features, for a given configuration, yield the predictions, both in spatial (images) and temporal (time-series, video) data. This topic is also very actively researched by the information visualization community (a gradient-saliency sketch follows this list);
- Attention mechanisms in deep neural networks and their explanation (an attention-map sketch follows this list);
- For temporal data, explaining which features, and at what times, are the most prominent for the prediction, and identifying the time intervals during which each data source contributes most;
- How explanations can help make deep learning architectures sparser (pruning) and more lightweight (a pruning sketch follows this list);
- When using multimodal data, how the predictions from the different data streams are correlated and explain each other;
- Automatic generation of explanations / justifications of algorithms’ and systems’ decisions;
- Decision uncertainty and explainability;
- Evaluation of the explanations generated by Deep Learning and other AI systems.
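To make the first axis concrete, here is a minimal sketch of a gradient-based saliency map (“vanilla gradients”, in the style of Simonyan et al., 2014) in PyTorch. The toy model, input size, and random input are illustrative assumptions, not the project's actual models or data.

```python
import torch
import torch.nn as nn

# Toy classifier standing in for any trained image model (hypothetical).
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(8, 10),
)
model.eval()

# Stand-in for a real image: one 3x64x64 tensor that requires gradients.
x = torch.rand(1, 3, 64, 64, requires_grad=True)

logits = model(x)
logits[0, logits[0].argmax()].backward()  # d(top-class score) / d(input)

# Per-pixel sensitivity of the predicted class score, color channels
# collapsed with a max: large values mark the "salient" pixels.
saliency = x.grad.abs().amax(dim=1)  # shape (1, 64, 64)
```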
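For the attention axis, one common starting point is simply reading out the weights of a self-attention layer; a minimal PyTorch sketch with arbitrary placeholder dimensions:

```python
import torch
import torch.nn as nn

# One self-attention layer over a sequence of 10 token embeddings
# (embedding size and head count are placeholders).
attn = nn.MultiheadAttention(embed_dim=32, num_heads=4, batch_first=True)
tokens = torch.rand(1, 10, 32)

# need_weights=True also returns the attention map (averaged over heads
# by default): weights[b, i, j] is how much position i attends to j.
out, weights = attn(tokens, tokens, tokens, need_weights=True)
print(weights.shape)  # torch.Size([1, 10, 10])
```

Whether such raw attention maps constitute faithful explanations is itself one of the open questions of this axis.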
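For the sparsification axis, the sketch below uses plain magnitude pruning via torch.nn.utils.prune as the simplest importance criterion; an explanation-driven variant would rank weights by an attribution score instead. The model shape and 50% ratio are illustrative assumptions.

```python
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))

# Zero out the 50% smallest-magnitude weights in every linear layer.
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.5)
        prune.remove(module, "weight")  # bake the zeros into the tensor

zeros = sum((p == 0).sum().item() for p in model.parameters() if p.dim() > 1)
total = sum(p.numel() for p in model.parameters() if p.dim() > 1)
print(f"weight sparsity: {zeros / total:.0%}")  # ~50%
```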