Seminar on AI
Presentation
The main seminar is organized every two weeks. It aims to gather people working on different aspects of Artificial Intelligence, from theory to practice. This includes Machine and Deep Learning, Symbolic AI, Knowledge Compilation, Constraint Programming, Explainable AI, Optimization, …
Practical Information
On Thursdays, 11h-12h, at LaBRI
Contact: Akka Zemmari, Laurent Simon
Program
18/01/2024, Mariza Ferro (UFF, Brazil): Sustainable Artificial Intelligence for Extreme Weather Prediction in Urban Areas.
Abstract: Extreme weather events, including heavy rainfall, have become more frequent and severe due to climate change, causing material damage and loss of life. In Brazil, where rapid and disorderly urbanization has forced low-income populations to occupy unfavorable geological areas, landslides and floods caused by extreme rainfall have resulted in significant casualties and property damage. Nowcasting, the forecasting of extreme rainfall events a few hours ahead, is an essential component of early warning systems and of the subsequent actions taken in crisis management and risk prevention. However, accurate prediction of such events remains a challenge for forecasting models. Research has shown that artificial intelligence (AI) can improve the predictive accuracy for extreme weather events. However, if not designed with sustainability criteria, the development of AI models can lead to intensive computational resource use and energy consumption. Energy savings are crucial to reducing environmental impacts and mitigating global warming.
In this talk, Professor Mariza Ferro will present ongoing research and results on developing an energy-efficient AI model to predict extreme rainfall events in urban areas. The work involves many challenges: integrating a wide range of observational data, dynamically and quickly adapting and generalizing the pipeline to different regions of Rio de Janeiro state, and providing solutions that support the efficient execution of these AI models, improving time to solution, accuracy, and energy consumption.
07/12/2023, Meghna Ayyar (LaBRI): Incremental Learning with Move-To-Data Continual Approach.
Abstract: In real-world supervised learning, training data is often not available all at once, requiring models to adapt to incoming information. Sequential or naive training of pre-trained models on new tasks can lead to "forgetting" of prior knowledge. Incremental learning methods aim to adapt models to new data while retaining past knowledge. Focusing on the streaming scenario, where data arrives one sample at a time, our "Move-to-Data" method selectively adjusts network weights without systematic gradient descent. Compared to the state-of-the-art ExStream, our approach achieves better performance and learns significantly faster, presenting a promising solution for efficient and effective continual learning.
Joint work with Jenny Benois-Pineau and Akka Zemmari.
23/11/2023, Guillaume Lagarde (LaBRI): Scaling Neural Program Synthesis with Distribution-based Search.
Abstract: In this talk, we will discuss the problem of automatically constructing computer programs from input-output examples, especially when the target language is domain-specific and defined using a context-free grammar. I will introduce a theoretical framework called distribution-based search, discuss its challenges, and present several search strategies based on learning the weights of a probabilistic context-free grammar (PCFG) and then using this PCFG to enumerate the most promising candidate programs efficiently.
The presentation will be based on the following paper published at AAAI'2022: https://arxiv.org/abs/2110.12485
Joint work with Nathanaël Fijalkow, Théo Matricon, Kevin Ellis, Pierre Ohlmann, and Akarsh Potta.
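To give a concrete (and heavily simplified) flavour of the enumeration step mentioned in this abstract, the sketch below defines a toy DSL over integers as a hand-written PCFG, enumerates derivations up to a bounded depth, and tests candidates against input-output examples in decreasing probability order. The grammar, probabilities, and DSL are invented for illustration and are not taken from the paper; real distribution-based search enumerates far more efficiently.

```python
# Toy sketch (not the paper's implementation): enumerate programs from a small
# probabilistic context-free grammar (PCFG) up to a bounded depth, then test
# them against input-output examples in decreasing probability order.
import itertools

# Hypothetical DSL over integers: expressions built from the input x,
# the constant 1, addition, and doubling.
PCFG = {
    "E": [  # (probability, production)
        (0.4, ("x",)),
        (0.2, ("1",)),
        (0.25, ("add", "E", "E")),
        (0.15, ("double", "E")),
    ]
}

def expand(symbol, depth):
    """Yield (probability, expression tree) for all derivations of `symbol`."""
    if symbol not in PCFG:                 # terminal symbol
        yield 1.0, symbol
        return
    if depth == 0:
        return
    for prob, (head, *args) in PCFG[symbol]:
        if not args:                       # terminal production
            yield prob, head
            continue
        for combo in itertools.product(*(list(expand(a, depth - 1)) for a in args)):
            p, children = prob, []
            for child_prob, child_tree in combo:
                p *= child_prob
                children.append(child_tree)
            yield p, (head, *children)

def evaluate(tree, x):
    if tree == "x":
        return x
    if tree == "1":
        return 1
    op, *kids = tree
    vals = [evaluate(k, x) for k in kids]
    return vals[0] + vals[1] if op == "add" else 2 * vals[0]

examples = [(1, 3), (2, 5)]                # target behaviour: 2*x + 1
candidates = sorted(expand("E", depth=3), key=lambda t: -t[0])
for prob, tree in candidates:
    if all(evaluate(tree, x) == y for x, y in examples):
        print(f"found {tree} with prior probability {prob:.4f}")
        break
```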
19/10/2023 (room 178), Nathanaël Fijalkow (LaBRI): PEPR IA
Previous Years
01/06/2023 (room 76): Luc Pommé-Cassierou (LaBRI, BKB). H²O, a model- and class-agnostic method to explain image classification predictions.
Abstract: In the field of eXplainable Artificial Intelligence (XAI), saliency map approaches are very popular for explaining predictions in image classification tasks. In this presentation, a new hierarchical occlusion-based, model- and class-agnostic method is proposed to compute a saliency map that shows the importance of each pixel of an image in the prediction. A comparison with existing methods is provided, using visual examples as well as existing and new evaluation metrics, to demonstrate the effectiveness of the approach.
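The H²O method itself is not reproduced here, but the basic occlusion principle it builds on can be sketched as follows: slide a patch over the image, occlude it, and record how much the classifier's confidence for the target class drops. The `model` argument and the toy classifier are placeholders invented for the example.

```python
# Minimal sketch of plain occlusion-based saliency (not the H2O method itself):
# occlude patches of the image and record how much the model's confidence drops.
import numpy as np

def occlusion_saliency(image, model, target_class, patch=16, baseline=0.0):
    """Return a saliency map where higher values mean the patch matters more."""
    h, w = image.shape[:2]
    reference = model(image)[target_class]
    saliency = np.zeros((h, w))
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch] = baseline
            drop = reference - model(occluded)[target_class]
            saliency[y:y + patch, x:x + patch] = drop
    return saliency

# Toy usage with a fake "model" that only looks at the image centre.
def toy_model(img):
    centre = img[24:40, 24:40].mean()
    return np.array([1.0 - centre, centre])   # two-class "probabilities"

img = np.zeros((64, 64)); img[24:40, 24:40] = 1.0
print(occlusion_saliency(img, toy_model, target_class=1, patch=16).max())
```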
09/03/2023: Courtney Ford (University College Dublin, Ireland). Explaining Classifications to Non-Experts: An XAI User Study of Post-Hoc Explanations for a Classifier When People Lack Expertise
Abstract: Very few eXplainable AI (XAI) studies consider how users' understanding of explanations might change depending on whether they know more or less about the to-be-explained domain (i.e., whether they differ in their expertise). Yet, expertise is a critical facet of most high-stakes, human decision-making (e.g., understanding how a trainee doctor differs from an experienced consultant). Accordingly, this paper reports a novel user study (N=96) on how people's expertise in a domain affects their understanding of post-hoc explanations-by-example for a deep-learning, black-box classifier. The results show that people's understanding of explanations for correct and incorrect classifications changes dramatically, on several dimensions (e.g., response times, perceptions of correctness and helpfulness), when the image-based domain considered is familiar (i.e., MNIST) as opposed to unfamiliar (i.e., Kannada-MNIST). The wider implications of these new findings for XAI strategies are discussed.
23/02/2023: Xiaoqi Wang (Ohio State University, USA). GNNInterpreter: A Probabilistic Generative Model-Level Explanation for Graph Neural Networks.
Abstract: Recently, Graph Neural Networks (GNNs) have significantly advanced the performance of machine learning tasks on graphs. However, this technological breakthrough makes people wonder: how does a GNN make such decisions, and can we trust its prediction with high confidence? In this paper, we propose a model-agnostic model-level explanation method for different GNNs that follow the message passing scheme, GNNInterpreter, to explain the high-level decision-making process of the GNN model.
20/10/2022: 14h. Chaire MIA, Explainable and Responsible AI
20/10/2022: Amélie Gruel (PhD student, I3S Sophia Antipolis, Nice). Spiking neural networks for event-based vision.
Abstract: Amélie Gruel's thesis, carried out in the SPARKS team of the I3S laboratory (Laboratory of Computer Science, Signals and Systems of Sophia Antipolis), focuses on bio-inspired machine learning using spiking neural networks (SNNs), applied to event-based vision on neuromorphic hardware, in order to develop low-energy computer vision mechanisms. The thesis is carried out within the European CHIST-ERA project APROVIS3D (April 2020-September 2023). The objective of the APROVIS3D project is to design and implement machine learning methods based on spiking neural networks, in order to extract visual features and infer useful information from the visual scene using event-driven stereo cameras. Event-driven cameras (or silicon retinas) represent a new type of sensor that measures changes in brightness at each pixel and produces asynchronous events accordingly. This technology makes it possible to record data evolving in time and space at a lower storage cost and lower energy consumption. Indeed, each event is recorded individually and asynchronously, without redundancy, unlike traditional frame-based cameras, where each pixel produces values in all frames, synchronously. SNNs are artificial neural networks that mimic the dynamics of biological neural circuits by receiving and processing information in real time in the form of trains of "spikes". They are particularly well suited to processing the atypical type of data produced by event-driven cameras, since each event can be treated as a spike between two neurons.
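As a purely illustrative aside (not taken from the thesis or the APROVIS3D project), the sketch below simulates a single leaky integrate-and-fire neuron driven by an asynchronous list of event times, which is the basic way a spiking neuron consumes event-camera-style input. All parameter values are arbitrary.

```python
# Toy leaky integrate-and-fire (LIF) neuron driven by an asynchronous event
# stream; illustrative only, with made-up parameters.
def lif_neuron(event_times, weight=0.6, tau=20.0, threshold=1.0, t_end=100.0, dt=0.1):
    """Return the times at which the neuron emits an output spike."""
    events = {int(round(t / dt)) for t in event_times}   # event times on the grid
    v, spikes = 0.0, []
    for step in range(int(t_end / dt)):
        v += (-v / tau) * dt        # membrane leak
        if step in events:          # an incoming event injects charge
            v += weight
        if v >= threshold:          # fire and reset
            spikes.append(step * dt)
            v = 0.0
    return spikes

print(lif_neuron([5, 6, 7, 40, 41, 80]))   # only bursts of events make it fire
```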
09/12/2021: Luis Gustavo Nonato, Universidade de São Paulo, Brazil. Analyzing and Comparing Feature Importance based Explainability Methods
Abstract: In recent years, many explanation methods have been proposed to reveal how black-box models reach specific decisions. Most of those methods rely on local feature importance attributions, making it difficult to generalize the understanding to the entire dataset. Moreover, different explanation methods generate outputs having different value ranges and dimensions. It thus becomes hard to compare the behavior and quality of those methods. In this talk, we will show how visual analytics and topological data analysis can be used to mitigate such difficulties, enabling a global overview of models and underlying data, while making it possible to compare and assess different explanation methods. We will also show two applications where explainability methods are employed to support the analysis of crime patterns and supreme court legal documents.
28/10/2021: Luca Bouroux (LaBRI). Multi Layered Features Explanation Method.
Abstract: In this talk we present an extension of the CNN explanation method "Feature Explanation Method". The method explains the contribution of regions of the image to be classified to the decision of the trained network. The contribution consists in analyzing the feature maps of the different layers of the network and fusing the explanations of each layer in a single back-propagation process.
An evaluation methodology is also proposed: the pixel importance maps obtained are compared with the visual attention maps recorded in psychovisual experiments, both "guided by a recognition task" (MexCulture database) and in "free viewing" (Salicon database). The method is compared with other state-of-the-art methods for explaining the decisions of trained deep networks.
07/10/2021: Damien Garreau (Université Côte d'Azur, Nice, France). A theoretical analysis of LIME.
Abstract: Since its release in 2016, LIME has emerged as one of the main model-agnostic explainability methods. It is implemented in software toolboxes used by industry practitioners and is also the starting point of many extensions. But while LIME is used to explain complicated models, there are few guarantees that it makes sense even on the simplest ones. In this talk, I will present a first theoretical analysis of LIME, focusing on image data. We will see that, despite some satisfying properties, LIME has a few issues which can be problematic in practice. The main reference for this talk is Damien Garreau and Dina Mardaoui, What does LIME really see in images?, ICML 2021 (available at https://arxiv.org/abs/2102.06307).
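For readers unfamiliar with LIME, here is a heavily simplified sketch of the idea for images, under assumptions of our own: a regular grid of patches stands in for superpixels, a Ridge regression is the interpretable surrogate, and an exponential kernel on the fraction of removed patches provides the sample weights. It is not the reference implementation analysed in the paper.

```python
# Simplified LIME-style explanation for images: randomly switch patches off,
# query the black-box model, and fit a weighted linear surrogate on the masks.
import numpy as np
from sklearn.linear_model import Ridge

def lime_patch_importances(image, predict, target_class, patch=16,
                           n_samples=500, kernel_width=0.25, rng=None):
    rng = np.random.default_rng(rng)
    h, w = image.shape[:2]
    rows, cols = h // patch, w // patch
    d = rows * cols
    masks = rng.integers(0, 2, size=(n_samples, d))        # 1 = patch kept
    preds, weights = [], []
    for m in masks:
        perturbed = image.copy()
        for k in np.flatnonzero(m == 0):                    # grey out dropped patches
            r, c = divmod(int(k), cols)
            perturbed[r * patch:(r + 1) * patch, c * patch:(c + 1) * patch] = image.mean()
        preds.append(predict(perturbed)[target_class])
        distance = 1.0 - m.mean()                           # fraction of patches removed
        weights.append(np.exp(-(distance ** 2) / kernel_width ** 2))
    surrogate = Ridge(alpha=1.0)
    surrogate.fit(masks, preds, sample_weight=weights)
    return surrogate.coef_.reshape(rows, cols)              # per-patch importance

# Toy usage with a fake classifier that only cares about the top-left patch.
def toy_predict(img):
    p = img[:16, :16].mean()
    return np.array([1.0 - p, p])

image = np.zeros((64, 64)); image[:16, :16] = 1.0
print(lime_patch_importances(image, toy_predict, target_class=1, rng=0).round(2))
```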
17/06/2021: Alex Telea (Department of Information and Computing Science at Utrecht University). Visualizing the Black Box of Machine Learning: Challenges and Opportunities.
Abstract: Machine learning (ML) has witnessed tremendous successes in the last decade in classification, regression, and prediction tasks. However, many ML models are used, and sometimes even designed, as black boxes. When such models do not operate properly, their creators often do not know how best to improve them. Moreover, even when they operate successfully, users often need to understand how and why they make certain decisions in order to trust them. We present how visualization and visual analytics help explain (and improve) ML models. These cover tasks such as understanding high-dimensional datasets; understanding unit specialization during the training of deep learning models; exploring how training samples determine the shape of classification decision boundaries; and helping users annotate samples in semi-supervised active learning scenarios.
04/03/2021: M. Oussalah, University of Oulu. Enforcing AI Explainability Using Automatic Text Summarization
Abstract: In this talk, we will review our recent work on graph-based text summarization as an innovative way to enforce explainability of the summarization task. It makes use of two well-established text semantic representation techniques: Semantic Role Labelling (SRL) and Explicit Semantic Analysis (ESA), which exploits the constantly evolving collective human knowledge in Wikipedia. The essence of the developed framework is to construct a unique concept graph representation underpinned by semantic role-based, multi-node (sub-sentence-level) vertices for summarization. The developed approach has been tested using the standard publicly available dataset from the Document Understanding Conference 2002 (DUC 2002), highlighting its performance with respect to the state of the art using the ROUGE-1 and ROUGE-2 metrics. Finally, some recommendations and a discussion of existing explainability techniques will be presented.
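The SRL/ESA concept-graph framework from the talk is not reproduced here; as a generic point of reference only, the sketch below shows the simpler TextRank-style idea of graph-based extractive summarization: rank sentences by PageRank over a word-overlap similarity graph.

```python
# Generic TextRank-style extractive summarizer (not the SRL/ESA framework from
# the talk): build a sentence similarity graph and rank sentences by PageRank.
import itertools
import re
import networkx as nx

def summarize(sentences, top_k=2):
    """Return the top_k most central sentences, in their original order."""
    tokenized = [set(re.findall(r"[a-z]+", s.lower())) for s in sentences]
    graph = nx.Graph()
    graph.add_nodes_from(range(len(sentences)))
    for i, j in itertools.combinations(range(len(sentences)), 2):
        overlap = len(tokenized[i] & tokenized[j])
        if overlap:
            graph.add_edge(i, j, weight=overlap)
    scores = nx.pagerank(graph, weight="weight")
    best = sorted(scores, key=scores.get, reverse=True)[:top_k]
    return [sentences[i] for i in sorted(best)]

print(summarize([
    "Graph based summarization ranks sentences by centrality.",
    "Centrality is computed on a sentence similarity graph.",
    "The weather was pleasant in Bordeaux today.",
]))
```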
11/02/2021: Meghna Ayyar, LaBRI. From features to importance maps: an overview of CNN explanation methods.
Abstract: Explaining the decisions of CNNs has become important in a wide variety of image classification problems. Primarily, it is necessary to understand which regions are important, i.e., on which pixels the network bases its decision in a given classification problem.
This holds for general-purpose images, but explainability has become even more important for medical image analysis and classification. It is also important to understand how the results of different explanation methods correlate.
Today, there exists a large variety of explanation methods. In this talk, we propose a taxonomy of their main families, explain their main principles, and illustrate them. We will also position the Feature Understanding Method (FEM) developed at LaBRI with respect to these families.
Joint work with Jenny Benois-Pineau and Akka Zemmari.
10/12/2020: Dragutin Petrovic, San Francisco State University (USA). Toward more explainable Random Forest Classifiers.
Abstract: In this two-part talk, we will first motivate the need for Explainable AI (XAI) in the broader context of AI ethics and outline some issues and recommendations. In the second part of the talk, we will cover our work on Explainable Random Forest Classifiers (RFEX), in which we developed both model and sample explainers for this very powerful class of classical AI methods. We will demonstrate RFEX in a case study on data from the J. Craig Venter Institute in San Diego on the classification of nerve cell clusters. Overall, our recommendation and the focus of our research is to develop user-centered XAI methods targeted at real users who are domain experts but often not AI experts.
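RFEX itself is not sketched here; as a generic starting point only, the snippet below shows how impurity-based feature importances can be extracted from a scikit-learn random forest on a toy dataset. The dataset and hyperparameters are illustrative.

```python
# Generic random forest feature ranking (not the RFEX method): train a forest
# on a toy dataset and list the most important features by impurity decrease.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

data = load_breast_cancer()
forest = RandomForestClassifier(n_estimators=200, random_state=0)
forest.fit(data.data, data.target)

ranked = sorted(zip(data.feature_names, forest.feature_importances_),
                key=lambda t: t[1], reverse=True)
for name, score in ranked[:5]:                 # top-5 most important features
    print(f"{name:30s} {score:.3f}")
```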
19/11/2020: Meghyn Bienvenu (LaBRI). Explanation in the Context of Ontologies
Abstract: Ontologies are used to formalize the vocabulary and domain knowledge of a given application area, and various kinds of automated reasoning tasks are used both to aid in constructing and maintaining ontologies and to exploit their contents. After quickly reviewing some basic notions about ontologies, I aim to provide a quick tour of the research on explanation in the context of ontologies. We will in particular discuss the problem of how to identify the parts of the ontology that are responsible for a given inference.
22/10/2020: Véronique Ventos, NukkAI. The game of bridge: a killer application for next generation AI
Abstract: The first part of the talk will be devoted to a brief presentation of NukkAI (www.nukk.ai), a private Artificial Intelligence laboratory whose goal is to develop hybrid algorithms combining several paradigms of Artificial Intelligence (numerical/symbolic). In the second part, we will present the game of bridge, one of the last mind games in which humans still outperform machines. Bridge is an ideal testbed for developing hybrid AI, as it is a probabilistic, multi-agent game with incomplete information, whose collaborative aspect makes explainability a requirement. Finally, we will present some of the completed and ongoing work around this challenge.
24/09/2020: Roger Roberts, Titan, Belgium. Artificial intelligence: a contribution to its semantic foundations
Abstract: Over the last 30 years, computing technologies have made strong progress in two areas:
– the ability to produce large volumes of data through a multitude of applications, each application forming a proprietary domain (a silo that controls and manages its own resources) offering little capacity for dialogue with other systems;
– the ability to transport these data at low cost over networks such as the Internet, as well as the interfacing between an application and its user (HMI: Human-Machine Interface).
What must now be developed are technologies that promote the explicit structuring and exchange of these data between heterogeneous computer systems (cf. interoperability). Many efforts have been made, notably by cultural institutions, to standardize descriptive metadata, with the aim of homogenizing their structure and improving their interoperability for publication. Under the influence of the Semantic Web, these metadata sets have evolved into knowledge networks in order to benefit from the complementarity of the different reference systems. The storage and structuring of data is a major challenge for advancing AI. The massive adoption of cloud services and the enormous growth in the volume of data created, stored, and analyzed require ever more powerful systems to make decisions based on these colossal volumes of data. For several conferences (ISKO in Barcelona in 2019, and EBU/UER in June), we developed the semantics and semiology of a space shared by humans and machines, which holds great promise for various applications, and notably for AI.
09/07/2020: Gaël Glorian, An introduction to Constraint Programming
Abstract: The field of constraint programming (CP) is one of the most efficient paradigms for solving many (combinatorial) problems in AI. This paradigm emerged in the 1970s and has been particularly effective in solving many families of difficult combinatorial problems. Combinatorial problems are found in many everyday situations (timetabling, routing, game solving, etc.) and in many industrial applications (circuit checking, placement of relay antennas, etc.). CP also has many applications in other research fields and in industry.
This talk is an introduction to constraint programming, its usage and applications. We will talk about the two sides of constraint programming: modelling and solving. In particular, we will see that a good model is as important as a good solving tool.
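As a minimal illustration of the modelling/solving split (not tied to any specific solver discussed in the talk), the sketch below models a toy map-colouring problem as variables, domains, and binary constraints, and solves it with plain backtracking. Real CP solvers add propagation, global constraints, and search heuristics on top of this idea.

```python
# Toy constraint programming example: model = variables, domains, constraints;
# solver = naive backtracking search.  Illustrative only.
def backtrack(domains, constraints, assignment=None):
    """Return one assignment satisfying all binary constraints, or None."""
    assignment = dict(assignment or {})
    if len(assignment) == len(domains):
        return assignment
    var = next(v for v in domains if v not in assignment)
    for value in domains[var]:
        assignment[var] = value
        ok = all(check(assignment[a], assignment[b])
                 for a, b, check in constraints
                 if a in assignment and b in assignment)
        if ok:
            result = backtrack(domains, constraints, assignment)
            if result is not None:
                return result
        del assignment[var]
    return None

# Model: colour a small map of four regions with three colours so that
# neighbouring regions get different colours.
colours = ["red", "green", "blue"]
domains = {region: list(colours) for region in ["A", "B", "C", "D"]}
different = lambda x, y: x != y
constraints = [("A", "B", different), ("A", "C", different),
               ("B", "C", different), ("C", "D", different)]
print(backtrack(domains, constraints))
```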
02/07/2020: Guillaume Lagarde, Efficiently computing hierarchical clusterings
Abstract: A classic problem in unsupervised learning and data analysis is to find simple and easy-to-visualize representations of the data that still preserve its essential properties. Hierarchical clustering is one such representation; it is widely used and preserves the underlying hierarchical structure of the data. The task of finding an interesting hierarchical clustering can be formalized as the task of finding an embedding of the data into an ultrametric space. The most popular algorithms are the so-called "agglomerative" ones (single linkage, average linkage, Ward's method, etc.). However, these methods exhibit a prohibitive quadratic running time, making them impractical for large inputs. In this talk I will present a new algorithm that, for Euclidean metrics, runs in time n^{1+1/c^2} for any constant c >= 1 and outputs a 5c-approximation of the "best" ultrametric. We complement this approach with some lower bounds and prove in particular that, in some settings, there is no 3/2-approximation running in subquadratic time. Finally, I will present an empirical evaluation of the algorithm on some classic machine learning datasets.
Joint work with Vincent Cohen-Addad and Karthik C. S.
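For context, the quadratic-time agglomerative baseline mentioned in the abstract can be run directly with SciPy; the cophenetic distances of the resulting dendrogram form the ultrametric the abstract refers to. The subquadratic algorithm from the talk is not reproduced here; the data below is random and purely illustrative.

```python
# Quadratic-time baseline: single-linkage agglomerative clustering, whose
# cophenetic distances define an ultrametric on the data points.
import numpy as np
from scipy.cluster.hierarchy import linkage, cophenet
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(0)
points = rng.normal(size=(200, 5))        # 200 random points in R^5
dists = pdist(points)                     # all O(n^2) pairwise distances
tree = linkage(dists, method="single")    # agglomerative hierarchy (dendrogram)
ultra = squareform(cophenet(tree))        # ultrametric induced by the tree

# Ultrametric (strong triangle) inequality on a sample triple:
# d(x, z) <= max(d(x, y), d(y, z)).
i, j, k = 3, 57, 120
print(ultra[i, k] <= max(ultra[i, j], ultra[j, k]) + 1e-12)
```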
18/06/2020: Romain Giot (LaBRI), Sensing of Deep Neural Networks from a Data Visualisation Perspective, and Kazi Ahmed Asif Fuad (LaBRI), Features Understanding in CNNs: Applications for Action Recognition in Video and Image Classification.
23/01/2020: Nathanaël Fijalkow, Program synthesis in the learning era. Abstract: Programming by example is a type of program synthesis where the user gives a few input-output pairs and the goal is to find a program satisfying these pairs. I will present a machine learning line of attack for programming by example and discuss the underlying challenges and the approaches we propose, focusing on two aspects: data generation and search algorithms. Based on a paper published at AISTATS 2020 (https://arxiv.org/abs/1911.02624) with Judith Clymo (University of Leeds), Haik Manukian (University of California San Diego), Adria Gascon (Google), and Brooks Paige (UCL).