Seminar on AI
The main seminar is organized every two weeks. It aims to gather people working on different aspects of Artificial Intelligence, from theory to practice. This includes Machine and Deep Learning, Symbolic AI, Knowledge Compilation, Constraint Programming, Explainable AI, Optimization, …
- On Thursdays, 11h-12h
- Contact: Akka Zemmari, Laurent Simon
Program:
- 01/06/2023 (room 76): Luc Pommé-Cassierou (LaBRI, BKB). H²O, a model- and class-agnostic method to explain image classification predictions.
Abstract: In the field of eXplainable Artificial Intelligence (XAI), saliency map approaches are very popular for explaining predictions in image classification tasks. In this presentation, a new hierarchical occlusion-based, model- and class-agnostic method is proposed to compute a saliency map that shows the importance of each pixel of an image in the prediction. A comparison with existing methods is provided, using both visual examples and existing and new evaluation metrics, to demonstrate the effectiveness of the approach.
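For readers unfamiliar with occlusion-based saliency, here is a minimal sketch of the flat (non-hierarchical) baseline that such methods refine: slide a grey patch over the image and record how much the target-class probability drops. The `predict` function is a hypothetical stand-in for any classifier returning class probabilities; the patch size and stride are illustrative choices, not parameters of H²O.

```python
import numpy as np

def occlusion_saliency(image, predict, target_class, patch=16, stride=8):
    """Average probability drop per pixel when a grey patch hides it."""
    h, w = image.shape[:2]
    base_score = predict(image)[target_class]
    saliency = np.zeros((h, w))
    counts = np.zeros((h, w))
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch] = 0.5   # grey square
            drop = base_score - predict(occluded)[target_class]
            saliency[y:y + patch, x:x + patch] += drop
            counts[y:y + patch, x:x + patch] += 1
    return saliency / np.maximum(counts, 1)           # per-pixel average drop
```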
- 09/03/2023: Courtney Ford (University College Dublin, Ireland). Explaining Classifications to Non-Experts: An XAI User Study of Post-Hoc Explanations for a Classifier When People Lack Expertise
Abstract: Very few eXplainable AI (XAI) studies consider how users’ understanding of explanations might change depending on whether they know more or less about the to-be-explained domain (i.e., whether they differ in their expertise). Yet, expertise is a critical facet of most high-stakes, human decision-making (e.g., understanding how a trainee doctor differs from an experienced consultant). Accordingly, this paper reports a novel user study (N=96) on how people’s expertise in a domain affects their understanding of post-hoc explanations-by-example for a deep-learning, black-box classifier. The results show that people’s understanding of explanations for correct and incorrect classifications changes dramatically, on several dimensions (e.g., response times, perceptions of correctness and helpfulness), when the image-based domain considered is familiar (i.e., MNIST) as opposed to unfamiliar (i.e., Kannada-MNIST). The wider implications of these new findings for XAI strategies are discussed.
- 23/02/2023: Xiaoqi Wang (Ohio State University, USA). GNNInterpreter: A Probabilistic Generative Model-Level Explanation for Graph Neural Networks.
Abstract: Recently, Graph Neural Networks (GNNs) have significantly advanced the performance of machine learning tasks on graphs. However, this technological breakthrough makes people wonder: how does a GNN make such decisions, and can we trust its predictions with high confidence? In this paper, we propose GNNInterpreter, a model-agnostic, model-level explanation method for the different GNNs that follow the message-passing scheme, to explain the high-level decision-making process of the GNN model.
- 20/10/2022: 14h. Chaire MIA, Explainable and Responsible AI
- 20/10/2022: Amélie Gruel (PhD student, I3S Sophia Antipolis, Nice). Spiking neural networks for event-based vision.
Abstract: Amélie Gruel’s thesis, carried out in the SPARKS team of the I3S laboratory (Laboratory of Computer Science, Signals and Systems of Sophia Antipolis), focuses on bio-inspired machine learning using spiking neural networks (SNNs), applied to event-based vision on neuromorphic hardware, in order to develop low-energy-cost computer vision mechanisms. The thesis is part of the European CHIST-ERA project APROVIS3D (April 2020-September 2023), whose objective is to design and implement machine learning methods based on spiking neural networks, in order to extract visual features and infer useful information from the visual scene using event-driven stereo cameras. Event-driven cameras (or silicon retinas) are a new type of sensor that measures changes in brightness at each pixel and produces asynchronous events accordingly. This technology makes it possible to record data evolving in time and space at a lower storage cost and lower energy consumption: each event is recorded punctually and asynchronously, without redundancy, unlike traditional frame-based cameras, where every pixel produces a value in every frame, synchronously. SNNs are artificial neural networks that mimic the dynamics of biological neural circuits by receiving and processing information in real time in the form of trains of “spikes”. They are particularly well suited to the atypical data produced by event-driven cameras, since each event can be assimilated to a spike between two neurons.
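As a toy illustration of why events and spikes pair naturally, here is a minimal leaky integrate-and-fire (LIF) neuron driven by an event stream: each incoming event charges the membrane, which otherwise decays. The time constant, threshold, and weight are illustrative values, not parameters from the APROVIS3D project.

```python
import math

def lif_response(event_times, tau=0.05, threshold=1.0, weight=0.4):
    """Timestamps (seconds) of output spikes for a stream of input events."""
    v, last_t, out = 0.0, 0.0, []
    for t in sorted(event_times):
        v *= math.exp(-(t - last_t) / tau)   # membrane leak since last event
        v += weight                          # integrate the incoming event
        if v >= threshold:                   # fire and reset
            out.append(t)
            v = 0.0
        last_t = t
    return out

# Dense bursts of events make the neuron fire; isolated events leak away.
print(lif_response([0.010, 0.012, 0.014, 0.200, 0.210, 0.220, 0.230]))
```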
- 09/12/2021: Luis Gustavo Nonato, Universidade de São Paulo, Brazil. Analyzing and Comparing Feature Importance based Explainability Methods
Abstract: In recent years, many explanation methods have been proposed to reveal how black-box models reach specific decisions. Most of those methods rely on local feature importance attributions, making it difficult to generalize the understanding to the entire dataset. Moreover, different explanation methods generate outputs having different value ranges and dimensions. It thus becomes hard to compare the behavior and quality of those methods. In this talk, we will show how visual analytics and topological data analysis can be used to mitigate such difficulties, enabling a global overview of models and underlying data, while making it possible to compare and assess different explanation methods. We will also show two applications where explainability methods are employed to support the analysis of crime patterns and supreme court legal documents.
(Short Bio) Luis Gustavo Nonato received the PhD degree in applied mathematics from the Pontifícia Universidade Católica do Rio de Janeiro – Brazil, in 1998. He is currently a professor in the Instituto de Ciências Matemáticas e de Computação, Universidade de São Paulo, São Carlos, Brazil. From 2016 to 2018 he was a visiting professor at the Center for Data Science, New York University – USA, and he was also a visiting scholar in the Scientific Computing and Imaging Institute, University of Utah, Salt Lake City – USA, from 2008 to 2010. Besides having served on several program committees, including IEEE SciVis, IEEE InfoVis, and EuroVis, Nonato was associate editor of the Computer Graphics Forum journal and is currently associate editor of the IEEE Transactions on Visualization and Computer Graphics. He is also editor-in-chief of the SBMAC SpringerBriefs in Applied Mathematics and Computational Sciences. His main research interests include machine learning, data science, and visualization. He has a strong interest in bridging the gap between academia, industry, and governments, leading a number of initiatives with the private sector and government agencies.
- 28/10/2021: Luca Bouroux (LaBRI). Multi Layered Features Explanation Method.
Abstract: In this talk we present an extension of the “Feature Explanation Method” for explaining CNNs. The method explains the contribution of regions of the image being classified to the decision of the trained network. The contribution consists in analyzing the feature maps of the different layers of the network and merging the explanations of each layer in a single back-propagation process. An evaluation methodology is proposed that compares the pixel importance maps obtained with the visual attention maps recorded during psycho-visual experiments, both “guided by the recognition task” (MexCulture database) and in “free viewing” (Salicon database). The method is compared with other state-of-the-art methods for explaining the decisions of trained deep networks.
- 07/10/2021: Damien Garreau (Université Côte d’Azur, Nice, France). A theoretical analysis of LIME.
Abstract: Since its release in 2016, LIME has emerged as one of the main model-agnostic explainability methods. It is implemented in software toolboxes used by industry practitioners and is also at the source of many extensions. But while LIME is used to explain complicated models, there are few guarantees that it makes sense even on the simplest ones. In this talk, I will present a first theoretical analysis of LIME, focusing on image data. We will see that, despite some satisfying properties, LIME has a few issues which can be problematic in practice. The main reference for this talk is Damien Garreau and Dina Mardaoui, What does LIME really see in images? ICML, 2021 (available at https://arxiv.org/abs/2102.06307).
(Short) Bio: Damien Garreau is an assistant professor at Université Côte d’Azur (Nice, France) and a member of the Maasai Inria team, located in Sophia-Antipolis. He leads NIM-ML, a project dedicated to new interpretability methods funded by ANR. Before coming to Nice, he was a postdoctoral researcher at the Max Planck Institute for Intelligent Systems (Tübingen, Germany), working with Prof. Ulrike von Luxburg. He did his PhD at Inria Paris under the direction of Sylvain Arlot and Gérard Biau.
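To make the object of the analysis concrete, here is a bare-bones sketch of LIME's core loop for a tabular point: sample perturbations around the input, weight them by proximity, and fit a weighted linear surrogate whose coefficients serve as the explanation. This is a simplification under assumed Gaussian sampling and kernel choices, not the lime package or the exact image variant analysed in the paper.

```python
import numpy as np
from sklearn.linear_model import Ridge

def lime_explain(x, black_box, n_samples=1000, sigma=1.0, scale=0.5):
    """Coefficients of a locally weighted linear surrogate around x."""
    rng = np.random.default_rng(0)
    Z = x + scale * rng.standard_normal((n_samples, x.size))  # local perturbations
    y = np.array([black_box(z) for z in Z])                   # black-box queries
    d2 = ((Z - x) ** 2).sum(axis=1)
    weights = np.exp(-d2 / (2 * sigma ** 2))                  # proximity kernel
    return Ridge(alpha=1.0).fit(Z, y, sample_weight=weights).coef_

f = lambda z: z[0] - 2.0 * z[1]                 # toy "black box"
print(lime_explain(np.array([1.0, 2.0]), f))    # close to [1, -2]
```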
- 17/06/2021: Alex Telea (Department of Information and Computing Science at Utrecht University). Visualizing the Black Box of Machine Learning: Challenges and Opportunities.
Abstract: Machine learning (ML) has witnessed tremendous successes in the last decade in classification, regression, and prediction tasks. However, many ML models are used, and sometimes even designed, as black boxes. When such models do not operate properly, their creators often do not know the best way to improve them. Moreover, even when models operate successfully, users often need to understand how and why they take certain decisions in order to gain trust in them. We present how visualization and visual analytics help explain (and improve) ML models. This covers tasks such as understanding high-dimensional datasets; understanding unit specialization during the training of deep learning models; exploring how training samples determine the shape of classification decision boundaries; and helping users annotate samples in semi-supervised active learning scenarios.
(Short) Bio: Alex Telea works as a full professor in Visual Data Analytics at the Department of Information and Computing Science at Utrecht University, where he leads the Visual Data Analytics Group. His research focuses on the creation of interactive techniques to visually depict, explore, and explain large amounts of complex, time-dependent, and heterogeneous data. Specific topics of interest are: visualization of relational data, visualization for explaining artificial intelligence methods, visualization of large software systems, and visual exploration of high-dimensional data collections. He applies the results of his research to various application domains, including geoinformation systems, medical imaging, software maintenance, and 3D shape processing, together with stakeholders from both academia and the IT industry. As a teacher he is involved in courses on scientific and information visualization, visual analytics, software visualization, and multimedia retrieval. He is the author of “Data Visualization – Principles and Practice” (CRC Press, 2nd edition, 2014), one of the most used textbooks for teaching visualization to students and practitioners worldwide.
- 04/03/2021: M. Oussalah, University of Oulu. Enforcing AI explainability Using Automatic Text Summarization
Abstract: In this talk, we will review our recent work on graph-based text summarization as an innovative way to enforce explainability of the summarization task. It makes use of two well-established text semantic representation techniques: Semantic Role Labelling (SRL) and Explicit Semantic Analysis (ESA), the latter exploiting the constantly evolving collective human knowledge in Wikipedia. The essence of the developed framework is to construct a unique concept-graph representation underpinned by semantic role-based multi-node (below sentence level) vertices for summarization. The approach has been tested on the standard publicly available dataset from the Document Understanding Conference 2002 (DUC 2002), highlighting its performance with respect to the state of the art using ROUGE-1 and ROUGE-2 metrics. Finally, some recommendations and a discussion with respect to some existing explainable techniques will be presented.
(Short) Bio: Dr. Mourad Oussalah is a recently appointed Research Professor at the University of Oulu, Faculty of Information Technology and Electrical Engineering, Centre for Machine Vision and Signal Analysis, where he leads the Social Mining Research Group. He is also affiliated with the Medical Imaging, Physics and Technology Unit of the Faculty of Medicine as part of the Academy of Finland DigiHealth Project. Prior to joining the University of Oulu, he was with the University of Birmingham, UK, from 2003 to 2016. He also held research positions at City University of London and KU Leuven in Belgium, and visiting professor positions at the University of Evry Val Essonnes, France (summer 2006), New Mexico, USA (summer 2009), and Xian University, China (Fall 2018).
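As a rough illustration of the graph-based summarization family (not the speaker's SRL/ESA framework, which builds much richer concept graphs), here is a minimal TextRank-style extractive summarizer that uses plain word overlap as the edge weight and graph centrality as sentence importance:

```python
import networkx as nx

def summarize(sentences, k=2):
    """Return the k most central sentences, in original order."""
    words = [set(s.lower().split()) for s in sentences]
    g = nx.Graph()
    g.add_nodes_from(range(len(sentences)))
    for i in range(len(sentences)):
        for j in range(i + 1, len(sentences)):
            shared = len(words[i] & words[j])
            if shared:
                g.add_edge(i, j, weight=shared)   # shared-vocabulary edge
    scores = nx.pagerank(g)                        # centrality as importance
    top = sorted(scores, key=scores.get, reverse=True)[:k]
    return [sentences[i] for i in sorted(top)]
```

The explainability angle mentioned in the abstract comes from the graph itself: one can point at the nodes and edges that made a sentence central, rather than at opaque learned weights.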
- 11/02/2021: Meghna Ayyar, LaBRI. From features to importance maps: an overview of CNN explanation methods.
Abstract: Explaining the decisions of CNNs has become important in various image classification problems. Primarily, it is necessary to understand which regions are important, i.e., on which pixels the network bases its decision in a given classification problem. This holds true for general-purpose images, but for medical image analysis and classification explainability has become even more important. It is also important to understand how the results of different explanation methods correlate. Today, there exists a large variety of explanation methods. In this talk, we propose a taxonomy of the families of such methods, explain their main principles, and illustrate them. We will also position the Feature Understanding Method (FEM), developed at LaBRI, with respect to these different families.
Joint work with Jenny Benois-Pineau and Akka Zemmari.
- 10/12/2020: Dragutin Petkovic, San Francisco State University (USA). Toward more explainable Random Forest classifiers.
Abstract: In this two-part talk we will first motivate the need for Explainable AI (XAI) in the broader context of AI ethics and outline some issues and recommendations. In the second part of the talk we will cover our work on Explainable Random Forest classifiers (RFEX), where we developed both model and sample explainers for this very powerful classical type of AI method. We will demonstrate RFEX in a case study on data from the J. Craig Venter Institute in San Diego on the classification of nerve cell clusters. Overall, our recommendation and the focus of our research is to develop user-centered XAI methods targeted at real users who are domain experts but often not AI experts.
Bio: Prof. D. Petkovic obtained his Ph.D. at UC Irvine, in the area of biomedical image processing. He spent over 15 years at the IBM Almaden Research Center as a scientist and in various management roles. His contributions ranged from the use of computer vision for inspection to multimedia and content management systems. He is the founder of IBM’s well-known QBIC (query by image content) project, which significantly influenced the content-based retrieval field. Dr. Petkovic received numerous IBM awards for his work and became an IEEE Fellow in 1998 and IEEE Life Fellow in 2018 for leadership in the content-based retrieval area. He also held various technical management roles in Silicon Valley startups. In 2003 he joined the SFSU CS Department as Chair, and he founded the SFSU Center for Computing for Life Sciences in 2005. Currently, Dr. Petkovic is the Associate Chair of the SFSU Department of Computer Science and Director of the Center for Computing for Life Sciences. He led the establishment of the SFSU Graduate Certificate in AI Ethics, jointly with the SFSU Schools of Business and Philosophy. Research and teaching interests of Prof. Petkovic include machine learning with emphasis on explainability and ethics, teaching methods for global SW engineering and engineering teamwork, and the design and development of easy-to-use systems.
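As a generic illustration of model-level random-forest explanation (not the RFEX format itself), the sketch below trains a forest with scikit-learn and reports the handful of most important features a domain expert would inspect first:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

data = load_breast_cancer()
rf = RandomForestClassifier(n_estimators=200, random_state=0)
rf.fit(data.data, data.target)

# Rank features by mean decrease in impurity and keep the top 5.
ranked = sorted(zip(rf.feature_importances_, data.feature_names), reverse=True)
for importance, name in ranked[:5]:
    print(f"{name}: {importance:.3f}")
```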
- 19/11/2020: Meghyn Bienvenu (LaBRI). Explanation in the Context of Ontologies
Abstract: Ontologies are used to formalize the vocabulary and domain knowledge of a given application area, and various kinds of automated reasoning tasks are used both to aid in constructing and maintaining ontologies and to exploit their contents. After quickly reviewing some basic notions about ontologies, I aim to provide a quick tour of the research on explanation in the context of ontologies. We will in particular discuss the problem of how to identify the parts of the ontology that are responsible for a given inference.
- 22/10/2020: Véronique Ventos, NukkAI. The game of bridge: a killer application for next-generation AI
Abstract: The first part of the talk will be devoted to a brief presentation of NukkAI (www.nukk.ai), a private Artificial Intelligence laboratory whose goal is to develop hybrid algorithms combining several paradigms of Artificial Intelligence (numeric/symbolic). In the second part, we will present the game of bridge, one of the last mind games in which humans still outperform machines. Bridge is an ideal testbed for developing hybrid AI, as it is an incomplete-information, probabilistic, multi-agent game whose collaborative aspect makes explainability imperative. Finally, we will present some of the completed and ongoing work around this challenge.
- 24/09/2020: Roger Roberts, Titan, Belgium. Artificial intelligence: a contribution to semantic foundations
Abstract: Over the last 30 years, information technology has made great strides in two areas:
– the capacity to produce large volumes of data through multiple applications, each application constituting a proprietary domain (a silo that controls and manages its own resources) offering little ability to communicate with other systems;
– the capacity to transport these data cheaply over networks such as the Internet, together with the interface between an application and its user (HMI: Human-Machine Interface).
What must now be developed are technologies that foster the explicit structuring and exchange of these data between heterogeneous computer systems (cf. interoperability). Many efforts have been made, notably by cultural institutions, to standardize descriptive metadata, aiming to homogenize their structure and improve their interoperability for publication. Under the influence of the Semantic Web, these metadata sets have evolved into knowledge networks so as to benefit from the complementarity of the various reference systems. The storage and structuring of data is a major challenge for the progress of AI. The massive adoption of cloud services and the enormous growth in the volume of data created, stored, and analyzed call for ever more powerful systems to make decisions based on these colossal volumes of data. For several conferences (ISKO in Barcelona in 2019, UER/EBU in June) we developed the semantics and semiology of a space common to humans and machines, which holds great promise for various applications, notably AI.
Bio: Born in Welkenraedt (Belgium) on 9 April 1950, on a historic border! Professionally, I have been a craftsman of the audiovisual world. I began my career in 1975 in various private companies before joining the RTBF as a director in 1984 (News Department, then Sports). In 1992 I became Head of Production, before taking charge in 2003 of the Common Cultural Resources (additionally covering script supervisors, master control, graphics, and archiving). In 1994 I was elected president of the non-profit TITAN (Televisual Interactive Terminal and Associated Networks), an association founded to facilitate the digital transition of information and communication technologies (see www.titan.be). This think tank coordinates activities, projects, and technology watch on the open exchange of content across heterogeneous environments. We are also experts in long-term digital preservation (promoters of the ISO OAIS standard). We have completed a computing cosmogony (AXIS-CSRM) for managing the export of semantically structured data between heterogeneous environments: in short, how to manage the relationships between real-world objects, their representations, and their meanings. Retired since April 2015, I became, on the strength of my CV and my languages (FR/NL/D/UK), president of the Belgian Memory of the World committee.
- 09/07/2020: Gaël Glorian, An introduction to Constraint Programming
Abstract: Constraint programming (CP) is one of the most efficient paradigms for solving many problems (of a combinatorial nature) in AI. The paradigm emerged in the 1970s and has been particularly effective at solving many families of hard combinatorial problems. Combinatorial problems are found in many everyday situations (timetabling, routing, game solving, etc.) and in many industrial applications (circuit checking, placement of relay antennas, etc.). CP has many applications in other research fields and in industry.
This talk is an introduction to constraint programming, its usage, and its applications. We will talk about the two sides of constraint programming: modelling and solving. In particular, we will see that a good model is as important as a good solving tool.
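As a minimal illustration of this modelling/solving split, the sketch below models n-queens as a CSP (the modelling side) and solves it with a generic, propagation-free backtracking search (a bare-bones solving side). Real CP solvers add constraint propagation, global constraints, and search heuristics on top of this skeleton.

```python
from itertools import combinations

def solve(variables, domains, consistent, assignment=None):
    """Generic chronological backtracking over a CSP."""
    assignment = assignment or {}
    if len(assignment) == len(variables):
        return dict(assignment)
    var = variables[len(assignment)]          # static variable ordering
    for value in domains[var]:
        assignment[var] = value
        if consistent(assignment):            # prune inconsistent branches
            solution = solve(variables, domains, consistent, assignment)
            if solution:
                return solution
        del assignment[var]
    return None

# Model for n-queens: one variable per column, value = row of its queen.
n = 6
columns = list(range(n))
rows = {c: list(range(n)) for c in columns}

def no_attack(assignment):
    return all(r1 != r2 and abs(c1 - c2) != abs(r1 - r2)
               for (c1, r1), (c2, r2) in combinations(assignment.items(), 2))

print(solve(columns, rows, no_attack))   # e.g. {0: 1, 1: 3, 2: 5, ...}
```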
- 02/07/2020: Guillaume Lagarde, Efficiently computing hierarchical clusterings
Abstract: A classic problem in unsupervised learning and data analysis is to find simple and easy-to-visualize representations of the data that still preserve its essential properties. Hierarchical clustering is one such representation: it is widely used and preserves the underlying hierarchical structure of the data. The task of finding an interesting hierarchical clustering can be formalized as the task of finding an embedding of the data into an ultrametric space. The most popular algorithms are the so-called “agglomerative” ones (single linkage, average linkage, Ward’s method, etc.). However, these methods exhibit a prohibitively quadratic running time, making them impractical on large inputs. In this talk I will present a new algorithm that runs, for Euclidean metrics, in time n^{1+1/c^2} for any constant c >= 1 and outputs a 5c-approximation of the “best” ultrametric. We complement this approach with some lower bounds and prove in particular that, in some settings, there is no 3/2-approximation running in subquadratic time. Finally, I will present an empirical evaluation of the algorithm on some classic machine learning datasets. Joint work with Vincent Cohen-Addad and Karthik C. S.
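The connection between agglomerative clustering and ultrametrics can be seen in a few lines with SciPy: the cophenetic distances (merge heights) of a linkage form exactly the kind of ultrametric the abstract refers to. This uses the classical quadratic-time baseline, not the talk's subquadratic algorithm.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, cophenet
from scipy.spatial.distance import squareform

rng = np.random.default_rng(0)
X = rng.standard_normal((8, 2))         # 8 points in the plane

Z = linkage(X, method="average")        # quadratic-time agglomerative baseline
U = squareform(cophenet(Z))             # pairwise merge heights = ultrametric

# Sanity check of the strong triangle inequality d(i,k) <= max(d(i,j), d(j,k)).
ok = all(U[i, k] <= max(U[i, j], U[j, k]) + 1e-9
         for i in range(8) for j in range(8) for k in range(8))
print("ultrametric:", ok)
```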
- 18/06/2020: Romain Giot, Sensing of Deep Neural Networks from a Data Visualisation Perspective, and Kazi Ahmed Asif Fuad, Features Understanding in CNNs: Applications to Action Recognition in Video and Image Classification.
- 23/01/2020: Nathanaël Fijalkow, Program synthesis in the learning era
Abstract: Programming by example is a type of program synthesis where the user gives a few input-output pairs and the goal is to find a program satisfying these pairs. I will present a machine learning line of attack for programming by example and discuss the underlying challenges and the approaches we propose, focussing on two aspects: data generation and search algorithms. Based on a paper published at AISTATS 2020 (https://arxiv.org/abs/1911.02624) with Judith Clymo (University of Leeds), Haik Manukian (University of California San Diego), Adria Gascon (Google), and Brooks Paige (UCL).
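A toy version of the enumerative baseline behind programming by example is sketched below: breadth-first search over a tiny hypothetical arithmetic DSL, checking each candidate against the input-output pairs. The paper's contribution concerns learning to guide this search and generating training data, neither of which is shown here.

```python
def synthesize(examples, rounds=2):
    """Breadth-first enumerative search over a tiny arithmetic DSL."""
    programs = ["x", "1", "2"]                     # size-1 expressions
    for _ in range(rounds):
        # Grow the pool by combining known expressions with + and *.
        programs += [f"({a} {op} {b})" for a in programs for b in programs
                     for op in ("+", "*")]
        for prog in programs:
            if all(eval(prog, {"x": i}) == o for i, o in examples):
                return prog
    return None

# Recover a program equivalent to f(x) = 2x + 1 from three input-output pairs.
print(synthesize([(1, 3), (2, 5), (3, 7)]))
```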
Other working groups:
Working Group on Deep Learning
- On Mondays, 11h-12h
- Contact: Akka Zemmari
- Web site:
BigData Seminar
- On Thursdays, 11h-12h
- Contact: Nicolas Hanusse
- Web site
Reading group on theory of machine learning
- Contact: Nathanaël Fijalkow
- Web site