Category
Other seminars

[JOINT SEMINAR] MICS/CVN | Srinivasan Parthasarathy & Thomas Fel

Event location
CentraleSupélec, room sc.046 Peugeot (Bouygues building), Gif-sur-Yvette

Join us on April 11th at CentraleSupélec for a joint seminar organized by the Centre de Vision Numérique (CVN, CentraleSupélec/INRIA) and the MICS laboratory!

2pm: Srinivasan Parthasarathy

Towards Democratizing AI: Scaling and Learning (Fair) Graph Representations in an Implementation Agnostic Fashion

Abstract

The design of graph embedding methods has recently attracted renewed interest. Few, if any, can scale to large graphs with millions of nodes because of their computational complexity and memory requirements. In this talk, I will present an approach to address this limitation: the MultI-Level Embedding (MILE) framework, a generic methodology that enables contemporary graph embedding methods to scale to large graphs. MILE repeatedly coarsens the graph into smaller ones using a hybrid matching technique that preserves the basic structure of the graph. It then applies an existing embedding method to the coarsest graph and refines the embeddings back to the original graph via a graph convolutional neural network that it learns. Time permitting, I will then describe some of MILE's natural extensions: a distributed variant (DistMILE) that further improves the scalability of graph embedding, and mechanisms for learning fair representations of graphs (FairMILE).
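
To make the coarsen-embed-refine pattern described above concrete, here is a minimal, self-contained Python sketch of a generic multi-level pipeline. It is an illustrative assumption rather than the method presented in the talk: the pairwise matching, the base embedder and the single mean-aggregation refinement step are toy stand-ins for MILE's hybrid matching and learned graph convolutional refinement.

import numpy as np

def coarsen(adj, num_levels):
    # Repeatedly merge node pairs; keep each level's adjacency and matching matrix.
    # (Toy matching: pair consecutive nodes. The real framework uses a hybrid matching scheme.)
    adjs, matchings = [adj], []
    for _ in range(num_levels):
        n = adj.shape[0]
        groups = np.arange(n) // 2
        P = np.zeros((n, groups.max() + 1))
        P[np.arange(n), groups] = 1.0          # node -> super-node assignment
        adj = P.T @ adj @ P                    # collapse edges onto super-nodes
        adjs.append(adj)
        matchings.append(P)
    return adjs, matchings

def multilevel_embed(adj, embed_fn, num_levels=2):
    # Embed the coarsest graph with any existing method, then project the
    # embeddings back level by level with one mean-aggregation (GCN-like) step.
    adjs, matchings = coarsen(adj, num_levels)
    emb = embed_fn(adjs[-1])
    for level in range(num_levels - 1, -1, -1):
        emb = matchings[level] @ emb                         # copy super-node embeddings to members
        deg = adjs[level].sum(axis=1, keepdims=True) + 1e-9
        emb = (adjs[level] @ emb) / deg                      # smooth over neighbours
    return emb

# Toy usage: 16-node random graph, spectral embedder as the "existing method".
rng = np.random.default_rng(0)
A = (rng.random((16, 16)) < 0.3).astype(float)
A = np.triu(A, 1)
A = A + A.T
emb = multilevel_embed(A, lambda a: np.linalg.eigh(a)[1][:, -4:])
print(emb.shape)   # (16, 4)

Any off-the-shelf embedding method can be plugged in as embed_fn, which is the sense in which the framework is implementation agnostic.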

The proposed MILE framework and its variants (DistMILE, FairMILE) are agnostic to the underlying graph embedding technique: they can be applied to many existing graph embedding methods without modifying them, regardless of their implementation language. Experimental results on five large-scale datasets demonstrate that MILE speeds up graph embedding by an order of magnitude while generating higher quality embeddings for the task of node classification. MILE can comfortably handle a graph with 9 million nodes and 40 million edges, on which existing methods either run out of memory or take too long to compute on a modern workstation. Our experiments show that DistMILE learns representations of similar quality to other baselines while reducing embedding learning time (up to 40 times faster than MILE). FairMILE likewise learns fair data representations while reducing embedding learning time.

Joint work with Jiongqian Liang (Google Brain), S. Gurukar (OSU) and Yuntian He (OSU).

Biography

Srinivasan Parthasarathy is a Professor of Computer Science and Engineering and Director of the Data Mining Research Laboratory at Ohio State. His research focuses on data analysis, databases and high-performance computing. He is one of a handful of researchers nationwide to have received career awards from both the Department of Energy and the National Science Foundation. He and his students have received sixteen best paper awards or "best of" nominations from leading forums in the field, including SIAM Data Mining, ACM SIGKDD, VLDB, ISMB, WWW, WSDM, HiPC, ICDM and ACM Bioinformatics. He chaired the SIAM Data Mining conference steering committee (an elected position) from 2012 to 2019, and has served on the editorial boards of several journals on parallel computing, machine learning and data mining. Since 2012, he has also contributed to the creation of OSU's undergraduate data analytics major, the first such program in the USA, of which he is a founding director.

3pm: Thomas Fel

Sparks of Interpretability, Recent Advancements in Explaining Large Vision Models

Abstract

I will show where we currently stand and what can be done with recent Explainable AI methods applied to vision models. In particular, I will showcase three recent families of explainability techniques. We will start with attribution methods before turning to recent advancements in concept-based methods and feature visualization. Towards the end, we will explore how these three approaches have more in common than one might think, demonstrating potential synergies between them while attempting to explain the main strategies of a large vision model on ImageNet.
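
As a concrete illustration of the first family mentioned above, the short Python (PyTorch) sketch below computes a vanilla gradient ("saliency") attribution map for a toy classifier. The model and input are placeholder assumptions, not the methods presented in the talk, which covers more advanced attribution, concept-based and feature-visualization techniques.

import torch
import torch.nn as nn

# Placeholder classifier; any differentiable vision model would work here.
model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10),
).eval()

x = torch.rand(1, 3, 64, 64, requires_grad=True)   # placeholder "image"
logits = model(x)
top_class = logits.argmax(dim=1).item()
logits[0, top_class].backward()                     # gradient of the top-class score w.r.t. pixels
saliency = x.grad.abs().max(dim=1).values           # per-pixel importance map, shape (1, 64, 64)
print(saliency.shape)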

Biography

Thomas Fel is a third-year French PhD student working on Explainable AI, advised by Thomas Serre at ANITI and Brown University. He is also part of the DEEL team, which works on making AI systems certifiable. In September 2024, he will officially join the Kempner Institute at Harvard as a researcher. He is deeply fascinated by the explainability (XAI) of large vision models, which he studies through an interdisciplinary approach blending computational science, mathematics and neuroscience principles. In the long term, his aim is to harness the knowledge gained from XAI research to further our understanding of human intelligence. He is also passionate about contributing to open source, particularly as the creator of Xplique.

Practical information

Teams link

Meeting ID: 312 072 665 309

Access code: StxHMe