« Le Séminaire Palaisien » | Alexandre Gramfort and Marine Le Morvan on machine learning and statistics

Event location (online): https://bluejeans.com/9352872428/9913

The Séminaire Palaisien brings together, on the first Tuesday of every month, the broad Saclay research community around statistics and machine learning.

Each seminar session consists of two 40-minute scientific presentations: a 30-minute talk followed by 10 minutes of questions. The seminars are currently held online.

Alexandre Gramfort (Inria) and Marine Le Morvan (Inria) will lead the November session.

« LEARNING REPRESENTATION FROM NEURAL SIGNALS » - ALEXANDRE GRAMFORT

Good representations are the building blocks of signal processing. While the Fourier representation reveals sinusoids buried in noise, wavelets better capture time-localized transient phenomena. Processing neuroimaging data is also based on representations: Morlet wavelets and the short-time Fourier transform (STFT) are common for electrophysiology (M/EEG), while spherical harmonics are used for diffusion MRI signals. In this talk I will cover recent works, such as [1, 2, 3], that aim to learn non-linear representations of neural time series in order to estimate good predictive models. (A toy illustration of the classical representations follows the references below.)

[1] M. Jas, T. Dupré la Tour, U. Simsekli and A. Gramfort. "Learning the Morphology of Brain Signals Using Alpha-Stable Convolutional Sparse Coding." Advances in Neural Information Processing Systems (NIPS) 30, 2017.
[2] T. Dupré la Tour, T. Moreau, M. Jas and A. Gramfort. "Multivariate Convolutional Sparse Coding for Electromagnetic Brain Signals." Advances in Neural Information Processing Systems (NeurIPS), 2018.
[3] H. Banville, I. Albuquerque, G. Moffat, D. Engemann and A. Gramfort. "Self-supervised representation learning from electroencephalography signals." Proc. Machine Learning for Signal Processing (MLSP), 2019.
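
As a rough illustration of the classical linear representations mentioned in the abstract (a sketch for this announcement, not code from the talk), the following Python snippet contrasts an STFT with a hand-rolled complex Morlet wavelet transform on a synthetic signal; the sampling rate, frequencies and the helper `morlet_transform` are all invented for the example.

```python
# Illustrative sketch only: contrasts two classical linear representations
# (STFT vs. Morlet wavelets). All parameters are made up for the example.
import numpy as np
from scipy import signal

fs = 250.0                                   # sampling rate in Hz (assumed)
t = np.arange(0, 4, 1 / fs)
# Sustained 10 Hz oscillation plus a short 40 Hz burst around t = 2 s
x = np.sin(2 * np.pi * 10 * t)
x += np.exp(-((t - 2.0) ** 2) / 0.01) * np.sin(2 * np.pi * 40 * t)

# Short-time Fourier transform: fixed window, fixed time-frequency trade-off
f, tau, Zxx = signal.stft(x, fs=fs, nperseg=128)

def morlet_transform(x, freqs, fs, n_cycles=5.0):
    """Complex Morlet wavelet transform; the time resolution adapts to
    frequency, which is why wavelets capture transients better."""
    out = []
    for f0 in freqs:
        sigma_t = n_cycles / (2 * np.pi * f0)          # Gaussian width in s
        tt = np.arange(-3 * sigma_t, 3 * sigma_t, 1 / fs)
        w = np.exp(2j * np.pi * f0 * tt) * np.exp(-tt ** 2 / (2 * sigma_t ** 2))
        out.append(np.convolve(x, w, mode="same"))
    return np.abs(np.array(out))                       # (n_freqs, n_times)

tfr = morlet_transform(x, freqs=np.arange(5, 51, 1), fs=fs)
print(Zxx.shape, tfr.shape)
```

Both dictionaries above are fixed a priori; the works [1, 2, 3] instead learn the atoms, or the representation itself, from the data.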

« NEUMANN NETWORKS: DIFFERENTIABLE PROGRAMMING FOR SUPERVISED LEARNING WITH MISSING VALUES » - MARINE LE MORVAN

The presence of missing values makes supervised learning much more challenging. Indeed, previous work has shown that even when the response is a linear function of the complete data, the optimal predictor is a complex function of the observed entries and the missingness indicator. As a result, the computational or sample complexities of consistent approaches depend on the number of missing patterns, which can be exponential in the number of dimensions. In this work, we derive the analytical form of the optimal predictor under a linearity assumption and various missing data mechanisms, including Missing At Random (MAR) and self-masking (Missing Not At Random, MNAR). Based on a Neumann-series approximation of the optimal predictor, we propose a new principled architecture, named Neumann networks. Their originality and strength come from the use of a new type of non-linearity: the multiplication by the missingness indicator. We provide an upper bound on the Bayes risk of Neumann networks, and show that they achieve good predictive accuracy with both a number of parameters and a computational complexity independent of the number of missing data patterns. As a result, they scale well to problems with many features and remain statistically efficient for medium-sized samples. Moreover, we show that, contrary to procedures based on EM or imputation, they are robust to the missing data mechanism, including difficult MNAR settings such as self-masking.
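
To make the Neumann-series idea concrete, here is a schematic NumPy sketch, not the authors' implementation: it assumes a toy Gaussian Missing-At-Random model with known mean `mu`, covariance `Sigma` and regression coefficients `beta`, whereas in Neumann networks the corresponding weights are learned end-to-end; the multiplication by the missingness indicator `m` is the non-linearity highlighted in the abstract.

```python
# Schematic sketch of a truncated Neumann-series predictor. Assumes a toy
# Gaussian MAR model with *known* mu, Sigma, beta; in the actual Neumann
# networks these weights are learned end-to-end by gradient descent.
import numpy as np

def neumann_predict(x, m, mu, Sigma, beta, depth=10):
    """Predict y from a partially observed x.

    x     : input vector (values at missing positions are ignored)
    m     : missingness indicator, 1 = observed, 0 = missing
    depth : number of Neumann iterations, i.e. the network depth
    """
    v = m * (x - mu)                 # centered signal on observed coordinates
    z, acc = v.copy(), v.copy()
    for _ in range(depth):
        # One iteration z <- (I - Sigma_obs,obs) z, implemented with the
        # missingness multiplication; the series converges when the
        # eigenvalues of Sigma_obs,obs lie in (0, 2).
        z = m * (z - Sigma @ z)
        acc += z
    # acc ~= Sigma_obs,obs^{-1} (x_obs - mu_obs); fill the missing block with
    # its conditional expectation mu_mis + Sigma_mis,obs @ acc
    x_hat = m * x + (1 - m) * (mu + Sigma @ acc)
    return beta @ x_hat

# Tiny usage example with hypothetical data
rng = np.random.default_rng(0)
d = 5
A = rng.normal(size=(d, d))
S = A @ A.T
Sigma = S / np.linalg.norm(S, 2) + 0.1 * np.eye(d)  # eigenvalues in (0, 2)
mu, beta = np.zeros(d), rng.normal(size=d)
x = rng.multivariate_normal(mu, Sigma)
m = (rng.random(d) > 0.3).astype(float)             # ~30% entries missing
print(neumann_predict(x, m, mu, Sigma, beta))
```

Note that both the parameter count and the per-prediction cost of this scheme are governed by the depth and the dimension, not by the number of missing-data patterns, which is the scaling property claimed in the abstract.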