Save the Date - DATAIA Seminar with Aapo Hyvärinen - 30th September
Unsupervised learning, in particular learning general nonlinear representations, is one of the deepest problems in machine learning. Estimating latent quantities in a generative model provides a principled framework, and has been successfully used in the linear case, e.g. with independent component analysis (ICA) and sparse coding. However, extending ICA to the nonlinear case has proven to be extremely difficult: a straightforward extension is unidentifiable, i.e. it is not possible to recover the latent components that actually generated the data. Here, we show that this problem can be solved by using additional information, either in the form of temporal structure or an additional, auxiliary variable. As a first approach, we formulate self-supervised learning schemes similar to those heuristically proposed in computer vision. Our main contribution is to provide a rigorous theoretical framework for such self-supervised algorithms, proving that they are able to solve the nonlinear ICA problem. We further show a connection between nonlinear ICA and variational autoencoders (VAEs): while the ordinary VAE suffers from a lack of identifiability, conditioning on auxiliary variables leads to identifiability and provides another method for learning nonlinear ICA.
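To illustrate the setting the abstract describes, the following is a minimal sketch (not the speaker's actual experiments) of how the auxiliary variable enters the data-generating model, and of the self-supervised task built from it: real pairs (x, u) versus pairs with a shuffled auxiliary variable. The segment structure, dimensions, and mixing function are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Nonstationary sources: the variance of each component depends on an
# auxiliary variable u (here, a segment index) -- the kind of side
# information that makes nonlinear ICA identifiable.
n_segments, seg_len, n_comp = 5, 200, 2
u = np.repeat(np.arange(n_segments), seg_len)          # auxiliary variable
scales = rng.uniform(0.5, 2.0, size=(n_segments, n_comp))
s = rng.standard_normal((n_segments * seg_len, n_comp)) * scales[u]

# Unknown nonlinear mixing f (illustrative choice): observed data x = f(s).
A = rng.standard_normal((n_comp, n_comp))
x = np.tanh(s @ A)

# Self-supervised (contrastive) dataset: true pairs (x_t, u_t) labelled 1,
# pairs with a permuted auxiliary variable labelled 0. Training a nonlinear
# classifier on this binary task is the kind of self-supervised scheme the
# talk's theoretical framework analyzes.
u_shuffled = rng.permutation(u)
pairs = np.vstack([np.column_stack([x, u]),
                   np.column_stack([x, u_shuffled])])
labels = np.concatenate([np.ones(len(u)), np.zeros(len(u))])
```

A nonlinear classifier (e.g. an MLP) trained to separate the two classes would, under the theory discussed in the talk, recover the sources up to trivial indeterminacies in its last hidden layer.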
Registration is free but mandatory, within the limit of available seats
For security reasons, unregistered participants will not be admitted to the conference room