Information Leakage in Deep Learning
Banner image: rawpixel.com / Freepik

The «Information Leakage in Deep Learning» project

An analysis of state-of-the-art machine learning models to uncover possible attacks and quantify information leakage.
The project

Machine learning is a powerful way of acquiring knowledge from training data and identifying relationships between variables that make it possible to predict future outcomes. Deep learning, in particular, has proved remarkably effective at discovering structure in high-dimensional data.
In many real-world applications, the training data include potentially sensitive information that needs to be kept confidential. However, once trained, the software is typically made available to third parties, either directly, by selling the software itself, or indirectly, by allowing it to be queried. This access can be used to extract sensitive information about the training data, which is still present, although hidden in the parameters of the trained model. This raises the fundamental question of how much information an attacker can extract from the trained software.


The overall goal of the project is to develop a fundamental understanding, supported by experimental validation, of how information about the training data leaks from deep learning systems.


The project is directed by Pablo Piantanida, professor at CentraleSupélec, and Catuscia Palamidessi, research director at Inria. Together with Georg Pichler (postdoctoral researcher at TU Wien) and Marco Romanelli and Ganesh del Grosso (PhD students at Inria), they aim to:

  • Analyze in depth the state-of-the-art attacks on privacy in learning systems, in particular model inversion, attribute inference, and membership inference attacks (a minimal sketch of such an attack is given after this list).
  • Based on the uncovered attacks, develop appropriate measures to quantify the amount of sensitive information that can be retrieved from the trained software. The resulting leakage measures will serve as a basis for the formal analysis of attacks and for the development of robust mitigation techniques.
  • Explore strategies to reduce the privacy threats and minimize the potential information leakage of a trained model while preserving its utility as much as possible. Suitable training strategies, as well as appropriate architectural criteria, will be investigated.
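
To make the first two objectives concrete, the sketch below illustrates a simple confidence-thresholding membership inference attack against a deliberately overfitted classifier, and reports the gap between its hit rate on training members and its false-alarm rate on held-out points. This is only an illustration under assumed choices (synthetic data, a random-forest target model, a fixed 0.9 threshold); it is not the project's own methodology.

# Minimal sketch of a confidence-thresholding membership inference attack.
# All choices below (data, model, threshold) are assumptions made for the example.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic data standing in for a sensitive training set.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_member, X_nonmember, y_member, y_nonmember = train_test_split(X, y, test_size=0.5, random_state=0)

# Target model, trained only on the "member" half; with default settings it overfits,
# so training members tend to receive higher confidence than non-members.
target = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_member, y_member)

def guess_membership(model, inputs, threshold=0.9):
    # Guess "member" whenever the top-class confidence exceeds the threshold.
    confidence = model.predict_proba(inputs).max(axis=1)
    return confidence >= threshold

tpr = guess_membership(target, X_member).mean()     # members correctly flagged
fpr = guess_membership(target, X_nonmember).mean()  # non-members wrongly flagged
print(f"hit rate on members: {tpr:.2f}, false alarms on non-members: {fpr:.2f}")
print(f"membership advantage (hit rate - false alarms): {tpr - fpr:.2f}")

The gap between these two rates is one of the simplest leakage indicators used in the literature, and quantities of this kind are a natural starting point for the formal leakage measures mentioned above.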

«We propose an analysis of state-of-the-art machine learning models to uncover possible attacks and quantify information leakage. We will leverage recent results that are at the forefront of current research, where no standard tools and techniques are available yet. We aim both at developing these tools and at using them for analyzing white-box and black-box threat models», concludes Pablo Piantanida.

Contacts