« Information Leakage in Deep Learning » project

Machine learning is a powerful way of acquiring knowledge from data and identifying relationships between variables in order to predict future outcomes.

The overall aim of the project is to develop a fundamental understanding, backed by experimental validation, of how information about training data leaks from deep learning networks.

Deep learning, in particular, has shown itself capable of discovering structure in complex data. In many real-world applications, the data used for learning includes potentially sensitive information, which must remain confidential. However, once learning is complete, the trained model is usually made available to third parties, either directly or indirectly by allowing it to be queried. This access can be used to extract sensitive information about the training data, which is always present, albeit hidden, in the parameters that define the model. This raises the fundamental question of how much information an attacker can extract from a neural network. The project is led by Pablo Piantanida, professor at CentraleSupélec, and Catuscia Palamidessi, research director at Inria. With the collaboration of Georg Pichler (post-doc, TU Wien), Marco Romanelli and Ganesh del Grosso (PhD students at Inria), they aim to:

  • Analyze privacy attacks on learning systems, in particular “model inversion”, “attribute inference” and “membership inference” attacks (a minimal membership-inference sketch follows this list);

  • Based on the attacks considered, develop appropriate measures to quantify the amount of sensitive information that can be extracted from a neural network. The resulting measures of information leakage will form the basis for formal analysis of attacks and the development of robust protection techniques;

  • Explore strategies for reducing threats to privacy and minimizing potential information leakage from a neural network while preserving its utility as far as possible. Appropriate training procedures and architectural criteria will also be investigated.
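
To make the membership-inference threat concrete, here is a minimal, illustrative sketch in Python; it is not the project's method. A black-box attacker who only sees the model's output probabilities guesses that a record was used for training when the model is unusually confident about its true label. The function names, the toy model and the 0.9 threshold are hypothetical and chosen only for illustration.

    # Illustrative sketch of a black-box membership inference attack via
    # confidence thresholding: models tend to be more confident on records
    # they were trained on. The 0.9 threshold is an arbitrary example value.
    def confidence_threshold_attack(predict_proba, x, y_true, threshold=0.9):
        """Guess whether (x, y_true) was part of the model's training set."""
        probs = predict_proba(x)        # single black-box query to the model
        confidence = probs[y_true]      # model's belief in the true label
        return confidence >= threshold  # True means "guessed to be a member"

    # Toy stand-in for a deployed model: overconfident on a memorized record.
    def toy_predict_proba(x):
        return [0.97, 0.02, 0.01] if x == "memorized_record" else [0.40, 0.35, 0.25]

    print(confidence_threshold_attack(toy_predict_proba, "memorized_record", 0))  # True
    print(confidence_threshold_attack(toy_predict_proba, "fresh_record", 0))      # False

Attacks studied in the literature are considerably more sophisticated (e.g. training shadow models), but the same black-box query interface is all they require.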

“We propose an analysis of machine learning models to detect possible attacks and quantify information leakage. We will use recent results on deep learning attacks, for which no standard tools or techniques are yet available. Our aim is both to develop these tools and to use them to analyze white-box and black-box threat models”, explains Pablo.
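
One common way to quantify the leakage exposed by such a membership-inference attack, used here purely as an illustrative example rather than as the project's chosen measure, is the attacker's advantage: the rate at which training records are correctly flagged as members minus the rate at which unseen records are wrongly flagged. An advantage of 0 means the attacker does no better than chance; 1 means perfect inference.

    # Illustrative leakage measure: the attacker's membership advantage,
    # i.e. true-positive rate on training members minus false-positive rate
    # on held-out non-members.
    def membership_advantage(guesses_on_members, guesses_on_non_members):
        """Each list holds the attacker's True/False 'is a member' guesses."""
        tpr = sum(guesses_on_members) / len(guesses_on_members)
        fpr = sum(guesses_on_non_members) / len(guesses_on_non_members)
        return tpr - fpr  # 0: no measurable leakage, 1: perfect inference

    # Toy numbers: the attack flags 7 of 8 training records and 2 of 8 unseen ones.
    print(membership_advantage([True] * 7 + [False],
                               [True] * 2 + [False] * 6))  # prints 0.625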


Contacts: Pablo Piantanida | Catuscia Palamidessi