Privacy-Preserving Machine Learning

This project explores privacy-preserving approaches to machine learning, covering homomorphic encryption, differential privacy, secure multi-party computation, and trusted execution environments.
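
As a small illustration of one of these techniques, the sketch below shows the Laplace mechanism from differential privacy: a bounded query (here, a mean) is released with noise calibrated to its sensitivity, so no single record has much influence on the output. This is a generic textbook example rather than code from this project, and the function names and parameters are purely illustrative.

```python
import numpy as np

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
    """Release `true_value` with epsilon-differential privacy.

    Laplace noise with scale sensitivity / epsilon satisfies epsilon-DP
    for a query whose L1 sensitivity is `sensitivity`.
    """
    return true_value + np.random.laplace(loc=0.0, scale=sensitivity / epsilon)

# Privately release the mean of a dataset whose values lie in [0, 1];
# the mean of n such records has sensitivity 1 / n.
data = np.clip(np.random.rand(1000), 0.0, 1.0)
private_mean = laplace_mechanism(data.mean(), sensitivity=1.0 / len(data), epsilon=0.5)
print(f"true mean: {data.mean():.4f}, private mean: {private_mean:.4f}")
```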

I’m interested in both training and inference, with the aim of deploying machine learning models that can safely interact with sensitive data.
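
To make the secure-computation side of such inference concrete, here is a minimal additive-secret-sharing sketch in plain NumPy: a private input vector is split into random shares, each share passes through a public linear layer locally, and only the recombined result reveals the output. The modulus, helper names, and toy weights are assumptions for illustration; practical SMPC inference (e.g. in OpenMined's frameworks) additionally needs fixed-point encoding and protocols for non-linear layers.

```python
import numpy as np

Q = 2**31 - 1  # large prime modulus for the sharing scheme (illustrative choice)

def share(secret, n_parties=2):
    """Split an integer vector into additive shares that sum to `secret` mod Q."""
    shares = [np.random.randint(0, Q, size=secret.shape, dtype=np.int64)
              for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % Q)
    return shares

def reconstruct(shares):
    """Recombine additive shares into the underlying value."""
    return sum(shares) % Q

# A private input vector held by a data owner, and public integer weights.
x = np.array([3, 1, 4], dtype=np.int64)
W = np.array([[2, 0, 1],
              [1, 5, 2]], dtype=np.int64)

# Each party applies the public linear layer to its own share locally ...
x_shares = share(x)
y_shares = [(W @ s) % Q for s in x_shares]

# ... and only the recombined result equals the plaintext output W @ x.
assert np.array_equal(reconstruct(y_shares), (W @ x) % Q)
print(reconstruct(y_shares))  # [10 16]
```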

Related publications:

  • Ryffel et al., PPML workshop @NeurIPS 2018

This work was done in collaboration with Theo Ryffel and the members of the OpenMined project, whose contributions to democratising privacy-preserving machine learning are tremendous.