This project explores privacy-preserving approaches to machine learning. The techniques covered include homomorphic encryption, differential privacy, secure multi-party computation, and trusted execution environments.
I am interested in both training and inference, with the aim of deploying machine learning models that can safely interact with sensitive data.
One of my current focuses is the interaction between blockchains and federated learning, used to provide privacy-preserving audit trails and secure aggregation mechanisms.
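To give a flavour of what secure aggregation means in the federated learning setting, here is a minimal sketch of the classic additive-masking idea: each pair of clients agrees on a random mask that one adds and the other subtracts, so the server sees only masked updates, yet the masks cancel in the sum. This is an illustrative toy (integer updates, a hypothetical `pairwise_masks` helper, no dropout handling or key agreement), not the protocol used in any specific paper above.

```python
import random

MOD = 2**32  # arithmetic is done modulo a fixed modulus


def pairwise_masks(n_clients, modulus, seed=0):
    """Generate cancelling pairwise masks: masks[i][j] == -masks[j][i] (mod modulus)."""
    rng = random.Random(seed)
    masks = [[0] * n_clients for _ in range(n_clients)]
    for i in range(n_clients):
        for j in range(i + 1, n_clients):
            m = rng.randrange(modulus)
            masks[i][j] = m
            masks[j][i] = (-m) % modulus
    return masks


def mask_update(update, client, masks, modulus):
    """Each client hides its update behind the sum of its pairwise masks."""
    offset = sum(masks[client]) % modulus
    return (update + offset) % modulus


updates = [5, 11, 7]  # toy model updates from three clients
masks = pairwise_masks(len(updates), MOD)
masked = [mask_update(u, i, masks, MOD) for i, u in enumerate(updates)]

# The server only ever sees the masked values, yet their sum
# equals the true sum because every pairwise mask cancels out.
assert sum(masked) % MOD == sum(updates) % MOD
```

Real protocols layer key agreement, secret sharing for dropped-out clients, and authenticated channels on top of this cancellation trick.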
Related publications:
- Haralampieva et al., PPMLP @CCS 2020
- Cai et al., AIChain 2020
- Passerat-Palmbach et al., arXiv preprint, 2019
- Ryffel et al., PPML workshop @NeurIPS 2018
This work was done in collaboration with members of the OpenMined project, whose contributions to democratising Privacy-Preserving Machine Learning are tremendous, as well as researchers and students at Imperial College London and the ConsenSys Health team.