Privacy Preserving Machine Learning

This project explores privacy-preserving approaches to machine learning. The techniques covered include homomorphic encryption, differential privacy, secure multi-party computation, and trusted execution environments.
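As a small illustration of one of these techniques, here is a minimal sketch of differential privacy via the Laplace mechanism, applied to a counting query. The dataset, predicate, and epsilon value are hypothetical, chosen only to show the shape of the idea, not taken from this project:

```python
import math
import random

def laplace_noise(scale, rng=random):
    """Sample from Laplace(0, scale) using inverse-CDF sampling."""
    u = rng.random() - 0.5  # uniform in [-0.5, 0.5)
    sign = -1.0 if u < 0 else 1.0
    return -scale * sign * math.log(1.0 - 2.0 * abs(u))

def private_count(data, predicate, epsilon, rng=random):
    """Return a differentially private count of records matching predicate.

    A counting query has sensitivity 1 (adding or removing one record
    changes the count by at most 1), so Laplace noise with scale
    1/epsilon gives epsilon-differential privacy.
    """
    true_count = sum(1 for x in data if predicate(x))
    return true_count + laplace_noise(1.0 / epsilon, rng)

# Hypothetical usage: count records with value >= 50 under epsilon = 1.0.
rng = random.Random(0)
data = list(range(100))
noisy = private_count(data, lambda x: x >= 50, epsilon=1.0, rng=rng)
```

Smaller values of epsilon inject more noise and give stronger privacy; averaged over many runs, the noisy counts concentrate around the true count of 50.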

I’m interested in both training and inference, with the aim of deploying machine learning models that can safely interact with sensitive data.

Related publications:

This work was done in collaboration with members of the OpenMined project, whose contributions to democratising Privacy Preserving Machine Learning are tremendous, as well as researchers and students at Imperial College London and City University London.