Privacy-Preserving Machine Learning

This project explores privacy-preserving approaches to machine learning. The techniques covered include homomorphic encryption, differential privacy, secure multi-party computation, and trusted execution environments.
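As one illustration of the differential-privacy technique listed above, here is a minimal sketch of the Laplace mechanism for releasing a noisy statistic; the function name, the salary data, and the privacy parameter are illustrative, not taken from this project:

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """Release true_value with Laplace noise calibrated to sensitivity/epsilon,
    satisfying epsilon-differential privacy for queries with that sensitivity."""
    rng = rng or np.random.default_rng()
    scale = sensitivity / epsilon
    return float(true_value + rng.laplace(0.0, scale))

# Illustrative query: a private mean over salaries clipped to [0, 100_000].
salaries = np.array([52_000.0, 61_000.0, 47_000.0, 70_000.0])
sensitivity = 100_000 / len(salaries)  # worst-case change of the mean if one record changes
private_mean = laplace_mechanism(salaries.mean(), sensitivity, epsilon=1.0)
```

Smaller `epsilon` means a larger noise scale and stronger privacy; the noisy mean is safe to release because its distribution barely depends on any single individual's record.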

I’m interested in both training and inference, with the aim of deploying machine learning models that can safely interact with sensitive data.

One of my current focuses is the interaction between blockchains and federated learning to provide privacy-preserving audit trails and secure aggregation mechanisms.
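The secure aggregation idea mentioned above can be sketched with pairwise additive masks: each pair of clients shares a random mask that one adds and the other subtracts, so the masks cancel in the server's sum while individual updates stay hidden. This is a toy sketch with hypothetical names; real protocols derive the pairwise secrets via key agreement and handle client dropouts:

```python
import random

def pairwise_masks(client_ids, modulus, seed=0):
    """Build one mask per client from pairwise shared randomness.
    Masks sum to zero mod `modulus`, so they vanish in the aggregate."""
    rng = random.Random(seed)  # stand-in for pairwise agreed secrets
    masks = {c: 0 for c in client_ids}
    for i, a in enumerate(client_ids):
        for b in client_ids[i + 1:]:
            m = rng.randrange(modulus)          # secret shared by a and b
            masks[a] = (masks[a] + m) % modulus  # a adds the mask
            masks[b] = (masks[b] - m) % modulus  # b subtracts it
    return masks

def secure_sum(updates, modulus=2**32, seed=0):
    """Server sums masked client updates; pairwise masks cancel."""
    masks = pairwise_masks(list(updates), modulus, seed)
    masked = {c: (u + masks[c]) % modulus for c, u in updates.items()}
    return sum(masked.values()) % modulus  # equals the sum of raw updates

updates = {"alice": 3, "bob": 5, "carol": 7}
total = secure_sum(updates)  # 15, yet no single masked update reveals its input
```

The server only ever sees masked values, which are uniformly random on their own; privacy comes from the masks, and correctness from their cancellation.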

Related publications:

This work was done in collaboration with members of the OpenMined project, whose contributions to democratising Privacy-Preserving Machine Learning are tremendous, as well as with researchers and students at Imperial College London and the ConsenSys Health team.