This project explores privacy-preserving approaches to machine learning. The techniques explored include homomorphic encryption, differential privacy, secure multi-party computation, and trusted execution environments.
I am interested in both training and inference, with the aim of deploying machine learning models that can safely interact with sensitive data. A minimal sketch of one of these techniques is given below.
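To give a flavour of one of the techniques above, here is a minimal, purely illustrative sketch of the Laplace mechanism, a basic building block of differential privacy. The `laplace_mechanism` helper, the toy dataset, and the parameter values are all assumptions for illustration and are not taken from the publications listed below.

```python
import numpy as np

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
    """Return an epsilon-differentially-private estimate of `true_value`.

    Noise is drawn from a Laplace distribution with scale sensitivity / epsilon,
    the standard calibration for the Laplace mechanism.
    """
    scale = sensitivity / epsilon
    return true_value + np.random.laplace(loc=0.0, scale=scale)

# Toy example: privately release the count of records with age > 65.
# A counting query has L1 sensitivity 1, since adding or removing one
# record changes the count by at most 1.
ages = np.array([23, 67, 45, 71, 80, 34, 69])
true_count = int(np.sum(ages > 65))
private_count = laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.5)
print(f"true count: {true_count}, private count: {private_count:.2f}")
```

Smaller values of epsilon add more noise and give stronger privacy; in practice such mechanisms are composed and tracked across an entire training or inference pipeline rather than applied to a single query.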
Related publications:
- Ganescu et al., PPAI 2023
- Usynin et al., 2022
- Pereteanu et al., 2022
- Usynin et al., 2021
- Kaissis et al., 2021
- Haralampieva et al., PPMLP @ CCS 2020
- Ryffel et al., PPML workshop @ NeurIPS 2018
This work was done in collaboration with members of the OpenMined project, whose contributions to democratising privacy-preserving machine learning have been tremendous, as well as with researchers and students at Imperial College London and City University London.