Ever since my initial explorations of Federated Learning with the early design of the PySyft toolkit, I have continued to investigate various aspects of Federated and Collaborative Learning.
Two problems in particular have caught my interest: 1) the quantitative evaluation of contributions to a federated model, and 2) the aspects of the training environment that affect the quality of the federated learning process. Our recent paper ARIA explores the effect of the interaction between Architectures, Aggregation methods and Initialisations on the success of Federated Learning in a medical imaging task.
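To make the aggregation step concrete, here is a minimal sketch of federated averaging (FedAvg), the canonical aggregation method: each client's parameters are averaged, weighted by local dataset size. The function name, clients, and numbers are illustrative, not taken from the papers below.

```python
import numpy as np

def fed_avg(client_params, client_sizes):
    """Average client parameter vectors, weighted by local dataset size."""
    stacked = np.stack(client_params)                  # (n_clients, n_params)
    weights = np.array(client_sizes) / sum(client_sizes)  # per-client share
    return weights @ stacked                           # weighted average

# Hypothetical round: three clients with different data volumes.
clients = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
sizes = [100, 200, 100]
global_params = fed_avg(clients, sizes)  # → array([3., 4.])
```

Alternative aggregation methods (e.g. median-based or trimmed-mean rules) replace the weighted average above while keeping the same round structure.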
Related publications:
- Siomos et al., 2023
- Siomos et al., 2021
- Cai et al., AIChain 2020
- Ryffel et al., PPML workshop @NeurIPS 2018
This work was done in collaboration with members of the OpenMined project, whose contributions to democratising Privacy Preserving Machine Learning are tremendous, as well as researchers and students at Imperial College London and City University London.
Logo borrowed from https://federated.withgoogle.com