Privacy-preserving AI in Healthcare


Location: Online webinar

The webinar and live discussion will provide an overview of techniques for secure, federated and privacy-preserving machine learning with a focus on healthcare and medical imaging.

The widespread clinical deployment of artificial intelligence algorithms will require large-scale patient datasets for training and validation. However, patient privacy concerns, alongside legal and ethical requirements, mandate the protection of personally identifiable health data. At the same time, algorithm creators and owners wish to protect their models against unwarranted exploitation or theft.

To reconcile these needs for privacy and asset protection with the training of clinically useful machine learning models on large datasets, secure and privacy-preserving machine learning techniques are being developed.

The webinar will introduce viewers to these methods, including federated learning, differential privacy, homomorphic encryption, secure multi-party computation, on-device privacy, and distributed computation approaches with cryptographic security guarantees, and will provide insight into their technical challenges and limitations.
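To give a concrete, if simplified, flavour of two of these ideas, the sketch below combines federated averaging with per-client update clipping and Gaussian noise in the spirit of differential privacy. It is a minimal illustration in plain NumPy, not the webinar's material or a production-grade implementation; all function names, hyperparameters, and the synthetic hospital data are invented for this example.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(global_weights, features, labels, lr=0.1, epochs=5):
    """One client's plain logistic-regression training on data that never leaves the site."""
    w = global_weights.copy()
    for _ in range(epochs):
        preds = 1.0 / (1.0 + np.exp(-features @ w))          # sigmoid predictions
        grad = features.T @ (preds - labels) / len(labels)   # mean gradient
        w -= lr * grad
    return w - global_weights                                # only the update is shared

def privatize(update, clip_norm=1.0, noise_std=0.1):
    """Clip the update's L2 norm and add Gaussian noise before it is shared."""
    scale = min(1.0, clip_norm / (np.linalg.norm(update) + 1e-12))
    return update * scale + rng.normal(0.0, noise_std, size=update.shape)

# Three hypothetical hospitals with synthetic local datasets (features, binary labels).
dim = 10
clients = [(rng.normal(size=(50, dim)), rng.integers(0, 2, size=50)) for _ in range(3)]

global_w = np.zeros(dim)
for _ in range(20):  # communication rounds
    updates = [privatize(local_update(global_w, X, y)) for X, y in clients]
    global_w += np.mean(updates, axis=0)  # the server only ever sees clipped, noised updates
```

In a real deployment the averaging step would itself be protected, for example with secure aggregation or homomorphic encryption, so that the coordinating server never sees any individual site's update in the clear.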

The webinar is suited to a broad audience, including healthcare professionals, machine learning researchers and practitioners, and researchers in associated disciplines such as data ethics or systems security. A prior understanding of machine learning is desirable, but no knowledge of cryptography is required. The speakers are experts in healthcare and radiology, machine learning, secure and private artificial intelligence, and blockchain systems. Viewers will have the opportunity to ask questions beforehand and during the open panel discussion at the end of the webinar.