Differential Privacy for Modern Deep Learning Models

Location
B9, L2, R2325

Abstract

To protect the privacy of training data in deep learning, one line of work proposes to use Differential Privacy (DP). Over recent years, a substantial body of research has emerged, proposing a diverse array of differentially private training algorithms tailored to various deep learning models. However, previous work has focused primarily on standard deep neural networks (DNNs), with limited attention paid to more advanced settings such as Graph Neural Networks (GNNs) and Federated Learning (FL). In this talk, I will introduce our recent work on node-level differentially private GNNs and on private, Byzantine-resilient Federated Learning algorithms.
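
As background (the details below are not from the talk itself): the canonical algorithm for differentially private deep learning is DP-SGD (Abadi et al., 2016), which clips each per-example gradient to a norm bound and adds calibrated Gaussian noise before the parameter update. The sketch below illustrates one such step in PyTorch; the function name, hyperparameters, and microbatch-style loop are illustrative choices, and privacy accounting (tracking the overall epsilon spent across steps) is omitted for brevity.

    import torch

    def dp_sgd_step(model, loss_fn, batch_x, batch_y,
                    lr=0.1, clip_norm=1.0, noise_multiplier=1.0):
        # One DP-SGD step (sketch): clip each per-example gradient to
        # clip_norm, sum them, add Gaussian noise with standard deviation
        # noise_multiplier * clip_norm, then average and update in place.
        params = [p for p in model.parameters() if p.requires_grad]
        summed = [torch.zeros_like(p) for p in params]

        for x, y in zip(batch_x, batch_y):  # per-example gradients
            loss = loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0))
            grads = torch.autograd.grad(loss, params)
            total_norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
            scale = (clip_norm / (total_norm + 1e-12)).clamp(max=1.0)
            for s, g in zip(summed, grads):
                s.add_(g * scale)

        with torch.no_grad():
            for p, s in zip(params, summed):
                noise = torch.randn_like(s) * noise_multiplier * clip_norm
                p.add_(-(lr / len(batch_x)) * (s + noise))

The per-example clipping bounds each individual example's influence on the update (its sensitivity), which is what allows the Gaussian noise to yield a formal DP guarantee. Extending this idea to node-level privacy in GNNs is harder, since a single node can influence the computations of many of its neighbors.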

Brief Biography

Di Wang is an Assistant Professor of Computer Science and Adjunct Professor of Statistics at the King Abdullah University of Science and Technology (KAUST). Before that, he received his PhD in Computer Science and Engineering from the State University of New York (SUNY) at Buffalo, and he obtained his BS and MS degrees in mathematics from Shandong University and the University of Western Ontario, respectively. His research areas include privacy-preserving machine learning, interpretability, machine learning theory, and trustworthy machine learning.
