Federated learning (FL) is increasingly deployed across multiple clients to train a shared model over decentralized data. To address privacy concerns, FL systems must protect clients' data from being revealed during training, and must also limit data leakage through trained models when those models are exposed to untrusted domains. However, existing FL systems (with distributed differential privacy) become impractical in the presence of client dropout, yielding either weak privacy guarantees or degraded training accuracy. In addition, existing FL systems focus on safeguarding the privacy of training data, but not on protecting the confidentiality of the models being trained, which are increasingly of high business value. In this talk, I will present two pieces of our recent work that address these issues.
Ruichuan is a Distinguished Member of Technical Staff and a Tech Lead at Nokia Bell Labs. Previously, he was a postdoctoral researcher at the Max Planck Institute for Software Systems. He received his Ph.D. in Computer Science from Peking University. Ruichuan's current research centers on cloud computing, machine learning systems, decentralized systems, and privacy-preserving technologies.