Recent research has shown that most existing machine learning algorithms are vulnerable to various privacy attacks. An effective way to defend against these attacks is to enforce differential privacy during the learning process. As a rigorous scheme for privacy preservation, Differential Privacy (DP) has become a standard for private data analysis. Despite its rapid theoretical development, DP's adoption in the machine learning community remains slow due to various challenges posed by the data, the privacy models, and the learning tasks. In this talk, I will give a brief introduction to DP and, using the Empirical Risk Minimization (ERM) problem as an example, show how to overcome these challenges in the DP model. In particular, I will first discuss how to overcome the high-dimensionality challenge of the data for sparse linear regression in the local DP (LDP) model. Then, I will discuss the challenge posed by the non-interactive LDP model and present a series of results that reduce the exponential sample complexity of ERM. Next, I will present techniques for achieving DP for ERM with non-convex loss functions. Finally, I will discuss future research directions along these lines.
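As a small illustration of the kind of guarantee DP provides (not part of the talk's specific results), the sketch below shows the classical Laplace mechanism: a query answer is released with noise scaled to the query's sensitivity divided by the privacy parameter epsilon. The function name and the mean-query example are illustrative choices, not from the abstract.

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """Release true_value with epsilon-DP by adding Laplace noise
    of scale sensitivity / epsilon (the standard Laplace mechanism)."""
    rng = np.random.default_rng() if rng is None else rng
    return true_value + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

# Example: privately release the mean of n values in [0, 1].
# Changing one record shifts the mean by at most 1/n, so the
# sensitivity of the mean query is 1/n.
data = np.clip(np.random.default_rng(0).random(1000), 0.0, 1.0)
n = len(data)
private_mean = laplace_mechanism(data.mean(), sensitivity=1.0 / n, epsilon=1.0)
```

With epsilon = 1 and n = 1000, the noise scale is 0.001, so the released mean is close to the true mean while still satisfying the formal epsilon-DP guarantee for this single query.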
Di Wang is an Assistant Professor of Computer Science at KAUST and the director of the Privacy-Awareness, Responsibility and Trustworthy (PART) Lab. Before that, he obtained his Ph.D. degree from the Department of Computer Science and Engineering at the State University of New York (SUNY) at Buffalo, and his BS and MS degrees in mathematics from Shandong University and the University of Western Ontario, respectively. His research areas include trustworthy machine learning (differential privacy, fairness, interpretable machine learning), adversarial machine learning, robust estimation, and high-dimensional statistics.