Recent research has shown that most existing machine learning algorithms are vulnerable to various privacy attacks. An effective way to defend against these attacks is to enforce differential privacy during the learning process. As a rigorous scheme for privacy preservation, differential privacy has now become a standard for private data analysis. In this talk, I will present some recent developments in trustworthy machine learning from a differential privacy perspective. In the first part, I will discuss some pitfalls and trustworthiness issues (such as resilience and fairness) in current differentially private machine learning algorithms. I will then discuss how to apply the idea of differential privacy to other topics in trustworthy machine learning, such as adversarial machine learning and machine unlearning.
Di Wang is an Assistant Professor of Computer Science at KAUST and the director of the Privacy-Awareness, Responsibility and Trustworthy (PART) Lab. Before that, he obtained his Ph.D. from the Department of Computer Science and Engineering at the State University of New York (SUNY) at Buffalo, and his BS and MS degrees in mathematics from Shandong University and the University of Western Ontario, respectively. His research areas include trustworthy machine learning (differential privacy, fairness, interpretable machine learning), adversarial machine learning, robust estimation, and high-dimensional statistics.