Differential Privacy (DP) allows for rich statistical and machine learning analysis and is now becoming a gold standard for private data analysis. Despite the notable success of this theory, existing DP tools are largely limited to regular datasets, i.e., datasets that are assumed to be clean and normalized before DP algorithms are applied. However, compared with regular data, irregular data is increasingly common in modern statistical analysis and machine learning problems. In this talk, I will discuss some recent developments in DP learning with one type of irregular data, namely heavy-tailed data. Specifically, I will talk about the Empirical Risk Minimization (ERM) problem, sparse linear regression, latent variable models, and multi-armed bandits (MAB), and show the gaps between the private and non-private cases, as well as the gaps between regular data and heavy-tailed data. Finally, I will also mention some future directions.
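To give a flavor of why heavy-tailed data complicates DP, here is a minimal sketch (not taken from the talk): the standard Laplace mechanism requires a bounded sensitivity, which heavy-tailed data does not provide directly, so a common workaround is to clip values before adding noise, trading bias for a bounded sensitivity. The function name, the clipping threshold, and the Cauchy sample below are all illustrative choices.

```python
import numpy as np

def dp_clipped_mean(data, clip, epsilon, rng):
    """Release an epsilon-DP estimate of the mean of `data`.

    Values are clipped to [-clip, clip] so that changing one record
    moves the mean by at most 2*clip/n (bounded sensitivity), even if
    the underlying distribution is heavy-tailed. Laplace noise is then
    calibrated to that sensitivity.
    """
    n = len(data)
    clipped = np.clip(data, -clip, clip)
    sensitivity = 2.0 * clip / n  # worst-case change from one record
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
    return clipped.mean() + noise

rng = np.random.default_rng(0)
# Heavy-tailed sample: standard Cauchy draws have no finite mean or variance,
# so an unclipped sensitivity-based mechanism would be ill-defined here.
data = rng.standard_cauchy(10_000)
estimate = dp_clipped_mean(data, clip=10.0, epsilon=1.0, rng=rng)
```

The clipping step is exactly where the bias-versus-privacy tension arises for heavy-tailed data, which is the kind of gap between the regular and heavy-tailed settings the talk examines.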
Di Wang is currently an Assistant Professor at the King Abdullah University of Science and Technology (KAUST). Before that, he received his PhD in Computer Science and Engineering from the State University of New York (SUNY) at Buffalo. He obtained his BS and MS degrees in Mathematics from Shandong University and the University of Western Ontario, respectively. During his PhD studies, he was a visiting student at the University of California, Berkeley, Harvard University, and Boston University. His research areas include differentially private machine learning, adversarial machine learning, interpretable machine learning, and robust estimation and optimization. He has received the SEAS Dean's Graduate Achievement Award and the Best CSE Graduate Research Award from SUNY Buffalo.