By David Murphy
Di Wang, assistant professor of computer science, joined KAUST in January 2021 to take up his new role as principal investigator of the KAUST Privacy-Awareness, Responsibility and Trustworthy (PART) Lab. Wang’s research interests include machine learning (ML), security, theoretical computer science, and data mining.
Prior to joining KAUST, he obtained his Ph.D. degree in computer science and engineering ('20) from the State University of New York (SUNY) at Buffalo, U.S.; his M.S. in mathematics ('15) from the University of Western Ontario, Canada; and his B.S. in mathematics and applied mathematics ('14) from Shandong University, China.
Wang’s interest in trustworthy ML was first piqued during his Ph.D. studies at SUNY. Not long after enrolling in a machine learning course at the university, he found what would become his core research topic: the societal concerns surrounding ML.
“Generally speaking, my research focuses on solving issues and societal concerns arising from ML and data mining algorithms, such as privacy, fairness, robustness, transferability, and transparency,” he explained.
“My PART team members and I intend to develop learning algorithms that are not only accurate but are also private, fair, explainable, and robust. We also expect to provide rigorous mathematical and/or cryptographical guarantees of these algorithms.
“Since trustworthy ML is closely related to other fields, such as biomedicine and healthcare, I realize that working in such a collaborative research environment can enable me to produce impactful work. I see KAUST as a highly interdisciplinary research environment where I can take a quantum leap in my career.”
Understanding the role of modern machine learning algorithms
Despite their increasing deployment and ability to shape the world we live in, ML algorithms can also raise reliability problems, particularly when applied to critical or sensitive societal decisions: for example, whether a patient should be administered a new drug, whether an applicant should be approved for a loan, or whether an autonomous vehicle should stop in a particular scenario.
With these issues at the forefront of his thinking, Wang believes the responsibility for the informed application and monitoring of ML algorithms lies firmly on the shoulders of researchers and practitioners.
“These decisions must: a) preserve privacy, b) be resilient to corrupted, irregular, or inconsistent training data, and c) be fair, accountable, and/or transparent,” he explained. “The goal of my research at KAUST is to mitigate these trustworthiness issues and societal concerns in ML algorithms, and to ensure that learning algorithms are responsible and trustworthy.
“My research includes three perspectives: theory, practice, and systems. For the theoretical part, our first focal point is providing rigorous mathematical guarantees for our algorithms. We also propose developing algorithms for ML problems in next-generation computing, namely quantum computing. For the practical part, we aim to develop trustworthy learning algorithms for biomedical, healthcare, genetic, and social data. And finally, we will focus on deploying trustworthy learning systems for healthcare and other applicable industries,” he concluded.
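To give a flavor of the kind of rigorous privacy guarantee Wang describes, the sketch below illustrates differential privacy via the classic Laplace mechanism: releasing the mean of a sensitive dataset with calibrated noise so that any single record's influence is provably masked. This is a minimal illustrative example of the general technique, not the PART Lab's own method; the function name, parameters, and dataset are hypothetical.

```python
import numpy as np

def private_mean(values, lower, upper, epsilon, rng=None):
    """Release the mean of `values` with epsilon-differential privacy.

    Values are clipped to [lower, upper] so the sensitivity of the mean
    (the most one record can shift it) is bounded by (upper - lower) / n.
    Laplace noise with scale sensitivity / epsilon is then added.
    """
    rng = rng if rng is not None else np.random.default_rng()
    clipped = np.clip(np.asarray(values, dtype=float), lower, upper)
    n = len(clipped)
    sensitivity = (upper - lower) / n  # worst-case change from altering one record
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
    return clipped.mean() + noise

# Hypothetical usage: release the average age of a small cohort.
rng = np.random.default_rng(0)
ages = [34, 29, 41, 52, 38, 45, 27, 33]
released = private_mean(ages, lower=18, upper=90, epsilon=1.0, rng=rng)
```

Smaller values of `epsilon` give stronger privacy but noisier answers; this accuracy-privacy trade-off is exactly the kind of property that admits the rigorous mathematical guarantees the lab aims for.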