Improving Interpretation Faithfulness for Transformers

Location
Building 9, Level 2, Room 2325, Hall 2

Abstract

The attention mechanism has become a standard fixture in most state-of-the-art NLP, vision, and GNN models, not only because of the outstanding performance it delivers, but also because it offers a plausible innate explanation for the behavior of neural architectures, which is otherwise notoriously difficult to analyze. However, recent studies show that attention is unstable against randomness and perturbations during training or testing, such as random seeds and slight perturbations of the input or embedding vectors, which prevents it from serving as a faithful explanation tool. A natural question is therefore whether we can find a substitute for the current attention mechanism that is more stable while preserving attention's most important characteristics for explanation and prediction. In this talk, I will present some of our recent work on improving interpretation faithfulness for Transformers on different types of data: text, images, and graphs.
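The instability the abstract describes can be illustrated with a minimal sketch, not taken from the talk itself: the PyTorch snippet below slightly perturbs input embeddings and measures how much the softmax attention weights of a single self-attention layer shift. All dimensions, variable names, and the perturbation scale are illustrative assumptions.

# Minimal sketch (illustrative, not the speaker's method): a tiny
# perturbation of the input embeddings can noticeably change the
# attention distribution of a single self-attention layer.
import torch
import torch.nn.functional as F

torch.manual_seed(0)

d_model, seq_len = 64, 8                      # assumed toy sizes
W_q = torch.randn(d_model, d_model) / d_model**0.5
W_k = torch.randn(d_model, d_model) / d_model**0.5

def attention_weights(x):
    # Standard scaled dot-product attention scores -> softmax weights.
    q, k = x @ W_q, x @ W_k
    scores = q @ k.transpose(-2, -1) / d_model**0.5
    return F.softmax(scores, dim=-1)

x = torch.randn(seq_len, d_model)             # "clean" token embeddings
x_pert = x + 1e-2 * torch.randn_like(x)       # slight embedding perturbation

a, a_pert = attention_weights(x), attention_weights(x_pert)

# Total-variation distance between the two attention distributions per
# query; a faithful explanation would keep this small for small inputs.
tv = 0.5 * (a - a_pert).abs().sum(dim=-1)
print(f"max TV distance across queries: {tv.max().item():.4f}")

Running the sketch with different seeds or perturbation scales gives a quick sense of how sensitive the attention map is, which is the kind of unfaithfulness the talk's proposed substitutes aim to reduce.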

Brief Biography

Di Wang is currently an Assistant Professor of Computer Science and Adjunct Professor of Statistics at the King Abdullah University of Science and Technology (KAUST). Before that, he received his PhD in Computer Science and Engineering from the State University of New York (SUNY) at Buffalo, and he obtained his BS and MS degrees in mathematics from Shandong University and the University of Western Ontario, respectively. His research areas include privacy-preserving machine learning, interpretability, machine learning theory, and trustworthy machine learning.
