Improving Interpretation Faithfulness for Transformers
Di Wang, Assistant Professor, Computer Science
Nov 20, 11:30 - 12:30
B9, L2, H2
Tags: transformers, nlp, interpretation faithfulness
The attention mechanism has become a standard fixture in most state-of-the-art NLP, vision, and GNN models, not only because of the performance gains it delivers but also because it offers a plausible built-in explanation for the behavior of neural architectures, which is otherwise notoriously difficult to analyze. However, recent studies show that attention is unstable to randomness and perturbations during training or testing, such as changes of random seed or slight perturbations of the input or embedding vectors, which prevents it from serving as a faithful explanation tool. A natural question is therefore whether we can find a substitute for the current attention mechanism that is more stable while preserving attention's most important explanatory and predictive characteristics.
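The instability described above can be illustrated with a toy experiment (not the speaker's method; all names, dimensions, and the noise scale below are illustrative assumptions): compute standard scaled dot-product attention weights from a few toy token embeddings, then recompute them after adding a small random perturbation to the embeddings and compare the two distributions.

```python
# Minimal sketch of attention instability under input perturbation.
# Everything here (toy embeddings, projection matrices, epsilon) is assumed
# for illustration only.
import numpy as np

def attention_weights(Q, K):
    """softmax(Q K^T / sqrt(d)) -- standard scaled dot-product attention."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    exp = np.exp(scores)
    return exp / exp.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 16))            # 5 toy token embeddings of dimension 16
W_q = 0.5 * rng.normal(size=(16, 16))   # toy query projection
W_k = 0.5 * rng.normal(size=(16, 16))   # toy key projection

base = attention_weights(X @ W_q, X @ W_k)

# Slightly perturb the embeddings and recompute the attention weights.
eps = 1e-2
X_pert = X + eps * rng.normal(size=X.shape)
pert = attention_weights(X_pert @ W_q, X_pert @ W_k)

# If attention is used as an explanation, a shift in these weights that is
# large relative to eps is exactly the faithfulness concern in the abstract.
print("max |change in attention weight|:", np.abs(base - pert).max())
```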