Towards Self-explainable Deep Learning Models
- Prof. Michael Kampffmeyer, UiT The Arctic University of Norway
B1 L4 R 4102
Overview
Abstract
Despite the advancements deep learning models have brought to solving complex real-world problems, their lack of transparency remains a significant barrier, particularly to deploying them in safety-critical contexts. This has led to an increased focus on explainable artificial intelligence (XAI), which seeks to demystify a model's decisions in order to increase trustworthiness. Within the XAI domain, two primary approaches have emerged: one that retroactively explains a model's predictions (post-hoc explanations), and another that integrates explanation generation into the model itself (self-explanations). In this talk, we will delve into the latter, highlighting the development and potential of inherently self-explanatory models.
Brief Biography
Michael Kampffmeyer is an Associate Professor at UiT The Arctic University of Norway. He is also a Senior Research Scientist II at the Norwegian Computing Center in Oslo. His research interests include medical image analysis, explainable AI, and learning from limited labels (e.g., clustering, few-/zero-shot learning, domain adaptation, and self-supervised learning). Kampffmeyer received his PhD degree from UiT in 2018. He has had long-term research stays in the Machine Learning Department at Carnegie Mellon University and the Berlin Center for Machine Learning at the Technical University of Berlin. He is a general chair of the annual Northern Lights Deep Learning Conference (NLDL).