Deep Continual Learning

Event Start
Event End
Location
KAUST

Abstract

In this seminar, I will present some of the work I have done on continual deep learning, one of the research topics at the Vision-CAIR group. Continual learning aims to learn new tasks without forgetting previously learned ones. This is especially challenging when data from previous tasks is no longer accessible and when the model has a fixed capacity, as is the case in modern deep learning techniques. To narrow the gap toward human-level continual learning, we have extended continual deep learning from multiple perspectives. Hebb's learning theory from biology is famously summarized as "Cells that fire together wire together." Inspired by this theory, we proposed Memory Aware Synapses (ECCV 2018) to quantify and reduce machine forgetting in a way that enables leveraging unlabeled data, which was not possible with earlier techniques. We later developed a Bayesian approach, appearing at ICLR 2020, in which we explicitly model parameter uncertainty to orchestrate forgetting in continual learning. In our ICLR 2019 and ACCV 2018 works, we showed that task descriptors/language can be leveraged in continual learning of visual tasks to improve learning efficiency and enable zero-shot task transfer. Beyond computer vision tasks, we recently developed an approach, also appearing at ICLR 2020, that we call "Compositional Language Continual Learning": we showed that disentangling syntax from semantics enables better compositional Seq2Seq learning and can significantly alleviate forgetting in tasks like machine translation. In the talk, I will go over these techniques and shed some light on future research possibilities.
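As a rough illustration of the regularization idea behind Memory Aware Synapses, the sketch below estimates per-parameter importance from the sensitivity of the network's output norm, which needs no labels, and then penalizes changes to important parameters. This is a minimal PyTorch sketch under assumed interfaces (the model, dataloader, and lambda value are hypothetical placeholders), not the authors' released implementation.

import torch

def estimate_importance(model, dataloader, device="cpu"):
    # Accumulate per-parameter importance as the (approximate) average
    # gradient magnitude of the squared L2 norm of the network output.
    # No labels are used, which is why unlabeled data suffices.
    importance = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    n_samples = 0
    for x, _ in dataloader:  # labels, if present, are ignored
        x = x.to(device)
        model.zero_grad()
        model(x).pow(2).sum().backward()  # sensitivity of the output norm
        for n, p in model.named_parameters():
            if p.grad is not None:
                importance[n] += p.grad.abs()
        n_samples += x.size(0)
    return {n: imp / n_samples for n, imp in importance.items()}

def mas_penalty(model, importance, old_params, lam=1.0):
    # Quadratic penalty discouraging changes to parameters that were
    # important for previous tasks; added to the new task's loss.
    loss = 0.0
    for n, p in model.named_parameters():
        loss = loss + (importance[n] * (p - old_params[n]).pow(2)).sum()
    return lam * loss

When training on a new task, the total loss would be the task loss plus mas_penalty(model, importance, old_params), with old_params holding detached copies of the parameters saved after the previous task.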

Brief Biography

Mohamed Elhoseiny is an Assistant Professor of Computer Science at the Visual Computing Center at KAUST. He received his Ph.D. from Rutgers University in October 2016; during his doctoral studies, he interned at SRI International in 2014 and at Adobe Research in 2015-2016.

From 2016, he spent more than two years as a Postdoctoral Researcher at Facebook AI Research (FAIR) at Facebook's headquarters in Menlo Park, California. In 2019, he spent several months at the Baidu Silicon Valley AI Lab (SVAIL) until September 2019, and he has since been a visiting faculty affiliate at the Stanford University CS department.

His primary research interests are in core machine learning and computer vision, mostly from a data-efficient deep learning perspective. His six-year line of work on zero-shot learning (understanding unseen visual classes) was featured at the United Nations Biodiversity Conference in 2018 (an audience of ~10,000 from more than 192 countries). His creative AI research projects have been recognized with the best paper award at the ECCV 2018 Workshop on Fashion and Art, TV coverage on the HBO Silicon Valley series (2018), press coverage in New Scientist magazine and MIT Technology Review (2017, 2018), a talk at the Facebook F8 conference (2018), and a highlight in an official FAIR video (2018). At the first AI Artathon in 2020, he served alongside Luba Elliott and Gene Kogan as a speaker, panelist, and judge, helping select 20 qualifying teams from 2,000 original applicants (a 6% qualifying rate). He will serve as an Area Chair at CVPR 2021.

Contact Person