Deep learning

In this seminar, I will present some of the work I have done on Continual Deep Learning, one of the research topics at the Vision-CAIR group. Continual learning aims to learn new tasks without forgetting previously learned ones. This is especially challenging when data from previous tasks cannot be accessed and when the model has a fixed capacity, as is standard in modern deep learning. To narrow the gap towards human-level continual learning, we have extended continual deep learning from multiple perspectives. Hebb's learning theory from biology is famously summarized as "Cells that fire together wire together." Inspired by this theory, we proposed Memory Aware Synapses (ECCV 2018) to quantify and reduce machine forgetting in a way that can leverage unlabeled data, which was not possible with earlier techniques. We later developed a Bayesian approach, appearing at ICLR 2020, in which we explicitly model uncertainty parameters to orchestrate forgetting in continual learning. In our ICLR 2019 and ACCV 2018 works, we showed that task descriptors/language can be exploited in continual learning of visual tasks to improve learning efficiency and enable zero-shot task transfer. Beyond computer vision tasks, we recently developed an approach, also appearing at ICLR 2020, that we call "Compositional Language Continual Learning". We showed that disentangling syntax from semantics enables better compositional Seq2Seq learning and can significantly alleviate forgetting in tasks such as machine translation. In the talk, I will go over these techniques and shed some light on future research possibilities.
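
To give a flavour of the Memory Aware Synapses idea mentioned above, the sketch below estimates a per-parameter importance weight from unlabeled inputs (the absolute gradient of the squared L2 norm of the network output) and uses it as a quadratic penalty against changing important parameters. This is only a minimal PyTorch-style illustration: the function names, the batch-level gradient approximation, and the `lam` trade-off hyperparameter are assumptions for exposition, not the paper's exact implementation.

```python
import torch

def estimate_importance(model, unlabeled_loader, device="cpu"):
    # Accumulate, for every parameter, the absolute gradient of the squared L2
    # norm of the network output, averaged over unlabeled batches. This follows
    # the spirit of Memory Aware Synapses; taking gradients per batch rather
    # than per sample is a simplification made here for brevity.
    importance = {n: torch.zeros_like(p) for n, p in model.named_parameters()
                  if p.requires_grad}
    n_batches = 0
    model.eval()
    for batch in unlabeled_loader:                    # no labels required
        x = batch[0] if isinstance(batch, (tuple, list)) else batch
        x = x.to(device)
        model.zero_grad()
        out = model(x)
        out.pow(2).sum().backward()                   # squared L2 norm of outputs
        for n, p in model.named_parameters():
            if p.grad is not None:
                importance[n] += p.grad.detach().abs()
        n_batches += 1
    return {n: w / max(n_batches, 1) for n, w in importance.items()}


def importance_penalty(model, importance, old_params, lam=1.0):
    # Quadratic penalty that discourages changing parameters that were
    # important for previous tasks; added to the loss of the new task.
    penalty = 0.0
    for n, p in model.named_parameters():
        if n in importance:
            penalty = penalty + (importance[n] * (p - old_params[n]).pow(2)).sum()
    return lam * penalty
```

In such a sketch, training on a new task would minimize `task_loss + importance_penalty(model, importance, old_params)`, where `old_params` is a detached copy of the parameters saved after the previous task. Because the importance estimate only needs the model's outputs, it can be computed on unlabeled data, which is the property highlighted in the abstract.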