22/11/2021

FFNB: Forgetting-Free Neural Blocks for Deep Continual Learning

Hichem Sahbi, Haoming Zhan

Keywords: Continual and incremental learning, lifelong learning, catastrophic interference, catastrophic forgetting, dynamic neural networks, visual recognition

Abstract: Deep neural networks (DNNs) have recently achieved great success in computer vision and several related fields. Despite this progress, current neural architectures still suffer from catastrophic interference (a.k.a. forgetting), which prevents DNNs from learning continually. While several state-of-the-art methods have been proposed to mitigate forgetting, these existing solutions are either highly rigid (e.g., regularization-based) or time/memory demanding (e.g., replay-based). An intermediate class of methods, based on dynamic networks, has been proposed in the literature and offers a reasonable balance between task memorization and computational footprint. In this paper, we devise a dynamic network architecture for continual learning based on a novel forgetting-free neural block (FFNB). FFNB features are trained on new tasks using a novel procedure that constrains the underlying parameters to lie in the null-space of the previous tasks, while training the classifier parameters equates to Fisher discriminant analysis. The latter provides an effective incremental process which is also optimal from a Bayesian perspective. The trained features and classifiers are further enhanced using incremental "end-to-end" fine-tuning. Extensive experiments, conducted on different challenging classification problems, show the high effectiveness of the proposed method.
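The null-space constraint mentioned in the abstract can be illustrated with a minimal NumPy sketch. This is an assumption-laden illustration of the general idea, not the authors' implementation: given a matrix whose rows are feature activations from previous tasks, an SVD yields a projector onto that matrix's null-space; projecting a weight update through it leaves the network's responses on the old features unchanged. All names and shapes below are hypothetical.

```python
import numpy as np

def null_space_projector(A, tol=1e-10):
    """Projector onto the (right) null-space of A (rows = old-task features).
    Any update direction projected this way satisfies A @ update ~= 0."""
    # Rows of Vt whose singular values are ~0 span the null-space of A.
    _, s, Vt = np.linalg.svd(A, full_matrices=True)
    rank = int(np.sum(s > tol))
    N = Vt[rank:].T          # null-space basis, shape (d, d - rank)
    return N @ N.T           # symmetric projection matrix, shape (d, d)

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 8))   # 5 old-task feature vectors of dimension 8
P = null_space_projector(A)

G = rng.standard_normal((8, 3))   # raw gradient for a weight matrix (8 -> 3)
G_proj = P @ G                    # constrained update direction

# Old-task pre-activations are unaffected by the projected update:
print(np.allclose(A @ G_proj, 0.0))  # True
```

In this sketch the projected update changes the layer's behavior only on directions not excited by previous tasks, which is the intuition behind forgetting-free feature training.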

The talk and the respective paper are published at the BMVC 2021 virtual conference.
