22/11/2021

Pseudo-Labeling for Class Incremental Learning

Alexis Lechat, Stephane Herbin, Frederic Jurie

Keywords: incremental learning, catastrophic forgetting, semi-supervised learning, pseudo-labeling, consistency regularization

Abstract: Class Incremental Learning (CIL) consists in training a model iteratively on limited amounts of data from a few classes that will never be seen again, resulting in catastrophic forgetting and a lack of diversity. In this paper, we address these phenomena by assuming that additional unlabeled data are continually available during incremental learning, and propose a Pseudo-Labeling approach for Class Incremental Learning (PLCiL) that makes use of a new adapted loss. We demonstrate that our method achieves better performance than supervised or other semi-supervised methods on standard class incremental benchmarks (CIFAR-100 and ImageNet-100), even when a self-supervised pre-training step on a large dataset is used as initialization. We also illustrate the advantages of our method in a more complex setting with fewer labels.
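
For readers unfamiliar with the ingredients named in the keywords, below is a minimal sketch of a generic pseudo-labeling objective with consistency regularization in the FixMatch style, written in PyTorch. It is an illustration under stated assumptions, not the paper's adapted PLCiL loss: the function name, the confidence threshold, and the weak/strong augmentation split are all illustrative choices.

```python
import torch
import torch.nn.functional as F

def pseudo_label_loss(model, weak_batch, strong_batch, threshold=0.95):
    """Generic FixMatch-style pseudo-labeling loss (a sketch; the
    paper's adapted loss for CIL is not reproduced here).

    weak_batch / strong_batch: weakly and strongly augmented views
    of the same unlabeled images.
    """
    with torch.no_grad():
        # Predict on the weakly augmented view to obtain pseudo-labels.
        probs = F.softmax(model(weak_batch), dim=1)
        max_probs, pseudo_labels = probs.max(dim=1)
        # Keep only confident predictions; the rest contribute no loss.
        mask = (max_probs >= threshold).float()

    # Consistency regularization: the prediction on the strongly
    # augmented view must agree with the pseudo-label.
    logits_strong = model(strong_batch)
    per_sample = F.cross_entropy(logits_strong, pseudo_labels,
                                 reduction="none")
    return (per_sample * mask).mean()
```

In a semi-supervised incremental setup of this kind, such an unlabeled-data term is typically added to a standard supervised cross-entropy loss computed on the labeled samples of the current task (and any stored exemplars), with a weighting coefficient between the two terms.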

Published at BMVC 2021 (virtual conference).
