" id="header">

Publications

Semantics-Driven Generative Replay for Few-Shot Class Incremental Learning

ACM International Conference on Multimedia

Publication date: October 10, 2022

Aishwarya Agarwal, Biplab Banerjee, Fabio Cuzzolin, Subhasis Chaudhuri

We deal with the problem of few-shot class incremental learning (FSCIL), which requires a model to continuously recognize new categories for which limited training data are available. Existing FSCIL methods depend on prior knowledge to regularize the model parameters for combating catastrophic forgetting. Devising an effective prior in a low-data regime, however, is not trivial. The memory-replay based approaches from the fully-supervised class incremental learning (CIL) literature cannot be used directly for FSCIL, as the generative memory-replay modules of CIL are hard to train from few training samples. Yet generative replay can tackle both the stability and plasticity of a model simultaneously by generating a large number of class-conditional samples. Motivated by this, we propose a generative modeling-based FSCIL framework using the paradigm of memory replay, in which a novel conditional few-shot generative adversarial network (GAN) is incrementally trained to produce visual features while ensuring the stability-plasticity trade-off through novel loss functions and effectively combating the mode-collapse problem. Furthermore, the class-specific synthesized visual features from the few-shot GAN are constrained to match the respective latent semantic prototypes obtained from a well-defined semantic space. We find that the advantages of this semantic restriction are two-fold: it helps in dealing with forgetting while also making the features class-discernible. The model requires only a single per-class prototype vector to be maintained in a dynamic memory buffer. Experimental results on the benchmark datasets CIFAR-100, CUB-200, and Mini-ImageNet confirm the superiority of our model over the current FSCIL state of the art.
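The semantic constraint described above can be illustrated with a minimal sketch, assuming a PyTorch setup: a conditional generator synthesizes class-conditioned visual features, and a loss term pulls each synthesized feature toward the single stored prototype of its class. Names such as FeatureGenerator, semantic_matching_loss, proto_buffer, and the cosine form of the matching term are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the paper's code): conditional feature generation plus a
# prototype-matching term, under assumed names and an assumed cosine loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureGenerator(nn.Module):
    """Conditional generator: noise + class embedding -> synthetic visual feature."""
    def __init__(self, n_classes, noise_dim=64, feat_dim=512, emb_dim=64):
        super().__init__()
        self.class_emb = nn.Embedding(n_classes, emb_dim)
        self.net = nn.Sequential(
            nn.Linear(noise_dim + emb_dim, 256), nn.ReLU(),
            nn.Linear(256, feat_dim),
        )

    def forward(self, z, labels):
        return self.net(torch.cat([z, self.class_emb(labels)], dim=1))

def semantic_matching_loss(fake_feats, labels, proto_buffer):
    """Pull each synthesized feature toward its class's stored semantic prototype."""
    protos = proto_buffer[labels]                     # (batch, feat_dim)
    return 1.0 - F.cosine_similarity(fake_feats, protos, dim=1).mean()

# Toy usage: 10 classes, one prototype vector kept per class in a memory buffer.
n_classes, feat_dim = 10, 512
gen = FeatureGenerator(n_classes, feat_dim=feat_dim)
proto_buffer = torch.randn(n_classes, feat_dim)       # stand-in for semantic prototypes
z = torch.randn(32, 64)
labels = torch.randint(0, n_classes, (32,))
fake_feats = gen(z, labels)
loss_sem = semantic_matching_loss(fake_feats, labels, proto_buffer)
# In the full framework this term would be combined with adversarial and
# stability/plasticity losses; only the prototype constraint is shown here.
```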
