M2KD: Incremental Learning via Multi-model and Multi-level Knowledge Distillation

Peng Zhou, Long Mai, Jianming Zhang, Ning Xu, Zuxuan Wu, Larry Davis
2020 British Machine Vision Conference  
Incremental learning aims to achieve good performance on new categories without forgetting old ones. Knowledge distillation has been shown to be critical for preserving performance on old classes. Conventional methods, however, sequentially distill knowledge only from the penultimate model, leading to performance degradation on the old classes in later incremental learning steps. In this paper, we propose a multi-model and multi-level knowledge distillation strategy. Instead of sequentially distilling knowledge only from the penultimate model, we directly leverage all previous model snapshots. In addition, we incorporate an auxiliary distillation to further preserve knowledge encoded at the intermediate feature levels. To make the model more memory-efficient, we adapt mask-based pruning to reconstruct all previous models with a small memory footprint. Experiments on standard incremental learning benchmarks show that our method improves the overall performance over standard distillation techniques.
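As a rough illustration of the objective the abstract describes, a multi-model, multi-level distillation loss might look like the PyTorch-style sketch below. This is a minimal sketch under assumed interfaces (each frozen snapshot returns logits plus an intermediate feature map, and `class_ranges` marks which old classes each snapshot covers); the names, weights, and structure are illustrative, not the authors' released implementation.

```python
# Minimal sketch (not the authors' code) of a multi-model, multi-level
# distillation loss: the current model is distilled against every previous
# snapshot on that snapshot's old-class logits, plus an auxiliary L2 term
# on an intermediate feature map.
import torch
import torch.nn.functional as F

def m2kd_loss(student_logits, student_feat, snapshots, x, class_ranges,
              T=2.0, aux_weight=0.1):
    """snapshots: list of frozen previous models, each returning (logits, feat);
    class_ranges: list of slices giving the old classes each snapshot owns."""
    loss = student_logits.new_zeros(())
    for model, cls in zip(snapshots, class_ranges):
        with torch.no_grad():  # previous snapshots act as frozen teachers
            t_logits, t_feat = model(x)
        # KL-based logit distillation on the classes this snapshot covers
        loss = loss + F.kl_div(
            F.log_softmax(student_logits[:, cls] / T, dim=1),
            F.softmax(t_logits[:, cls] / T, dim=1),
            reduction="batchmean") * (T * T)
        # auxiliary distillation on the intermediate feature level
        loss = loss + aux_weight * F.mse_loss(student_feat, t_feat)
    return loss
```

In this sketch the total loss would be combined with the usual cross-entropy on the new classes; the temperature T and aux_weight are placeholder hyperparameters.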