
Task-Balanced Batch Normalization for Exemplar-based Class-Incremental Learning [article]

Sungmin Cha, Soonwon Hong, Moontae Lee, Taesup Moon
2022 arXiv   pre-print
It has been widely used in continual learning scenarios with little discussion, but we find that BN should be carefully applied, particularly for the exemplar memory based class incremental learning (CIL  ...  Batch Normalization (BN) is an essential layer for training neural network models in various computer vision tasks.  ...  Concluding Remarks: We propose a simple but effective method, called Task-Balanced Batch Normalization, for exemplar-based CIL.  ... 
arXiv:2201.12559v2 fatcat:65kp6xv72rdwliq7bffuky2t6a
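
As a rough illustration of the idea in this entry, the sketch below computes batch-normalization statistics as an equal average over the current-task portion and the exemplar portion of a mini-batch, so the larger current-task portion does not dominate the statistics. The class name, forward signature, and defaults are illustrative assumptions, not the paper's exact TBBN layer.

import torch
import torch.nn as nn

class TaskBalancedBN2d(nn.Module):
    """Sketch: normalize with statistics averaged equally over the current-task
    part and the exemplar part of a mini-batch (illustrative, not the paper's layer)."""

    def __init__(self, num_features, eps=1e-5, momentum=0.1):
        super().__init__()
        self.eps, self.momentum = eps, momentum
        self.weight = nn.Parameter(torch.ones(num_features))
        self.bias = nn.Parameter(torch.zeros(num_features))
        self.register_buffer("running_mean", torch.zeros(num_features))
        self.register_buffer("running_var", torch.ones(num_features))

    def forward(self, x, n_current):
        # x: (B, C, H, W); the first n_current samples come from the current
        # task, the rest are exemplars of old classes.
        if self.training:
            parts = [p for p in (x[:n_current], x[n_current:]) if p.numel() > 0]
            means = torch.stack([p.mean(dim=(0, 2, 3)) for p in parts]).mean(dim=0)
            vars_ = torch.stack([p.var(dim=(0, 2, 3), unbiased=False) for p in parts]).mean(dim=0)
            with torch.no_grad():
                self.running_mean.lerp_(means, self.momentum)
                self.running_var.lerp_(vars_, self.momentum)
        else:
            means, vars_ = self.running_mean, self.running_var
        x_hat = (x - means[None, :, None, None]) / torch.sqrt(vars_[None, :, None, None] + self.eps)
        return x_hat * self.weight[None, :, None, None] + self.bias[None, :, None, None]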

An EM Framework for Online Incremental Learning of Semantic Segmentation [article]

Shipeng Yan, Jiale Zhou, Jiangwei Xie, Songyang Zhang, Xuming He
2021 arXiv   pre-print
incremental learning step that balances the stability-plasticity of the model.  ...  However, it remains challenging to acquire novel classes in an online fashion for the segmentation task, mainly due to its continuously-evolving semantic label space, partial pixelwise ground-truth annotations  ...  Class-balanced Exemplar Selection.  ... 
arXiv:2108.03613v1 fatcat:4nkv33yszjc5pmrgtxc5mrqnrq
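
The snippet mentions class-balanced exemplar selection; below is a minimal sketch of the general recipe, keeping roughly the same number of exemplars per seen class within a fixed memory budget. Random per-class sampling stands in for whatever per-class scoring a particular method uses (herding, clustering, etc.); all names are illustrative.

import random
from collections import defaultdict

def class_balanced_exemplars(samples, labels, memory_size, seed=0):
    """Keep an (approximately) equal number of exemplars per class
    within a fixed memory budget. Selection rule is a placeholder."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for x, y in zip(samples, labels):
        by_class[y].append(x)
    per_class = max(1, memory_size // len(by_class))
    memory = []
    for y, xs in by_class.items():
        for x in rng.sample(xs, min(per_class, len(xs))):
            memory.append((x, y))
    return memory[:memory_size]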

Online Continual Learning For Visual Food Classification [article]

Jiangpeng He, Fengqing Zhu
2021 arXiv   pre-print
, and (2) an effective online learning regime using a balanced training batch along with knowledge distillation on augmented exemplars to maintain the model performance on all learned classes.  ...  In this paper, we address these issues by introducing (1) a novel clustering based exemplar selection algorithm to store the most representative data belonging to each learned food for knowledge replay  ...  Regularization-based methods restrict the impact of learning new tasks on the parameters that are important for learned tasks.  ... 
arXiv:2108.06781v1 fatcat:cnjyv7xcyffo5imylmy3cm4ueu
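
A hedged sketch of the training regime this entry describes: one update on a balanced batch of new samples and stored exemplars, with a distillation term that keeps the current model's outputs on exemplars close to a frozen copy of the previous model. The loss weighting, temperature, and function names are assumptions for illustration, not the paper's exact formulation.

import torch
import torch.nn.functional as F

def replay_step(model, old_model, new_x, new_y, ex_x, ex_y,
                temperature=2.0, distill_weight=1.0):
    """One loss computation on a balanced batch (new samples + exemplars)
    with knowledge distillation on the exemplar part (illustrative)."""
    x = torch.cat([new_x, ex_x])           # balanced batch: new + exemplars
    y = torch.cat([new_y, ex_y])
    logits = model(x)
    ce = F.cross_entropy(logits, y)         # classification over all seen classes

    with torch.no_grad():
        old_logits = old_model(ex_x)        # teacher = frozen previous model
    student = logits[len(new_x):, :old_logits.size(1)]
    kd = F.kl_div(
        F.log_softmax(student / temperature, dim=1),
        F.softmax(old_logits / temperature, dim=1),
        reduction="batchmean",
    ) * temperature ** 2
    return ce + distill_weight * kd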

Incremental SAR Automatic Target Recognition with Error Correction and High Plasticity

Jiaxin Tang, Deliang Xiang, Fan Zhang, Fei Ma, Yongsheng Zhou, Hengchao Li
2022 IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing  
The existing deep learning-based SAR ATR methods usually predefine the total number of recognition classes. In realistic applications, new tasks/classes will be added continuously.  ...  For SAR ATR, deep learning gradually emerges as a powerful tool and achieves promising performance. However, it faces serious challenges in handling incremental recognition scenarios.  ...  The training batches, consisting of newly added data and selected exemplars, are balanced before being used for training.  ... 
doi:10.1109/jstars.2022.3141485 fatcat:e6w7lhosvvhwvmzfnfpm5jbqiu

Continual Prototype Evolution: Learning Online from Non-Stationary Data Streams [article]

Matthias De Lange, Tinne Tuytelaars
2021 arXiv   pre-print
Besides nearest neighbor based prediction, learning is facilitated by a novel objective function, encouraging cluster density about the class prototype and increased inter-class variance.  ...  Furthermore, the latent space quality is elevated by pseudo-prototypes in each batch, constituted by replay of exemplars from memory.  ...  Task, class, and domain incremental learning are based on the composition in the learner for the observable stream subset in horizon D_t, which is incrementally replaced by a new subset of data for the  ... 
arXiv:2009.00919v4 fatcat:xcdrovmq7rgilf3hlin7j5tnqu
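
The prediction scheme mentioned in this entry (nearest neighbor against class prototypes in the latent space) can be sketched as follows; the momentum update and cosine similarity are illustrative choices, and the function names are not from the paper.

import torch

def update_prototypes(prototypes, features, labels, momentum=0.9):
    """Maintain one running prototype (mean feature) per class (illustrative)."""
    for c in labels.unique():
        mean_c = features[labels == c].mean(dim=0)
        if int(c) in prototypes:
            prototypes[int(c)] = momentum * prototypes[int(c)] + (1 - momentum) * mean_c
        else:
            prototypes[int(c)] = mean_c
    return prototypes

def nearest_prototype_predict(prototypes, features):
    """Classify each feature vector by its most similar class prototype."""
    classes = sorted(prototypes)
    protos = torch.stack([prototypes[c] for c in classes])            # (C, D)
    sims = torch.nn.functional.cosine_similarity(
        features.unsqueeze(1), protos.unsqueeze(0), dim=-1)           # (N, C)
    return torch.tensor(classes)[sims.argmax(dim=1)]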

Essentials for Class Incremental Learning [article]

Sudhanshu Mittal and Silvio Galesso and Thomas Brox
2021 arXiv   pre-print
We show that a combination of simple components and a loss that balances intra-task and inter-task learning can already resolve forgetting to the same extent as more complex measures proposed in literature  ...  Moreover, we identify poor quality of the learned representation as another reason for catastrophic forgetting in class-IL.  ...  For inter-task learning, we plan a balanced interaction between the samples of old and new classes  ... 
arXiv:2102.09517v1 fatcat:li3tvwjwanfbnn52dx6lhi42im

M2KD: Incremental Learning via Multi-model and Multi-level Knowledge Distillation

Peng Zhou, Long Mai, Jianming Zhang, Ning Xu, Zuxuan Wu, Larry Davis
2020 British Machine Vision Conference  
Conventional methods, however, sequentially distill knowledge only from the penultimate model, leading to performance degradation on the old classes in later incremental learning steps.  ...  Incremental learning aims at achieving good performance on new categories without forgetting old ones. Knowledge distillation has been shown to be critical in preserving the performance on old classes.  ...  Samples from a batch of new classes C_k are added at the k-th incremental step. For instance, 20 classes will be added per incremental step in a 20-class batch setting.  ... 
dblp:conf/bmvc/ZhouMZXWD20 fatcat:sww5q3b4svde7amlc7ebfqsdqy
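
A minimal sketch of multi-model distillation as described in this entry: instead of distilling only from the most recent model, each earlier snapshot is matched on the output units it was responsible for. The class_ranges bookkeeping, temperature, and names are assumptions for illustration, not the paper's exact multi-level formulation.

import torch
import torch.nn.functional as F

def multi_model_distillation(logits, snapshots, x, class_ranges, T=2.0):
    """Sum of distillation losses against every previous model snapshot.
    class_ranges[k] = (lo, hi) gives the output slice owned by snapshot k."""
    loss = 0.0
    for snap, (lo, hi) in zip(snapshots, class_ranges):
        with torch.no_grad():
            teacher = snap(x)[:, lo:hi]
        student = logits[:, lo:hi]
        loss = loss + F.kl_div(
            F.log_softmax(student / T, dim=1),
            F.softmax(teacher / T, dim=1),
            reduction="batchmean",
        ) * T ** 2
    return loss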

Online Continual Learning Via Candidates Voting [article]

Jiangpeng He, Fengqing Zhu
2021 arXiv   pre-print
In this work, we introduce an effective and memory-efficient method for online continual learning under the class-incremental setting through candidate selection from each learned task together with prior  ...  In particular, performance struggles with an increased number of tasks or additional classes to learn for each task.  ...  Regularization-based methods restrict the impact of learning new tasks on the parameters that are important for learned tasks.  ... 
arXiv:2110.08855v1 fatcat:gj5x52v74fe5tijmi4r4zsws3q
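
A rough sketch of prediction by candidate voting across learned tasks, as the abstract describes: each task-specific output proposes its top-k classes, and the final prediction is taken from the pooled candidates. The scoring and pooling rules below are simplified stand-ins for the paper's procedure; all names are illustrative.

import torch

def candidates_voting(task_logits, k=2):
    """task_logits: list of (1-D logits over that task's classes, global class offset).
    Pool top-k candidates from every task head and return the best-scoring class."""
    scores = {}
    for logits, offset in task_logits:
        probs = torch.softmax(logits, dim=0)
        topk = torch.topk(probs, min(k, probs.numel()))
        for p, idx in zip(topk.values, topk.indices):
            cls = offset + int(idx)
            scores[cls] = max(scores.get(cls, 0.0), float(p))
    return max(scores, key=scores.get)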

M2KD: Multi-model and Multi-level Knowledge Distillation for Incremental Learning [article]

Peng Zhou, Long Mai, Jianming Zhang, Ning Xu, Zuxuan Wu, Larry S. Davis
2020 arXiv   pre-print
Conventional methods, however, sequentially distill knowledge only from the last model, leading to performance degradation on the old classes in later incremental learning steps.  ...  Incremental learning aims at achieving good performance on new categories without forgetting old ones. Knowledge distillation has been shown to be critical in preserving the performance on old classes.  ...  Samples from a batch of new classes C_k are added at the k-th incremental step. For instance, 20 classes will be added per incremental step in a 20-class batch setting.  ... 
arXiv:1904.01769v2 fatcat:aydglsf6zjfjbcedluebkcmkyu

iTAML: An Incremental Task-Agnostic Meta-learning Approach

Jathushan Rajasegaran, Salman Khan, Munawar Hayat, Fahad Shahbaz Khan, Mubarak Shah
2020 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)  
Specifically, on large-scale datasets that generally prove difficult cases for incremental learning, our approach delivers absolute gains as high as 19.1% and 7.4% on ImageNet and MS-Celeb datasets, respectively  ...  We perform extensive experiments on five datasets in a class-incremental setting, leading to significant improvements over the state of the art methods (e.g., a 21.3% boost on CIFAR100 with 10 incremental  ...  Figure 1: We propose a task-agnostic meta-learning approach for class-incremental learning.  ... 
doi:10.1109/cvpr42600.2020.01360 dblp:conf/cvpr/RajasegaranKHKS20 fatcat:3n6frqycnfdfve66owocqrd3pe

SS-IL: Separated Softmax for Incremental Learning [article]

Hongjoon Ahn, Jihwan Kwak, Subin Lim, Hyeonsu Bang, Hyojun Kim, Taesup Moon
2022 arXiv   pre-print
We consider the class incremental learning (CIL) problem, in which a learning agent continuously learns new classes from incrementally arriving training data batches and aims to predict well on all the classes  ...  Then, we propose a new method, dubbed Separated Softmax for Incremental Learning (SS-IL), that consists of a separated softmax (SS) output layer combined with task-wise knowledge distillation (TKD) to  ...  During learning each incremental task, we assume a separate exemplar memory M is allocated to store exemplar data for old classes.  ... 
arXiv:2003.13947v3 fatcat:mz57yx2deraxjh4nxdg5byzw3m
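
The separated-softmax component described here can be sketched as a loss in which exemplar samples are normalized only over the old-class outputs and current-task samples only over the new-class outputs, so the two groups of logits do not compete in a single softmax. This is a sketch of the general idea, not the authors' full SS-IL objective (which also combines it with task-wise distillation).

import torch
import torch.nn.functional as F

def separated_softmax_loss(logits, targets, n_old):
    """Cross-entropy with separated softmax: old-class samples use only the
    first n_old logits, new-class samples use only the remaining logits."""
    is_old = targets < n_old
    loss = 0.0
    if is_old.any():
        loss = loss + F.cross_entropy(logits[is_old, :n_old], targets[is_old])
    if (~is_old).any():
        loss = loss + F.cross_entropy(logits[~is_old, n_old:], targets[~is_old] - n_old)
    return loss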

Self-distilled Knowledge Delegator for Exemplar-free Class Incremental Learning [article]

Fanfan Ye, Liang Ma, Qiaoyong Zhong, Di Xie, Shiliang Pu
2022 arXiv   pre-print
Exemplar-free incremental learning is extremely challenging due to inaccessibility of data from old tasks.  ...  This simple incremental learning framework surpasses existing exemplar-free methods by a large margin on four widely used class incremental benchmarks, namely CIFAR-100, ImageNet-Subset, Caltech-101 and  ...  An overview of the proposed exemplar-free class incremental learning framework.  ... 
arXiv:2205.11071v1 fatcat:3cibazadgzhv3j4evjnaw5pxja

Dual-Teacher Class-Incremental Learning With Data-Free Generative Replay [article]

Yoojin Choi, Mostafa El-Khamy, Jungwon Lee
2021 arXiv   pre-print
In CIL, we use DT-ID to learn new classes incrementally based on the pre-trained model for old classes and another model (pre-)trained on the new data for new classes.  ...  This paper proposes two novel knowledge transfer techniques for class-incremental learning (CIL).  ...  Figure 8: Average accuracy in CIL on ImageNet-Subset for different settings: (a) 50 base classes + 25 new classes per task, (b) 50 base classes + 5 new classes per task, (c) 10 base classes + 10 new classes per task.  ... 
arXiv:2106.09835v1 fatcat:bsrxstzvsrdbtb7nml2qow34bi

Self-Sustaining Representation Expansion for Non-Exemplar Class-Incremental Learning [article]

Kai Zhu, Wei Zhai, Yang Cao, Jiebo Luo, Zheng-Jun Zha
2022 arXiv   pre-print
Non-exemplar class-incremental learning aims to recognize both the old and new classes when old-class samples cannot be saved.  ...  It is a challenging task since representation optimization and feature retention can only be achieved under supervision from new classes.  ...  Exemplar and distillation techniques from rehearsal-based methods are widely used in class-incremental learning.  ... 
arXiv:2203.06359v2 fatcat:ewtnezpqz5bsxlop6kanwydiui

iTAML: An Incremental Task-Agnostic Meta-learning Approach [article]

Jathushan Rajasegaran, Salman Khan, Munawar Hayat, Fahad Shahbaz Khan, Mubarak Shah
2020 arXiv   pre-print
Specifically, on large-scale datasets that generally prove difficult cases for incremental learning, our approach delivers absolute gains as high as 19.1% and 7.4% on ImageNet and MS-Celeb datasets, respectively  ...  We perform extensive experiments on five datasets in a class-incremental setting, leading to significant improvements over the state of the art methods (e.g., a 21.3% boost on CIFAR100 with 10 incremental  ...  Incremental Task-Agnostic Meta-learning: We progressively learn a total of T tasks, with U classes per task.  ... 
arXiv:2003.11652v1 fatcat:q3uierncg5fn7fy66rzkgpef5q
Showing results 1–15 of 1,195