5,994 Hits in 4.2 sec

Generative Feature Replay For Class-Incremental Learning [article]

Xialei Liu, Chenshen Wu, Mikel Menta, Luis Herranz, Bogdan Raducanu, Andrew D. Bagdanov, Shangling Jui, Joost van de Weijer
2020 arXiv   pre-print
We consider a class-incremental setting, which means that the task-ID is unknown at inference time.  ...  To prevent forgetting, we combine generative feature replay in the classifier with feature distillation in the feature extractor.  ...  To summarize, our contributions are: • We design a hybrid model for class-incremental learning which combines generative feature replay at the classifier level and distillation in the feature extractor  ...
arXiv:2004.09199v1 fatcat:j46srbkxxzcuhd2qbdkyjxspge

Class-Incremental Learning with Deep Generative Feature Replay for DNA Methylation-based Cancer Classification

Erdenebileg Batbaatar, Kwang Ho Park, Tsatsral Amarbayasgalan, Khishigsuren Davagdorj, Lkhagvadorj Munkhdalai, Van-Huy Pham, Keun Ho Ryu
2020 IEEE Access  
To the best of our knowledge, there are no studies conducted on incremental feature selection for high-dimensional data with deep generative models for Class-IL tasks.  ...  In particular, techniques for Class-IL [42]-[44], which consists of learning sets of classes incrementally, are discussed in this paper for cancer classification tasks.  ...
doi:10.1109/access.2020.3039624 fatcat:rtfft26z3ncrtblw43xsra55ee

Always Be Dreaming: A New Approach for Data-Free Class-Incremental Learning [article]

James Smith, Yen-Chang Hsu, Jonathan Balloch, Yilin Shen, Hongxia Jin, Zsolt Kira
2021 arXiv   pre-print
In this work, we consider the high-impact problem of Data-Free Class-Incremental Learning (DFCIL), where an incremental learning agent must learn new concepts over time without storing generators or training  ...  One approach for DFCIL is to replay synthetic images produced by inverting a frozen copy of the learner's classification model, but we show this approach fails for common class-incremental benchmarks when  ...  Figure 1: The distribution of feature embeddings when using synthetic replay data for class-incremental learning.  ...
arXiv:2106.09701v2 fatcat:kgs6ji3vmfd77kdnhaz5hl7axu

RODEO: Replay for Online Object Detection [article]

Manoj Acharya, Tyler L. Hayes, Christopher Kanan
2020 arXiv   pre-print
Humans can incrementally learn to do new visual detection tasks, which is a huge challenge for today's computer vision systems.  ...  Incrementally trained deep learning models lack backwards transfer to previously seen classes and suffer from a phenomenon known as "catastrophic forgetting."  ...  a fixed proposal generator (e.g., edge boxes) with distillation to incrementally learn classes.  ... 
arXiv:2008.06439v1 fatcat:ubeo5wga3jgdznaoeswx764imq

Class-Incremental Learning with Generative Classifiers [article]

Gido M. van de Ven, Zhe Li, Andreas S. Tolias
2021 arXiv   pre-print
Here, we put forward a new strategy for class-incremental learning: generative classification.  ...  Most existing class-incremental learning methods store data or use generative replay, both of which have drawbacks, while 'rehearsal-free' alternatives such as parameter regularization or bias-correction  ...  Acknowledgments We thank Siddharth Swaroop and Martin Mundt for useful comments. This research project has been supported by the Lifelong Learning Machines (L2M) program of the De-  ... 
arXiv:2104.10093v2 fatcat:ja3wqzaqh5asnai4iz37v2iiha

IB-DRR: Incremental Learning with Information-Back Discrete Representation Replay [article]

Jian Jiang, Edoardo Cetin, Oya Celiktutan
2021 arXiv   pre-print
Incremental learning aims to enable machine learning models to continuously acquire new knowledge given new classes, while maintaining the knowledge already learned for old classes.  ...  However, finding a trade-off between model performance and the number of samples to save for each class is still an open problem for replay-based incremental learning and is increasingly desirable  ...  Their model, called generative feature replay (GFR), consisted of a modified classifier that took latent representations as input, a feature extractor, and a generator.  ...
arXiv:2104.10588v1 fatcat:ykcoj3yhofhr7n4kfp6rhimgky

Reviewing continual learning from the perspective of human-level intelligence [article]

Yifan Chang, Wenbo Li, Jian Peng, Bo Tang, Yu Kang, Yinjie Lei, Yuanmiao Gui, Qing Zhu, Yu Liu, Haifeng Li
2021 arXiv   pre-print
Humans' continual learning (CL) ability is closely related to the Stability Versus Plasticity Dilemma, which describes how humans achieve ongoing learning capacity and preservation of learned information.  ...  Our main contributions concern i) rechecking CL from the level of artificial general intelligence; ii) providing a detailed and extensive overview of CL topics; iii) presenting some novel ideas on the  ...  The semantic information for novel classes is generated by the combination of multiple embedding features and global features trained via episodic training.  ...
arXiv:2111.11964v1 fatcat:je5lyidbongfxj4v67zxs2a3bi

Generative Negative Replay for Continual Learning [article]

Gabriele Graffieti, Davide Maltoni, Lorenzo Pellegrini, Vincenzo Lomonaco
2022 arXiv   pre-print
The proposed approach is validated on complex class-incremental and data-incremental continual learning scenarios (CORe50 and ImageNet-1000) composed of high-dimensional data and a large number of training  ...  In this paper, we show that, while the generated data are usually not able to improve the classification accuracy for the old classes, they can be effective as negative examples (or antagonists) to better  ...  New classes (NC), also known as class-incremental learning (Class-IL), where each experience includes data of classes not present in any other past experience.  ...
arXiv:2204.05842v1 fatcat:42cvaenekfe3behdvdomf37gya

Variational Prototype Replays for Continual Learning [article]

Mengmi Zhang, Tao Wang, Joo Hwee Lim, Gabriel Kreiman, Jiashi Feng
2020 arXiv   pre-print
To alleviate catastrophic forgetting, our method replays one sample per class from previous tasks, and correspondingly matches newly predicted embeddings to their nearest class-representative prototypes  ...  In this work, we consider few-shot continual learning in classification tasks, and we propose a novel method, Variational Prototype Replays, that efficiently consolidates and recalls previous knowledge  ...  In contrast, our method improves generalization by learning class prototypical distributions with their means and variances in the latent space, which can generate multiple samples during replays.  ... 
arXiv:1905.09447v3 fatcat:lcmcsfbulfg3va4arp5bh4f2im

Learning to Remember: A Synaptic Plasticity Driven Framework for Continual Learning [article]

Oleksiy Ostapenko, Mihai Puscas, Tassilo Klein, Patrick Jähnichen, Moin Nabi
2019 arXiv   pre-print
The amount of added capacity is determined dynamically from the learned binary mask. We evaluate DGM in the continual class-incremental setup on visual classification tasks.  ...  In order to tackle these challenges, we introduce Dynamic Generative Memory (DGM) - a synaptic plasticity driven framework for continual learning.  ...  Notably, a method designed for class-incremental learning can be generally applied in a task-incremental setup.  ... 
arXiv:1904.03137v4 fatcat:kd3bvn263vcz5krnxzstkjwbiq

Reduce the Difficulty of Incremental Learning with Self-Supervised Learning

Linting Guan, Yan Wu
2021 IEEE Access  
As we can see in the class-IL setting, adding self-supervised learning to different replay-based incremental learning methods yields a general improvement in classification accuracy.  ...  In the replay-based incremental learning algorithm, we can generate self-supervised signals from new task samples and/or cached samples.  ...
doi:10.1109/access.2021.3112745 fatcat:t4rcoxfld5hnpj7pkf4udvhjd4

Hypothesis-driven Online Video Stream Learning with Augmented Memory [article]

Mengmi Zhang, Rohil Badkundri, Morgan B. Talbot, Rushikesh Zawar, Gabriel Kreiman
2021 arXiv   pre-print
Given a lack of online incremental class learning datasets on video streams, we introduce and adapt two additional video datasets, Toybox and iLab, for online stream learning.  ...  Second, hypotheses in the augmented memory can be re-used for learning new tasks, improving generalization and transfer learning ability.  ...  In the incremental class setting, for each task, the model has to learn to classify 2 new classes while training for a single epoch.  ... 
arXiv:2104.02206v4 fatcat:2yi3jeo32ndnno2cs6m7tr76ka

REMIND Your Neural Network to Prevent Catastrophic Forgetting [article]

Tyler L. Hayes, Kushal Kafle, Robik Shrestha, Manoj Acharya, Christopher Kanan
2020 arXiv   pre-print
Under the same constraints, REMIND outperforms other methods for incremental class learning on the ImageNet ILSVRC-2012 dataset.  ...  We demonstrate REMIND's generality by pioneering online learning for Visual Question Answering (VQA).  ...  For ImageNet, all recent state-of-the-art methods for incremental class learning use replay of raw pixels with distillation loss.  ... 
arXiv:1910.02509v3 fatcat:ue6klyjz4ngbtkpk45tpsfxdae

Learning to Segment the Tail [article]

Xinting Hu, Yi Jiang, Kaihua Tang, Jingyuan Chen, Chunyan Miao, Hanwang Zhang
2020 arXiv   pre-print
We also propose to use a meta-module for new-class learning, where the module parameters are shared across incremental phases, gaining the learning-to-learn knowledge incrementally, from the data-rich  ...  This derives a novel learning paradigm: class-incremental few-shot learning, which is especially effective for the challenge evolving over time: 1) the class imbalance among the old-class knowledge review  ...  We thank all the reviewers for their constructive comments. This work was supported by Alibaba-NTU JRI, and partly supported by Major Scientific Research Project of Zhejiang Lab (No. 2019DB0ZX01).  ... 
arXiv:2004.00900v2 fatcat:f4ucuwxnxfay3ct3igzctetady

Generative Feature Replay with Orthogonal Weight Modification for Continual Learning [article]

Gehui Shen, Song Zhang, Xiang Chen, Zhi-Hong Deng
2020 arXiv   pre-print
In this paper we focus on class incremental learning, a challenging CL scenario.  ...  For this scenario, generative replay is a promising strategy which generates and replays pseudo data for previous tasks to alleviate catastrophic forgetting.  ...  learning, domain incremental learning and class incremental learning (CIL).  ... 
arXiv:2005.03490v3 fatcat:4p5e57kwwfcuxmvptfiky4ryx4
Showing results 1–15 out of 5,994 results