Storing Encoded Episodes as Concepts for Continual Learning
[article]
2020
arXiv
pre-print
The two main challenges faced by continual learning approaches are catastrophic forgetting and memory limitations on the storage of data. ...
Reconstructed images from encoded episodes are replayed when training the classifier model on a new task to avoid catastrophic forgetting. ...
Implementation Details: We use the PyTorch deep learning framework (Paszke et al., 2019) for implementation and training of all neural network models. ... (a minimal replay sketch follows this entry)
arXiv:2007.06637v1
fatcat:2o256avtlfhwldlma4n23j4t7i
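The entry above describes replaying images reconstructed from encoded episodes while training the classifier on a new task, with PyTorch as the framework. A minimal sketch of that general idea follows; it is not the paper's code, and the Autoencoder, the classifier, the new-task loader, and the stored code/label tensors are hypothetical stand-ins.

```python
# Hypothetical sketch of replay from encoded episodes (not the paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class Autoencoder(nn.Module):
    """Toy autoencoder over flattened images; dimensions are placeholders."""
    def __init__(self, dim=784, code=32):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(dim, 256), nn.ReLU(), nn.Linear(256, code))
        self.dec = nn.Sequential(nn.Linear(code, 256), nn.ReLU(), nn.Linear(256, dim))

    def forward(self, x):
        return self.dec(self.enc(x))

def train_new_task(classifier, autoencoder, new_loader, stored_codes, stored_labels):
    """Train on the new task while replaying images decoded from stored episode codes."""
    opt = torch.optim.Adam(classifier.parameters(), lr=1e-3)
    for x_new, y_new in new_loader:
        # Decode a random batch of previously stored (encoded) episodes for replay.
        idx = torch.randint(0, stored_codes.size(0), (x_new.size(0),))
        x_old = autoencoder.dec(stored_codes[idx]).detach()
        y_old = stored_labels[idx]
        # Mix replayed and new samples so earlier classes are not forgotten.
        x = torch.cat([x_new.flatten(1), x_old])
        y = torch.cat([y_new, y_old])
        loss = F.cross_entropy(classifier(x), y)
        opt.zero_grad()
        loss.backward()
        opt.step()
```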
Rethinking Task-Incremental Learning Baselines
[article]
2022
arXiv
pre-print
The model needs to learn newly added capabilities (future tasks) while retaining the old knowledge (past tasks). Incremental learning has recently become increasingly appealing for this problem. ...
In this study, we present a simple yet effective adjustment network (SAN) for task-incremental learning that achieves near state-of-the-art performance while using minimal architectural size, without using ...
Introduction: Task-incremental learning is a brain-inspired process of incrementally learning a set of tasks without forgetting previously acquired knowledge. ...
arXiv:2205.11367v1
fatcat:7kyuglq6tzcolhtivmyk2dmi7q
Class-Incremental Learning with Generative Classifiers
[article]
2021
arXiv
pre-print
Incrementally training deep neural networks to recognize new classes is a challenging problem. ...
Here, we put forward a new strategy for class-incremental learning: generative classification. ... (a toy illustration follows this entry)
Acknowledgments We thank Siddharth Swaroop and Martin Mundt for useful comments. This research project has been supported by the Lifelong Learning Machines (L2M) program of the De- ...
arXiv:2104.10093v2
fatcat:ja3wqzaqh5asnai4iz37v2iiha
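The entry above proposes generative classification: keep one generative model per class and classify with Bayes' rule, ŷ = argmax_y p(x|y) p(y), so new classes can be added without retraining a shared discriminative head. The toy sketch below only illustrates that decision rule, using per-class diagonal Gaussians over feature vectors; it is not the paper's model, and all names are hypothetical.

```python
# Toy illustration of classification by per-class generative models (Bayes' rule).
import torch

class GenerativeClassifier:
    def __init__(self):
        self.means, self.log_vars, self.log_priors = {}, {}, {}

    def fit_class(self, y: int, feats: torch.Tensor, prior: float):
        # Fit a diagonal Gaussian to this class's features; adding a new class
        # leaves the models of previously learned classes untouched.
        self.means[y] = feats.mean(0)
        self.log_vars[y] = feats.var(0).clamp_min(1e-6).log()
        self.log_priors[y] = torch.log(torch.tensor(prior))

    def predict(self, x: torch.Tensor) -> int:
        # ŷ = argmax_y log p(x|y) + log p(y), up to an additive constant.
        scores = {}
        for y in self.means:
            log_lik = -0.5 * (self.log_vars[y]
                              + (x - self.means[y]) ** 2 / self.log_vars[y].exp()).sum()
            scores[y] = log_lik + self.log_priors[y]
        return max(scores, key=lambda y: scores[y])
```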
An Appraisal of Incremental Learning Methods
2020
Entropy
However, incremental learning remains a long term challenge. Modern deep neural network models achieve outstanding performance on stationary data distributions with batch training. ...
It is concluded that incremental learning is still a hot research area and will be for a long period. ...
[35] designed a deep convolutional neural network model that grows incrementally with new tasks, while the basic backbone is retained and shared across previous tasks. Peng et al. ...
doi:10.3390/e22111190
pmid:33286958
pmcid:PMC7712976
fatcat:5oyebvrcczdkrohfnk4ygmuwdi
SupportNet: a novel incremental learning framework through deep learning and support data
[article]
2018
bioRxiv
pre-print
the new data for further training so that the model can review the essential information of the old data when learning the new information. ...
learning methods and reaches similar performance as the deep learning model trained from scratch on both old and new data. ...
thus select the support data for each class, which will be shown to the deep learning model for future training to prevent the network from catastrophic forgetting. ...
doi:10.1101/317578
fatcat:mnxf3xpnefdf5py3xxgtrsmaz4
SpaceNet: Make Free Space For Continual Learning
[article]
2021
arXiv
pre-print
In this work, we propose a novel architecture-based method, referred to as SpaceNet, for the class-incremental learning scenario, in which we utilize the available fixed capacity of the model intelligently. ...
SpaceNet trains sparse deep neural networks from scratch in an adaptive way that compresses the sparse connections of each task in a compact number of neurons. ...
SpaceNet Approach for Continual Learning: In this section, we present our proposed method, SpaceNet, for deep neural networks to learn in the continual learning paradigm. ...
arXiv:2007.07617v2
fatcat:c5olxdxoszfinne7fintanq3ty
Cognitively Inspired Learning of Incremental Drifting Concepts
[article]
2021
arXiv
pre-print
Inspired by the nervous system learning mechanisms, we develop a computational model that enables a deep neural network to learn new concepts and expand its learned knowledge to new domains incrementally ...
This embedding space is modeled by internal data representations in a hidden network layer. ...
For the rest of the paper, we consider the base model f_θ^(t) to be a deep neural network with an increasing output size to encode incrementally observed classes. ...
arXiv:2110.04662v1
fatcat:55sswxlm3bgjrm5a5n2nipmyhy
Memory Efficient Experience Replay for Streaming Learning
[article]
2019
arXiv
pre-print
Streaming learning will cause conventional deep neural networks (DNNs) to fail for two reasons: 1) they need multiple passes through the entire dataset; and 2) non-iid data will cause catastrophic forgetting ...
While this works well for static settings, robots often operate in changing environments and must quickly learn new things from data streams. ...
Example images are provided in Fig. 3. For input features, we use embeddings from the ResNet-50 [34] deep convolutional neural network (CNN) pre-trained on ImageNet-1K [66]. ... (a feature-extraction sketch follows this entry)
arXiv:1809.05922v2
fatcat:hzidmbikyfhm7nanrj26xaxhuq
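The entry above uses embeddings from a ResNet-50 pre-trained on ImageNet-1K as fixed input features for the streaming learner. The snippet below sketches one standard way to extract such embeddings with torchvision (assuming torchvision ≥ 0.13 for the weights enum); it is an illustration rather than the authors' exact pipeline, and the image path is a placeholder.

```python
# Sketch: extract 2048-d ResNet-50 embeddings (ImageNet-pretrained) as input features.
import torch
from PIL import Image
from torchvision import models

weights = models.ResNet50_Weights.IMAGENET1K_V2
resnet = models.resnet50(weights=weights)
resnet.fc = torch.nn.Identity()    # drop the classification head, keep the embedding
resnet.eval()
preprocess = weights.transforms()  # resize/crop/normalize matching the pretrained weights

@torch.no_grad()
def embed(image_path: str) -> torch.Tensor:
    img = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    return resnet(img).squeeze(0)  # shape: (2048,)
```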
Adversarial Feature Alignment: Avoid Catastrophic Forgetting in Incremental Task Lifelong Learning
2019
Neural Computation
In this letter, we focus on the incremental multitask image classification scenario. ...
However, they either suffer from an accumulating drop in performance as the task sequence grows longer, or require storing an excessive number of model parameters for historical memory, or cannot obtain ...
Introduction: Conventional deep neural network (DNN) models customized for specific data usually fail to handle other tasks, even when they share a lot in common. ...
doi:10.1162/neco_a_01232
pmid:31525313
fatcat:t35cnmv6mfc6pmxbwhuhv7kp6a
DIODE: Dilatable Incremental Object Detection
[article]
2021
arXiv
pre-print
On the contrary, conventional deep learning models lack this capability of preserving previously learned knowledge. ...
Thus, the performance of regularization-based incremental object detectors gradually decays for subsequent learning steps. ...
arXiv:2108.05627v1
fatcat:v7drdcnu65f5lg7vjjgvz5i7sm
Evaluating Curriculum Learning Strategies in Neural Combinatorial Optimization
[article]
2020
arXiv
pre-print
Curriculum learning strategies have been shown helpful in increasing performance in the multi-task setting. ...
Neural combinatorial optimization (NCO) aims at designing problem-independent and efficient neural network-based strategies for solving combinatorial problems. ...
We also thank Irwan Bello from Google Brain as well as our lab colleagues, Magdalena Sobol and Angus Galloway, for their support in reviewing this manuscript. ...
arXiv:2011.06188v1
fatcat:wigwfobtprhe5h5qzdlahavm7i
An Incremental Class-Learning Approach with Acoustic Novelty Detection for Acoustic Event Recognition
2021
Sensors
In this study, a self-learning-based ASA for acoustic event recognition (AER) is presented to detect and incrementally learn novel acoustic events by tackling catastrophic forgetting. ...
For the extraction of deep audio representations, in addition to visual geometry group (VGG) and residual neural network (ResNet), time-delay neural network (TDNN) and TDNN-based long short-term memory ...
Introduction: Due to recent breakthroughs in deep learning and advancements in artificial intelligence, deep neural networks (DNNs), powerful learning models inspired by biological neural networks, have ...
doi:10.3390/s21196622
pmid:34640943
fatcat:jezml7fmeja4lkqfk3hg5mcsvq
Measuring Catastrophic Forgetting in Neural Networks
[article]
2017
arXiv
pre-print
Deep neural networks are used in many state-of-the-art systems for machine perception. ...
In this paper, we introduce new metrics and benchmarks for directly comparing five different mechanisms designed to mitigate catastrophic forgetting in neural networks: regularization, ensembling, rehearsal ...
Abitino was supported by NSF Research Experiences for Undergraduates (REU) award #1359361 to R. Dube. We thank NVIDIA for the generous donation of a Titan X GPU. ...
arXiv:1708.02072v4
fatcat:cxbakv2ggzcnfizequtwrye2sq
FearNet: Brain-Inspired Model for Incremental Learning
[article]
2018
arXiv
pre-print
FearNet uses a brain-inspired dual-memory system in which new memories are consolidated from a network for recent memories inspired by the mammalian hippocampal complex to a network for long-term storage ...
Incremental class learning involves sequentially learning classes in bursts of examples from the same class. ...
In this paper, we propose FearNet, a brain-inspired system for incrementally learning categories that significantly outperforms previous methods. ...
arXiv:1711.10563v2
fatcat:t2zsgaabtvgfhbsdegwayemmay
Brain-inspired replay for continual learning with artificial neural networks
2020
Nature Communications
in a class-incremental learning scenario. ...
Our method achieves state-of-the-art performance on challenging continual learning benchmarks (e.g., class-incremental learning on CIFAR-100) without storing data, and it provides a novel model for replay ...
Acknowledgements We thank Mengye Ren, Zhe Li and Máté Lengyel for comments on various parts of this work, and Johannes Oswald and Zhengwen Zeng for useful suggestions. ...
doi:10.1038/s41467-020-17866-2
pmid:32792531
pmcid:PMC7426273
fatcat:ndod2rwhjzg3dbradsb35b7z6e
Showing results 1–15 of 459.