1,261 Hits in 5.2 sec

An Empirical Investigation of Catastrophic Forgetting in Gradient-Based Neural Networks [article]

Ian J. Goodfellow, Mehdi Mirza, Da Xiao, Aaron Courville, Yoshua Bengio
2015 arXiv   pre-print
Here, we investigate the extent to which the catastrophic forgetting problem occurs for modern neural networks, comparing both established and recent gradient-based training algorithms and activation functions  ...  We also examine the effect of the relationship between the first task and the second task on catastrophic forgetting.  ...  Ian Goodfellow is supported by the 2013 Google Fellowship in Deep Learning.  ... 
arXiv:1312.6211v3 fatcat:2zg36q2e3nd4jorrknmhk47ppm

Wide Neural Networks Forget Less Catastrophically [article]

Seyed Iman Mirzadeh, Arslan Chaudhry, Dong Yin, Huiyi Hu, Razvan Pascanu, Dilan Gorur, Mehrdad Farajtabar
2022 arXiv   pre-print
While the recent progress in continual learning literature is encouraging, our understanding of what properties of neural networks contribute to catastrophic forgetting is still limited.  ...  To address this, instead of focusing on continual learning algorithms, in this work, we focus on the model itself and study the impact of "width" of the neural network architecture on catastrophic forgetting  ...  Recently, a lot of algorithmic progress has been made in mitigating catastrophic forgetting in neural networks. The progress can broadly be classified into three categories.  ... 
arXiv:2110.11526v2 fatcat:s46xhe4yfjc33a7zgtvr6aea7y

Weight Friction: A Simple Method to Overcome Catastrophic Forgetting and Enable Continual Learning [article]

Gabrielle K. Liu
2019 arXiv   pre-print
In this research, we propose a simple method to overcome catastrophic forgetting and enable continual learning in neural networks.  ...  In recent years, deep neural networks have found success in replicating human-level cognitive skills, yet they suffer from several major obstacles.  ...  Conclusions In this research, we addressed the problem of catastrophic forgetting in neural networks, which hinders continual learning.  ... 
arXiv:1908.01052v2 fatcat:sv6wtizqyjbn3nytmurnvqysua

An Empirical Study of Example Forgetting during Deep Neural Network Learning [article]

Mariya Toneva, Alessandro Sordoni, Remi Tachet des Combes, Adam Trischler, Yoshua Bengio, Geoffrey J. Gordon
2019 arXiv   pre-print
Inspired by the phenomenon of catastrophic forgetting, we investigate the learning dynamics of neural networks as they train on single classification tasks.  ...  ; and (iii) based on forgetting dynamics, a significant fraction of examples can be omitted from the training data set while still maintaining state-of-the-art generalization performance.  ...  Figure 7 CONCLUSION AND FUTURE WORK In this paper, inspired by the phenomenon of catastrophic forgetting, we investigate the learning dynamics of neural networks when training on single classification  ... 
arXiv:1812.05159v3 fatcat:j75n5csi6jegjiko4d5bkiwtg4

Reducing Catastrophic Forgetting in Modular Neural Networks by Dynamic Information Balancing [article]

Mohammed Amer, Tomás Maul
2019 arXiv   pre-print
However, neural networks suffer from catastrophic forgetting when stressed with the challenge of continual learning.  ...  We investigate how to exploit modular topology in neural networks in order to dynamically balance the information load between different modules by routing inputs based on the information content in each  ...  Our main contributions in this paper are: • Introducing and investigating the idea of dynamic information balancing (DIB) between different modules in an ANN as a way of alleviating catastrophic forgetting  ... 
arXiv:1912.04508v1 fatcat:pftl4lqgfrardfipsc5zl4jsly

Behavioral Experiments for Understanding Catastrophic Forgetting [article]

Samuel J. Bell, Neil D. Lawrence
2021 arXiv   pre-print
We apply the techniques of experimental psychology to investigating catastrophic forgetting in neural networks.  ...  Alongside our empirical findings, we demonstrate an alternative, behavior-first approach to investigating neural network phenomena.  ...  While we have an empirical focus in common, our aims, however, fundamentally differ: rather than investigating neural networks in search of psychological plausibility, here we use the methods of psychology  ... 
arXiv:2110.10570v2 fatcat:hhj4d5xxnbgblcrx37za3bb7ve

On Catastrophic Forgetting and Mode Collapse in Generative Adversarial Networks [article]

Hoang Thanh-Tung, Truyen Tran
2020 arXiv   pre-print
In this paper, we show that Generative Adversarial Networks (GANs) suffer from catastrophic forgetting even when they are trained to approximate a single target distribution.  ...  The level of mismatch between tasks in the sequence determines the level of forgetting. Catastrophic forgetting is interrelated to mode collapse and can make the training of GANs non-convergent.  ...  Catastrophic forgetting (CF) in artificial neural networks [4, 5, 6] is the problem where the knowledge of previously learned tasks is abruptly destroyed by the learning of the current task.  ... 
arXiv:1807.04015v8 fatcat:tvvsrs3vmzcxhds5gteoujd34m

Meta-learnt priors slow down catastrophic forgetting in neural networks [article]

Giacomo Spigler
2020 arXiv   pre-print
Here we show that catastrophic forgetting can be mitigated in a meta-learning context, by exposing a neural network to multiple tasks in a sequential manner during training.  ...  (i.e., feature detectors or policies) that could be helpful to solve other tasks, and to limit future interference with the acquired knowledge, and thus catastrophic forgetting.  ...  In this paper, we focus on the problem of catastrophic forgetting, that is the tendency of biological and artificial neural networks to rapidly forget previously learnt knowledge upon learning new information  ... 
arXiv:1909.04170v2 fatcat:lf7djggjwbhafcjkl6cglrde6e

Is Fast Adaptation All You Need? [article]

Khurram Javed, Hengshuai Yao, Martha White
2019 arXiv   pre-print
In this paper, we investigate a different training signal -- robustness to catastrophic interference -- and demonstrate that representations learned by directly minimizing interference are more conducive  ...  Gradient-based meta-learning has proven to be highly effective at learning model initializations, representations, and update rules that allow fast adaptation from a few samples.  ...  Based on the detected task, an agent might choose to use a different neural network as model. Such a task selection mechanism may make reducing interference less important.  ... 
arXiv:1910.01705v1 fatcat:kv7wbmtcirbvzmbkhov2kvorti

Anatomy of Catastrophic Forgetting: Hidden Representations and Task Semantics [article]

Vinay V. Ramasesh, Ethan Dyer, Maithra Raghu
2020 arXiv   pre-print
In this paper, we address this important knowledge gap, investigating how forgetting affects representations in neural network models.  ...  These insights enable the development of an analytic argument and empirical picture relating the degree of forgetting to representational similarity between tasks.  ...  Additionally, we thank the authors of the image classification library at https://github.com/hysts/ pytorch_image_classification, on top of which we built much of our codebase.  ... 
arXiv:2007.07400v1 fatcat:ppumynt6jjfjtewtpk4fxkecey

Overcoming Catastrophic Forgetting by Generative Regularization [article]

Patrick H. Chen, Wei Wei, Cho-jui Hsieh, Bo Dai
2021 arXiv   pre-print
By combining discriminative and generative loss together, we empirically show that the proposed method outperforms state-of-the-art methods on a variety of tasks, avoiding catastrophic forgetting in continual  ...  In this paper, we propose a new method to overcome catastrophic forgetting by adding generative regularization to Bayesian inference framework.  ...  In this work, f θ (·) is a neural network parameterized by θ.  ... 
arXiv:1912.01238v3 fatcat:hddingjbzbennljz4nnn32lk2m

Understanding the Role of Training Regimes in Continual Learning [article]

Seyed Iman Mirzadeh, Mehrdad Farajtabar, Razvan Pascanu, Hassan Ghasemzadeh
2020 arXiv   pre-print
Catastrophic forgetting affects the training of neural networks, limiting their ability to learn multiple tasks sequentially.  ...  Instead, we hypothesize that the geometrical properties of the local minima found for each task play an important role in the overall degree of forgetting.  ...  Based on the argumentation of the previous section, we believe these techniques can have an important role in affecting forgetting as well.  ... 
arXiv:2006.06958v1 fatcat:kq545vj3brf6nchbxjo3rlwnb4

Towards continual task learning in artificial neural networks: current approaches and insights from neuroscience [article]

David McCaffary
2021 arXiv   pre-print
Neural networks trained on multiple tasks in sequence with stochastic gradient descent often suffer from representational interference, whereby the learned weights for a given task effectively overwrite those of previous tasks in a process termed catastrophic forgetting.  ...  An empirical investigation of catastrophic forgetting in gradient-based neural networks. arXiv preprint arXiv:1312.6211, 2013.  ... 
arXiv:2112.14146v1 fatcat:xu3a3blkxrhkvmutosnwrlalum

On the Decision Boundary of Deep Neural Networks [article]

Yu Li, Lizhong Ding, Xin Gao
2019 arXiv   pre-print
In an attempt to bridge the gap, we investigate the decision boundary of a production deep learning architecture with weak assumptions on both the training data and the model.  ...  In addition to facilitating the understanding of deep learning, our result can be helpful for solving a broad range of practical problems of deep learning, such as catastrophic forgetting and adversarial  ...  Catastrophic forgetting [14], the inability of a neural network to learn new knowledge without forgetting previously learned knowledge, is one of the bottlenecks  ... 
arXiv:1808.05385v3 fatcat:xvne2tvgyzdnllfbh7lhznroni

KASAM: Spline Additive Models for Function Approximation [article]

Heinrich van Deventer, Pieter Janse van Rensburg, Anna Bosman
2022 arXiv   pre-print
Neural networks have been criticised for their inability to perform continual learning due to catastrophic forgetting and rapid unlearning of a past concept when a new concept is introduced.  ...  SAM exhibited robust but imperfect memory retention, with small regions of overlapping interference in sequential learning tasks. KASAM exhibited greater susceptibility to catastrophic forgetting.  ...  with higher dimensional problems; increasing the density of basis functions during training; the incorporation of B-spline functions into other models such as recurrent neural networks, LSTMs, GRUs or  ... 
arXiv:2205.06376v1 fatcat:zerqstbo6rfghdabzplgjss4oi
Showing results 1 — 15 out of 1,261 results