3,778 Hits in 11.3 sec

Unsupervised learning to overcome catastrophic forgetting in neural networks

Irene Munoz-Martin, Stefano Bianchi, Giacomo Pedretti, Octavian Melnic, Stefano Ambrogio, Daniele Ielmini
2019 IEEE Journal on Exploratory Solid-State Computational Devices and Circuits  
achievable with state-of-the-art neural networks.  ...  Unsupervised learning by STDP is demonstrated by hardware experiments with a one-layer perceptron adopting phase-change memory (PCM) synapses.  ...  Fig. 1 illustrates the catastrophic forgetting problem in neural networks: first, the network is trained with supervised learning on task A (a), e.g., a subset of a large data set such as the MNIST data  ... 
doi:10.1109/jxcdc.2019.2911135 fatcat:jmog3mmwqvg7dp7ilolrvdkrwm
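
The snippet above describes the basic catastrophic-forgetting setup: a network is first trained on task A (for example a subset of MNIST) and then on a second task, after which performance on task A collapses. The sketch below reproduces that setup with a single-layer softmax classifier on synthetic Gaussian blobs standing in for the MNIST subsets; the data, model size, and hyperparameters are illustrative assumptions, not the paper's PCM-based experiment.

    # Minimal catastrophic-forgetting demo: train on "task A", then on "task B",
    # and re-measure task-A accuracy after the sequential training.
    import numpy as np

    rng = np.random.default_rng(0)

    def make_task(centers, n=500):
        """Two-class task: Gaussian blobs around the given centers."""
        X = np.vstack([rng.normal(c, 1.0, size=(n, 2)) for c in centers])
        y = np.repeat(np.arange(len(centers)), n)
        return X, y

    def train(W, b, X, y, epochs=200, lr=0.1):
        """Plain softmax regression trained by gradient descent."""
        for _ in range(epochs):
            logits = X @ W + b
            p = np.exp(logits - logits.max(axis=1, keepdims=True))
            p /= p.sum(axis=1, keepdims=True)
            p[np.arange(len(y)), y] -= 1.0          # dL/dlogits for cross-entropy
            W -= lr * X.T @ p / len(y)
            b -= lr * p.mean(axis=0)
        return W, b

    def accuracy(W, b, X, y):
        return float(((X @ W + b).argmax(axis=1) == y).mean())

    W, b = np.zeros((2, 2)), np.zeros(2)
    task_a = make_task([(-4, 0), (-1, 0)])          # "task A"
    task_b = make_task([(1, 0), (4, 0)])            # "task B", different input region

    W, b = train(W, b, *task_a)
    print("task A accuracy after training on A:", accuracy(W, b, *task_a))
    W, b = train(W, b, *task_b)                     # sequential training, no replay
    print("task A accuracy after training on B:", accuracy(W, b, *task_a))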

Training a Functional Link Neural Network Using an Artificial Bee Colony for Solving a Classification Problems [article]

Yana Mazwin Mohmad Hassim, Rozaida Ghazali
2012 arXiv   pre-print
This paper presents the ability of the Functional Link Neural Network (FLNN) to overcome the structural complexity of the MLP by using a single-layer architecture, and proposes an Artificial Bee Colony (ABC) optimization  ...  However, the complexity of the MLP structure, together with problems such as local-minima trapping, overfitting, and weight interference, has made neural network training difficult.  ...  overcome the disadvantages caused by backpropagation in the FLNN training.  ... 
arXiv:1212.6922v1 fatcat:wrfzexwdtreprnspqc2yigkwni
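
This entry argues that an FLNN replaces the MLP's hidden layers with a fixed functional expansion of the inputs feeding a single trainable layer, trained with ABC optimization instead of backpropagation. The sketch below shows only the functional-link part on an XOR-style problem; plain gradient descent stands in for ABC, and the particular expansion (pairwise product plus trigonometric terms) is an assumed variant, not necessarily the authors' exact choice.

    import numpy as np

    def functional_expansion(X):
        """Functional-link expansion for 2-D inputs: raw features, their product,
        and trigonometric terms (one assumed variant of the expansion)."""
        x1, x2 = X[:, 0], X[:, 1]
        return np.column_stack([x1, x2, x1 * x2,
                                np.sin(np.pi * x1), np.sin(np.pi * x2),
                                np.cos(np.pi * x1), np.cos(np.pi * x2)])

    def train_flnn(X, y, epochs=5000, lr=0.5):
        """Single trainable layer (logistic output) on the expanded features.
        Plain gradient descent stands in for the paper's ABC optimizer."""
        Phi = functional_expansion(X)
        w, b = np.zeros(Phi.shape[1]), 0.0
        for _ in range(epochs):
            p = 1.0 / (1.0 + np.exp(-(Phi @ w + b)))
            g = p - y                               # cross-entropy gradient w.r.t. logits
            w -= lr * Phi.T @ g / len(y)
            b -= lr * g.mean()
        return w, b

    # XOR is not linearly separable in the raw inputs, but the product term in the
    # expansion makes it separable for the single-layer classifier.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([0, 1, 1, 0], dtype=float)
    w, b = train_flnn(X, y)
    p = 1.0 / (1.0 + np.exp(-(functional_expansion(X) @ w + b)))
    print("XOR predictions:", (p > 0.5).astype(int))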

Meta Continual Learning [article]

Risto Vuorio, Dong-Yeon Cho, Daejoong Kim, Jiwon Kim
2018 arXiv   pre-print
This ability is limited in current deep neural networks by a problem called catastrophic forgetting, where training on new tasks tends to severely degrade performance on previous tasks.  ...  Using neural networks in practical settings would benefit from the ability of the networks to learn new tasks throughout their lifetimes without forgetting the previous tasks.  ...  The update step predictor is another fully connected neural network with two layers of ten neurons.  ... 
arXiv:1806.06928v1 fatcat:gr2lrveltjc5dpt5qpdwihihsq
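
The last fragment of the snippet mentions an "update step predictor": a small fully connected network with two hidden layers of ten neurons that outputs parameter updates. The sketch below shows one plausible way such a predictor could replace the plain SGD rule; the per-parameter input features (gradient and current value) and the omission of the meta-training loop are assumptions for illustration.

    import numpy as np

    rng = np.random.default_rng(0)

    class UpdatePredictor:
        """MLP with two hidden layers of 10 units, applied independently per parameter."""
        def __init__(self, in_dim=2, hidden=10):
            self.W1 = rng.normal(0, 0.1, (in_dim, hidden))
            self.b1 = np.zeros(hidden)
            self.W2 = rng.normal(0, 0.1, (hidden, hidden))
            self.b2 = np.zeros(hidden)
            self.W3 = rng.normal(0, 0.1, (hidden, 1))
            self.b3 = np.zeros(1)

        def __call__(self, features):
            h = np.tanh(features @ self.W1 + self.b1)
            h = np.tanh(h @ self.W2 + self.b2)
            return (h @ self.W3 + self.b3).ravel()   # one predicted step per parameter

    predictor = UpdatePredictor()

    # Apply the learned update rule to a parameter vector: instead of
    #   theta -= lr * grad
    # the step is whatever the predictor outputs for (grad, theta) feature pairs.
    theta = rng.normal(0, 1, 50)
    grad = rng.normal(0, 1, 50)
    features = np.stack([grad, theta], axis=1)       # (n_params, 2) per-parameter features
    theta = theta + predictor(features)
    print("updated parameters (first 5):", theta[:5])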

Continuous Learning in a Single-Incremental-Task Scenario with Spike Features [article]

Ruthvik Vaila, John Chiasson, Vishal Saxena
2020 arXiv   pre-print
Deep Neural Networks (DNNs) have two key deficiencies: their dependence on high-precision computing and their inability to perform sequential learning, that is, when a DNN is trained on a first task and the same DNN is trained on the next task, it forgets the first task.  ...  Figure 3 shows the trend of testing accuracy as the network is trained on disjoint tasks with 10 different weight initializations.  ... 
arXiv:2005.04167v1 fatcat:4fouiybdmzgctgzrwyyzrdbyim

INCIDENT DETECTION USING A FUZZY-BASED NEURAL NETWORK MODEL

Daehyon KIM, Seungjae LEE
2005 Journal of the Eastern Asia Society for Transportation Studies  
Incidents on the freeway disrupt traffic flow and the cost of delay caused by the incidents is significant.  ...  system claimed to be more powerful than many expert systems, genetic algorithms, or other neural network models like Backpropagation.  ...  Table 3 shows the performance in terms of prediction accuracy of two different neural network models on the incident and incident-free test data sets.  ... 
doi:10.11175/easts.6.2629 fatcat:p434x7p3jbalrjvoskwwpgeira

Attention-Based Structural-Plasticity [article]

Soheil Kolouri, Nicholas Ketz, Xinyun Zou, Jeffrey Krichmar, Praveen Pilly
2019 arXiv   pre-print
Neural networks, in particular, suffer considerably from the catastrophic forgetting phenomenon. Recently there have been several efforts towards overcoming catastrophic forgetting in neural networks.  ...  Specifically, we define an attention-based selective plasticity of synapses based on the cholinergic neuromodulatory system in the brain.  ...  Connections in the neural network are committed to a given task based on contrastive excitation backpropagation (c-EB).  ... 
arXiv:1903.06070v1 fatcat:hhlw2mi44zcijprvuetnvnobr4
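
The snippet describes committing connections to a task via contrastive excitation backpropagation (c-EB) and making important synapses less plastic. A generic way to realize selective plasticity is an importance-weighted quadratic penalty that anchors important parameters near their values from the previous task; the sketch below illustrates that mechanism with a placeholder importance vector, not the c-EB-derived importances used in the paper.

    import numpy as np

    def penalized_gradient(grad_task, theta, theta_anchor, importance, lam=1.0):
        """Gradient of  L_task + (lam/2) * sum_i importance_i * (theta_i - anchor_i)^2."""
        return grad_task + lam * importance * (theta - theta_anchor)

    rng = np.random.default_rng(0)
    theta = rng.normal(size=4)                     # current parameters
    theta_anchor = theta.copy()                    # snapshot taken after finishing task A
    importance = np.array([1.0, 0.0, 0.5, 0.0])    # placeholder per-parameter importance

    # SGD on task B: parameters with high importance barely move away from their
    # task-A values, while unimportant ones are free to change.
    grad_task_b = np.array([1.0, 1.0, 1.0, 1.0])
    lr = 0.1
    for _ in range(50):
        theta -= lr * penalized_gradient(grad_task_b, theta, theta_anchor, importance, lam=10.0)
    print("drift from task-A values:", np.abs(theta - theta_anchor))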

Ensemble Learning in Fixed Expansion Layer Networks for Mitigating Catastrophic Forgetting

Robert Coop, Aaron Mishtal, Itamar Arel
2013 IEEE Transactions on Neural Networks and Learning Systems  
A variation of this phenomenon, in the context of feedforward neural networks, arises when nonstationary inputs lead to loss of previously learned mappings.  ...  In addition, we investigate a novel framework for training ensembles of FEL networks, based on exploiting an information-theoretic measure of diversity between FEL learners, to further control undesired  ...  A plot of the accuracy obtained using an ensemble of FEL neural networks is shown in Fig. 7, with some detailed values shown in Table IV.  ... 
doi:10.1109/tnnls.2013.2264952 pmid:24808599 fatcat:rv4zxa3w6zedjm74qtrrr4hwwy
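
The Fixed Expansion Layer (FEL) mentioned here relies on sparse coding of hidden activations so that different inputs activate mostly disjoint sets of units, limiting interference. Below is a minimal sketch of that idea, assuming a frozen random projection followed by k-winner-take-all sparsification; the sizes, sparsity level, and choice of projection are illustrative, and the ensemble-diversity training discussed in the paper is not shown.

    import numpy as np

    rng = np.random.default_rng(0)

    def fixed_expansion_layer(x, W_fixed, k=10):
        """Project x through frozen weights and keep only the k largest activations."""
        h = np.maximum(W_fixed @ x, 0.0)           # fixed random projection + ReLU
        mask = np.zeros_like(h)
        mask[np.argsort(h)[-k:]] = 1.0             # k-winner-take-all sparsification
        return h * mask

    d_in, d_expand = 20, 400
    W_fixed = rng.normal(0, 1.0 / np.sqrt(d_in), (d_expand, d_in))   # never trained

    x1, x2 = rng.normal(size=d_in), rng.normal(size=d_in)
    code1 = fixed_expansion_layer(x1, W_fixed)
    code2 = fixed_expansion_layer(x2, W_fixed)

    # Sparse codes for unrelated inputs share few active units, which is what limits
    # weight interference in the trainable layer sitting on top of the expansion.
    overlap = np.sum((code1 > 0) & (code2 > 0))
    print(f"active units per code: {np.sum(code1 > 0)}, shared active units: {overlap}")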

Synaptic metaplasticity in binarized neural networks [article]

Axel Laborieux, Maxence Ernoult, Tifenn Hirtzlin, Damien Querlioz
2021 arXiv   pre-print
Unlike the brain, artificial neural networks, including state-of-the-art deep neural networks for computer vision, are subject to "catastrophic forgetting": they rapidly forget the previous task when trained  ...  In this work, we show that this concept of metaplasticity can be transferred to a particular type of deep neural networks, binarized neural networks, to reduce catastrophic forgetting.  ... 
arXiv:2101.07592v1 fatcat:vbr7y5fpi5ed7lpxplyhe5xuhi
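
The metaplasticity idea referenced here keeps a real-valued hidden weight behind each binary weight and makes strongly consolidated weights progressively harder to flip when a new task is learned. The sketch below implements one rule in that spirit: updates that push a hidden weight back toward zero are attenuated as its magnitude grows. The attenuation function and the constant m are assumptions, not necessarily the paper's exact formulation.

    import numpy as np

    def metaplastic_update(w_hidden, grad, lr=0.1, m=1.5):
        """Attenuate sign-flipping updates of consolidated (large-magnitude) hidden weights."""
        step = -lr * grad
        toward_zero = np.sign(step) != np.sign(w_hidden)          # update opposes current sign
        attenuation = 1.0 - np.tanh(m * np.abs(w_hidden)) ** 2    # ~1 near zero, ~0 when large
        return w_hidden + np.where(toward_zero, step * attenuation, step)

    w_hidden = np.array([2.0, 2.0, 0.1, -0.1])    # first two weights are strongly consolidated
    grad = np.array([1.0, -1.0, 1.0, -1.0])       # task-B gradient, partly opposing task A

    w_new = metaplastic_update(w_hidden, grad)
    print("hidden weights:", w_new)
    print("binary weights used at inference:", np.sign(w_new))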

Natural Language Processing with Improved Deep Learning Neural Networks

YiTao Zhou, Rahman Ali
2022 Scientific Programming  
to train a recursive neural network classifier optimized by sentences.  ...  After the feature extractor is pretrained, we use a long short-term memory neural network as a classifier of the transfer action, and the characteristics extracted by the syntactic analyzer as its input  ...  In 1989, the backpropagation algorithm was successfully applied to the training of a convolutional neural network.  ... 
doi:10.1155/2022/6028693 fatcat:6cyjzepiajfpbnw63jo6vyldga

Continuous Learning in a Single-Incremental-Task Scenario with Spike Features

Ruthvik Vaila, John Chiasson, Vishal Saxena
2020 International Conference on Neuromorphic Systems 2020  
Deep Neural Networks (DNNs) have two key deficiencies: their dependence on high-precision computing and their inability to perform sequential learning, that is, when a DNN is trained on a first task and the same DNN is trained on the next task, it forgets the first task.  ...  Figure 3 shows the trend of testing accuracy as the network is trained on disjoint tasks with 10 different weight initializations.  ... 
doi:10.1145/3407197.3407213 dblp:conf/icons2/VailaCS20 fatcat:tidyw4mg5zbjzgmuo77f6d7whi

Mitigation of catastrophic forgetting in recurrent neural networks using a Fixed Expansion Layer

Robert Coop, Itamar Arel
2013 The 2013 International Joint Conference on Neural Networks (IJCNN)  
in the context of dynamic systems, particularly recurrent neural networks.  ...  In this paper, we introduce a solution for mitigating catastrophic forgetting in RNNs based on enhancing the Fixed Expansion Layer (FEL) neural network, which exploits sparse coding of hidden neuron activations  ...  This paper addresses the issue of mitigating catastrophic forgetting in recurrent neural networks by expanding on prior work which was devised for feedforward architectures [5].  ... 
doi:10.1109/ijcnn.2013.6707047 dblp:conf/ijcnn/CoopA13 fatcat:re6wkzuy2jgwlbgr2s5c4o5tru

A Web-Based User Interface for Machine Learning Analysis [chapter]

Fatma Nasoz, Chandani Shrestha
2017 Lecture Notes in Computer Science  
When tested on breast cancer data with 10 attributes, both Logistic Regression and Backpropagation gave 98.5% accuracy.  ...  When tested on breast cancer data with 31 attributes, Logistic Regression gave 92.85% accuracy and Backpropagation gave 94.64%.  ... 
doi:10.1007/978-3-319-58524-6_35 fatcat:gaadaq43wndcte54q4toj3hv4u

Remembering for the Right Reasons: Explanations Reduce Catastrophic Forgetting [article]

Sayna Ebrahimi, Suzanne Petryk, Akash Gokul, William Gan, Joseph E. Gonzalez, Marcus Rohrbach, Trevor Darrell
2021 arXiv   pre-print
example in the buffer and ensures the model has "the right reasons" for its predictions by encouraging its explanations to remain consistent with those used to make decisions at training time.  ...  The goal of continual learning (CL) is to learn a sequence of tasks without suffering from the phenomenon of catastrophic forgetting.  ...  Because later layers in a convolutional neural network are known to encode higher-level semantics, taking the gradient of a model output with respect to the activations of these feature maps discovers  ... 
arXiv:2010.01528v2 fatcat:kro22ndjozbtdbfgilpnog3xyq
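
The snippet describes keeping the model's explanations for buffered examples consistent with the explanations it gave at training time. The PyTorch sketch below illustrates that with a tiny CNN: a gradient-based saliency over an intermediate feature map is stored with each replayed example, and an L1 consistency term on the current saliency is added to the replay loss. The model, the saliency definition, and the loss weighting are illustrative assumptions rather than the paper's exact recipe.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    torch.manual_seed(0)

    class TinyCNN(nn.Module):
        def __init__(self):
            super().__init__()
            self.conv = nn.Conv2d(1, 4, kernel_size=3, padding=1)
            self.fc = nn.Linear(4 * 8 * 8, 10)

        def forward(self, x):
            feat = F.relu(self.conv(x))            # intermediate feature map
            return self.fc(feat.flatten(1)), feat

    def saliency(model, x, y):
        """Gradient of the target-class logit w.r.t. the feature-map activations."""
        logits, feat = model(x)
        target = logits[torch.arange(len(y)), y].sum()
        return torch.autograd.grad(target, feat)[0].detach().abs()

    model = TinyCNN()
    x_buf = torch.randn(2, 1, 8, 8)                # examples kept in the replay buffer
    y_buf = torch.tensor([3, 7])
    saved_expl = saliency(model, x_buf, y_buf)     # explanations stored at training time

    # Later, while learning a new task, replayed examples contribute the usual
    # classification loss plus an explanation-consistency penalty.
    logits, feat = model(x_buf)
    target = logits[torch.arange(len(y_buf)), y_buf].sum()
    current_expl = torch.autograd.grad(target, feat, create_graph=True)[0].abs()
    loss = F.cross_entropy(logits, y_buf) + 1.0 * F.l1_loss(current_expl, saved_expl)
    loss.backward()                                # gradients flow through the explanation term
    print("replay + explanation-consistency loss:", float(loss))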

American Sign Language Recognition using Deep Learning and Computer Vision

Kshitij Bantupalli, Ying Xie
2018 2018 IEEE International Conference on Big Data (Big Data)  
operations such as backpropagation, which are key features that enable a neural network to perform classification with high accuracy.  ...  In theory, a CNN "learns" the values of the filters during the training process thanks to backpropagation of gradients and loss between the perceptrons of the neural network.  ... 
doi:10.1109/bigdata.2018.8622141 dblp:conf/bigdataconf/BantupalliX18 fatcat:qbh4hahgunhltihzpvofpngwge

Neuromodulated Dopamine Plastic Networks for Heterogeneous Transfer Learning with Hebbian Principle

Arjun Magotra, Juntae Kim
2021 Symmetry  
Neuromodulation of plasticity offers a powerful new technique with applications in training neural networks implementing asymmetric backpropagation using Hebbian principles in transfer learning motivated  ...  The artificial neural networks with neuromodulated plasticity are used to implement transfer learning in the image classification domain.  ...  In NDHTL, the output of the neural network layers controls eta, the plasticity of the connection weights, which in turn controls the transfer-learning efficacy and accuracy of the model.
doi:10.3390/sym13081344 fatcat:mj4nxsut6vgojd266qfuuhfrlu
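
The last fragment states that the layer's output controls eta, the plasticity of the connection weights. A common way to realize this is a Hebbian trace per connection whose update rate eta is computed from the layer's own activity and added on top of slow weights. The sketch below follows that pattern; the gating function, dimensions, and initialization are assumptions for illustration.

    import numpy as np

    rng = np.random.default_rng(0)

    d_in, d_out = 8, 4
    W = rng.normal(0, 0.3, (d_out, d_in))        # slow weights (would be trained by backprop)
    alpha = rng.normal(0, 0.1, (d_out, d_in))    # per-connection plasticity coefficients
    hebb = np.zeros((d_out, d_in))               # fast Hebbian trace, starts empty

    def plastic_layer_step(x, W, alpha, hebb):
        """One forward step of a plastic layer with output-controlled (neuromodulated) eta."""
        y = np.tanh((W + alpha * hebb) @ x)                  # effective weight = slow + plastic part
        eta = 1.0 / (1.0 + np.exp(-np.mean(y)))              # layer output sets the plasticity rate
        hebb = (1.0 - eta) * hebb + eta * np.outer(y, x)     # Hebbian update: post * pre, gated by eta
        return y, hebb

    for t in range(5):
        x = rng.normal(size=d_in)
        y, hebb = plastic_layer_step(x, W, alpha, hebb)
        print(f"step {t}: mean |hebb| = {np.abs(hebb).mean():.4f}")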
Showing results 1 — 15 out of 3,778 results