11,601 Hits in 7.0 sec

A Corrective View of Neural Networks: Representation, Memorization and Learning [article]

Guy Bresler, Dheeraj Nagaraj
2020 arXiv   pre-print
This technique yields several new representation and learning results for neural networks.  ...  Next, we give a powerful representation result for two-layer neural networks with ReLUs and smoothed ReLUs which can achieve a squared error of at most ϵ with O(C(a,d) ϵ^(-1/(a+1))) for a ∈ N ∪ {0} when the  ...  In this paper we focus on three core questions regarding the capabilities of neural networks: representation, memorization, and learning low degree polynomials. Representation.  ... 
arXiv:2002.00274v2 fatcat:lmzkkmtxb5hwhpoo7xskxukule
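For readability, the bound quoted in the snippet above can be set out as a formula. As we read it (the snippet truncates mid-sentence; taking the O(·) quantity to count hidden units is our assumption, not stated in the fragment):

```latex
\mathbb{E}\bigl(f(x) - \hat f(x)\bigr)^2 \le \epsilon
\quad\text{using}\quad
O\!\bigl(C(a,d)\,\epsilon^{-1/(a+1)}\bigr)\ \text{hidden units},
\qquad a \in \mathbb{N}\cup\{0\},
```

so a larger smoothness parameter a gives a milder dependence on ϵ (e.g. a = 0 gives O(ϵ^{-1})).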

Coded Hopfield networks

Claude Berrou, Vincent Gripon
2010 2010 6th International Symposium on Turbo Codes & Iterative Information Processing  
Error-correcting coding is introduced in associative memories based on Hopfield networks in order to increase the learning diversity as well as the recall robustness in presence of erasures and errors.  ...  Whereas learning is similar to that of classical (i.e.  ...  This article presents a concrete example of formal neural networks combined with error correcting codes.  ... 
doi:10.1109/istc.2010.5613860 fatcat:udiethocubfvjgtsponxmzwxqi

Neural Networks in Seismic Discrimination [chapter]

Farid U. Dowla
1996 Monitoring a Comprehensive Test Ban Treaty  
This network is a classic example of a supervised learning network and applies a learning algorithm, such that the network learns to associate the inputs with the corresponding outputs for all or most  ...  However, the choice of the correct network and appropriate preprocessing is often the key to developing a powerful network.  ...  Acknowledgments The work described in this paper is a result of collaboration with many colleagues at LLNL.  ... 
doi:10.1007/978-94-011-0419-7_41 fatcat:wks7pqbntnedxdt5ac674mgo2q

AL2: Progressive Activation Loss for Learning General Representations in Classification Neural Networks [article]

Majed El Helou, Frederike Dümbgen, Sabine Süsstrunk
2020 arXiv   pre-print
The large capacity of neural networks enables them to learn complex functions.  ...  To avoid overfitting, networks however require a lot of training data that can be expensive and time-consuming to collect.  ...  One approach to assess the quality of the feature representation learned by a network is to evaluate how much it actually memorizes.  ... 
arXiv:2003.03633v1 fatcat:dhm3xeefcrhq3nejk3csvkd2oy

A Closer Look at Memorization in Deep Networks [article]

Devansh Arpit, Stanisław Jastrzębski, Nicolas Ballas, David Krueger, Emmanuel Bengio, Maxinder S. Kanwal, Tegan Maharaj, Asja Fischer, Aaron Courville, Yoshua Bengio, Simon Lacoste-Julien
2017 arXiv   pre-print
We examine the role of memorization in deep learning, drawing connections to capacity, generalization, and adversarial robustness.  ...  While deep networks are capable of memorizing noise data, our results suggest that they tend to prioritize learning simple patterns first.  ...  DA was supported by IVADO, CIFAR and NSERC. EB was financially supported by the Samsung Advanced Institute of Technology (SAIT). MSK and SJ were supported by MILA during the course of this work.  ... 
arXiv:1706.05394v2 fatcat:ltnngvtq2renfcdk76yradnfha

Active Long Term Memory Networks [article]

Tommaso Furlanello, Jiaping Zhao, Andrew M. Saxe, Laurent Itti, Bosco S. Tjan
2016 arXiv   pre-print
A-LTM exploits the non-convex nature of deep neural networks and actively maintains knowledge of previously learned, inactive tasks using a distillation loss.  ...  Continual Learning in artificial neural networks suffers from interference and forgetting when different tasks are learned sequentially.  ...  The case for a strong effect of catastrophic interference (CI) during sequential learning in deep neural networks has been shown, respectively, between semantic [17] and graphical factors [3].  ... 
arXiv:1606.02355v1 fatcat:c4sials7qfbe5fajedk65rgv4i

An Attention-based Recurrent Neural Networks Framework for Health Data Analysis

Qiuling Suo, Fenglong Ma, Giovanni Canino, Jing Gao, Aidong Zhang, Agostino Gnasso, Giuseppe Tradigo, Pierangelo Veltri
2018 Sistemi Evoluti per Basi di Dati  
Patients' historical records are fed into a Recurrent Neural Network (RNN) which memorizes all the past visit information, and then a task-specific layer is trained to predict multiple diagnoses.  ...  We propose a multi-task framework that can monitor the status of multiple diagnoses.  ...  Acknowledgement This work was supported in part by NSF IIS-1218393 and IIS-1514204, and by SISTABENE POR project as PIHGIS POR project.  ... 
dblp:conf/sebd/SuoMCGZGTV18 fatcat:ekmas57zwnaq7mjcrx2hiu62ry

Clinical Information Extraction via Convolutional Neural Network [article]

Peng Li, Heng Huang
2016 arXiv   pre-print
Our approach uses context words and their part-of-speech tags and shape information as features. We then employ a temporal (1D) convolutional neural network to learn hidden feature representations.  ...  We report an implementation of a clinical information extraction tool that leverages a deep neural network to annotate event spans and their attributes from raw clinical notes and pathology reports.  ... 
arXiv:1603.09381v1 fatcat:5prewboffbhyjel3jmrqjpsk6y
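The pipeline described in this snippet — per-token feature vectors fed through a temporal (1D) convolution to produce hidden representations — can be illustrated with a minimal sketch (pure Python; the feature sizes, kernel values, and ReLU choice are illustrative, not taken from the paper):

```python
def conv1d(features, kernels, width=3):
    """Slide each kernel over the token sequence; one output per position.

    features: list of per-token feature vectors (e.g. word/POS/shape features)
    kernels:  list of flat weight vectors of length width * feature_dim
    """
    n, d = len(features), len(features[0])
    pad = width // 2
    # zero-pad so the output has one hidden vector per input token
    padded = [[0.0] * d] * pad + features + [[0.0] * d] * pad
    out = []
    for t in range(n):
        window = padded[t:t + width]                  # `width` tokens of d features
        flat = [v for tok in window for v in tok]     # concatenate the window
        # one ReLU-activated response per kernel
        out.append([max(0.0, sum(w * x for w, x in zip(k, flat)))
                    for k in kernels])
    return out

# 4 tokens with 2 features each; 2 kernels of size width * d = 6
feats = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.0, 0.0]]
kernels = [[0.5] * 6, [-0.5] * 6]
hidden = conv1d(feats, kernels)   # 4 positions, 2 hidden features each
```

In the actual tool these hidden representations would feed a classification layer over event spans; here the point is only the windowed-convolution mechanics.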

Neural Networks in Building QSAR Models [chapter]

Igor I. Baskin, Vladimir A. Palyulin, Nikolai S. Zefirov
2006 Msphere  
, the learning dynamics, regularization, and the use of neural network ensembles.  ...  The highlighted topics cover the approximating ability of ANNs, the interpretability of the resulting models, the issues of generalization and memorization, the problems of overfitting and overtraining  ...  of view [82].  ... 
doi:10.1007/978-1-60327-101-1_8 fatcat:zuqjzmnpsrgsnpc5vb77cirg3q

SHAMANN: Shared Memory Augmented Neural Networks [chapter]

Cosmin I. Bercea, Olivier Pauly, Andreas Maier, Florin C. Ghesu
2019 Lecture Notes in Computer Science  
Current state-of-the-art methods for semantic segmentation use deep neural networks to learn the segmentation mask from the input image signal as an image-to-image mapping.  ...  We propose shared memory augmented neural network actors as a dynamically scalable alternative.  ...  Since in this case the location of the missing data is not deterministic, the networks have to adaptively learn a more complex strategy for the memorization and lookup of information to better extrapolate  ... 
doi:10.1007/978-3-030-20351-1_65 fatcat:ljh3qhdvwbemngr77nkidmbqoq

Neural Networks [article]

Heinz Horner, Reimer Kuehn
1997 arXiv   pre-print
statistics and machine learning theory.  ...  We review the theory of neural networks, as it has emerged in the last ten years or so within the physics community, emphasizing questions of biological relevance over those of importance in mathematical  ...  Learning and Generalization Given that we interpret the firing patterns of a neural network as representing information, neural dynamics must be regarded as a form of information processing.  ... 
arXiv:cond-mat/9705270v1 fatcat:a6o6tndg5fhffpyjxzfidpiyl4

Network Learning and Training of a Cascaded Link-Based Feed Forward Neural Network (CLBFFNN) in an Intelligent Trimodal Biometric System

Benson-Emenike Mercy
2018 Figshare  
In this paper, CLBFFNN is presented as a special and intelligent form of artificial neural network that can adapt to the training and learning of new ideas and give decisions  ...  It gives an overview of neural networks.  ...  ACKNOWLEDGEMENTS The authors would like to appreciate the reviewers of this paper, for their useful comments and contributions which added to the quality of this work.  ... 
doi:10.6084/m9.figshare.7442018 fatcat:b7thzwt2ijfx3f6qxdzufgm4be

Non-destructive test by the Hopfield network

S. Barcherini, L. Cipiccia, M. Maggi, S. Fiori, P. Burrascano
2000 Proceedings of the IEEE-INNS-ENNS International Joint Conference on Neural Networks. IJCNN 2000. Neural Computing: New Challenges and Perspectives for the New Millennium  
The aim of this work is to propose and discuss a technique which allows for classifying the defects found in metallic components on the basis of a non-destructive Remote-Field Eddy-Current Technique experimental  ...  To this aim, we propose to employ a Hopfield associative memory as a neural classifier. The performance of the proposed approach is evaluated on real-world data.  ...  The surface represents the memory (neural network) energy E(S), where S = S(t) is the current network state and the m minima are stable energy states; a neural network which implements an associative  ... 
doi:10.1109/ijcnn.2000.859425 dblp:conf/ijcnn/BarcheriniCMFB00 fatcat:fv3vp7f6evemdmyz5jkwaun4z4
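The energy-landscape picture in the snippet above — stored patterns as minima of E(S) into which noisy states descend — can be sketched in a few lines. This toy version (pure Python, illustrative names; not the paper's classifier) stores one pattern with the Hebbian rule and corrects a single flipped bit:

```python
def train(patterns):
    """Hebbian outer-product rule: W[i][j] = sum_p p[i]*p[j] / n, zero diagonal."""
    n = len(patterns[0])
    W = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    W[i][j] += p[i] * p[j] / n
    return W

def energy(W, s):
    """Hopfield energy E(S) = -1/2 * S^T W S; stored patterns sit at minima."""
    n = len(s)
    return -0.5 * sum(W[i][j] * s[i] * s[j] for i in range(n) for j in range(n))

def recall(W, s, sweeps=5):
    """Asynchronous updates: each step can only lower (or keep) the energy."""
    s = list(s)
    n = len(s)
    for _ in range(sweeps):
        for i in range(n):
            h = sum(W[i][j] * s[j] for j in range(n))
            s[i] = 1 if h >= 0 else -1
    return s

pattern = [1, -1, 1, -1, 1, -1, 1, -1]
W = train([pattern])
noisy = [-pattern[0]] + pattern[1:]   # one corrupted bit
restored = recall(W, noisy)           # descends E(S) back to the stored minimum
# restored == pattern
```

The defect-classification setup in the paper uses the same mechanism, with defect prototypes as the stored patterns.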

Hinted Networks [article]

Joel Lamy-Poirier, Anqi Xu
2018 arXiv   pre-print
We present Hinted Networks: a collection of architectural transformations for improving the accuracies of neural network models for regression tasks, through the injection of a prior for the output prediction  ...  We further assess the range of accuracy gains within an aerial-view localization setup, simulated across vast areas at different times of the year.  ...  We believe that networks fail to learn that clouds are irrelevant to the pose, and instead memorize cloud locations from the training set.  ... 
arXiv:1812.06297v1 fatcat:d426chvwqzbsvjqadurtxz5zju

Differentiable plasticity: training plastic neural networks with backpropagation [article]

Thomas Miconi, Jeff Clune, Kenneth O. Stanley
2018 arXiv   pre-print
First, recurrent plastic networks with more than two million parameters can be trained to memorize and reconstruct sets of novel, high-dimensional (1000+ pixel) natural images not seen during training.  ...  Finally, in reinforcement learning settings, plastic networks outperform a non-plastic equivalent in a maze exploration task.  ...  We thank Juergen Schmidhuber and Yoshua Bengio for helpful references to previous work in plastic and self-modifying networks.  ... 
arXiv:1804.02464v3 fatcat:bqrsqqnesbhmdmqplqa77yh4qm
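The plasticity rule this abstract refers to can be sketched compactly. In Miconi et al.'s formulation each connection has a fixed weight, a plasticity coefficient, and a Hebbian trace; the sketch below (pure Python, illustrative shapes) runs only the forward and Hebbian dynamics — in the paper, w and alpha are trained by backpropagation, which is omitted here:

```python
def step(x, w, alpha, hebb, eta=0.1):
    """One recurrent step of a plastic layer.

    Effective weight of connection (i, j) is w[i][j] + alpha[i][j] * hebb[i][j];
    the Hebbian trace is a decaying running average of the outer product y x^T.
    """
    n = len(x)
    y = [sum((w[i][j] + alpha[i][j] * hebb[i][j]) * x[j] for j in range(n))
         for i in range(n)]
    y = [max(0.0, v) for v in y]  # ReLU nonlinearity (illustrative choice)
    for i in range(n):
        for j in range(n):
            hebb[i][j] = (1 - eta) * hebb[i][j] + eta * y[i] * x[j]
    return y, hebb
```

Repeatedly presenting a pattern drives the trace toward its correlations, so the effective weights come to store it at runtime — the mechanism behind memorizing novel images without changing w.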
Showing results 1 — 15 out of 11,601 results