
Building Efficient Deep Hebbian Networks for Image Classification Tasks [chapter]

Yanis Bahroun, Eugénie Hunsicker, Andrea Soltoggio
2017 Lecture Notes in Computer Science  
This work introduces the Deep Hebbian Network (DHN), which combines the advantages of sparse coding, dimensionality reduction, and convolutional neural networks for learning features from images. ... Moreover, the DHN model can be trained online due to its Hebbian components. Different configurations of the DHN have been tested on scene and image classification tasks. ... The introduction of the DPL is inspired by the work of [9], which showed that visual spatial pooling can be learned by Principal Component Analysis (PCA)-based techniques, reproducing the tuning properties ...
doi:10.1007/978-3-319-68600-4_42 fatcat:it2ezboj2va6fpij2wplg3fkxy

Hebbian Semi-Supervised Learning in a Sample Efficiency Setting [article]

Gabriele Lagani, Fabrizio Falchi, Claudio Gennaro, Giuseppe Amato
2021 arXiv   pre-print
We propose to address the issue of sample efficiency in Deep Convolutional Neural Networks (DCNNs) with a semi-supervised training strategy that combines Hebbian learning with gradient descent: all internal ... with end-to-end supervised backpropagation training. ... learning for Principal Component Analysis (PCA) [11, 18, 19, 20], Hebbian/anti-Hebbian learning [21, 22]. ...
arXiv:2103.09002v1 fatcat:rba5wzatynanrd5nuf73kevov4
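The hybrid strategy this abstract describes pairs an unsupervised Hebbian update for internal layers with a label-driven gradient step for the classifier. The following is a minimal sketch of that idea, assuming an Oja-style rule for the feature layer and a softmax readout; all layer sizes, learning rates, and function names are illustrative, not the authors' implementation.

```python
# Sketch: Hebbian feature layer (unlabeled data) + gradient-trained readout
# (scarce labeled data). Hyperparameters and the Oja-style rule are assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_cls, lr = 64, 16, 10, 1e-2

W = rng.normal(scale=0.1, size=(n_hid, n_in))   # Hebbian feature layer
V = np.zeros((n_cls, n_hid))                    # supervised softmax readout

def hebbian_step(x):
    """Oja-style unsupervised update: needs no label."""
    global W
    y = W @ x
    W += lr * (np.outer(y, x) - (y ** 2)[:, None] * W)

def supervised_step(x, label):
    """Gradient step on the readout: uses the (scarce) labels."""
    global V
    h = np.tanh(W @ x)
    logits = V @ h
    p = np.exp(logits - logits.max()); p /= p.sum()
    p[label] -= 1.0                              # d(cross-entropy)/d(logits)
    V -= lr * np.outer(p, h)

# Unlabeled data drives the Hebbian layer; labeled data drives the readout.
for x in rng.normal(size=(1000, n_in)):
    hebbian_step(x)
for x, y in zip(rng.normal(size=(50, n_in)), rng.integers(0, n_cls, 50)):
    supervised_step(x, y)
```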

Meta-Learning through Hebbian Plasticity in Random Networks [article]

Elias Najarro, Sebastian Risi
2022 arXiv   pre-print
Inspired by this biological mechanism, we propose a search method that, instead of optimizing the weight parameters of neural networks directly, only searches for synapse-specific Hebbian learning rules  ...  that allow the network to continuously self-organize its weights during the lifetime of the agent.  ...  principal component analysis (PCA) which projects the high-dimensional space where the network weights live to a 3-dimensional representation at each timestep such that most of the variance is best explained  ... 
arXiv:2007.02686v5 fatcat:6bwktudlu5acllvixw52q3ztju
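The rule searched for in this line of work is commonly parameterized per synapse as Δw = η(A·pre·post + B·pre + C·post + D). A minimal sketch of one agent "lifetime" with such a network follows; the coefficients would normally be found by an evolutionary search, which is omitted here, and the network shapes and toy inputs are placeholder assumptions.

```python
# Sketch of the synapse-specific ABCD Hebbian rule: each synapse carries its
# own coefficients (A, B, C, D) while the weights self-organize online.
import numpy as np

rng = np.random.default_rng(1)
n_in, n_out, eta = 8, 4, 0.01
W = rng.normal(scale=0.1, size=(n_out, n_in))        # random initial weights
A, B, C, D = (rng.normal(scale=0.1, size=W.shape) for _ in range(4))

def forward_and_adapt(x):
    """One lifetime step: compute activity, then per-synapse Hebbian update."""
    global W
    y = np.tanh(W @ x)
    pre, post = x[None, :], y[:, None]               # broadcast to W's shape
    W += eta * (A * pre * post + B * pre + C * post + D)
    return y

for x in rng.normal(size=(100, n_in)):               # the agent's "lifetime"
    forward_and_adapt(x)
```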

The combination of Hebbian and predictive plasticity learns invariant object representations in deep sensory networks [article]

Manu Srinath Halvagal, Friedemann Zenke
2022 bioRxiv   pre-print
Our brains accomplish this feat by forming meaningful internal representations in deep sensory networks with plastic synaptic connections. ... We show that our plasticity model yields disentangled object representations in deep neural networks without the need for supervision or implausible negative examples. ... Learning in deep convolutional neural networks: for all network simulations, we used a convolutional DNN based on the VGG-11 architecture [82] (see Supplementary Material S4 for details). ...
doi:10.1101/2022.03.17.484712 fatcat:amlopvlvi5cadkvc2nx4b663xu
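As a loose illustration of the kind of objective the abstract suggests (not the paper's actual plasticity rule), a predictive term can pull representations of temporally adjacent views of the same object together, while a Hebbian variance term prevents collapse without negative examples:

```python
# Heavily hedged sketch of a "predictive + Hebbian" objective; the exact rule
# in the paper differs, this only illustrates the combination of terms.
import numpy as np

def combined_loss(z_t, z_next, lam=1.0):
    """z_t, z_next: (batch, dim) representations of consecutive views."""
    predictive = np.mean((z_next - z_t) ** 2)    # temporal prediction error
    variance = np.mean(np.var(z_t, axis=0))      # Hebbian anti-collapse term
    return predictive - lam * variance           # minimize: stable yet varied
```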

Combining Multiple Modes of Information Using Unsupervised Neural Classifiers [chapter]

Khurshid Ahmad, Matthew Casey, Bogdan Vrusias, Panagiotis Saragiotis
2003 Lecture Notes in Computer Science  
Lawrence et al. compared the performance of their multi-net system with another combined classifier comprising a principal component analysis (PCA) system and a multi-layer perceptron (MLP). ... A modular neural network-based system is presented in which the component networks learn together to classify a set of complex input patterns. ... The authors would also like to thank the UK Police Training College at Hendon for supplying the scene-of-crime images and Mr C. Handy for transcribing the image collateral text. ...
doi:10.1007/3-540-44938-8_24 fatcat:t52flxi6uffqzh4p2uik7unywm

Optimal unsupervised learning in a single-layer linear feedforward neural network

Terence D. Sanger
1989 Neural Networks  
It is shown that the algorithm is closely related to algorithms in statistics (Factor Analysis and Principal Components Analysis) and in neural networks (Self-supervised Backpropagation, or the "encoder" ... An implementation which can train neural networks using only local "synaptic" modification rules is described. ... The relation to statistical procedures such as Factor Analysis or Principal Components Analysis makes clear the fundamental nature of the theory. ...
doi:10.1016/0893-6080(89)90044-0 fatcat:bkzbcyheoba55az2ssxbik3hmm
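The algorithm this paper introduces is what is now called the Generalized Hebbian Algorithm (GHA), in which row k of the weight matrix converges to the k-th principal component of the input distribution using only locally available quantities. A compact NumPy sketch (the learning rate and toy data are assumptions):

```python
# Sanger's rule / Generalized Hebbian Algorithm: a single-layer linear
# network whose rows converge to the leading principal components.
import numpy as np

def gha_step(W, x, lr=1e-3):
    """One GHA update. W: (n_components, n_inputs), x: (n_inputs,)."""
    y = W @ x
    # The lower-triangular term keeps each neuron orthogonal to the ones
    # "above" it, so row k converges to the k-th principal component.
    return W + lr * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)

rng = np.random.default_rng(2)
# Correlated toy data whose top components the network should recover.
X = rng.normal(size=(5000, 8)) @ rng.normal(size=(8, 8))
W = rng.normal(scale=0.1, size=(3, 8))
for x in X - X.mean(axis=0):                     # GHA assumes centered inputs
    W = gha_step(W, x)
```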

ReStoCNet: Residual Stochastic Binary Convolutional Spiking Neural Network for Memory-Efficient Neuromorphic Computing [article]

Gopalakrishnan Srinivasan, Kaushik Roy
2019 arXiv   pre-print
In this work, we propose ReStoCNet, a residual stochastic multilayer convolutional Spiking Neural Network (SNN) composed of binary kernels, to reduce the synaptic memory footprint and enhance the computational ... We propose a Spike Timing Dependent Plasticity (STDP)-based probabilistic learning algorithm, referred to as Hybrid-STDP (HB-STDP), incorporating Hebbian and anti-Hebbian learning mechanisms, to train the ... In order to visualize the efficiency of unsupervised clustering offered by ReStoCNet-3, we reduce the dimension of the pooled spiking activations of the convolutional layers using Principal Component Analysis ...
arXiv:1902.04161v1 fatcat:y4hutkwm35ambbmiqepg55bdqa
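The abstract names a probabilistic STDP rule over binary kernels that mixes Hebbian and anti-Hebbian updates. The sketch below shows one plausible form of such a rule, with the switching probability decaying in the pre/post spike time difference; the constants and the exact switching logic are assumptions, not the paper's HB-STDP definition.

```python
# Hedged sketch of probabilistic STDP over a binary weight: the weight flips
# state with a probability that decays with the spike-timing difference.
import numpy as np

rng = np.random.default_rng(3)

def stochastic_stdp_flip(w, dt, p_max=0.1, tau=20.0):
    """w in {-1, +1}; dt = t_post - t_pre (ms).
    Causal pairs (dt > 0) potentiate toward +1 (Hebbian);
    acausal pairs depress toward -1 (anti-Hebbian)."""
    p = p_max * np.exp(-abs(dt) / tau)      # timing-dependent flip probability
    if rng.random() < p:
        return 1 if dt > 0 else -1
    return w
```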

ReStoCNet: Residual Stochastic Binary Convolutional Spiking Neural Network for Memory-Efficient Neuromorphic Computing

Gopalakrishnan Srinivasan, Kaushik Roy
2019 Frontiers in Neuroscience  
In this work, we propose ReStoCNet, a residual stochastic multilayer convolutional Spiking Neural Network (SNN) composed of binary kernels, to reduce the synaptic memory footprint and enhance the computational ... We propose a Spike Timing Dependent Plasticity (STDP)-based probabilistic learning algorithm, referred to as Hybrid-STDP (HB-STDP), incorporating Hebbian and anti-Hebbian learning mechanisms, to train the ... All authors helped with developing the concepts, conceiving the experiments, and writing the paper. ...
doi:10.3389/fnins.2019.00189 pmid:30941003 pmcid:PMC6434391 fatcat:qw6yrwi2nrcr5clhjkoabqdrji

On the Origin of Deep Learning [article]

Haohan Wang, Bhiksha Raj
2017 arXiv   pre-print
It covers the field from the genesis of neural networks, when associationist modeling of the brain was first studied, to the models that dominated the last decade of deep learning research, such as convolutional neural networks, deep belief networks, and recurrent neural networks. ... Acknowledgements: thanks to the demo from http://beej.us/blog/data/convolution-image-processing/ for a quick generation of examples in Figure 16. ...
arXiv:1702.07800v4 fatcat:7xgi64g665bmvboi2assordwue

Neuroscience-inspired online unsupervised learning algorithms [article]

Cengiz Pehlevan, Dmitri B. Chklovskii
2019 arXiv   pre-print
Motivated by this and by the biological implausibility of deep learning networks, we developed a family of biologically plausible artificial neural networks (NNs) for unsupervised learning. ... Gradient-based online optimization of such similarity-based objective functions can be implemented by NNs with biologically plausible local learning rules. ... He proposed to model a neuron by an online Principal Component Analysis (PCA) algorithm. ...
arXiv:1908.01867v2 fatcat:6eih5oqttjgvnaycqs5uc2laje
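The networks derived in this line of work pair Hebbian feedforward weights with anti-Hebbian lateral weights, so that the outputs converge to a principal-subspace projection of the inputs. A minimal sketch, with illustrative step sizes and a simple fixed-point iteration standing in for the recurrent neural dynamics:

```python
# Sketch of an online similarity-matching network: Hebbian feedforward
# weights W, anti-Hebbian lateral weights M. Step sizes are assumptions.
import numpy as np

rng = np.random.default_rng(4)
n_in, n_out, lr = 10, 3, 1e-2
W = rng.normal(scale=0.1, size=(n_out, n_in))
M = np.eye(n_out)

def similarity_matching_step(x, n_iter=50):
    global W, M
    y = np.zeros(n_out)
    for _ in range(n_iter):                      # recurrent neural dynamics
        y = W @ x - (M - np.diag(np.diag(M))) @ y
    W += lr * (np.outer(y, x) - W)               # Hebbian feedforward update
    M += lr * (np.outer(y, y) - M)               # anti-Hebbian lateral update
    return y
```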

Classifying Non-Small Cell Lung Cancer Histopathology Types and Transcriptomic Subtypes using Convolutional Neural Networks [article]

Kun-Hsing Yu, Feiran Wang, Gerald J. Berry, Christopher Re, Russ B. Altman, Michael Snyder, Isaac S. Kohane
2019 bioRxiv   pre-print
To establish neural networks for quantitative image analyses, we first build convolutional neural network models to identify tumor regions from adjacent dense benign tissues (areas under the receiver operating ... However, the morphological patterns associated with the molecular subtypes have not been systematically studied. ... The resulting image tiles were rescaled for analysis by convolutional neural networks. ...
doi:10.1101/530360 fatcat:z42wdolvc5g4flbx6hii6r42ta

What computational model provides the best explanation of face representations in the primate brain? [article]

Le Chang, Bernhard Egger, Thomas Vetter, Doris Tsao
2020 bioRxiv   pre-print
Surprisingly, deep neural networks trained specifically on facial identification did not explain neural responses well. ... We found that the active appearance model explained responses better than any other model except CORnet-Z, a feedforward deep neural network trained on general object classification to classify non-face ... We are grateful to Nicole Schweers for help with animal training and MingPo Yang for help with implementing the CORnets. ...
doi:10.1101/2020.06.07.111930 fatcat:kasnxsxm55dn7c4klearlriqa4

Contextual Integration in Cortical and Convolutional Neural Networks

Ramakrishnan Iyer, Brian Hu, Stefan Mihalas
2020 Frontiers in Computational Neuroscience  
We incorporate lateral connections learned using this model into convolutional neural networks. ... Our framework can potentially be applied to networks trained on other tasks, with the learned lateral connections aiding computations implemented by feedforward connections when the input is unreliable ...
doi:10.3389/fncom.2020.00031 pmid:32390818 pmcid:PMC7192314 fatcat:w73zjz7invbulaehk6ck47kd3u
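At its simplest, incorporating lateral connections means letting each unit's activation be adjusted by a weighted sum of its neighbors' activations on top of the feedforward drive. A toy sketch follows; the kernel, the mixing factor gamma, and the function name are assumptions for illustration, not the paper's model.

```python
# Sketch: a feature map modulated by lateral connections from its neighbors.
import numpy as np
from scipy.signal import convolve2d

def lateral_integration(fmap, lateral_kernel, gamma=0.2):
    """fmap: (H, W) feedforward activations; lateral_kernel: (k, k) lateral weights."""
    context = convolve2d(fmap, lateral_kernel, mode="same")  # neighbors' contribution
    return fmap + gamma * context                            # feedforward drive + context
```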

ARTFLOW: A Fast, Biologically Inspired Neural Network That Learns Optic Flow Templates for Self-Motion Estimation

Oliver W. Layton
2021 Sensors  
ARTFLOW trains substantially faster and yields self-motion estimates that are far more accurate than those of a comparable network that relies on Hebbian learning. ... Simulations show that the network is capable of learning stable patterns from optic flow simulating self-motion through environments of varying complexity with only one epoch of training. ... The performance of the ARTFLOW templates was compared with that of a principal component analysis (PCA)-based representation. ...
doi:10.3390/s21248217 pmid:34960310 pmcid:PMC8708706 fatcat:ittb3rt7wndofoaz3rskjy4riq
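ART-style template learning of the kind used here hinges on three steps: a choice function ranking templates, a vigilance test gating resonance, and a resonance update refining the winner. The fuzzy ART sketch below uses the standard textbook forms of those steps; the parameter values, and the assumption that the paper uses exactly this variant, are mine rather than the source's.

```python
# Sketch of one fuzzy ART presentation: the best-matching template either
# resonates (passes vigilance and is refined) or a new template is recruited.
import numpy as np

def fuzzy_art_step(templates, x, rho=0.75, alpha=1e-3, beta=0.5):
    """templates: list of arrays; x: input in [0, 1]^d (complement-coded)."""
    # Visit templates in order of the fuzzy ART choice function T_j.
    for w in sorted(templates,
                    key=lambda w: -np.minimum(x, w).sum() / (alpha + w.sum())):
        if np.minimum(x, w).sum() / x.sum() >= rho:    # vigilance test
            w[:] = beta * np.minimum(x, w) + (1 - beta) * w  # resonance update
            return templates
    templates.append(x.copy())                         # recruit a new template
    return templates
```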

Large-Scale Gradient-Free Deep Learning with Recursive Local Representation Alignment [article]

Alexander Ororbia, Ankur Mali, Daniel Kifer, C. Lee Giles
2020 arXiv   pre-print
Training deep neural networks on large-scale datasets requires significant hardware resources whose costs (even on cloud platforms) put them out of reach of smaller organizations, groups, and individuals. ... In this paper, we propose a gradient-free learning procedure, recursive local representation alignment, for training large-scale neural architectures. ... (Middle) Generalization performance analysis of networks trained on CIFAR-10 and ImageNet. (Right) CIFAR-10 skip-gap analysis (measuring validation error as a function of gap g). ...
arXiv:2002.03911v3 fatcat:2k326rdnnjhutfmthzqrvrsxui
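Procedures of this kind replace the global backward pass with layer-local targets carried by separate error weights. The sketch below captures that flavor only loosely; the target rule, the fixed random error weights E, and all shapes are assumptions rather than the actual definition of recursive local representation alignment.

```python
# Loose sketch of local-target learning: each layer gets a target from the
# error above via separate weights E, then applies a purely local delta rule.
import numpy as np

rng = np.random.default_rng(5)
sizes = [16, 12, 8]
W = [rng.normal(scale=0.1, size=(m, n)) for n, m in zip(sizes, sizes[1:])]
E = [rng.normal(scale=0.1, size=(n, m)) for n, m in zip(sizes, sizes[1:])]

def local_update(x, target, lr=1e-2, beta=0.1):
    """x: (sizes[0],) input; target: (sizes[-1],) desired output."""
    zs = [x]
    for Wl in W:
        zs.append(np.tanh(Wl @ zs[-1]))
    e = zs[-1] - target                            # error at the output only
    for l in reversed(range(len(W))):
        t = zs[l + 1] - beta * e                   # local target for layer l
        W[l] += lr * np.outer(t - zs[l + 1], zs[l])  # delta rule, no backprop
        e = E[l] @ e                               # error carried by E, not W.T
    return zs[-1]
```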
Showing results 1 — 15 of 483.