2,343 Hits in 1.9 sec

Characterizing emergent representations in a space of candidate learning rules for deep networks

Yinan Cao, Christopher Summerfield, Andrew Saxe
2020 Neural Information Processing Systems  
We show that this space contains five important candidate learning algorithms as specific points: Gradient Descent, Contrastive Hebbian, quasi-Predictive Coding, Hebbian, and Anti-Hebbian.  ...  Studies suggesting that representations in deep networks resemble those in biological brains have mostly relied on one specific learning rule: gradient descent, the workhorse behind modern deep learning  ...  The feasible set of algorithms includes gradient descent, but also extends to CHL and modestly strong quasi-Predictive Coding, Hebbian, and Anti-Hebbian learning.  ... 
dblp:conf/nips/CaoSS20 fatcat:po7v3k43dvh3nmlavs54bmg2cm

Competitive Anti-Hebbian Learning of Invariants

Nicol N. Schraudolph, Terrence J. Sejnowski
1991 Neural Information Processing Systems  
The prediction paradigm is often used to reconcile this dichotomy; here we suggest a more direct approach to invariant learning based on an anti-Hebbian learning rule.  ...  y² (Oja and Karhunen, 1985): $\Delta w_i \propto \frac{\partial}{\partial w_i}\,\tfrac{1}{2}y^{2} = y\,\frac{\partial y}{\partial w_i} = x_i\,y$ (2). As seen above, it is not sufficient for an anti-Hebbian neuron to simply perform gradient descent in the same function.  ...  Unfortunately, the pole at y = 0 presents a severe problem for simple gradient descent methods: the near-infinite derivatives in its vicinity lead to catastrophically large step sizes.  ... 
dblp:conf/nips/SchraudolphS91 fatcat:svszbxbhlna5jgf64rawrsm5qy
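The gradient view in Eq. (2) above is easy to sketch in code: a linear unit y = w·x whose Hebbian update ascends ½y² (Δw_i = x_i y), with the anti-Hebbian rule obtained by flipping the sign. Below is a minimal numpy illustration of that idea; the function names and learning rate are illustrative, not taken from the paper.

```python
import numpy as np

def hebbian_step(w, x, lr=0.01):
    """Gradient ascent on 0.5 * y**2 for a linear unit y = w.x: dw_i = lr * x_i * y."""
    y = w @ x
    return w + lr * x * y

def anti_hebbian_step(w, x, lr=0.01):
    """Sign-flipped (anti-Hebbian) update: descends 0.5 * y**2, suppressing the unit's response."""
    y = w @ x
    return w - lr * x * y

# Toy usage: repeated anti-Hebbian updates drive the unit's response toward zero.
rng = np.random.default_rng(0)
w = rng.normal(size=4)
for _ in range(100):
    w = anti_hebbian_step(w, rng.normal(size=4))
```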

Fast Parametric Learning with Activation Memorization [article]

Jack W Rae, Chris Dyer, Peter Dayan, Timothy P Lillicrap
2018 arXiv   pre-print
...descent.  ...  Naturally, when λ = 0 this is gradient descent, and so we see that Hebbian Softmax is a mixture of the two learning rules. All remaining parameters in the model are optimized with gradient descent as usual.  ... 
arXiv:1803.10049v1 fatcat:a34z5nzmpbhudiffrpiev5vjom
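The λ-interpolation mentioned in the snippet can be illustrated with a toy update: the output-embedding row for a class is moved toward a Hebbian (activation-copy) target with weight λ and toward the ordinary gradient-descent result with weight 1 - λ, so λ = 0 recovers pure gradient descent. This is a hedged sketch, not the paper's exact Hebbian Softmax schedule; all names and shapes below are placeholders.

```python
import numpy as np

def hebbian_softmax_row_update(w_row_sgd, activation, lam):
    """Mix the gradient-descent result with a Hebbian memorization target.

    lam = 0.0 reduces to ordinary gradient descent; lam = 1.0 memorizes the activation outright.
    """
    return (1.0 - lam) * w_row_sgd + lam * activation

# Toy usage with hypothetical dimensions.
d = 8
w_row = np.zeros(d)                 # current output-embedding row for one class
w_row_sgd = w_row + 0.1             # stand-in for the row after a normal gradient step
h = np.random.randn(d)              # hidden activation for an example of that class
w_row = hebbian_softmax_row_update(w_row_sgd, h, lam=0.5)
```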

Optical multilayer neural networks

Demetri Psaltis, Yong Qiao, Bahram Javidi
1991 Optical Information Processing Systems and Architectures III  
It is, however, still an energy descent rule. Using Eqs.  ...  The learning rule for the second layer can just follow gradient descent since it is already a local rule. This new local learning rule is obviously not a gradient descent rule any more.  ... 
doi:10.1117/12.49735 fatcat:h65znucm6nhpblcrxoqchya4ny

A Pulse-Gated, Predictive Neural Circuit [article]

Yuxiu Shao, Andrew T. Sornborger, Louis Tao
2017 arXiv   pre-print
Here, we demonstrate how Hebbian plasticity may be used to supplement our pulse-gated information processing framework by implementing a machine learning algorithm.  ...  Using only pulse-gating, Hebbian learning and standard neuronal synaptic properties, we implemented a short-term memory, a gradient descent algorithm, a long-term memory and a method for computing an inner  ...  Here, for simplicity, we have discarded the … and just write Γ. 2) Gradient Descent: To implement the gradient descent part of the algorithm, we implement (6) using pulse gating.  ... 
arXiv:1703.05406v1 fatcat:mv5i6l22kfgt5azzigar65cvuq

Page 1019 of Neural Computation Vol. 8, Issue 5 [page]

1996 Neural Computation  
The “Brain-State-in-a-Box” neural model is a gradient descent algorithm. J. Math. Psychol. 30, 73-80. Golden, R. M. 1993.  ...  Derivation of linear Hebbian equations from a nonlinear Hebbian model of synaptic plasticity. Neural Comp. 2, 321-333. Miller, K. D. 1996.  ... 

Structured and Deep Similarity Matching via Structured and Deep Hebbian Networks [article]

Dina Obeid, Hugo Ramambason, Cengiz Pehlevan
2019 arXiv   pre-print
These networks extend Földiák's Hebbian/Anti-Hebbian network to deep architectures and structured feedforward, lateral and feedback connections.  ...  Some other differences are: 1) CHL performs approximate gradient-descent.  ...  In the second step of the algorithm, synaptic weights are updated by gradient descent-ascent.  ... 
arXiv:1910.04958v2 fatcat:h4f7pwoeffcr7lz3i6xdmbvhlu
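The "gradient descent-ascent" step mentioned above can be pictured on a generic saddle objective: descend in one block of variables while ascending in the other. The sketch below uses a toy quadratic saddle rather than the paper's similarity-matching objective, purely to show the update pattern.

```python
def descent_ascent_step(u, v, grad_u, grad_v, lr=0.05):
    """One simultaneous step: gradient descent in u, gradient ascent in v."""
    return u - lr * grad_u(u, v), v + lr * grad_v(u, v)

# Toy saddle objective f(u, v) = 0.5*u**2 - 0.5*v**2 + u*v (convex in u, concave in v).
grad_u = lambda u, v: u + v
grad_v = lambda u, v: u - v
u, v = 1.0, -1.0
for _ in range(200):
    u, v = descent_ascent_step(u, v, grad_u, grad_v)
# (u, v) spirals in toward the saddle point at the origin.
```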

Analysis of Hopfield Associative Memory with Combination of MC Adaptation Rule and an Evolutionary Algorithm

Amit Singh, Somesh Kumar, T P Singh
2013 International Journal of Computer Applications  
Most training algorithms, such as Back Propagation (BP) and conjugate gradient algorithms [9], are based on gradient descent.  ...  There have been some successful applications of BP in various areas [10], but BP has drawbacks due to its use of gradient descent.  ... 
doi:10.5120/13536-1275 fatcat:73zzhu44czfftnfttk3bazwthm

Learning to learn with backpropagation of Hebbian plasticity [article]

Thomas Miconi
2016 arXiv   pre-print
While recent methods can endow neural networks with long-term memories, Hebbian plasticity is currently not amenable to gradient descent.  ...  Hebbian plasticity is a powerful principle that allows biological brains to learn from their lifetime experience.  ...  Conclusions and future work In this paper we have introduced a method for designing networks endowed with Hebbian plasticity through gradient descent and backpropagation.  ... 
arXiv:1609.02228v2 fatcat:puyhnd7f3rdxxb7kecrjjznmn4
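One common way to make Hebbian plasticity "amenable to gradient descent", as described above, is to give each connection a fixed weight, a plastic Hebbian trace, and a plasticity coefficient, train the fixed parts by backpropagation across episodes, and let the trace evolve Hebbianly within an episode. The forward pass and trace update below are a hedged toy rendering of that idea in plain numpy (no backprop step shown); the exact equations and coefficients are not taken from the paper.

```python
import numpy as np

def plastic_forward(x_prev, w, alpha, hebb):
    """Layer output with plastic weights: effective weight = w + alpha * hebb."""
    return np.tanh(x_prev @ (w + alpha * hebb))

def hebbian_trace_update(hebb, x_prev, y, eta=0.1):
    """Within-episode Hebbian trace: decaying average of pre/post coactivation."""
    return (1.0 - eta) * hebb + eta * np.outer(x_prev, y)

# Toy episode; in the full scheme, w and alpha would be trained by backpropagation.
n_in, n_out = 5, 3
rng = np.random.default_rng(1)
w = rng.normal(size=(n_in, n_out)) * 0.1
alpha = np.full((n_in, n_out), 0.01)
hebb = np.zeros((n_in, n_out))
for _ in range(10):
    x = rng.normal(size=n_in)
    y = plastic_forward(x, w, alpha, hebb)
    hebb = hebbian_trace_update(hebb, x, y)
```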

Testing the Genomic Bottleneck Hypothesis in Hebbian Meta-Learning [article]

Rasmus Berg Palm, Elias Najarro, Sebastian Risi
2021 arXiv   pre-print
We test this hypothesis by decoupling the number of Hebbian learning rules from the number of synapses and systematically varying the number of Hebbian learning rules.  ...  Hebbian meta-learning has recently shown promise to solve hard reinforcement learning problems, allowing agents to adapt to some degree to changes in the environment.  ...  However, in contrast to the evolving Hebbian learning rules approach in Najarro and Risi (2020) , the gradient descent approach (Miconi et al., 2018) was so far restricted to only evolving a single  ... 
arXiv:2011.06811v2 fatcat:2ys4ez65zzdr7a543ptcrakh6m
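The decoupling described in the snippet can be sketched by assigning every synapse an index into a small table of K shared, parameterized Hebbian rules, so the number of rules is independent of the number of synapses. The rule form below (Δw = lr · (A·pre·post + B·pre + C·post + D)) is a commonly used parameterization assumed here for illustration; the indexing scheme and names are hypothetical.

```python
import numpy as np

def shared_rule_hebbian_update(w, pre, post, rule_idx, rules, lr=0.01):
    """Update each synapse with the shared Hebbian rule it is assigned to.

    rules: (K, 4) array of per-rule coefficients (A, B, C, D).
    rule_idx: integer array with the same shape as w, mapping each synapse to one of K rules.
    """
    A, B, C, D = (rules[rule_idx, k] for k in range(4))
    dw = A * np.outer(pre, post) + B * pre[:, None] + C * post[None, :] + D
    return w + lr * dw

# Toy usage: a 4x5 weight matrix whose 20 synapses share K = 3 learning rules.
rng = np.random.default_rng(2)
n_in, n_out, K = 4, 5, 3
w = rng.normal(size=(n_in, n_out)) * 0.1
rules = rng.normal(size=(K, 4)) * 0.1
rule_idx = rng.integers(0, K, size=(n_in, n_out))
w = shared_rule_hebbian_update(w, rng.normal(size=n_in), rng.normal(size=n_out), rule_idx, rules)
```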

Neuromodulated Dopamine Plastic Networks for Heterogeneous Transfer Learning with Hebbian Principle

Arjun Magotra, Juntae Kim
2021 Symmetry  
...Dopamine Hebbian Transfer Learning).  ...  Using distinctive learning principles such as dopamine Hebbian learning in transfer learning for asymmetric gradient weight updates is a novel approach.  ...  Neuromodulated plasticity can be applied to train artificial neural networks with gradient descent.  ... 
doi:10.3390/sym13081344 fatcat:mj4nxsut6vgojd266qfuuhfrlu

Beyond gradients: Noise correlations control Hebbian plasticity to shape credit assignment [article]

Daniel Nelson Scott, Michael J Frank
2021 bioRxiv   pre-print
In artificial networks, credit assignment is typically governed by gradient descent. Biological learning is thus often analyzed as a means to approximate gradients.  ...  The update based on g would be used by gradient descent, whereas the update based on d is used by our reward-modulated Hebbian rules. (C) Elaboration on noise.  ...  Eligibility for gradient descent is defined by activity and can occur across any combination of features. Eligibility in the modulated Hebbian case can be limited by noise dependence.  ... 
doi:10.1101/2021.11.19.466943 fatcat:6wg47tkf7fffvexo63wm577acy
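For contrast with a gradient-based update, a reward-modulated (three-factor) Hebbian rule of the general kind discussed above multiplies pre/post coactivation by a scalar reward signal; which synapses effectively change then depends on the structure of the activity noise. The snippet's quantities g and d are not reproduced here; the rule, noise model and reward below are generic stand-ins.

```python
import numpy as np

def reward_modulated_hebbian(w, pre, post, reward, lr=0.01):
    """Three-factor rule: dw_ij = lr * reward * pre_i * post_j."""
    return w + lr * reward * np.outer(pre, post)

# Toy usage: postsynaptic activity carries exploratory noise; reward is a stand-in scalar.
rng = np.random.default_rng(3)
w = np.zeros((4, 2))
for _ in range(50):
    pre = rng.normal(size=4)
    post = pre @ w + rng.normal(scale=0.5, size=2)   # activity plus noise
    reward = float(post.sum())                       # hypothetical scalar reward signal
    w = reward_modulated_hebbian(w, pre, post, reward)
```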

A Hebbian/Anti-Hebbian network for online sparse dictionary learning derived from symmetric matrix factorization

Tao Hu, Cengiz Pehlevan, Dmitri B. Chklovskii
2014 2014 48th Asilomar Conference on Signals, Systems and Computers  
Connection weights are updated using Hebbian and anti-Hebbian learning rules, respectively.  ...  and anti-Hebbian learning rules.  ... 
doi:10.1109/acssc.2014.7094519 dblp:conf/acssc/HuPC14 fatcat:tefuda6dzvcgbe5hin6innh4fq
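Schematically, the circuit described above has feedforward weights that grow with input-output correlations (Hebbian) and lateral inhibitory weights that grow with output-output correlations (anti-Hebbian). The sketch below shows that wiring with simplified relaxation dynamics and decaying correlation-based updates; the thresholds, normalizations, and learning rates of the paper's derived algorithm are not reproduced.

```python
import numpy as np

def run_dynamics(x, W, M, n_steps=50, dt=0.1):
    """Relax recurrent dynamics toward y = relu(W x - M y), with lateral inhibition M."""
    y = np.zeros(W.shape[0])
    for _ in range(n_steps):
        y = y + dt * (np.maximum(W @ x - M @ y, 0.0) - y)
    return y

def hebbian_anti_hebbian_step(x, W, M, lr=0.01):
    """Hebbian update for feedforward W, anti-Hebbian (output-correlation) update for lateral M."""
    y = run_dynamics(x, W, M)
    W = W + lr * (np.outer(y, x) - W)
    M = M + lr * (np.outer(y, y) - M)
    np.fill_diagonal(M, 0.0)          # no self-inhibition
    return W, M

# Toy usage
rng = np.random.default_rng(4)
n_in, n_out = 6, 3
W = rng.normal(size=(n_out, n_in)) * 0.1
M = np.zeros((n_out, n_out))
for _ in range(100):
    W, M = hebbian_anti_hebbian_step(rng.normal(size=n_in), W, M)
```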

Fundamentals of Artificial Neural Networks

Mohamad H. Hassoun, Nathan Intrator, Susan McKay, Wolfgang Christian
1996 Computers in physics  
...Rule 92; 3.3.4 Linsker's Rule 95; 3.3.5 Hebbian Learning in a Network Setting: Principal-Component Analysis (PCA) 97; 3.3.6 Nonlinear PCA 101; 3.4 Competitive Learning 103; 3.4.1 Simple Competitive  ...  Learning to Stochastic Units 87; 3.2 Reinforcement Learning 88; 3.2.1 Associative Reward-Penalty Reinforcement Learning Rule 89; 3.3 Unsupervised Learning 90; 3.3.1 Hebbian Learning 90; 3.3.2 Oja's Rule  ... 
doi:10.1063/1.4822376 fatcat:oz3focb4lzbxhba2gghoy332xu

Why Do Similarity Matching Objectives Lead to Hebbian/Anti-Hebbian Networks?

Cengiz Pehlevan, Anirvan M. Sengupta, Dmitri B. Chklovskii
2018 Neural Computation  
Modeling self-organization of neural networks for unsupervised learning using Hebbian and anti-Hebbian plasticity has a long history in neuroscience.  ...  We formalize the long-standing intuition of the rivalry between Hebbian and anti-Hebbian rules by formulating a min-max optimization problem.  ...  Updates to synaptic weights, Eq. (35), are local, Hebbian/anti-Hebbian plasticity rules.  ... 
doi:10.1162/neco_a_01018 pmid:28957017 fatcat:fwlyd7t625ctneyprzoz6yarhe
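For orientation, the min-max rivalry mentioned in the snippet can be written schematically as follows (constants, constraints and the exact per-sample cost ℓ are omitted, and the notation is generic rather than the paper's): the similarity matching cost over inputs X and outputs Y is recast as a saddle-point problem in feedforward weights W and lateral weights M, with descent in W giving Hebbian updates and ascent in M giving anti-Hebbian updates.

```latex
\min_{Y}\;\bigl\lVert X^{\top}X - Y^{\top}Y \bigr\rVert_{F}^{2}
\;\;\longleftrightarrow\;\;
\min_{W}\,\max_{M}\;\sum_{t}\ell\bigl(W, M; \mathbf{x}_{t}\bigr),
\qquad
\Delta W \propto \mathbf{y}_{t}\mathbf{x}_{t}^{\top} - W \;(\text{Hebbian}),
\quad
\Delta M \propto \mathbf{y}_{t}\mathbf{y}_{t}^{\top} - M \;(\text{anti-Hebbian}).
```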
Showing results 1 - 15 out of 2,343 results