1,071 Hits in 5.3 sec

Feedforward Initialization for Fast Inference of Deep Generative Networks is biologically plausible [article]

Yoshua Bengio, Benjamin Scellier, Olexa Bilaniuk, Joao Sacramento and Walter Senn
2016 arXiv   pre-print
It means that after the feedforward initialization, the recurrent network is very close to a fixed point of the network dynamics, where the energy gradient is 0.  ...  We find conditions under which a simple feedforward computation is a very good initialization for inference, after the input units are clamped to observed values.  ...  Acknowledgments The authors would like to thank Tong Che, Vincent Dumoulin, Kumar Krishna Agarwal for feedback and discussions, as well as NSERC, CIFAR, Samsung and Canada Research Chairs for funding.  ... 
arXiv:1606.01651v2 fatcat:sio34lto4bcaxay3n6d2vccoyy
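The snippet's central claim, that a single feedforward pass already sits near a fixed point where the energy gradient vanishes, can be made concrete with a toy energy function. The layer sizes, linear units, and weight scales below are illustrative assumptions, not the paper's model; the point is only that after feedforward initialization the sole non-zero gradient term is the weak top-down feedback.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy layered energy with linear units (the paper uses saturating
# nonlinearities; linearity keeps the gradients readable):
#   E(h1, h2) = 0.5||h1||^2 + 0.5||h2||^2 - h1 @ W1 @ x - h2 @ W2 @ h1
n_x, n_h1, n_h2 = 10, 8, 4
x  = rng.standard_normal(n_x)                # clamped input
W1 = 0.1 * rng.standard_normal((n_h1, n_x))
W2 = 0.1 * rng.standard_normal((n_h2, n_h1))

# Feedforward initialization: one bottom-up pass
h1 = W1 @ x
h2 = W2 @ h1

# Energy gradients at the feedforward point
g1 = h1 - W1 @ x - W2.T @ h2   # only the weak top-down term survives
g2 = h2 - W2 @ h1              # exactly zero
print(np.linalg.norm(g1), np.linalg.norm(g2))
```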

Towards Biologically Plausible Deep Learning [article]

Yoshua Bengio, Dong-Hyun Lee, Jorg Bornschein, Thomas Mesnard and Zhouhan Lin
2016 arXiv   pre-print
We explore more biologically plausible versions of deep representation learning, focusing here mostly on unsupervised learning but developing a learning mechanism that could account for supervised, unsupervised  ...  Another contribution of this paper is that the gradients required for updating the hidden states in the above variational interpretation can be estimated using an approximation that only requires propagating  ...  Compute Canada for computing resources.  ... 
arXiv:1502.04156v3 fatcat:ilgfbdil6zg6fb6cq3bxocvqra

Learning with hidden variables [article]

Yasser Roudi, Graham Taylor
2015 arXiv   pre-print
Learning and inferring features that generate sensory input is a task continuously performed by cortex.  ...  These networks usually involve deep architectures with many layers of hidden neurons.  ...  Together, they form the generative model. The DBN also contains a set of separate, bottom-up connections for fast approximate inference.  ... 
arXiv:1506.00354v2 fatcat:h6lg63puqvghrmpsusik75vrra

Constrained Parameter Inference as a Principle for Learning [article]

Nasir Ahmad, Ellen Schrader, Marcel van Gerven
2022 arXiv   pre-print
We show that COPI is not only more biologically plausible but also provides distinct advantages for fast learning when compared to BP.  ...  Learning in biological and artificial neural networks is often framed as a problem in which targeted error signals are used to directly guide parameter updating for more optimal network behaviour.  ...  Concluding, we argue that constrained parameter inference is not only a prime candidate to explore for biologically plausible learning but can also be treated as a plugin replacement of backpropagation  ... 
arXiv:2203.13203v4 fatcat:u7p253uzajfxdm2rcarqfp4hgu

Overcoming the Weight Transport Problem via Spike-Timing-Dependent Weight Inference [article]

Nasir Ahmad, Luca Ambrogioni, Marcel A. J. van Gerven
2021 arXiv   pre-print
We show that the use of spike timing alone outcompetes existing biologically plausible methods for synaptic weight inference in spiking neural network models.  ...  These features, together with its biological plausibility, make it an attractive mechanism underlying weight inference at single synapses.  ...  Though such an error distribution process is biologically plausible, the effectiveness of the approach is limited to shallow networks and the accuracy of deep networks appears to suffer severely under  ... 
arXiv:2003.03988v4 fatcat:6ijgi7ctyvdqrdwfmt6bfzcmrq
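As a rough illustration of weight inference from spike timing (not the paper's estimator): if each presynaptic spike nudges a postsynaptic trace in proportion to an unknown weight, that weight can be recovered by averaging the spike-triggered jumps in the trace. The dynamics, firing rate, and noise level are assumed for the sketch.

```python
import numpy as np

rng = np.random.default_rng(1)

w_true = 0.7                 # unknown synaptic weight to infer
T, dt, tau = 20000, 1.0, 10.0

pre = rng.random(T) < 0.02                    # presynaptic Poisson spikes
noise = 0.3 * rng.standard_normal(T)          # other, unobserved inputs
v = np.zeros(T)                               # postsynaptic trace
for t in range(1, T):
    v[t] = v[t - 1] * (1 - dt / tau) + w_true * pre[t] + noise[t]

# Spike-triggered estimate: the jump in v at a presynaptic spike, after
# discounting the known leak, equals w_true plus zero-mean noise.
jumps = v[1:] - v[:-1] * (1 - dt / tau)
w_est = jumps[pre[1:]].mean()
print(w_true, w_est)                          # estimate -> w_true as T grows
```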

Predictive Coding: a Theoretical and Experimental Review [article]

Beren Millidge, Anil Seth, Christopher L Buckley
2022 arXiv   pre-print
Predictive coding offers a potentially unifying account of cortical function -- postulating that the core function of the brain is to minimize prediction errors with respect to a generative model of the  ...  plausibility for implementation in the brain and the concrete neurophysiological and psychological predictions made by the theory.  ...  Acknowledgements We would like to thank Alexander Tschantz, Conor Heins, and Rafal Bogacz for useful discussions about this manuscript and on predictive coding in general.  ... 
arXiv:2107.12979v4 fatcat:wfzvlaek7zbfhnhda4ljxuvyh4
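The computation this review centres on can be sketched in a few lines: latent activities descend a stack of squared prediction errors, and weights then learn from purely local error-activity products. Layer sizes, linear predictions, and step sizes below are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Two-layer linear predictive coding.  Free energy:
#   F = 0.5||x - W1 @ mu1||^2 + 0.5||mu1 - W2 @ mu2||^2
n_x, n1, n2 = 16, 8, 4
W1 = 0.3 * rng.standard_normal((n_x, n1))
W2 = 0.3 * rng.standard_normal((n1, n2))
x  = rng.standard_normal(n_x)

mu1, mu2 = np.zeros(n1), np.zeros(n2)
for _ in range(200):                  # iterative inference on the latents
    e0 = x - W1 @ mu1                 # prediction error at the data layer
    e1 = mu1 - W2 @ mu2               # prediction error at layer 1
    mu1 += 0.1 * (W1.T @ e0 - e1)     # gradient descent on F
    mu2 += 0.1 * (W2.T @ e1)

# Weights then update from local error x activity products.
W1 += 0.01 * np.outer(e0, mu1)
W2 += 0.01 * np.outer(e1, mu2)
```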

Cognitive computational neuroscience [article]

Nikolaus Kriegeskorte, Pamela K. Douglas
2018 arXiv   pre-print
It is time to assemble the pieces of the puzzle of brain computation. Here we review recent work in the intersection of cognitive science, computational neuroscience, and artificial intelligence.  ...  To learn how cognition is implemented in the brain, we must build computational models that can perform cognitive tasks, and test such models with brain and behavioral experiments.  ...  Acknowledgements This paper benefited from discussions in the context of the new conference Cognitive Computational Neuroscience, which had its inaugural meeting in New York City in September 2017.  ... 
arXiv:1807.11819v1 fatcat:pzzyqaj4qraslnku243fhyqanu

Robotic Action Control: On the Crossroads of Cognitive Psychology and Cognitive Robotics [chapter]

Roy de Kleijn, George Kachergis, Bernhard Hommel
2015 Cognitive Robotics  
The representations learned by such networks are somewhat more biologically plausible than geon decompositions, and thus may be more suitable for generalization (although cf. [48] for generalization problems with deep neural networks).  ... 
doi:10.1201/b19171-16 fatcat:mzhfxmdlpnbnbjabtzcsxj7hbu

A Robust Backpropagation-Free Framework for Images [article]

Timothy Zee, Alexander G. Ororbia, Ankur Mali, Ifeoma Nwogu
2022 arXiv   pre-print
We present a more biologically plausible approach, the error-kernel driven activation alignment (EKDAA) algorithm, to train convolutional neural networks (CNNs) using locally derived error transmission kernels  ...  While current deep learning algorithms have been successful for a wide variety of artificial intelligence (AI) tasks, including those involving structured image data, they present deep neurophysiological  ...  as biologically implausible for various reasons, including the implausibility of the direct backwards propagation of error derivatives for synaptic updates - this is considered a deep conceptual issue  ... 
arXiv:2206.01820v1 fatcat:idf5isybdbfvjckjkzupx6hlfa
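EKDAA's error kernels are not reproduced here. As a stand-in for the family of methods the snippet describes (training without backwards propagation of error derivatives through the forward weights), the sketch below uses feedback alignment: a fixed random matrix B carries output errors to the hidden layer instead of W2.T, so no weight transport occurs. The sizes and the toy regression task are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

n_in, n_h, n_out = 20, 32, 5
W1 = 0.1 * rng.standard_normal((n_h, n_in))
W2 = 0.1 * rng.standard_normal((n_out, n_h))
B  = 0.1 * rng.standard_normal((n_h, n_out))        # fixed random feedback
T_teach = 0.5 * rng.standard_normal((n_out, n_in))  # teacher to mimic

for _ in range(2000):
    x = rng.standard_normal(n_in)
    y = T_teach @ x
    h = np.tanh(W1 @ x)
    e = W2 @ h - y                    # output error
    dh = (B @ e) * (1 - h**2)         # error carried by B, not by W2.T
    W2 -= 0.05 * np.outer(e, h)
    W1 -= 0.05 * np.outer(dh, x)

print(0.5 * float(e @ e))             # loss on the last sample
```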

Where Do Features Come From?

Geoffrey Hinton
2013 Cognitive Science  
Using a stack of RBMs to initialize the weights of a feedforward neural network allows backpropagation to work effectively in much deeper networks and it leads to much better generalization.  ...  It is possible to learn multiple layers of non-linear features by backpropagating error derivatives through a feedforward neural network.  ...  These three methods resulted in a very good way to initialize the weights of deterministic feedforward neural networks. With this initialization, backpropagation works much better.  ... 
doi:10.1111/cogs.12049 pmid:23800216 fatcat:gsqrcryoazeivp2vjr44n6mv2q
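The recipe this snippet summarizes (train an RBM on the data, feed its hidden activities to the next RBM, then use the learned weights to initialize a feedforward net for backpropagation) can be sketched with single-step contrastive divergence. The toy data, layer sizes, and omission of biases are simplifications for illustration, not Hinton's exact setup.

```python
import numpy as np

rng = np.random.default_rng(4)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def train_rbm(data, n_hid, epochs=50, lr=0.05):
    """One RBM trained with single-step contrastive divergence (CD-1).
    Biases are omitted for brevity."""
    W = 0.01 * rng.standard_normal((data.shape[1], n_hid))
    for _ in range(epochs):
        h0 = sigmoid(data @ W)                             # up
        v1 = sigmoid((rng.random(h0.shape) < h0) @ W.T)    # down, sampled h
        h1 = sigmoid(v1 @ W)                               # up again
        W += lr * (data.T @ h0 - v1.T @ h1) / len(data)    # CD-1 update
    return W

X  = (rng.random((200, 30)) < 0.3).astype(float)   # toy binary data
W1 = train_rbm(X, 16)                              # first RBM
W2 = train_rbm(sigmoid(X @ W1), 8)                 # second RBM on its features
# W1, W2 now initialize a deterministic feedforward net for fine-tuning.
```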

Relaxing the Constraints on Predictive Coding Models [article]

Beren Millidge, Alexander Tschantz, Anil Seth, Christopher L Buckley
2020 arXiv   pre-print
for training deep networks.  ...  Moreover, under certain conditions, predictive coding has been shown to approximate the backpropagation of error algorithm, and thus provides a relatively biologically plausible credit-assignment mechanism  ...  AKS is additionally grateful to the Canadian Institute for Advanced Research (Azrieli Programme on Brain, Mind, and Consciousness).  ... 
arXiv:2010.01047v2 fatcat:armb7kpybfgjbeoba2y3apdraa

The brain as an efficient and robust adaptive learner [article]

Sophie Denève, Alireza Alemi, Ralph Bourdoukan
2017 arXiv   pre-print
However, this is greatly complicated by the credit assignment problem for learning in recurrent networks, e.g. the contribution of each connection to the global output error cannot be determined based only  ...  Most sensory and motor tasks can be described as dynamical systems and could presumably be learned by adjusting connection weights in a recurrent biological neural network.  ...  To achieve a more biologically plausible network, we fold the network once more to obtain a recurrent network with slow/fast connections.  ... 
arXiv:1705.08031v1 fatcat:ibhfp32exbbitnw3befgehmx34

Capturing the objects of vision with neural networks [article]

Benjamin Peters, Nikolaus Kriegeskorte
2021 arXiv   pre-print
The cognitive literature provides a starting point for the development of new experimental tasks that reveal mechanisms of human object perception and serve as benchmarks driving development of deep neural  ...  Object representations emancipate perception from the sensory input, enabling us to keep in mind that which is out of sight and to use perceptual content as a basis for action and symbolic cognition.  ...  Most deep generative models amortize the inference into a feedforward recognition model.  ... 
arXiv:2109.03351v1 fatcat:wlkibi4xrvgtrnrgl5ywit7pma
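"Amortizing inference into a feedforward recognition model" means replacing per-example iterative optimization of the latents with a single learned mapping. The sketch below contrasts the two under an assumed linear generative model; the pseudo-inverse stands in for a trained encoder.

```python
import numpy as np

rng = np.random.default_rng(5)

# Assumed linear generative model: x = G @ z + noise
n_z, n_x = 4, 12
G = rng.standard_normal((n_x, n_z))
x = G @ rng.standard_normal(n_z) + 0.1 * rng.standard_normal(n_x)

# Iterative inference: optimize the latents for THIS x (slow, per example).
z = np.zeros(n_z)
for _ in range(300):
    z += 0.02 * G.T @ (x - G @ z)     # gradient ascent on the log-likelihood

# Amortized inference: a feedforward recognition model maps x -> z in one
# pass.  Here the pseudo-inverse plays the role of the trained encoder.
R = np.linalg.pinv(G)
z_amortized = R @ x
print(np.allclose(z, z_amortized, atol=1e-3))
```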

Spike Event Based Learning in Neural Networks [article]

James A. Henderson, TingTing A. Gibson, Janet Wiles
2015 arXiv   pre-print
A scheme is derived for learning connectivity in spiking neural networks. The scheme learns instantaneous firing rates that are conditional on the activity in other parts of the network.  ...  This learning scheme is demonstrated using a layered feedforward spiking neural network trained self-supervised on a prediction and classification task for moving MNIST images collected using a Dynamic  ...  Spiking is a salient feature of biological neurons that is not typically present in deep learning networks.  ... 
arXiv:1502.05777v1 fatcat:g7gsosh73fhnvkntxgobst5blu
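"Instantaneous firing rates that are conditional on the activity in other parts of the network" can be read as a logistic conditional rate model; whether this matches the paper's exact formulation is an assumption, but it illustrates the object being learned. Sizes and rates are invented.

```python
import numpy as np

rng = np.random.default_rng(6)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# One unit learns its instantaneous firing probability as a function of
# the binary activity of n other units: p(spike | a) = sigmoid(w @ a).
n, T = 10, 5000
w_true = rng.standard_normal(n)
A = (rng.random((T, n)) < 0.5).astype(float)       # activity elsewhere
spikes = (rng.random(T) < sigmoid(A @ w_true)).astype(float)

# Maximum-likelihood fit of the conditional rate (logistic regression).
w = np.zeros(n)
for _ in range(500):
    p = sigmoid(A @ w)
    w += 0.5 * A.T @ (spikes - p) / T              # log-likelihood gradient

print(np.corrcoef(w, w_true)[0, 1])                # close to 1
```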

Deep learning in spiking neural networks

Amirhossein Tavanaei, Masoud Ghodrati, Saeed Reza Kheradpisheh, Timothée Masquelier, Anthony Maida
2019 Neural Networks  
In recent years, deep learning has been a revolution in the field of machine learning, for computer vision in particular.  ...  In this approach, a deep (multilayer) artificial neural network (ANN) is trained in a supervised manner using backpropagation.  ...  Towards linking biologically plausible learning methods and conventional learning algorithms in neural networks, a number of deep SNNs have recently been developed. For example, Bengio et al.  ... 
doi:10.1016/j.neunet.2018.12.002 fatcat:nfat4xwh5bdtfhauugyqpxhnzq
Showing results 1 — 15 out of 1,071 results