1,063 Hits in 7.4 sec

Learning without gradient descent encoded by the dynamics of a neurobiological model [article]

Vivek Kurien George, Vikash Morar, Weiwei Yang, Jonathan Larson, Bryan Tower, Shweti Mahajan, Arkin Gupta, Christopher White, Gabriel A. Silva
2021 arXiv   pre-print
Here, we introduce a fundamentally novel conceptual approach to machine learning that takes advantage of a neurobiologically derived model of dynamic signaling, constrained by the geometric structure of  ...  The success of state-of-the-art machine learning is essentially all based on different variations of gradient descent algorithms that minimize some version of a cost or loss function.  ...  Introduction In general, the tremendous success and achievements of the many flavors of machine learning (ML) are based on variations of gradient descent algorithms that minimize some version of a cost  ... 
arXiv:2103.08878v2 fatcat:6xmrkwed6vcazj4rmcwipcqsdq

The free-energy principle: a rough guide to the brain?

Karl Friston
2009 Trends in Cognitive Sciences  
It rests on advances in statistical physics, theoretical biology and machine learning to explain a remarkable range of facts about brain structure and function.  ...  We may have just scratched the surface of what this formulation offers; for example, it is becoming clear that the Bayesian brain is just one facet of the free-energy principle and that perception is  ...  Acknowledgements The Wellcome Trust funded this work.  ...
doi:10.1016/j.tics.2009.04.005 pmid:19559644 fatcat:sjwv5jcr7fhg7f3f4sucusie5y

Overcoming catastrophic forgetting in neural networks

James Kirkpatrick, Razvan Pascanu, Neil Rabinowitz, Joel Veness, Guillaume Desjardins, Andrei A. Rusu, Kieran Milan, John Quan, Tiago Ramalho, Agnieszka Grabska-Barwinska, Demis Hassabis, Claudia Clopath (+2 others)
2017 Proceedings of the National Academy of Sciences of the United States of America  
knowledge is durably encoded by rendering a proportion of synapses less plastic and therefore stable over long timescales.  ...  These experimental findings, together with neurobiological models such as the cascade model (15, 16), suggest that continual learning in the neocortex relies on task-specific synaptic consolidation, whereby  ...  C.C. was funded by the Wellcome Trust, the Engineering and Physical Sciences Research Council, and the Google Faculty Award.  ...
doi:10.1073/pnas.1611835114 pmid:28292907 pmcid:PMC5380101 fatcat:ycc27dlo3bbvtei6rdylnjbo6a
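
The mechanism this abstract sketches, rendering synapses that mattered for an earlier task less plastic, is implemented in the paper as a quadratic penalty that anchors parameters with high Fisher information to their post-task values. A minimal NumPy sketch of that idea follows; the toy quadratic task losses, masks, and constants are illustrative assumptions, not the paper's code.

import numpy as np

# Hedged sketch of an EWC-style quadratic consolidation penalty: parameters
# that were important for task A (nonzero "Fisher" weight) are anchored to
# their post-task-A values while task B is learned.

def quad_grad(theta, target, mask):
    # gradient of the toy task loss 0.5 * sum_i mask_i * (theta_i - target_i)^2
    return mask * (theta - target)

theta = np.zeros(5)
mask_a = np.array([1., 1., 1., 0., 0.])      # task A constrains the first three parameters
mask_b = np.ones(5)                          # task B constrains all five
target_a, target_b = np.full(5, 2.0), np.full(5, -2.0)

# Learn task A, then record the anchor and a per-parameter importance estimate
# (for this quadratic toy, simply the diagonal curvature, i.e. the mask itself).
for _ in range(300):
    theta -= 0.1 * quad_grad(theta, target_a, mask_a)
theta_star, fisher, lam = theta.copy(), mask_a.copy(), 10.0

# Learn task B with the consolidation gradient lam * F_i * (theta_i - theta_star_i) added.
for _ in range(300):
    grad = quad_grad(theta, target_b, mask_b) + lam * fisher * (theta - theta_star)
    theta -= 0.1 * grad

print(theta)   # the first three parameters remain pulled toward their task-A value of 2.0

Setting lam = 0 recovers plain gradient descent on task B, and all five parameters move to -2.0, i.e. task A is forgotten.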

Surrogate Gradient Learning in Spiking Neural Networks [article]

Emre O. Neftci, Hesham Mostafa, Friedemann Zenke
2019 arXiv   pre-print
This article elucidates step-by-step the problems typically encountered when training spiking neural networks, and guides the reader through the key concepts of synaptic plasticity and data-driven learning  ...  However, their training requires overcoming a number of challenges linked to their binary and dynamical nature.  ...  Gradient-descent learning solves this credit assignment problem by providing explicit expressions for these updates through the chain rule of derivatives.  ... 
arXiv:1901.09948v2 fatcat:jdmvfo2xy5empf62k77qcdfzja
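
The credit-assignment difficulty this snippet refers to comes from the spike nonlinearity: a hard threshold whose derivative is zero almost everywhere, so the chain rule delivers no learning signal. Surrogate gradient methods keep the threshold on the forward pass but substitute a smooth surrogate derivative on the backward pass. The sketch below, using a single threshold unit, a fast-sigmoid-style surrogate, and a teacher-generated target, is a toy illustration under those assumptions, not the authors' reference implementation.

import numpy as np

# Hedged toy illustration of surrogate gradients: the spike function is a hard
# threshold on the forward pass, but its (almost everywhere zero) derivative is
# replaced by a smooth fast-sigmoid surrogate when errors are back-propagated.

def spike(u, thr=1.0):
    return (u > thr).astype(float)                       # non-differentiable forward pass

def surrogate_deriv(u, thr=1.0, beta=5.0):
    return 1.0 / (1.0 + beta * np.abs(u - thr)) ** 2     # smooth stand-in for d(spike)/du

rng = np.random.default_rng(1)
x = rng.random((500, 10))                                # 500 input patterns, 10 channels
w_teacher = rng.normal(size=10)
target = spike(x @ w_teacher)                            # desired spike / no-spike labels

w = rng.normal(scale=0.1, size=10)
for step in range(2000):
    u = x @ w                                            # "membrane potential"
    err = spike(u) - target
    grad_w = (err * surrogate_deriv(u)) @ x / len(x)     # chain rule with the surrogate
    w -= 1.0 * grad_w

print("training accuracy:", np.mean(spike(x @ w) == target))

The same trick extends to multi-layer and recurrent spiking networks, where the surrogate derivative is inserted wherever the spike function appears in the backward pass.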

Iterative free-energy optimization for recurrent neural networks (INFERNO)

Alexandre Pitti, Philippe Gaussier, Mathias Quoy, Wael El-Deredy
2017 PLoS ONE  
This vector can then be learned by an associative memory, as a model of the basal ganglia, to control the recurrent neural network.  ...  Using stochastic gradient descent, a reinforcement signal (presumably dopaminergic) evaluates the quality of one input vector to move the recurrent neural network towards a desired activity; depending on  ...  Acknowledgments We thank Souheil Hanoune and Karl Friston as well as the reviewers for interesting comments and fruitful feedback on earlier versions of the paper.  ...
doi:10.1371/journal.pone.0173684 pmid:28282439 pmcid:PMC5345841 fatcat:ltk2vtdwprgc3hdki4kwkdmx24
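
Read literally, the snippet describes a control loop in which a scalar reinforcement signal rates how well a candidate input vector drives the recurrent network toward a desired activity, and the candidate is improved iteratively. The sketch below is only a loose caricature of that loop under strong simplifying assumptions (a fixed random network, deterministic dynamics, naive hill climbing); it is not the INFERNO architecture.

import numpy as np

# Loose, hedged caricature: a reinforcement signal scores how close a fixed
# random recurrent network's activity comes to a desired pattern, and an input
# (control) vector is adjusted by stochastic trial and error to improve it.

rng = np.random.default_rng(2)
n = 50
W = rng.normal(scale=1.0 / np.sqrt(n), size=(n, n))   # fixed recurrent weights
desired = np.tanh(rng.normal(size=n))                 # desired steady activity

def run_network(u, steps=30):
    r = np.zeros(n)
    for _ in range(steps):
        r = np.tanh(W @ r + u)                        # driven recurrent dynamics
    return r

def reinforcement(u):
    return -np.linalg.norm(run_network(u) - desired)  # higher is better

u = np.zeros(n)
best = reinforcement(u)
for trial in range(2000):
    candidate = u + rng.normal(scale=0.05, size=n)    # stochastic perturbation
    score = reinforcement(candidate)
    if score > best:                                  # keep only improving input vectors
        u, best = candidate, score

print("final distance to desired activity:", -best)

In the paper itself, as the snippet notes, the useful input vectors are learned and stored by an associative memory standing in for the basal ganglia rather than kept by ad hoc hill climbing.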

Continual Learning Through Synaptic Intelligence [article]

Friedemann Zenke, Ben Poole, Surya Ganguli
2017 arXiv   pre-print
While deep learning has led to remarkable advances across diverse applications, it struggles in domains where the data distribution changes over the course of learning.  ...  We evaluate our approach on continual learning of classification tasks, and show that it dramatically reduces forgetting while maintaining computational efficiency.  ...  Acknowledgements The authors thank Subhaneil Lahiri for helpful discussions.  ... 
arXiv:1703.04200v3 fatcat:6icrj2ze5fhozn6cnn7hb2z5pu
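
The method summarized here estimates, online during training, how much each individual parameter contributed to reducing the loss (a running sum of gradient times parameter update), and later penalizes changes to parameters with large accumulated contributions. A minimal NumPy sketch under toy assumptions (two simple quadratic losses, hand-picked constants) follows; it mirrors the general bookkeeping described in the abstract rather than the paper's exact implementation.

import numpy as np

# Hedged sketch of per-synapse importance accumulated along the training path,
# then used as a quadratic penalty protecting important parameters on task B.

def grad_a(theta):   # toy task-A loss: 0.5*(theta[0] - 3)^2, ignores theta[1]
    return np.array([theta[0] - 3.0, 0.0])

def grad_b(theta):   # toy task-B loss: 0.5*(theta[0] + 3)^2 + 0.5*(theta[1] + 3)^2
    return np.array([theta[0] + 3.0, theta[1] + 3.0])

eta, xi, c = 0.05, 0.01, 10.0
theta = np.zeros(2)
omega = np.zeros(2)                 # path-integral contribution per parameter
theta_start = theta.copy()

for _ in range(400):                # task A, with online importance accumulation
    g = grad_a(theta)
    delta = -eta * g
    omega += -g * delta             # contribution of this step to the loss decrease
    theta += delta

theta_anchor = theta.copy()
importance = omega / ((theta_anchor - theta_start) ** 2 + xi)

for _ in range(400):                # task B, with the quadratic surrogate penalty
    g = grad_b(theta) + 2 * c * importance * (theta - theta_anchor)
    theta -= eta * g

print("theta after both tasks:", theta)   # theta[0] stays near its task-A optimum, theta[1] reaches -3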

Predictive Coding: a Theoretical and Experimental Review [article]

Beren Millidge, Anil Seth, Christopher L Buckley
2022 arXiv   pre-print
Predictive coding offers a potentially unifying account of cortical function -- postulating that the core function of the brain is to minimize prediction errors with respect to a generative model of the  ...  We also review a wide range of classic and recent work within the framework, ranging from the neurobiologically realistic microcircuits that could implement predictive coding, to the close relationship  ...  Mathematically, a dynamical update rule for the precisions can be derived as a gradient descent on the free-energy.  ... 
arXiv:2107.12979v2 fatcat:hdo7pyz3gzhvrgmsesfpd4mhyq
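
The core computation the review describes, minimizing precision-weighted prediction errors under a generative model, can be written as gradient descent of a variational free energy with respect to the latent estimates (and, as the excerpt notes, with respect to the precisions themselves). The sketch below performs only the inference step, assuming a linear Gaussian generative model with fixed precisions; it illustrates the scheme and is not code from the review.

import numpy as np

# Hedged sketch of predictive-coding inference as gradient descent on a
# Gaussian free energy: a latent estimate mu is adjusted so that top-down
# predictions W @ mu explain the input while staying close to a prior.

rng = np.random.default_rng(3)
n_obs, n_latent = 8, 3
W = rng.normal(size=(n_obs, n_latent))      # generative (top-down) weights
mu_prior = np.zeros(n_latent)
pi_obs, pi_prior = 1.0, 0.1                 # fixed precisions (inverse variances)

x = W @ rng.normal(size=n_latent) + 0.05 * rng.normal(size=n_obs)   # observed input

mu = mu_prior.copy()
for _ in range(500):
    eps_obs = x - W @ mu                    # bottom-up prediction error
    eps_pri = mu - mu_prior                 # prior (top-down) error
    # free energy F = 0.5*(pi_obs*||eps_obs||^2 + pi_prior*||eps_pri||^2) + const
    dF_dmu = -pi_obs * W.T @ eps_obs + pi_prior * eps_pri
    mu -= 0.05 * dF_dmu                     # gradient descent on F

print("remaining prediction error:", np.linalg.norm(x - W @ mu))

In a full predictive-coding scheme the weights W and the precisions would also be updated, at a slower timescale, by descending the same free energy.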

Towards Biologically Plausible Deep Learning [article]

Yoshua Bengio, Dong-Hyun Lee, Jorg Bornschein, Thomas Mesnard, Zhouhan Lin
2016 arXiv   pre-print
learning point of view and can be interpreted as gradient descent on some objective function so long as the neuronal dynamics push firing rates towards better values of the objective function (be it supervised  ...  The second main idea is that this corresponds to a form of the variational EM algorithm, i.e., with approximate rather than exact posteriors, implemented by neural dynamics.  ...  Acknowledgments The authors would like to thank Jyri Kivinen, Tim Lillicrap and Saizheng Zhang for feedback and discussions, as well as NSERC, CIFAR, Samsung and Canada Research Chairs for funding, and  ... 
arXiv:1502.04156v3 fatcat:ilgfbdil6zg6fb6cq3bxocvqra

Data and Power Efficient Intelligence with Neuromorphic Learning Machines

Emre O. Neftci
2018 iScience  
We find that (1) recent work in binary deep networks and approximate gradient descent learning are strikingly compatible with a neuromorphic substrate; (2) where real-time adaptability and autonomy are  ...  necessary, neuromorphic technologies can achieve significant advantages over mainstream ones; and (3) challenges in memory technologies, compounded by a tradition of bottom-up approaches in the field,  ...  ACKNOWLEDGMENTS This work was partly supported by the Intel Corporation, the National Science Foundation under grant 1652159, and by the Korean Institute of Science and Technology.  ... 
doi:10.1016/j.isci.2018.06.010 pmid:30240646 pmcid:PMC6123858 fatcat:zo4dvtgo75c7pn6n7tal24fkly

Functional identification of biological neural networks using reservoir adaptation for point processes

Tayfun Gürel, Stefan Rotter, Ulrich Egert
2009 Journal of Computational Neuroscience  
The proposed ESN algorithm learns a predictive model of stimulus-response relations in in vitro and simulated networks, i.e. it models their response dynamics.  ...  Employing feed-forward and recurrent networks with fading memory, i.e. reservoirs, we propose a point process based learning algorithm to train the internal parameters of the reservoir and the connectivity  ...  Acknowledgements This work was supported in part by the German BMBF (01GQ0420), EC-NEURO (12788) and BFNT (01GQ0830) grants.  ... 
doi:10.1007/s10827-009-0176-0 pmid:19639401 pmcid:PMC2940037 fatcat:baygyf7opnfvpng7vgn4e4zaoq

Neural network models and deep learning

Nikolaus Kriegeskorte, Tal Golan
2019 Current Biology  
Originally inspired by neurobiology, deep neural network models have become a powerful tool of machine learning and artificial intelligence.  ...  They can approximate functions and dynamics by learning from examples. Here we give a brief introduction to neural network models and deep learning for biologists.  ...  One approach is called encoding models.  ...
doi:10.1016/j.cub.2019.02.034 pmid:30939301 fatcat:yuo75bhphjextbrkkys6i44i5y

Reverse engineering neural networks to characterise their cost functions [article]

Takuya Isomura, Karl J. Friston
2019 bioRxiv   pre-print
This means that the Bayes optimal encoding of latent or hidden states is achieved when, and only when, the network's implicit priors match the process generating inputs.  ...  This work considers a class of biologically plausible cost functions for neural networks, where the same cost function is minimised by both neural activity and plasticity.  ...  Here, we simulate the dynamics of neural activity and synaptic strengths when they perform a gradient descent on the cost function in Equation (22).  ...
doi:10.1101/654467 fatcat:djxpbyn5x5ag7mv7ysdq5vp42m

Reservoir computing and the Sooner-is-Better bottleneck

Stefan L. Frank, Hartmut Fitz
2016 Behavioral and Brain Sciences  
This principle is demonstrated by "reservoir computing": Untrained recurrent neural networks project input sequences onto a random point in high-dimensional state space.  ...  Prior language input is not lost but integrated with the current input.  ...  A "read-out" network is then calibrated, either online through gradient descent or offline by linear regression, to transform this random mapping into a desired output, such as a prediction of the incoming  ... 
doi:10.1017/s0140525x15000783 pmid:27561374 fatcat:euqdvxe2vzcpxliadhowz3w22e
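
The recipe described in this excerpt can be made concrete in a few lines: a fixed, randomly connected recurrent network with fading memory expands the input history into a high-dimensional state, and only a linear read-out is fitted, here offline by ridge regression, to a prediction target. The network size, spectral radius, toy sine-wave input, and regularization constant below are illustrative assumptions.

import numpy as np

# Hedged sketch of an echo-state-style reservoir: untrained recurrent dynamics
# plus a linear read-out calibrated offline by ridge regression.

rng = np.random.default_rng(4)
n_res, T = 200, 2000
W_in = rng.uniform(-0.5, 0.5, size=(n_res, 1))
W = rng.normal(size=(n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))      # spectral radius < 1 (fading memory)

u = np.sin(0.2 * np.arange(T + 1))[:, None]          # toy input signal
states = np.zeros((T, n_res))
x = np.zeros(n_res)
for t in range(T):
    x = np.tanh(W @ x + W_in @ u[t])                 # untrained reservoir dynamics
    states[t] = x

# Offline read-out: ridge regression from reservoir states to the next input.
targets = u[1:T + 1, 0]
reg = 1e-6
W_out = np.linalg.solve(states.T @ states + reg * np.eye(n_res), states.T @ targets)

pred = states @ W_out
print("one-step prediction RMSE:", np.sqrt(np.mean((pred - targets) ** 2)))

The online alternative mentioned in the excerpt would replace the closed-form ridge solution with incremental gradient-descent updates of W_out as each target arrives.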

Deep neural networks: a new framework for modelling biological vision and brain information processing [article]

Nikolaus Kriegeskorte
2015 bioRxiv   pre-print
With human-level performance no longer out of reach, we are entering an exciting new era, in which we will be able to build neurobiologically faithful feedforward and recurrent computational models of  ...  Artificial neural networks are inspired by the brain and their computations could be implemented in biological neurons.  ...  Representations can be learned by gradient descent using the backpropagation algorithm.  ... 
doi:10.1101/029876 fatcat:lxuwpdhzrvhpdmtyzg33ogwncy
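
As a reference point for the sentence about learning representations by gradient descent with backpropagation, here is a deliberately minimal two-layer network trained by hand-coded backpropagation on XOR; the task, layer sizes, and learning rate are arbitrary illustrative choices, not anything from the preprint.

import numpy as np

# Hedged minimal example: a two-layer network fits XOR, with gradients obtained
# by propagating the output error backwards through the chain rule.

rng = np.random.default_rng(5)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])                   # XOR targets

W1, b1 = rng.normal(scale=1.0, size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(scale=1.0, size=(8, 1)), np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for epoch in range(10000):
    h = np.tanh(X @ W1 + b1)                             # forward pass, hidden layer
    out = sigmoid(h @ W2 + b2)                           # forward pass, output
    d_out = (out - y) * out * (1 - out)                  # backprop through MSE loss and sigmoid
    d_h = (d_out @ W2.T) * (1 - h ** 2)                  # backprop through tanh hidden layer
    W2 -= 1.0 * h.T @ d_out                              # gradient-descent updates
    b2 -= 1.0 * d_out.sum(axis=0)
    W1 -= 1.0 * X.T @ d_h
    b1 -= 1.0 * d_h.sum(axis=0)

print(np.round(out.ravel(), 2))                          # should approach [0, 1, 1, 0]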

Applications of the Free Energy Principle to Machine Learning and Neuroscience [article]

Beren Millidge
2021 arXiv   pre-print
In this PhD thesis, we explore and apply methods inspired by the free energy principle to two important areas in machine learning and neuroscience.  ...  Firstly, we focus on predictive coding, a neurobiologically plausible process theory derived from the free energy principle which argues that the primary function of the brain is to minimize prediction  ...  By applying the Ao decomposition, we can understand the dynamics in terms of a gradient descent upon the surprisal.  ... 
arXiv:2107.00140v1 fatcat:c6phd65xwfc2rcyq7pnth5a3pq
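
For reference, the Ao (Helmholtz) decomposition mentioned in this excerpt can be stated, in its simplest constant-coefficient form, as splitting the drift of a stochastic process at steady state into a dissipative part that descends the surprisal and a solenoidal part that circulates on its level sets; the exact form used in the thesis may include state-dependent terms omitted in this sketch.

dx = f(x)\,dt + d\omega, \qquad \mathrm{Cov}(d\omega) = 2\Gamma\,dt

f(x) = -(\Gamma + Q)\,\nabla \mathfrak{I}(x), \qquad \mathfrak{I}(x) = -\ln p(x)

Here \Gamma is the symmetric diffusion matrix and Q is antisymmetric, so the term -\Gamma\,\nabla\mathfrak{I} is a gradient descent on the surprisal \mathfrak{I}, while -Q\,\nabla\mathfrak{I} is orthogonal to \nabla\mathfrak{I} and flows along its iso-contours without changing it.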
Showing results 1 — 15 out of 1,063 results