3,182 Hits in 4.2 sec

Continuous or discrete attractors in neural circuits? A self-organized switch at maximal entropy [article]

Alberto Bernacchia
2007 arXiv   pre-print
In recurrent neural network models, continuous and discrete attractors are separately modeled by distinct forms of synaptic prescriptions (learning rules).  ...  In that case, there is no processing of sensory information and neural activity displays maximal entropy.  ...  The solutions of this equation are the attractors of the recurrent neural dynamics, in the absence of the stimulus.  ... 
arXiv:0707.3511v2

Artificial Neural Networks [chapter]

Hanspeter A Mallot
2013 Springer Series in Bio-/Neuroinformatics  
Klaassen Attractor Learning of Recurrent Neural Networks 371 K. Gouhara, H. Takase, Y. Uchikawa, K.  ...  Prügel-Bennett Ockham's Nets: Self-Adaptive Minimal Neural Networks 183 G.D. Kendall, T.J. Hall The Generalisation Ability of Dilute Attractor Neural Networks 187 C.  ...  Self-organizing Neural Network Application to Technical Process Parameters Estimation 579 E. Govekar, E. Susie, P. Muzic, I. Grabec High-precision Robot Control: The Nested Network 583 A.  ... 
doi:10.1007/978-3-319-00861-5_4

Spatiotemporal Computations of an Excitable and Plastic Brain: Neuronal Plasticity Leads to Noise-Robust and Noise-Constructive Computations

Hazem Toutounji, Gordon Pipa, Olaf Sporns
2014 PLoS Computational Biology  
To that end, we rigorously formulate the problem of neural representations as a relation in space between stimulus-induced neural activity and the asymptotic dynamics of excitable cortical networks.  ...  Nevertheless, no unifying account exists of how neurons in a recurrent cortical network learn to compute on temporally and spatially extended stimuli.  ...  However, it remains mostly unclear how neurons in recurrent neural networks utilize neuronal plasticity to self-organize and to learn to compute on temporally and spatially extended stimuli [2] [3] [4]  ... 
doi:10.1371/journal.pcbi.1003512 pmid:24651447 pmcid:PMC3961183

Neural mechanisms underlying the temporal organization of naturalistic animal behavior [article]

Luca Mazzucato
2022 arXiv   pre-print
We crystallize recent studies which converge on an emergent mechanistic theory of temporal variability based on attractor neural networks and metastable dynamics, arising from the coordinated interactions  ...  Naturalistic animal behavior exhibits a strikingly complex organization in the temporal domain, whose variability stems from at least three sources: hierarchical, contextual, and stochastic.  ...  Acknowledgments I would like to thank Zach Mainen, James Murray, Cris Niell, Matt Smear, Osama Ahmed, the participants of the Computational Neuroethology Workshop 2021 in Jackson, and the members of the  ... 
arXiv:2203.02151v1

An Attractor-Based Complexity Measurement for Boolean Recurrent Neural Networks

Jérémie Cabessa, Alessandro E. P. Villa, Sergio Gómez
2014 PLoS ONE  
We provide a novel refined attractor-based complexity measurement for Boolean recurrent neural networks that represents an assessment of their computational power in terms of the significance of their attractor dynamics.  ...  Acknowledgments The authors thank all the reviewers for their comments, and in particular the last reviewer for his suggestions concerning the proof of Proposition 2.  ... 
doi:10.1371/journal.pone.0094204 pmid:24727866 pmcid:PMC3984152

Neural coordination can be enhanced by occasional interruption of normal firing patterns: A self-optimizing spiking neural network model [article]

Alexander Woodward, Tom Froese, Takashi Ikegami
2014 arXiv   pre-print
Here we demonstrate that it can be transferred to more biologically plausible neural networks by implementing a self-optimizing spiking neural network model.  ...  The state space of a conventional Hopfield network typically exhibits many different attractors of which only a small subset satisfy constraints between neurons in a globally optimal fashion.  ...  In order to construct a spiking neural network with conventional Hopfield dynamics, namely guaranteed convergence to a fixed-point attractor, we must have a fully interconnected recurrent network and a  ... 
arXiv:1409.0470v1
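The snippet above notes the key property of conventional Hopfield dynamics: with symmetric, fully interconnected recurrent weights, asynchronous updates are guaranteed to converge to a fixed-point attractor. A minimal sketch of that behavior (the pattern size, corruption level, and sweep count are illustrative assumptions, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Store one pattern with the Hebbian outer-product rule; symmetric weights
# with zero self-connections guarantee convergence under asynchronous updates.
pattern = rng.choice([-1, 1], size=16)
W = np.outer(pattern, pattern).astype(float)
np.fill_diagonal(W, 0.0)

# Start from a corrupted copy of the stored pattern (flip a few units).
state = pattern.copy()
state[:4] *= -1

# Asynchronous updates: each flip can only lower the Hopfield energy,
# so the state settles into a fixed-point attractor (here, the pattern).
for _ in range(5):
    for i in rng.permutation(len(state)):
        state[i] = 1 if W[i] @ state >= 0 else -1
```

With fewer than half the units corrupted, the dynamics recover the stored pattern within a single sweep in this toy setting.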

Cognitive computation with autonomously active neural networks: an emerging field [article]

Claudius Gros
2009 arXiv   pre-print
The main emphasis will then be on two paradigmatic neural network architectures showing continuously ongoing transient-state dynamics: saddle-point networks and networks of attractor relics.  ...  Self-active neural networks are confronted with two seemingly contrasting demands: a stable internal dynamical state and sensitivity to incoming stimuli.  ...  Attractor relic networks and slow variables A trivial form of self-sustained neural activity occurs in attractor networks [39].  ... 
arXiv:0901.3028v1

Cognitive Computation with Autonomously Active Neural Networks: An Emerging Field

Claudius Gros
2009 Cognitive Computation  
The main emphasis will then be on two paradigmatic neural network architectures showing continuously ongoing transient-state dynamics: saddle-point networks and networks of attractor relics.  ...  We show that this dilemma can be solved by networks of attractor relics based on competitive neural dynamics, where the attractor relics compete with each other on one side for transient dominance, and  ...  A trivial form of self-sustained neural activity occurs in attractor networks [39].  ... 
doi:10.1007/s12559-008-9000-9

Chaotic itinerancy and its roles in cognitive neurodynamics

Ichiro Tsuda
2015 Current Opinion in Neurobiology  
Chaotic itinerancy is an autonomously excited trajectory through the high-dimensional state space of cortical neural activity that causes the appearance of a temporal sequence of quasi-attractors.  ...  From a cognitive neurodynamics perspective, quasi-attractors represent perceptions, thoughts, and memories, and chaotic trajectories between them represent intelligent searches, such as history-dependent trial-and-error  ...  Acknowledgments This work was partially supported by a Grant-in-Aid for Scientific Research on  ... 
doi:10.1016/j.conb.2014.08.011 pmid:25217808

Attractor and integrator networks in the brain [article]

Mikail Khona, Ila R. Fiete
2022 arXiv   pre-print
In this review, we describe the singular success of attractor neural network models in describing how the brain maintains persistent activity states for working memory, error-corrects, and integrates noisy  ...  We discuss the myriad potential uses of attractor dynamics for computation in the brain, and showcase notable examples of brain systems in which inherently low-dimensional continuous attractor dynamics  ...  Mechanisms: The construction of neural attractors The basic principle underlying the formation of attractor states in neural circuits is strong recurrent positive feedback, through lateral connectivity  ... 
arXiv:2112.03978v3

Improving the State Space Organization of Untrained Recurrent Networks [chapter]

Michal Čerňanský, Matej Makula, Ľubica Beňušková
2009 Lecture Notes in Computer Science  
In this work we demonstrate that the state space organization of an untrained recurrent neural network can be significantly improved by choosing appropriate input representations.  ...  Recurrent neural networks are frequently used in the cognitive science community for modeling linguistic structures.  ...  In [9] we have studied the state space organization of the recurrent neural network before and after training on three artificial languages.  ... 
doi:10.1007/978-3-642-02490-0_82

Chaotic Clustering: Fragmentary Synchronization of Fractal Waves [chapter]

Elena N., Sofya V.
2011 Chaotic Systems  
In this paper we apply a chaotic neural network to 2D and 3D clustering problems.  ...  Structure complexity: the CNN does not have classical inputs; it is a recurrent neural network with one layer of N neurons.  ... 
doi:10.5772/13995

INFERNO: A Novel Architecture for Generating Long Neuronal Sequences with Spikes [chapter]

Alex Pitti, Philippe Gaussier, Mathias Quoy
2017 Lecture Notes in Computer Science  
We name our architecture INFERNO for Iterative Free-Energy Optimization for Recurrent Neural Network.  ...  As part of the principle of free-energy minimization proposed by Karl Friston, we propose a novel neural architecture to optimize the free-energy inherent to spiking recurrent neural networks to regulate  ...  The neural control is done by injecting tiny variations into the recurrent network that can iteratively change its dynamics to make it converge to attractors.  ... 
doi:10.1007/978-3-319-59072-1_50

Continuous attractors and oculomotor control

H. Sebastian Seung
1998 Neural Networks  
Because the stable states are arranged in a continuous dynamical attractor, the network can store a memory of eye position with analog neural encoding.  ...  A recurrent neural network can possess multiple stable states, a property that many brain theories have implicated in learning and memory.  ...  A synaptic learning rule based on presynaptic activity and retinal slip has been used to self-organize a network (Arnold and Robinson, 1997) .  ... 
doi:10.1016/s0893-6080(98)00064-1 pmid:12662748
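The continuous-attractor memory of eye position described in this entry can be sketched with a toy linear integrator: recurrent feedback is tuned so that one mode of the dynamics neither decays nor grows, giving a line of stable states that holds an analog value. The network size, time constants, and rank-one connectivity below are illustrative assumptions, not Seung's actual model:

```python
import numpy as np

n = 10
v = np.ones(n) / np.sqrt(n)   # direction of the line attractor
W = np.outer(v, v)            # recurrent weights: eigenvalue 1 along v

dt, tau = 0.1, 1.0

def step(x, inp):
    # tau dx/dt = -x + W x + inp : modes orthogonal to v decay,
    # while the component along v integrates the input and persists.
    return x + (dt / tau) * (-x + W @ x + inp)

x = np.zeros(n)

# A brief "velocity" pulse shifts the stored eye position...
for _ in range(10):
    x = step(x, 0.5 * v)

# ...and the memory persists after the input is removed.
for _ in range(200):
    x = step(x, np.zeros(n))

position = v @ x  # analog value held by persistent activity
```

Along `v` the leak and the recurrent feedback cancel exactly, so the integrated value (0.5 here) is held indefinitely; any perturbation orthogonal to `v` relaxes away with time constant `tau`.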

On the Nature of Functional Differentiation: The Role of Self-Organization with Constraints

Ichiro Tsuda, Hiroshi Watanabe, Hiromichi Tsukada, Yutaka Yamaguti
2022 Entropy  
Regarding the self-organized structure of neural systems, Warren McCulloch described the neural networks of the brain as being "heterarchical", rather than hierarchical, in structure.  ...  In 2016, we proposed a theory for self-organization with constraints to clarify the neural mechanism of functional differentiation.  ...  Dynamic Heterarchy As discussed above, the brain is a self-organizing system with both internal and external constraints, thus yielding the dynamically nonstationary activity of neural networks.  ... 
doi:10.3390/e24020240 pmid:35205534 pmcid:PMC8871511