On general systems with randomly occurring incomplete information
2016
International Journal of General Systems
are taken into account in the framework of discrete-time memristive recurrent neural networks for the first time. ...
In the paper entitled "H∞ State Estimation for Discrete-time Memristive Recurrent Neural Networks with Stochastic Time-delays" by H. ...
doi:10.1080/03081079.2015.1106734
fatcat:3uvg6qgeu5bipn7cbon3lfovyy
Attractive Periodic Sets in Discrete-Time Recurrent Networks (with Emphasis on Fixed-Point Stability and Bifurcations in Two-Neuron Networks)
2001
Neural Computation
Finally, for an N-neuron recurrent network, we give lower bounds on the rate of convergence of attractive periodic points toward the saturation values of neuron activations, as the absolute values of connection ...
We perform a detailed fixed-point analysis of two-unit recurrent neural networks with sigmoid-shaped transfer functions. ...
Hirsch (1994) has pointed out that while there is a saturation result for stable limit cycles of continuous-time networks (for sufficiently high gain, the output along a stable limit cycle is saturated) ...
doi:10.1162/08997660152002898
pmid:11387050
fatcat:riuiczchvnbzrbqnmycyu2w3ui
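The fixed-point behavior this entry analyzes can be illustrated with a minimal sketch: a two-unit discrete-time sigmoid network iterated synchronously at increasing gain. The weights `W`, biases `b`, and iteration count below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def step(x, W, b, g):
    """One synchronous update of a two-unit discrete-time sigmoid network."""
    return 1.0 / (1.0 + np.exp(-g * (W @ x + b)))

# Illustrative parameters (not from the paper): self-excitation with cross-coupling.
W = np.array([[4.0, -1.0],
              [-1.0, 4.0]])
b = np.array([-1.5, -1.5])

for g in (1.0, 4.0, 16.0):          # increasing neuron gain
    x = np.array([0.9, 0.1])        # initial state near one corner
    for _ in range(500):            # iterate to (numerical) convergence
        x = step(x, W, b, g)
    print(f"g={g:5.1f}  fixed point ~ {x.round(4)}")
    # As g grows, the attractive fixed points migrate toward the
    # saturation values (0 or 1) of the sigmoid activation, consistent
    # with the convergence-rate bounds described in the abstract.
```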
Learning accurate path integration in a ring attractor model of the head direction system
[article]
2021
bioRxiv
pre-print
with different gains. ...
The mature network is a quasi-continuous attractor and reproduces key experiments in which optogenetic stimulation controls the internal representation of heading, and where the network remaps to integrate ...
reaching saturation.
Additionally, because the recurrent input is filtered in time (Eq. ...
doi:10.1101/2021.03.12.435035
fatcat:illqgsybfnejfneful6uklefyi
Learning accurate path integration in ring attractor models of the head direction system
2022
eLife
with different gains in rodents. ...
The mature network is a quasi-continuous attractor and reproduces key experiments in which optogenetic stimulation controls the internal representation of heading, and where the network remaps to integrate ...
The funding sources were not involved in study design, data collection and interpretation, or the decision to submit the work for publication. ...
doi:10.7554/elife.69841
pmid:35723252
pmcid:PMC9286743
fatcat:25ypm4hebbcw3oiqzp3upmlco4
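The quasi-continuous ring attractor described in the two entries above can be sketched minimally as follows. The cosine-shaped symmetric weights, the antisymmetric velocity-coupled weights, and the normalization step are illustrative assumptions standing in for the paper's learned network and homeostatic mechanisms.

```python
import numpy as np

N = 120                                    # neurons, preferred headings on a ring
theta = np.linspace(0.0, 2 * np.pi, N, endpoint=False)

# Symmetric (bump-sustaining) and antisymmetric (bump-shifting) weight parts.
W_sym = np.cos(theta[:, None] - theta[None, :]) / N
W_asym = np.sin(theta[:, None] - theta[None, :]) / N

r = np.exp(np.cos(theta))                  # initial activity bump at heading 0
omega = 0.05                               # scaled angular-velocity input

for t in range(400):
    drive = W_sym @ r + omega * (W_asym @ r)
    r = np.maximum(drive, 0.0)             # threshold-linear rates
    r *= 10.0 / r.sum()                    # crude normalization, standing in for
                                           # the paper's learned homeostatic rules

decoded = np.angle(r @ np.exp(1j * theta)) # population-vector decoding
print(f"decoded heading after integration ≈ {decoded:.3f} rad")
```

The antisymmetric term rotates the bump at a rate proportional to the velocity input, which is the essence of path integration in ring attractor models.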
Coherent Neural Networks and Their Applications to Control and Signal Processing
[chapter]
1999
World Scientific Series in Robotics and Intelligent Systems
All operations are synchronous at discrete time steps. The amplitude constant A in (5) is chosen as A = 1.0 [19] . ...
The neuron number N, discrete phase number L, saturation amplitude A, and neuron gain g are 50, 8, 1, and 10, respectively, in this experiment. ...
doi:10.1142/9789812816528_0014
fatcat:hm5vt2nucnag3inn4eeokmcfzm
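A minimal sketch of the synchronous coherent-neuron update quoted in this entry, using the reported values N = 50, L = 8, A = 1, g = 10. The exact activation in the chapter is not given in the snippet; the saturating-amplitude, quantized-phase form below is an assumption of this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
N, L, A, g = 50, 8, 1.0, 10.0        # values quoted in the experiment

def coherent_update(x, W):
    """Synchronous update of complex-valued 'coherent' neurons: the
    amplitude is saturated at A via a tanh with gain g, and the phase is
    quantized to L discrete levels (an assumption of this sketch)."""
    s = W @ x                                        # complex weighted sums
    amp = A * np.tanh(g * np.abs(s))                 # saturating amplitude
    phase = np.round(np.angle(s) * L / (2 * np.pi)) * (2 * np.pi / L)
    return amp * np.exp(1j * phase)

W = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(N)
x = np.exp(1j * rng.uniform(0, 2 * np.pi, N))        # unit-amplitude initial state
for _ in range(20):                                  # discrete time steps
    x = coherent_update(x, W)
print("final amplitudes ~", np.round(np.abs(x[:5]), 3))
```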
Cellular neural network as a non-linear filter of impulse noise
2017
2017 20th Conference of Open Innovations Association (FRUCT)
A feedforward discrete-time cellular neural network for filtering impulse noise from two-dimensional (image) signals is presented. ...
It is shown that the cellular neural network surpasses the median filter, the Volterra filter, and the perceptron neural network in image-restoration accuracy and in simplicity of filter implementation. ...
x(n) = x(t)|_{t = n·dt}, where n is the discrete normalized time. ...
doi:10.23919/fruct.2017.8071343
dblp:conf/fruct/Solovyeva17
fatcat:xkbfjgz4rbhyfkjduizgxwmwli
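A minimal sketch of the kind of feedforward discrete-time CNN cell this entry describes: each cell combines its 3×3 neighborhood through a control template and applies the standard saturating CNN output function. The averaging template and bias are illustrative assumptions, not the paper's learned templates.

```python
import numpy as np

def dtcnn_filter(u, B, bias, steps=1):
    """Feedforward discrete-time CNN layer over image u, with 3x3 control
    template B and the standard saturating CNN output function
    f(x) = 0.5 * (|x + 1| - |x - 1|)."""
    y = u.copy()
    H, W = u.shape
    for _ in range(steps):
        p = np.pad(y, 1, mode="edge")
        s = np.zeros_like(y)
        for di in range(3):
            for dj in range(3):
                s += B[di, dj] * p[di:di + H, dj:dj + W]
        y = 0.5 * (np.abs(s + bias + 1) - np.abs(s + bias - 1))
    return y

B = np.full((3, 3), 1 / 9.0)   # simple averaging template (assumption)
noisy = np.random.default_rng(1).choice([0.0, 1.0], size=(8, 8))
print(dtcnn_filter(noisy, B, bias=0.0).round(2))
```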
DR-RNN: A deep residual recurrent neural network for model reduction
[article]
2017
arXiv
pre-print
We introduce a deep residual recurrent neural network (DR-RNN) as an efficient model reduction technique for nonlinear dynamical systems. ...
We also show significant gains in accuracy by increasing the depth of proposed DR-RNN similar to other applications of deep learning. ...
Standard Recurrent Neural Network: a Recurrent Neural Network (RNN) is a neural network that has at least one feedback connection in addition to the feedforward connections [28] . ...
arXiv:1709.00939v1
fatcat:lrvy22vxnvfxdeb4yitz2if2ta
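The standard RNN cell the entry refers to can be written in a few lines; the sizes and weight scales below are illustrative, not DR-RNN's.

```python
import numpy as np

rng = np.random.default_rng(0)
n_h, n_x = 16, 4                     # illustrative sizes, not from the paper

# Feedback (recurrent) and feedforward weights of a standard RNN cell.
W = rng.standard_normal((n_h, n_h)) * 0.1   # feedback connections
U = rng.standard_normal((n_h, n_x)) * 0.1   # feedforward connections
b = np.zeros(n_h)

def rnn_step(h, x):
    """h_{t+1} = tanh(W h_t + U x_t + b): the feedback term W h_t is what
    distinguishes an RNN from a purely feedforward network."""
    return np.tanh(W @ h + U @ x + b)

h = np.zeros(n_h)
for t in range(10):                  # unroll over a short input sequence
    x_t = rng.standard_normal(n_x)
    h = rnn_step(h, x_t)
print("hidden state after 10 steps:", h[:4].round(3))
```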
Noise Tolerance of Attractor and Feedforward Memory Models
2012
Neural Computation
An online supplement is available at ...
networks can amplify signals they receive faster than noise accumulates over time. ...
This research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. ...
doi:10.1162/neco_a_00234
pmid:22091664
pmcid:PMC5529185
fatcat:s32tvgjzbrdezebw6sus65so24
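The snippet's claim that networks can amplify signals faster than noise accumulates can be illustrated with a one-dimensional toy memory trace (not the paper's network model): with gain a = 1 the accumulated noise variance grows linearly and the signal-to-noise ratio decays, while with a > 1 the signal is amplified as fast as the noise it carries, so the SNR saturates.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma, s0, T, trials = 0.1, 1.0, 50, 2000

for a in (1.0, 1.2):                       # integrator mode vs. amplifying mode
    x = np.full(trials, s0)                # signal loaded at t = 0
    for t in range(T):
        x = a * x + sigma * rng.standard_normal(trials)
    snr = x.mean() / x.std()
    print(f"gain a={a}: SNR after {T} steps ≈ {snr:.2f}")
# a = 1: noise variance grows linearly, SNR decays ~ 1/sqrt(T);
# a > 1: signal amplification keeps pace with noise accumulation,
# so SNR saturates at a finite value instead of vanishing.
```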
Stability of discrete memory states to stochastic fluctuations in neuronal systems
2006
Chaos
We will first discuss a strongly recurrent cortical network model endowed with feedback loops, for short-term memory. ...
For the neuronal network we report interesting ramping temporal dynamics as a result of sequentially switching an increasing number of discrete, bistable, units. ...
In a recurrent network, this allows for a DOWN state that is not silent, in fact with a firing rate as high as 10 Hz in our case. ...
doi:10.1063/1.2208923
pmid:16822041
pmcid:PMC3897304
fatcat:4zh6cs2nr5b33b2ckgdphlpoy4
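The ramping dynamics from sequentially switching bistable units, mentioned in this entry, can be sketched as follows. The per-step switching hazard and the UP-state rate are invented for illustration; the non-silent 10 Hz DOWN state comes from the snippet.

```python
import numpy as np

rng = np.random.default_rng(0)
n_units, T = 20, 400
hazard = 0.02                  # per-step probability of an UP transition (assumption)

up = np.zeros(n_units, dtype=bool)
population = []
for t in range(T):
    # Noise stochastically flips DOWN units to the UP state one by one;
    # once UP, a unit stays UP (strong bistability, a simplifying assumption).
    up |= rng.random(n_units) < hazard
    # DOWN state is not silent: units fire at 10 Hz even when DOWN;
    # the 40 Hz UP rate is an illustrative assumption.
    population.append(up.sum() * 40.0 + (~up).sum() * 10.0)

print("summed population rate ramps from", population[0], "to", population[-1])
```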
Modulating the granularity of category formation by global cortical states
2008
Frontiers in Computational Neuroscience
We show that a competitive network, shaped by recurrent inhibition and endowed with Hebbian and homeostatic synaptic plasticity, can enforce stimulus categorization. ...
The degree of competition is internally controlled by the neuronal gain and the strength of inhibition. ...
ACKNOWLEDGEMENTS We would like to acknowledge Stefano Fusi for his continuing support, especially in the initial phase of the work, and for many inspiring discussions. ... IT network (20 × 20 neurons) to all ...
doi:10.3389/neuro.10.001.2008
pmid:18946531
pmcid:PMC2525940
fatcat:7q6boz72v5csdftkgzy6ayozsy
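The entry's point that neuronal gain and inhibition strength control the degree of competition, and hence the granularity of categories, can be illustrated with a minimal soft winner-take-all sketch. Modeling recurrent inhibition as a gain-scaled softmax is a simplifying assumption, not the paper's circuit.

```python
import numpy as np

def competitive_response(inputs, gain):
    """Soft winner-take-all shaped by recurrent inhibition, modeled here
    (as a simplifying assumption) by a softmax whose sharpness is set by
    the neuronal gain."""
    z = gain * np.asarray(inputs)
    e = np.exp(z - z.max())
    return e / e.sum()

stimulus_drive = [1.0, 0.9, 0.3]          # overlap of a stimulus with 3 category units
for gain in (1.0, 5.0, 25.0):
    print(f"gain={gain:5.1f} ->", competitive_response(stimulus_drive, gain).round(3))
# Low gain keeps both similar categories active (coarse granularity);
# high gain lets competition select a single winner (fine granularity).
```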
DESIGN AND LEARNING WITH CELLULAR NEURAL NETWORKS
1996
International journal of circuit theory and applications
This is of course also true in the case of continuous-time and discrete-time cellular neural networks (CT-CNNs and DT-CNNs), where the local and translationally invariant interconnections are put together ...
Here the discretization of space (and time) plays a central role in arriving at a set of ODEs (or difference equations), which can be easily mapped onto a CT-CNN (or DT-CNN). Of utmost importance is to ...
doi:10.1002/(sici)1097-007x(199601/02)24:1<15::aid-cta900>3.0.co;2-5
fatcat:zxz3svirlvdbjbh75fcz2a4bpm
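The mapping the entry describes, from a spatially discretized PDE to a DT-CNN difference equation, can be shown with the 2-D heat equation: the explicit finite-difference update is exactly a DT-CNN iteration whose template is the 5-point Laplacian stencil. The grid size, diffusivity, and step sizes are illustrative.

```python
import numpy as np

# Discretizing u_t = D * (u_xx + u_yy) on a grid yields a difference
# equation that maps onto a DT-CNN whose feedback template is the
# familiar 5-point Laplacian stencil.
D, dt, dx = 1.0, 0.1, 1.0
lam = D * dt / dx**2                      # must satisfy lam <= 0.25 for stability

A_template = np.array([[0.0,         lam, 0.0],
                       [lam, 1 - 4 * lam, lam],
                       [0.0,         lam, 0.0]])

u = np.zeros((16, 16)); u[8, 8] = 1.0     # initial heat spike
H, W = u.shape
for _ in range(50):                       # one DT-CNN iteration per time step
    p = np.pad(u, 1, mode="edge")
    u = sum(A_template[di, dj] * p[di:di + H, dj:dj + W]
            for di in range(3) for dj in range(3))
print("peak after diffusion:", u.max().round(4))
```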
Computation in Dynamically Bounded Asymmetric Systems
2015
PLoS Computational Biology
Previous explanations of computations performed by recurrent networks have focused on symmetrically connected saturating neurons and their convergence toward attractors. ...
This inherent boundedness permits the network to operate with the unstably high gain necessary to continually switch its states as it searches for a solution. ...
This signal restoration is achieved by extremely high gain, so that a small input bias will drive the node into saturation at one of its two voltage limits. ...
doi:10.1371/journal.pcbi.1004039
pmid:25617645
pmcid:PMC4305289
fatcat:nrxl3yo74jgbvgteoelnwcnjqm
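The signal-restoration mechanism quoted in this entry, a small input bias driving a high-gain node into saturation at one of its two voltage limits, reduces to a one-line clipped amplifier; the gain and rail values below are illustrative.

```python
import numpy as np

def restore(x, gain=100.0, v_limit=1.0):
    """High-gain node with hard voltage limits: any small input bias is
    amplified until the output pins at one of the two rails, restoring a
    degraded signal to a clean binary level (illustrative parameters)."""
    return np.clip(gain * x, -v_limit, v_limit)

noisy_bits = np.array([0.04, -0.01, 0.2, -0.07])   # weakly biased inputs
print(restore(noisy_bits))                          # -> [ 1. -1.  1. -1.]
```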
Learning to Generate Compositional Color Descriptions
[article]
2016
arXiv
pre-print
We present an effective approach to generating color descriptions using recurrent neural networks and a Fourier-transformed color representation. ...
), and compositional phrases ("faded teal") not seen in training. ...
This research was supported in part by the Stanford Data Science Initiative, NSF BCS 1456077, and NSF IIS 1159679. ...
arXiv:1606.03821v2
fatcat:pn256zcksvc5hismwpr2e7qxpu
Learning to Generate Compositional Color Descriptions
2016
Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing
We present an effective approach to generating color descriptions using recurrent neural networks and a Fourier-transformed color representation. ...
), and compositional phrases ("faded teal") not seen in training. ...
This research was supported in part by the Stanford Data Science Initiative, NSF BCS 1456077, and NSF IIS 1159679. ...
doi:10.18653/v1/d16-1243
dblp:conf/emnlp/MonroeGP16
fatcat:nlgku4tlrbgbpp7dwz3zq74twy
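A sketch of what a Fourier-transformed color representation, as named in the two entries above, might look like: low-frequency cosine and sine components of the (cyclic) HSV coordinates. The frequency range, the use of HSV, and the normalization are assumptions of this sketch; see the paper for the precise feature set.

```python
import numpy as np
import colorsys

def fourier_color_features(r, g, b, max_freq=2):
    """Map RGB to HSV, then take cosine/sine components at low spatial
    frequencies of the HSV coordinates (illustrative assumption)."""
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    feats = []
    for j in range(max_freq + 1):
        for k in range(max_freq + 1):
            for l in range(max_freq + 1):
                arg = 2 * np.pi * (j * h + k * s + l * v)
                feats.extend([np.cos(arg), np.sin(arg)])
    return np.array(feats)

print(fourier_color_features(0, 128, 128).shape)   # e.g. teal -> (54,)
```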
Table of contents
2020
2020 IEEE 9th Data Driven Control and Learning Systems Conference (DDCLS)
… Xin Song 314
Discrete-time Recurrent Neural Network for Solving Discrete-form Time-variant Complex Division … Jian Liu, Xiaoe Ruan, Yamiao Zhang 486
Iterative Learning Control for Multiple Time-Delays Discrete Systems in Finite Frequency Domain … ...
doi:10.1109/ddcls49620.2020.9275156
fatcat:kl3b4ptikjhzjn6p7eoqcmwypa