
Optimizing microstimulation using a reinforcement learning framework

Austin J. Brockmeier, John S. Choi, Marcello M. DiStasio, Joseph T. Francis, Jose C. Principe
2011 Annual International Conference of the IEEE Engineering in Medicine and Biology Society  
We propose using reinforcement learning as a framework to balance the exploration of the parameter space and the continued selection of promising parameters for further stimulation.  ...  Where somatosensory function is intact, a template of the natural response can serve as the desired response to be elicited by electrical microstimulation.  ...  CONCLUSION In this work we propose using reinforcement learning as a framework for online selection of microstimulation parameters to elicit an evoked response close to a natural template.  ... 
doi:10.1109/iembs.2011.6090249 pmid:22254498 dblp:conf/embc/BrockmeierCDFP11 fatcat:pljlfblbwjfo5anheq33u3cs6e
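The exploration/exploitation balance described in the abstract above can be sketched as a bandit-style loop: each candidate stimulation parameter set is an arm, and reward is the closeness of the evoked response to the natural-response template. All names and numbers below are illustrative assumptions, not taken from the paper:

```python
import random

def choose_parameters(value_estimates, epsilon=0.1):
    """Epsilon-greedy selection over candidate stimulation parameter sets."""
    if random.random() < epsilon:
        return random.randrange(len(value_estimates))               # explore
    return max(range(len(value_estimates)), key=value_estimates.__getitem__)  # exploit

def update_value(value_estimates, counts, arm, reward):
    """Incremental-mean update of the value estimate for one parameter set."""
    counts[arm] += 1
    value_estimates[arm] += (reward - value_estimates[arm]) / counts[arm]

# Toy loop: reward is the negative distance between the evoked response and
# the natural-response template, so higher reward means a closer match.
template = 1.0
true_response = [0.2, 0.9, 0.5]       # hypothetical mean response per parameter set
values, counts = [0.0] * 3, [0] * 3
random.seed(0)
for _ in range(500):
    arm = choose_parameters(values, epsilon=0.1)
    reward = -abs(true_response[arm] - template)
    update_value(values, counts, arm, reward)
best = max(range(3), key=values.__getitem__)
```

The incremental mean keeps only one running value per parameter set, so a loop like this is cheap enough to run online between stimulation trials.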

Repairing lesions via kernel adaptive inverse control in a biomimetic model of sensorimotor cortex

Kan Li, Salvador Dura-Bernal, Joseph T. Francis, William W. Lytton, Jose C. Principe
2015 7th International IEEE/EMBS Conference on Neural Engineering (NER)  
In order to estimate the optimal microstimulation sequences, we construct an inverse model of the target system.  ...  In this paper we propose a kernel adaptive filtering (KAF) approach to repair lesions via microstimulation in a biomimetic spiking neural network of sensorimotor cortex.  ...  It is trained using spike-timing dependent reinforcement learning to drive a realistic virtual musculoskeletal arm in a motor task requiring convergence on a single target.  ... 
doi:10.1109/ner.2015.7146663 dblp:conf/ner/LiDFLP15 fatcat:7dolhxe475dxhognemy52aj4vi

Exploiting Exploration: Past Outcomes and Future Actions

Kenway Louie
2013 Neuron  
The core idea in these reinforcement learning (RL) models is that agents acquire information about the value of actions through interaction with the environment, using reward to guide the learning process  ...  Ultimately, these results point toward a more nuanced view of reinforcement learning in the brain.  ... 
doi:10.1016/j.neuron.2013.09.016 pmid:24094098 fatcat:qd4lfi5oe5h53omavkvccb3csm

Page 4255 of Psychological Abstracts Vol. 93, Issue 12 [page]

2006 Psychological Abstracts  
Whereas little is known as to how such adjustments in behavioral policy are implemented, recent learning models suggest that the anterior striatum is optimally positioned to have a role in this process  ...  Caudate activity during reinforcement was closely correlated with the rate of learning and peaked during the steepest portion of the learning curve when new associations were being acquired.  ... 

Caudate Microstimulation Increases Value of Specific Choices

Samantha R. Santacruz, Erin L. Rich, Joni D. Wallis, Jose M. Carmena
2017 Current Biology  
The modulation of choice behavior using microstimulation was best modeled as resulting from changes in stimulus value.  ...  Electrical microstimulation is known to induce neural plasticity [10, 11], and caudate microstimulation in primates has been shown to accelerate associative learning [12, 13].  ...  In this work, we developed a choice task where optimal decisions were not spatially lateralized and asked how microstimulation would alter behavior.  ... 
doi:10.1016/j.cub.2017.09.051 pmid:29107551 pmcid:PMC5773342 fatcat:u23jzcfxcvdxfhqcgss2jzmxkq

Eliciting naturalistic cortical responses with a sensory prosthesis via optimized microstimulation

John S Choi, Austin J Brockmeier, David B McNiel, Lee M von Kraus, José C Príncipe, Joseph T Francis
2016 Journal of Neural Engineering  
However, systematic methods for encoding a wide array of naturally occurring stimuli into biomimetic percepts via multi-channel microstimulation are lacking.  ...  We address this problem by first modeling the dynamical input-output relationship between multichannel microstimulation and downstream neural responses, and then optimizing the input pattern to reproduce  ...  These calibrations could also be optimized under a reinforcement learning framework [58] in which user-generated evaluative feedback could drive fine adjustments to parameters over time.  ... 
doi:10.1088/1741-2560/13/5/056007 pmid:27518368 fatcat:em32xynwjba4rn6l6t4tg4yjx4


Raghu Sesha Iyengar, Kapardi Mallampalli, Mohan Raghavan
2021 bioRxiv   pre-print
In this paper, we provide a framework to build a digital twin of relevant sections of the human spinal cord using our NEUROiD platform.  ...  We then build a framework to learn the supraspinal activations necessary to perform a simple goal-directed movement of the upper limb.  ...  The reinforcement learning algorithm used is Proximal Policy Optimization (PPO) [61], which finds the optimal control policy. Figure 3(a) shows the PPO setup used for this paper.  ... 
doi:10.1101/2021.03.28.437396 fatcat:d5glqpcf35d6lhdw2e42xyfta4
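The PPO objective mentioned in the snippet above can be written down compactly. This is a generic sketch of the clipped surrogate objective (numpy-based, with made-up numbers), not the authors' NEUROiD training setup:

```python
import numpy as np

def ppo_clip_objective(new_logp, old_logp, advantages, clip_eps=0.2):
    """PPO clipped surrogate objective (maximized during policy updates)."""
    ratio = np.exp(new_logp - old_logp)                 # pi_new(a|s) / pi_old(a|s)
    unclipped = ratio * advantages
    clipped = np.clip(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    # The elementwise minimum removes any incentive to push the policy
    # ratio outside the [1 - eps, 1 + eps] trust region.
    return np.mean(np.minimum(unclipped, clipped))

# Toy check: ratios of 1.8 and 0.2 both fall outside the clip range,
# so both terms are replaced by their clipped (pessimistic) values.
old_logp = np.log(np.array([0.5, 0.5]))
new_logp = np.log(np.array([0.9, 0.1]))
advantages = np.array([1.0, -1.0])
obj = ppo_clip_objective(new_logp, old_logp, advantages)
```

In practice this objective is maximized by gradient ascent on the policy parameters; the clip range of 0.2 is the default from the original PPO paper.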

Executive control of gaze by the frontal lobes

2007 Cognitive, Affective, & Behavioral Neuroscience  
learning.  ...  These insights grew out of a synthesis of conceptual frameworks and a coordination of neurophysiological, psychophysical, and mathematical modeling techniques.  ...  This executive module could use error, feedback, or conflict as the control signals. As a first step, we are exploring how conflict can be used for self-control.  ... 
doi:10.3758/cabn.7.4.396 pmid:18189013 fatcat:vzm2ailrmjazxiv5l23mkxqkzy

Automatic Training of Rat Cyborgs for Navigation

Yipeng Yu, Zhaohui Wu, Kedi Xu, Yongyue Gong, Nenggan Zheng, Xiaoxiang Zheng, Gang Pan
2016 Computational Intelligence and Neuroscience  
A hierarchical framework is proposed to facilitate the colearning between rats and machines.  ...  In the framework, the behavioral states of a rat cyborg are visually sensed by a camera, a parameterized state machine is employed to model the training action transitions triggered by rat's behavioral  ...  Before a rat cyborg can be used for navigation, a manual training process is needed to reinforce the desired behaviors (turning left, turning right, and moving forward) by pairing the behaviors with the  ... 
doi:10.1155/2016/6459251 pmid:27436999 pmcid:PMC4942600 fatcat:mq7k4eawhbglhbcml7n3mrk4sa
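The parameterized state machine described in the abstract above can be sketched as a transition table keyed on (training state, camera-sensed behavior). The state and behavior names here are invented for illustration; the paper's actual states and actions differ:

```python
# Hypothetical transition table for an automatic training loop:
# camera-detected behavioral states trigger the next training action.
TRANSITIONS = {
    ("await_trial", "rat_at_junction"): "cue_turn",
    ("cue_turn", "turned_correctly"): "deliver_reward",
    ("cue_turn", "turned_wrong"): "repeat_cue",
    ("repeat_cue", "turned_correctly"): "deliver_reward",
    ("deliver_reward", "reward_consumed"): "await_trial",
}

def step(state, observed_behavior):
    """Advance the training state machine on a camera-sensed behavior."""
    return TRANSITIONS.get((state, observed_behavior), state)  # stay put if no rule

# One simulated trial: cue, wrong turn, corrected turn, reward, back to start.
state = "await_trial"
for behavior in ["rat_at_junction", "turned_wrong",
                 "turned_correctly", "reward_consumed"]:
    state = step(state, behavior)
```

Keeping the table as plain data makes the machine "parameterized" in the simplest sense: thresholds, cues, and reward contingencies can be swapped without touching the control loop.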

Brain Co-Processors: Using AI to Restore and Augment Brain Function [article]

Rajesh P. N. Rao
2020 arXiv   pre-print
We describe a new framework for developing brain co-processors based on artificial neural networks, deep learning and reinforcement learning.  ...  In this article, we introduce brain co-processors, devices that combine decoding and encoding in a unified framework using artificial intelligence (AI) to supplement or augment brain function.  ... 
arXiv:2012.03378v1 fatcat:5gm3sihnebalfkj36n3fmdmj2a

A common framework for perceptual learning

Aaron R Seitz, Hubert R Dinse
2007 Current Opinion in Neurobiology  
We suggest that the key to learning is to boost stimulus-related activity that is normally insufficient to exceed a learning threshold.  ...  We discuss how factors such as attention and reinforcement have crucial, permissive roles in learning.  ...  Also, paradigms that show evidence of passive learning [22, 23], reinforcement processes in learning [24-26], or how stimulation procedures result in learning [27, 28] are often used as evidence  ... 
doi:10.1016/j.conb.2007.02.004 pmid:17317151 fatcat:2wbqtoh54bakllo26dc4warfqe

Does the Superior Colliculus Control Perceptual Sensitivity or Choice Bias during Attention? Evidence from a Multialternative Decision Framework

Devarajan Sridharan, Nicholas A. Steinmetz, Tirin Moore, Eric I. Knudsen
2017 Journal of Neuroscience  
Here we present and validate a novel decision framework for analyzing behavioral data in multialternative attention tasks.  ...  The findings lead to a testable mechanistic framework of how the midbrain and forebrain networks interact to control spatial attention.  ...  The m-ADC model, developed and validated here with behavioral data, provides a principled framework for ana-  ... 
doi:10.1523/jneurosci.4505-14.2017 pmid:28100734 pmcid:PMC5242403 fatcat:gmtiknxaafftzcntewovbhusgm

Interfacing With the Computational Brain

A. Jackson, E. E. Fetz
2011 IEEE transactions on neural systems and rehabilitation engineering  
These concepts also provide a framework for understanding the improvements in performance seen in myoelectric-controlled interface (MCI) and brain-machine interface (BMI) paradigms.  ...  Neuroscience is just beginning to understand the neural computations that underlie our remarkable capacity to learn new motor tasks.  ... 
doi:10.1109/tnsre.2011.2158586 pmid:21659037 pmcid:PMC3372096 fatcat:bc55bvizdvgn7ag7dk7pqc67j4

Learning where to look for a hidden target

L. Chukoskie, J. Snider, M. C. Mozer, R. J. Krauzlis, T. J. Sejnowski
2013 Proceedings of the National Academy of Sciences of the United States of America  
Learning trajectories were well characterized by a simple reinforcement-learning (RL) model that maintained and continually updated a reward map of locations.  ...  Subjects use these learned associations as well as other context-based experience, such as stimulus probability, and past rewards and penalties (25-27) to hone the aim of a saccadic  ... 
doi:10.1073/pnas.1301216110 pmid:23754404 pmcid:PMC3690606 fatcat:uvivo3iotrbhfapc45u5kttd2q
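The reward-map RL model described in the snippet above admits a very small sketch: a delta-rule value update for each fixated location, plus softmax action selection over the current map. Grid size, learning rate, temperature, and target location are all assumptions for illustration, not the paper's fitted parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
grid = 5                                # hypothetical 5x5 search grid
reward_map = np.zeros(grid * grid)      # learned value of each location
target = 12                             # hidden target cell (assumption)
alpha, beta = 0.3, 3.0                  # learning rate, softmax inverse temperature

for trial in range(300):
    # Softmax over the current reward map decides where to look next.
    p = np.exp(beta * reward_map)
    p /= p.sum()
    loc = rng.choice(grid * grid, p=p)
    r = 1.0 if loc == target else 0.0   # reward only at the hidden target
    # Delta-rule update: nudge the map toward the outcome at that location.
    reward_map[loc] += alpha * (r - reward_map[loc])

best_loc = int(reward_map.argmax())
```

Because only the target location ever yields reward, value accumulates there while the rest of the map stays flat, and fixations concentrate on the target over trials, which is the qualitative learning trajectory the study reports.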

The Basal Ganglia's Contributions to Perceptual Decision Making

Long Ding, Joshua I. Gold
2013 Neuron  
Perceptual decision making is a computationally demanding process that requires the brain to interpret incoming sensory information in the context of goals, expectations, preferences, and other factors  ...  These roles probably share common mechanisms with the basal ganglia's other, more well-established functions in motor control, learning, and other aspects of cognition and thus can provide insights into  ...  Evaluation and Learning Since the discovery of reward prediction error signals in the dopaminergic neurons, reinforcement learning-especially the so-called temporal-difference learning processes-has been  ... 
doi:10.1016/j.neuron.2013.07.042 pmid:23972593 pmcid:PMC3771079 fatcat:jab27xwqcbgtvagg2gzpfozq6e