Reinforcement Learning for Modeling Large-Scale Cognitive Reasoning

Ying Zhao, Emily Mooren, Nate Derbinsky
2017 Proceedings of the 9th International Joint Conference on Knowledge Discovery, Knowledge Engineering and Knowledge Management  
Soar is a cognitive architecture that can be used to model complex reasoning, cognitive functions, and decision making for warfighting processes like the ones in a kill chain.  ...  In this paper, we present a feasibility study of Soar, and in particular the reinforcement learning (RL) module, for optimal decision making using existing expert systems and smart data.  ...  ACKNOWLEDGEMENTS Thanks to the Naval Postgraduate School's Naval Research Program for funding this research. Thanks to Mr.  ... 
doi:10.5220/0006508702330238 dblp:conf/ic3k/ZhaoMD17 fatcat:odfo3mr6lbbc3dcpiigfykpi34

Research on the Brain-inspired Cross-modal Neural Cognitive Computing Framework [article]

Yang Liu
2018 arXiv   pre-print
The Multimedia Neural Cognitive Computing (MNCC) model was designed based on the nervous mechanism and cognitive architecture.  ...  Furthermore, the semantic-oriented hierarchical Cross-modal Neural Cognitive Computing (CNCC) framework was proposed based on the MNCC model, and a formal description and analysis of the CNCC framework were given  ...  incremental computation and feedback of reinforcement learning based on object-oriented and multi-scale processing, including two dynamic processes as follows: (3.1) It realizes multi-scale feedback computation  ... 
arXiv:1805.01385v2 fatcat:7zj5oejdxrelbmokc37zhxex5y

Negative probabilities and counter-factual reasoning in quantum cognition

J Acacio de Barros, G Oas
2014 Physica Scripta  
We show that negative probabilities impose constraints on what types of counter-factual reasoning we can make with respect to (quantum) internal representations of the decision maker.  ...  Though it has decreased in importance in current psychology, we chose to model SR theory for the following reasons.  ...  For example, in our model, many parameters, such as time of response, frequency of oscillations, coupling strengths, etc., were fixed based on reasonable assumptions.  ... 
doi:10.1088/0031-8949/2014/t163/014008 fatcat:k45nmqcj25aync3eokx6otmg74
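
The constraint mentioned in this abstract can be made concrete with a standard three-observable example (a generic textbook construction, not necessarily the one the authors analyse). Take three ±1-valued quantities whose pairs are all perfectly anticorrelated; no ordinary joint distribution can realise this, which is exactly the situation where signed (negative) quasi-probabilities enter and classical counter-factual reasoning breaks down:

```latex
% Illustrative only: assume pairwise expectations
\[
  \langle XY \rangle \;=\; \langle YZ \rangle \;=\; \langle XZ \rangle \;=\; -1 .
\]
% Any classical joint assignment x, y, z \in \{-1, +1\} would satisfy
\[
  (xy)\,(yz)\,(xz) \;=\; (xyz)^2 \;=\; +1 ,
\]
% while perfect pairwise anticorrelation forces the same product to equal
% (-1)^3 = -1. Hence no proper joint distribution reproduces all three
% pairwise marginals; a signed ("negative") quasi-probability is needed, and
% counter-factual claims about jointly unmeasured values are constrained.
```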

A world survey of artificial brain projects, Part II: Biologically inspired cognitive architectures

Ben Goertzel, Ruiting Lian, Itamar Arel, Hugo de Garis, Shuo Chen
2010 Neurocomputing  
On the other hand, IM-CLEVER's use of reinforcement learning follows Schmidhuber's earlier work on RL for cognitive robotics [32, 33], Barto's work on intrinsically motivated reinforcement learning [34  ..., and reinforcement learning.  ...  His research interests include theoretical and practical aspects of machine learning, biologically inspired cognitive architectures, and high-performance computing.  ... 
doi:10.1016/j.neucom.2010.08.012 fatcat:hlvx44gvcrcfniu34miufwnwsu

Higher-level Knowledge, Rational and Social Levels Constraints of the Common Model of the Mind

Antonio Lieto, William G. Kennedy, Christian Lebiere, Oscar J. Romero, Niels Taatgen, Robert L. West
2018 Procedia Computer Science  
We present the input to the discussion about the computational framework known as Common Model of Cognition (CMC) from the working group dealing with the knowledge/rational/social levels.  ...  Kralik for their feedback. The first author acknowledges the support by a MIUR RTD-A grant from the University of Turin, Department of Computer Science.  ... 
doi:10.1016/j.procs.2018.11.033 fatcat:rzbnxkroprgmzfk4eolq46jfki

Connectionist Models of Reinforcement, Imitation, and Instruction in Learning to Solve Complex Problems

F. Dandurand, T.R. Shultz
2009 IEEE Transactions on Autonomous Mental Development  
Humans and models were subjected to three learning regimes: reinforcement, imitation, and instruction.  ...  We modeled learning by reinforcement (rewards) using SARSA, a softmax selection criterion and a neural network function approximator; learning by imitation using supervised learning in a neural network  ...  ACKNOWLEDGMENT The authors would like to thank François Rivest for his insightful comments and suggestions. They also thank Kristine H. Onishi for feedback on an early version of the manuscript.  ... 
doi:10.1109/tamd.2009.2031234 fatcat:uqzm4qowq5hm3g7uiecmyok4lq
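
The recipe named in this abstract (SARSA with a softmax selection criterion) can be sketched as follows. This is a minimal tabular illustration, not the authors' code: the paper pairs SARSA with a neural-network function approximator, and the `env` object below (with `reset()` and `step()` returning state, reward, done) is a hypothetical stand-in for their problem-solving task.

```python
# Minimal sketch: tabular SARSA with softmax (Boltzmann) action selection.
import numpy as np

def softmax_policy(q_values, temperature=1.0):
    """Turn a row of Q-values into action probabilities."""
    prefs = q_values / temperature
    prefs = prefs - prefs.max()            # subtract max for numerical stability
    probs = np.exp(prefs)
    return probs / probs.sum()

def sarsa(env, n_states, n_actions, episodes=500,
          alpha=0.1, gamma=0.95, temperature=0.5, seed=0):
    """On-policy update: Q(s,a) += alpha * (r + gamma*Q(s',a') - Q(s,a))."""
    rng = np.random.default_rng(seed)
    q = np.zeros((n_states, n_actions))
    for _ in range(episodes):
        s = env.reset()
        a = rng.choice(n_actions, p=softmax_policy(q[s], temperature))
        done = False
        while not done:
            s_next, reward, done = env.step(a)
            a_next = rng.choice(n_actions, p=softmax_policy(q[s_next], temperature))
            target = reward + (0.0 if done else gamma * q[s_next, a_next])
            q[s, a] += alpha * (target - q[s, a])
            s, a = s_next, a_next
    return q
```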

Impulsivity and predictive control are associated with suboptimal action-selection and action-value learning in regular gamblers

M.S.M. Lim, G. Jocham, L.T. Hunt, T.E.J. Behrens, R.D. Rogers
2015 International Gambling Studies  
Heightened impulsivity and cognitive biases are risk factors for gambling problems.  ...  Here, we modelled the behaviour of eighty-seven community-recruited regular, but not clinically problematic, gamblers during a binary-choice reinforcement-learning game, to characterise the relationships  ...  Reinforcement-learning model We fitted a reinforcement learning model to each participant's choices.  ... 
doi:10.1080/14459795.2015.1078835 pmid:27274706 pmcid:PMC4890653 fatcat:vclfise3szfb3myjdgco2g3p5u
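
"Fitting a reinforcement-learning model to each participant's choices" typically means estimating a learning rate and an inverse temperature by maximum likelihood. The sketch below uses a generic delta-rule model with a softmax choice rule for a two-option task; it is an assumed textbook formulation, not necessarily the exact model reported in the paper.

```python
# Fit a two-armed delta-rule RL model (learning rate alpha, inverse
# temperature beta) to one participant's binary choices by maximum likelihood.
import numpy as np
from scipy.optimize import minimize

def neg_log_likelihood(params, choices, rewards):
    """choices: 0/1 option picked per trial; rewards: payoff received per trial."""
    alpha, beta = params
    q = np.zeros(2)                            # learned action values
    nll = 0.0
    for choice, reward in zip(choices, rewards):
        p = np.exp(beta * q)
        p = p / p.sum()                        # softmax choice probabilities
        nll -= np.log(p[choice] + 1e-12)
        q[choice] += alpha * (reward - q[choice])   # delta-rule value update
    return nll

def fit_participant(choices, rewards):
    result = minimize(neg_log_likelihood, x0=[0.3, 2.0],
                      args=(np.asarray(choices), np.asarray(rewards)),
                      bounds=[(0.001, 1.0), (0.01, 20.0)])
    return result.x                            # fitted (alpha, beta)
```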

Cognitive components underpinning the development of model-based learning

Tracey C.S. Potter, Nessa V. Bryce, Catherine A. Hartley
2017 Developmental Cognitive Neuroscience  
Reinforcement learning theory distinguishes "model-free" learning, which fosters reflexive repetition of previously rewarded actions, from "model-based" learning  ...  Acknowledgements We thank Lindsay Hunter for assistance with data collection, Johannes Decker for assistance with data analysis, and Nicholas Turk-Browne for sharing the statistical learning task.  ... 
doi:10.1016/j.dcn.2016.10.005 pmid:27825732 pmcid:PMC5410189 fatcat:km3rvjh4gzdblhx3j7rddsmxaa
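
The model-free versus model-based distinction this abstract relies on can be stated compactly: a model-free learner repeats actions with high cached value, while a model-based learner evaluates actions by looking ahead through a learned model of the environment. The toy values and transition probabilities below are illustrative placeholders, not data from the study.

```python
# Schematic contrast between cached (model-free) and lookahead (model-based)
# action values; all numbers are made-up placeholders.
cached_q = {"left": 0.4, "right": 0.6}                  # model-free estimates

transition = {"left":  {"stateA": 0.7, "stateB": 0.3},  # learned world model
              "right": {"stateA": 0.3, "stateB": 0.7}}
state_value = {"stateA": 1.0, "stateB": 0.0}

def model_free_value(action):
    return cached_q[action]                             # repeat what paid off

def model_based_value(action, gamma=1.0):
    # Expected value under the learned transition model (one-step lookahead).
    return gamma * sum(p * state_value[s] for s, p in transition[action].items())

for a in ("left", "right"):
    print(a, model_free_value(a), model_based_value(a))
```

Here the two strategies disagree: the cached values favour "right", whereas the lookahead favours "left", which is the kind of dissociation such studies exploit.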

Toward cognitive robotics

John E. Laird, Grant R. Gerhart, Douglas W. Gage, Charles M. Shoemaker
2009 Unmanned Systems Technology XI  
Some of them are also designed so that they meet the constraints of real-time behavior, and scale to large knowledge bases without losing reactivity.  ...  These extensions include episodic memory, semantic memory, reinforcement learning, and mental imagery.  ...  and robot control systems, as it provides a framework for the development of large scale, hierarchically organized doctrine and tactical knowledge.  ... 
doi:10.1117/12.818701 fatcat:yl7tr6o3vjfivk2pdfkivp6wae

Cognitive and Behavioral Correlates of Achievement in a Complex Multi-Player Video Game

Adam M. Large, Benoit Bediou, Sezen Cekic, Yuval Hart, Daphne Bavelier, C. Shawn Green
2019 Media and Communication  
Here, a sample of over 500 players of the MOBA game League of Legends completed a large battery of cognitive tasks.  ...  Over the past 30 years, a large body of research has accrued demonstrating that video games are capable of placing substantial demands on the human cognitive, emotional, physical, and social processing  ...  for the given measure after controlling for age, not for the full model; the direction of the effects has also been standardized for all tasks except the reinforcement learning task such that positive  ... 
doi:10.17645/mac.v7i4.2314 fatcat:a2w4anqvg5cgnn57sjeq5vinvm

How much intelligence is there in artificial intelligence? A 2020 update

Han L.J. van der Maas, Lukas Snoek, Claire E. Stevenson
2021 Intelligence  
We follow with a description of the main techniques these AI breakthroughs were based upon, such as deep learning and reinforcement learning, two techniques that have deep roots in psychology.  ...  For example, are there AI systems that can solve human intelligence tests?  ...  Driven by the desire to scale up reinforcement learning models to complex environments and behaviors, 21st century researchers started to add ANNs to their reinforcement learning models.  ... 
doi:10.1016/j.intell.2021.101548 fatcat:kiuz5begebgv5enceq7vw7lkte

Blending computational and experimental neuroscience

Patricia S. Churchland, Terrence J. Sejnowski
2016 Nature Reviews Neuroscience  
A new conceptual framework for understanding cognitive behaviours based on the dynamical patterns of activity in large populations of neurons is emerging.  ...  One reason this result was surprising is that some psychologists, perhaps inspired by Chomsky's criticism of behaviourism, decried reinforcement learning as far too feeble to accomplish much in the cognitive domain and favoured models for cognition that involved rules, such as the rules of logic.  ... 
doi:10.1038/nrn.2016.114 pmid:30283241 pmcid:PMC6166881 fatcat:tonrszrvynctvghqag4t6266pi

A Goal-oriented Navigation Model based on Multi-scale Place Cells Map

Jia Du, Dewei Wu, Weilong Li, Yang Zhou
2017 International Journal of u- and e- Service, Science and Technology  
To achieve spatial cognition and autonomous navigation for a robot, a goal-oriented navigation model based on a multi-scale place-cell map is proposed, drawing on the biological mechanisms of navigation  ...  Compared with a spatial cognitive model using single-scale place cells, the method not only reflects the multi-scale spatial representation of place cells in the hippocampus, but also learns faster  ...  But too large a value may cause premature convergence, so choosing a proper value is important for learning speed.  ... 
doi:10.14257/ijunesst.2017.10.2.08 fatcat:q7s2xkggd5evpcphrs64qfh7fe
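
A multi-scale place-cell map of the kind this entry describes is usually built from populations of cells with Gaussian firing fields of different widths. The sketch below shows only that representational idea; the field centres, widths, and map sizes are arbitrary illustration values, not parameters from the paper.

```python
# Gaussian place-field activation at several spatial scales (illustrative only).
import numpy as np

def place_cell_activity(position, centers, sigma):
    """Activation of one population of place cells with field width sigma."""
    sq_dist = np.sum((centers - position) ** 2, axis=1)
    return np.exp(-sq_dist / (2.0 * sigma ** 2))

rng = np.random.default_rng(0)
scales = [2.0, 1.0, 0.5]                       # coarse to fine field widths
maps = [rng.uniform(0, 10, size=(16, 2)) for _ in scales]   # 16 cells per scale

position = np.array([3.0, 4.0])
population_code = [place_cell_activity(position, centers, sigma)
                   for centers, sigma in zip(maps, scales)]
```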

Towards learning-to-learn [article]

Benjamin James Lansdell, Konrad Paul Kording
2019 arXiv   pre-print
Here we discuss ideas across machine learning, neuroscience, and cognitive science that matter for the principle of learning-to-learn.  ...  Indeed, cognitive science has long shown that humans learn-to-learn, which is potentially responsible for their impressive learning abilities.  ...  Acknowledgements The authors would like to thank Adam Marblestone, David Rolnick and Timothy Lillicrap for helpful feedback and discussion.  ... 
arXiv:1811.00231v3 fatcat:4g5wbfmiofdtbaa5tvsgu53jiq

An Anarchy of Methods: Current Trends in How Intelligence Is Abstracted in AI

Joel Lehman, Jeff Clune, Sebastian Risi
2014 IEEE Intelligent Systems  
deep learning to more cognitive behavior may prove problematic.  ...  Similarly, researchers focus on different processes for generating intelligence, such as learning through reinforcement, natural evolution, logical inference, and statistics.  ...  A similar need for tractable models motivated Andrew Ng's change in focus from MDP-based reinforcement learning to deep learning.  ... 
doi:10.1109/mis.2014.92 fatcat:khlf4nvqgvfzvcy37rfksuij5q