24,630 Hits in 3.4 sec

Against memory systems

D. Gaffan
2002 Philosophical Transactions of the Royal Society of London. Biological Sciences  
of memory trace could be stored in the temporal lobe.  ...  The widely accepted inference from this observation is that the medial temporal cortex, including the hippocampal, entorhinal and perirhinal cortex, contains a memory system or multiple memory systems,  ...  Cortical localization of function is not arbitrary but hierarchical. The hypothesis we began from, namely that the function of the medial temporal cortex is memory, is an example of the products of a widespread  ...
doi:10.1098/rstb.2002.1110 pmid:12217178 pmcid:PMC1693020 fatcat:du2c336yjjgsdl3ud65mnaw4ym

Hierarchical Temporal Memory with Reinforcement Learning

Eduard Nugamanov, Aleksandr I. Panov
2020 Procedia Computer Science  
Nevertheless, both in the neocortex and in hierarchical temporal memory, an image is recognized by its parts.  ...  Hierarchical Temporal Memory Model: a region implemented on the NuPIC platform differs slightly from a theoretical region.  ...
doi:10.1016/j.procs.2020.02.123 fatcat:tv5fej2m35dt5kuw7xqp2uuiwm

Hierarchical Deep Reinforcement Learning: Integrating Temporal Abstraction and Intrinsic Motivation [article]

Tejas D. Kulkarni, Karthik R. Narasimhan, Ardavan Saeedi, Joshua B. Tenenbaum
2016 arXiv   pre-print
We present hierarchical-DQN (h-DQN), a framework to integrate hierarchical value functions, operating at different temporal scales, with intrinsically motivated deep reinforcement learning.  ...  Hernandez-Gardiol and Mahadevan [19] combined hierarchical RL with a variable length short-term memory of high-level decisions.  ...  A collection of these policies can be hierarchically arranged with temporal dynamics for learning or planning within the framework of semi-Markov decision processes [48, 49] .  ... 
arXiv:1604.06057v2 fatcat:p33suojusrcpfpg4ybc4hrfj6y
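The hierarchical value-function idea in the h-DQN abstract above — a meta-controller choosing subgoals on a slow timescale, and a controller choosing primitive actions toward the current subgoal — can be sketched as follows. The class, the tabular Q-dictionaries, and the epsilon-greedy details are illustrative assumptions, not the paper's deep-network implementation.

```python
import random

class HDQNSketch:
    """Toy two-level agent: a meta-controller picks subgoals on a slow
    timescale from extrinsic reward; a controller picks primitive actions
    to reach the current subgoal, driven by an intrinsic reward."""

    def __init__(self, subgoals, actions):
        self.subgoals = subgoals
        self.actions = actions
        self.q_meta = {}   # Q(state, subgoal), learned from extrinsic reward
        self.q_ctrl = {}   # Q((state, subgoal), action), intrinsic reward

    def pick_subgoal(self, state, eps=0.1):
        # epsilon-greedy over subgoals (the meta-controller's slow decision)
        if random.random() < eps:
            return random.choice(self.subgoals)
        return max(self.subgoals,
                   key=lambda g: self.q_meta.get((state, g), 0.0))

    def pick_action(self, state, subgoal, eps=0.1):
        # epsilon-greedy over primitive actions, conditioned on the subgoal
        if random.random() < eps:
            return random.choice(self.actions)
        return max(self.actions,
                   key=lambda a: self.q_ctrl.get(((state, subgoal), a), 0.0))
```

The key structural point matches the abstract: the two value functions operate at different temporal scales, with the controller's reward defined by subgoal attainment rather than the environment.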

Learning Representations in Model-Free Hierarchical Reinforcement Learning

Jacob Rafati, David C. Noelle
2019 PROCEEDINGS OF THE THIRTIETH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE AND THE TWENTY-EIGHTH INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE  
Hierarchical Reinforcement Learning (HRL) methods attempt to address this scalability issue by learning action selection policies at multiple levels of temporal abstraction.  ...  We present a novel model-free method for subgoal discovery using incremental unsupervised learning over a small memory of the most recent experiences of the agent.  ...  Hierarchical Reinforcement Learning (HRL) is an important computational approach intended to tackle problems of scale by learning to operate over different levels of temporal abstraction (Sutton, Precup  ... 
doi:10.1609/aaai.v33i01.330110009 fatcat:6fcb3dzfuvdevab7i6v5gito5y

Automated State Abstraction for Options using the U-Tree Algorithm

Anders Jonsson, Andrew G. Barto
2000 Neural Information Processing Systems  
An agent can learn to choose between various temporally abstract actions, each solving an assigned subtask, to accomplish the overall task.  ...  In this paper, we study hierarchical learning using the framework of options.  ...  One motivation for using temporally abstract actions is that they can be used to exploit the hierarchical structure of a problem.  ... 
dblp:conf/nips/JonssonB00 fatcat:vcostp3iezgephe2stknze3fva
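The options framework referenced in this and several other results above formalizes a temporally abstract action as a triple: an initiation set, an intra-option policy, and a termination condition. A minimal rendering; the doorway example is a hypothetical illustration, not from the paper.

```python
from dataclasses import dataclass
from typing import Callable, Set

@dataclass
class Option:
    """A temporally abstract action in the options framework:
    initiation set I, intra-option policy pi, termination condition beta."""
    initiation: Set[int]                 # states where the option may start
    policy: Callable[[int], str]         # maps state -> primitive action
    termination: Callable[[int], float]  # state -> probability of stopping

    def can_start(self, state: int) -> bool:
        return state in self.initiation

# e.g. an option that walks "right" until it reaches a doorway state 7
go_to_door = Option(
    initiation={s for s in range(7)},
    policy=lambda s: "right",
    termination=lambda s: 1.0 if s == 7 else 0.0,
)
```

An agent choosing among such options is exactly the setting the snippet describes: each option solves an assigned subtask, and the hierarchy emerges from composing them.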

Efficient Exploration through Intrinsic Motivation Learning for Unsupervised Subgoal Discovery in Model-Free Hierarchical Reinforcement Learning [article]

Jacob Rafati, David C. Noelle
2019 arXiv   pre-print
Efficient exploration for automatic subgoal discovery is a challenging problem in Hierarchical Reinforcement Learning (HRL).  ...  We introduce a model-free subgoal discovery method based on unsupervised learning over a limited memory of the agent's experiences during intrinsic motivation.  ...  In the next time step, the agent receives a reward r_{t+1} = r and the next state s_{t+1} = s', and stores its direct experiences with the environment in an experience memory, D. Actions continue to be selected  ...
arXiv:1911.10164v1 fatcat:vb3txi4lljd5jczvz4zjyaeldm
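The step loop in the snippet — receive r_{t+1} and s_{t+1}, then store the transition in an experience memory D — is the standard replay-buffer pattern. A minimal sketch; the bounded capacity and the sampling interface are my assumptions, not details from the paper.

```python
from collections import deque
import random

class ExperienceMemory:
    """Bounded memory D of (s, a, r, s') transitions, sampled for learning."""

    def __init__(self, capacity=10000):
        self.buffer = deque(maxlen=capacity)  # oldest entries evicted first

    def store(self, s, a, r, s_next):
        self.buffer.append((s, a, r, s_next))

    def sample(self, batch_size):
        return random.sample(self.buffer, min(batch_size, len(self.buffer)))

D = ExperienceMemory(capacity=3)
for t in range(5):
    D.store(s=t, a="right", r=0.0, s_next=t + 1)
# capacity is 3, so only the last three transitions remain
```

Bounding the memory matters here: the subgoal-discovery method in these papers deliberately operates over a *limited* window of recent experience rather than the full history.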

Evolving Hierarchical Memory-Prediction Machines in Multi-Task Reinforcement Learning [article]

Stephen Kelly, Tatiana Voegerl, Wolfgang Banzhaf, Cedric Gondro
2021 arXiv   pre-print
We show that emergent hierarchical structure in the evolving programs leads to multi-task agents that succeed by performing a temporal decomposition and encoding of the problem environments in memory.  ...  A fundamental aspect of behaviour is the ability to encode salient features of experience in memory and use these memories, in combination with current sensory information, to predict the best action for  ...
arXiv:2106.12659v1 fatcat:46w34573jvgmjc66fungphcad4

Perception-Prediction-Reaction Agents for Deep Reinforcement Learning [article]

Adam Stooke, Valentin Dalibard, Siddhant M. Jayakumar, Wojciech M. Czarnecki, Max Jaderberg
2020 arXiv   pre-print
We employ a temporal hierarchy, using a slow-ticking recurrent core to allow information to flow more easily over long time spans, and three fast-ticking recurrent cores with connections designed to create  ...  We introduce a new recurrent agent architecture and associated auxiliary losses which improve reinforcement learning in partially observable tasks requiring long-term memory.  ...  Minimal Temporally Hierarchical Agent: temporal hierarchy promises to further improve the processing of long sequences by dividing responsibilities for short- and long-term memory over different recurrent  ...
arXiv:2006.15223v1 fatcat:q6ryxrrarfbd5bayupan7lg3bi
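The slow-ticking/fast-ticking core arrangement described in the abstract can be sketched with plain tanh recurrences: the slow core updates only every k steps, and the fast core reads the slow core's held state on every step. The tick period, sizes, and random weights here are illustrative, not the paper's architecture.

```python
import numpy as np

def run_temporal_hierarchy(inputs, slow_period=4, hidden=8, seed=0):
    """Slow core updates every `slow_period` steps; the fast core updates
    every step, conditioned on the slow core's (held) state."""
    rng = np.random.default_rng(seed)
    n_in = inputs.shape[1]
    W_slow = rng.normal(scale=0.1, size=(hidden, hidden + n_in))
    W_fast = rng.normal(scale=0.1, size=(hidden, 2 * hidden + n_in))
    h_slow = np.zeros(hidden)
    h_fast = np.zeros(hidden)
    for t, x in enumerate(inputs):
        if t % slow_period == 0:  # slow tick: long-range information flow
            h_slow = np.tanh(W_slow @ np.concatenate([h_slow, x]))
        # fast tick every step, reading the slow core's held state
        h_fast = np.tanh(W_fast @ np.concatenate([h_fast, h_slow, x]))
    return h_fast

out = run_temporal_hierarchy(np.ones((10, 3)))
```

The design point the abstract makes is visible in the loop: gradients and information need only cross `T / slow_period` slow updates to span a sequence of length `T`, easing long-term memory.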

Hierarchical Behaviours: Getting the Most Bang for Your Bit [chapter]

Sander G. van Dijk, Daniel Polani, Chrystopher L. Nehaniv
2011 Lecture Notes in Computer Science  
Hierarchical structuring of behaviour is prevalent in natural and artificial agents and can be shown to be useful for learning and performing tasks.  ...  To progress systematic understanding of these benefits we study the effect of hierarchical architectures on the required information processing capability of an optimally acting agent.  ...  found using a hierarchical, memory-less policy per option.  ... 
doi:10.1007/978-3-642-21314-4_43 fatcat:4j652vvlcbanraz4v6cfpm45tm

Learning, memory and consolidation mechanisms for behavioral control in hierarchically organized cortico‐basal ganglia systems

Silviu I. Rusu, Cyriel M. A. Pennartz
2019 Hippocampus  
We propose that frontal corticothalamic circuits form a high-level loop for memory processing that initiates and temporally organizes nested activities in lower-level loops, including the hippocampus and  ...  This structure for behavioral organization requires alignment with mechanisms for memory formation and consolidation.  ...  -mPFC projections differentially contribute to temporal order judgment and spatial memory, respectively.  ... 
doi:10.1002/hipo.23167 pmid:31617622 fatcat:elty4qd4enhz3kttul5scagyom

Learning Representations in Model-Free Hierarchical Reinforcement Learning [article]

Jacob Rafati, David C. Noelle
2019 arXiv   pre-print
Hierarchical Reinforcement Learning (HRL) methods attempt to address this scalability issue by learning action selection policies at multiple levels of temporal abstraction.  ...  In this paper, we present a novel model-free method for subgoal discovery using incremental unsupervised learning over a small memory of the most recent experiences (trajectories) of the agent.  ...  Thus, the temporal order of subgoals is another dimension of hierarchy. Hierarchical Reinforcement Learning Subproblems The rooms task has both clear skills and clear subgoals.  ... 
arXiv:1810.10096v3 fatcat:x6s77g3u4rhn7blr2zlcfkyghu
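Rafati and Noelle's subgoal discovery applies incremental unsupervised learning (clustering, together with anomaly detection) to a small memory of recent trajectories. A toy 1-D k-means sketch of the clustering half, with cluster centroids serving as candidate subgoals; all parameters here are illustrative, and this is not the paper's exact algorithm.

```python
import random

def discover_subgoals(states, k=2, iters=20, seed=0):
    """Toy k-means over recent visited states (1-D here); the cluster
    centroids serve as candidate subgoals."""
    rng = random.Random(seed)
    centroids = rng.sample(states, k)
    for _ in range(iters):
        clusters = {i: [] for i in range(k)}
        for s in states:
            # assign each remembered state to its nearest centroid
            i = min(range(k), key=lambda j: abs(s - centroids[j]))
            clusters[i].append(s)
        # recompute centroids; keep the old one if a cluster is empty
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in clusters.items()]
    return sorted(centroids)

# two clumps of visited states -> two candidate subgoals near 1 and 10
memory = [0, 1, 2, 9, 10, 11]
print(discover_subgoals(memory))
```

In the rooms task the snippet mentions, such centroids land near frequently visited regions, while anomaly detection (not sketched here) flags rare, rewarding transitions like doorways.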

A hierarchy of time scales supports unsupervised learning of behavioral sequences

Samuel P Muscinelli, Wulfram Gerstner
2015 BMC Neuroscience  
This approach leads to the development of temporal receptive fields associated with subparts of the desired sequence, with hierarchical levels of detail.  ...  Our model constitutes a promising approach to temporal learning, showing how an appropriate neural substrate with a hierarchy of time scales can lead, even without any error or reward signal, to the learning  ...
doi:10.1186/1471-2202-16-s1-p78 pmcid:PMC4697588 fatcat:w6bsyprhabbmxlrppxn2lc6vc4

Future Prediction with Hierarchical Episodic Memories under Deterministic and Stochastic Environments [chapter]

Yoshito Aota, Yoshihiro Miyake
2012 Lecture Notes in Computer Science  
Here, we suggest incorporating hierarchical episodic memories into the model.  ...  In agreement with Bond's suggestion, we consider that episodic memories are hierarchized autonomously by a simple rule. In this research, our model solves maze tasks.  ...  Introduction: as Tulving [1] described, one of the properties of episodic memory is temporal organization.  ...
doi:10.1007/978-3-642-34475-6_31 fatcat:fnls3wp7ufhlxopz6rjvr4dgn4

Learning by Example in the Hippocampus

Joseph O'Neill, Jozsef Csicsvari
2014 Neuron  
In this issue of Neuron, McKenzie et al. (2014) demonstrate that the hippocampus rapidly forms interrelated, hierarchical memory representations to support schema-based learning.  ...  Considering that object valence was expressed last, one may expect that the temporal expression of assemblies may reflect planning and decision processes, which ultimately require the recall of object-reward  ... 
doi:10.1016/j.neuron.2014.06.013 pmid:24991951 fatcat:b2rfnxjm75atveayld3h2e437u

Crossmodal Attentive Skill Learner [article]

Shayegan Omidshafiei, Dong-Ki Kim, Jason Pazis, Jonathan P. How
2018 arXiv   pre-print
., 2017] to enable hierarchical reinforcement learning across multiple sensory inputs.  ...  Temporal abstraction enables exploitation of domain regularities to provide the agent hierarchical guidance in the form of options or sub-goals [21, 40] .  ...  "non-hierarchical" in the table.  ... 
arXiv:1711.10314v3 fatcat:jjj4uyxlsjarbpvngwdpkmslze
Showing results 1 — 15 out of 24,630 results