
CURIOUS: Intrinsically Motivated Modular Multi-Goal Reinforcement Learning [article]

Cédric Colas, Pierre Fournier, Olivier Sigaud, Mohamed Chetouani, Pierre-Yves Oudeyer
2019 arXiv   pre-print
In open-ended environments, autonomous learning agents must set their own goals and build their own curriculum through intrinsically motivated exploration.  ...  This paper proposes CURIOUS, an algorithm that leverages 1) a modular Universal Value Function Approximator with hindsight learning to achieve a diversity of goals of different kinds within a unique policy  ...  Here we present CURIOUS 1 , a multi-task and multi-goal reinforcement learning (RL) algorithm that uses intrinsic motivations to efficiently learn a finite set of multi-goal tasks in parallel.  ... 
arXiv:1810.06284v4 fatcat:ivksmu4qr5hqvnnjvyubbvsxjq
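The hindsight-learning mechanism mentioned in this abstract can be illustrated with a minimal sketch (the data layout and the sparse 0/1 reward below are my assumptions, not the CURIOUS implementation): a trajectory that missed its original goal is relabeled with the goal it actually achieved, so the sparse reward becomes informative.

```python
def relabel_with_hindsight(transitions):
    """Replace each transition's goal with the finally achieved state
    and recompute a sparse 0/1 reward against that substitute goal."""
    achieved_goal = transitions[-1]["achieved"]
    relabeled = []
    for t in transitions:
        new_t = dict(t)
        new_t["goal"] = achieved_goal
        new_t["reward"] = 1.0 if t["achieved"] == achieved_goal else 0.0
        relabeled.append(new_t)
    return relabeled

# A trajectory that missed its original goal (2, 2)...
traj = [
    {"goal": (2, 2), "achieved": (0, 1), "reward": 0.0},
    {"goal": (2, 2), "achieved": (1, 1), "reward": 0.0},
]
# ...is turned into a success for the goal it did reach, (1, 1).
relabeled = relabel_with_hindsight(traj)
```

Relabeling is what lets a single goal-conditioned policy extract learning signal even from failed episodes.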

Is Curiosity All You Need? On the Utility of Emergent Behaviours from Curious Exploration [article]

Oliver Groth, Markus Wulfmeier, Giulia Vezzani, Vibhavari Dasagi, Tim Hertweck, Roland Hafner, Nicolas Heess, Martin Riedmiller
2021 arXiv   pre-print
Instead, we propose to shift the focus towards retaining the behaviours which emerge during curiosity-based learning.  ...  However, as the agent learns to reach previously unexplored spaces and the objective adapts to reward new areas, many behaviours emerge only to disappear due to being overwritten by the constantly shifting  ...  in unsupervised reinforcement learning.  ... 
arXiv:2109.08603v1 fatcat:umza7yt35vbdvfdz3agr3v4dre

Toward Computational Motivation for Multi-Agent Systems and Swarms

Md Mohiuddin Khan, Kathryn Kasmarik, Michael Barlow
2018 Frontiers in Robotics and AI  
However, there are only a few works that focus on motivation theories in multi-agent or swarm settings.  ...  Computer scientists have proposed various computational models of motivation for artificial agents, with the aim of building artificial agents capable of autonomous goal generation.  ...  Barto (2013) provides an overview of intrinsic motivation with regard to Reinforcement Learning (RL).  ... 
doi:10.3389/frobt.2018.00134 pmid:33501012 pmcid:PMC7806096 fatcat:afbsmnodtbdgvhgyzsbykdwniu

Functions and Mechanisms of Intrinsic Motivations [chapter]

Marco Mirolli, Gianluca Baldassarre
2012 Intrinsically Motivated Learning in Natural and Artificial Systems  
Different kinds of intrinsic motivations have been proposed both in psychology and in machine learning and robotics: some are based on the knowledge of the learning system, while others are based on its  ...  This capacity crucially depends on the presence of intrinsic motivations, i.e. motivations that are not directly related to an organism's survival and reproduction but rather to its ability to learn.  ...  Intrinsic motivations and skills accumulation Hierarchy and modularity of skill organization A multi-task learning system seems to need two key ingredients: (a) some form of structural modularity, where  ... 
doi:10.1007/978-3-642-32375-1_3 fatcat:27i2g3ddvjdlxproqex2q5pow4

A Novel Approach to Curiosity and Explainable Reinforcement Learning via Interpretable Sub-Goals [article]

Connor van Rossum, Candice Feinberg, Adam Abu Shumays, Kyle Baxter, Benedek Bartha
2021 arXiv   pre-print
Two key challenges within Reinforcement Learning involve improving (a) agent learning within environments with sparse extrinsic rewards and (b) the explainability of agent actions.  ...  We describe a curious, subgoal-focused agent to address both these challenges.  ...  Adversarially Motivated Intrinsic Goals (AMIGo) [18] sees a teacher network generate goal positions for a student network to learn to reach.  ... 
arXiv:2104.06630v2 fatcat:bfcplwdg2zhwdnwkunaou5ysmi

Autonomous learning of abstractions using Curiosity-Driven Modular Incremental Slow Feature Analysis

Varun Raj Kompella, Matthew Luciw, Marijn Stollenga, Leo Pape, Jurgen Schmidhuber
2012 2012 IEEE International Conference on Development and Learning and Epigenetic Robotics (ICDL)  
We propose a modular, curiosity-driven learning system that autonomously learns multiple abstract representations.  ...  The policy to build the library of abstractions is adapted through reinforcement learning, and the corresponding abstractions are learned through incremental slow-feature analysis (IncSFA).  ...  The learning progress of the encoder becomes an intrinsic reward for the reinforcement learner [14]. The learning module in Curious Dr.  ... 
doi:10.1109/devlrn.2012.6400829 dblp:conf/icdl-epirob/KompellaLSPS12 fatcat:r7qvmwvamfdvznjytw4kcqbgcy
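The snippet above describes the learning progress of an encoder serving as intrinsic reward. A generic sketch of that idea (the window size and error statistics are my assumptions, not IncSFA itself) rewards a module whenever its recent prediction error falls below its older error:

```python
from collections import deque

class LearningProgressReward:
    """Intrinsic reward = drop in a module's rolling prediction error
    (a generic sketch of the learning-progress idea, not IncSFA)."""
    def __init__(self, window=5):
        self.errors = deque(maxlen=2 * window)
        self.window = window

    def reward(self, new_error):
        self.errors.append(new_error)
        if len(self.errors) < 2 * self.window:
            return 0.0  # not enough history yet
        older = list(self.errors)[:self.window]
        recent = list(self.errors)[-self.window:]
        # Positive when error is falling, i.e. the module is learning.
        return sum(older) / self.window - sum(recent) / self.window

# Steadily decreasing errors yield a positive intrinsic reward.
lp = LearningProgressReward(window=2)
rewards = [lp.reward(e) for e in [1.0, 0.8, 0.6, 0.4]]
```

Rewarding progress rather than raw error keeps the agent from fixating on inputs that are either trivially predictable or unlearnable noise.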

Curiosity driven reinforcement learning for motion planning on humanoids

Mikhail Frank, Jürgen Leitner, Marijn Stollenga, Alexander Förster, Jürgen Schmidhuber
2014 Frontiers in Neurorobotics  
Most previous work on artificial curiosity (AC) and intrinsic motivation focuses on basic concepts and theory.  ...  Our novel reinforcement learning (RL) framework consists of a state-of-the-art, low-level, reactive control layer, which controls the iCub while respecting constraints, and a high-level curious agent,  ...  The modular, parallel, multi-agent configuration of this second experiment is designed to address the question: "Can curious MDP planners scale to intelligently control the entire iCub robot?"  ... 
doi:10.3389/fnbot.2013.00025 pmid:24432001 pmcid:PMC3881010 fatcat:y5hhoqmkgnhnrn6klmum4esfie

Intrinsically Motivated Goal Exploration Processes with Automatic Curriculum Learning [article]

Sébastien Forestier, Rémy Portelas, Yoan Mollard, Pierre-Yves Oudeyer
2022 arXiv   pre-print
Intrinsically motivated spontaneous exploration is a key enabler of autonomous developmental learning in human children.  ...  We present an algorithmic approach called Intrinsically Motivated Goal Exploration Processes (IMGEP) to enable similar properties of autonomous learning in machines.  ...  This is a particular implementation of the IMGEP architecture using reinforcement learning techniques, with a unique monolithic (multi-task multi-goal) policy network that learns from a replay buffer  ... 
arXiv:1708.02190v3 fatcat:5q7l254xezbjngaoocghg66chu

Intrinsically Motivated Acquisition of Modular Slow Features for Humanoids in Continuous and Non-Stationary Environments [article]

Varun Raj Kompella, Laurenz Wiskott
2017 arXiv   pre-print
Therefore, learning multiple sets of spatially or temporally local, modular abstractions of the inputs would be beneficial. How can a robot learn these local abstractions without a teacher?  ...  The former is used to make the robot self-motivated to explore by rewarding itself whenever it makes progress learning an abstraction; the latter is used to update the abstraction by extracting slowly varying  ...  In the absence of external supervision, how can the agent be motivated to learn these abstractions? The agent would need to be intrinsically motivated.  ... 
arXiv:1701.04663v1 fatcat:32heszx3mzdr7ko6fvutjissc4

An intrinsic value system for developing multiple invariant representations with incremental slowness learning

Matthew Luciw, Varun Kompella, Sohrob Kazerounian, Juergen Schmidhuber
2013 Frontiers in Neurorobotics  
Curiosity Driven Modular Incremental Slow Feature Analysis (CD-MISFA) is a recently introduced model of intrinsically-motivated invariance learning.  ...  CD-MISFA combines 1. unsupervised representation learning through the slowness principle, 2. generation of an intrinsic reward signal through learning progress of the developing features, and 3. balancing  ... 
doi:10.3389/fnbot.2013.00009 pmid:23755011 pmcid:PMC3667249 fatcat:xvdfkdedifd3djqy263nvw6y3y

A Survey of Exploration Methods in Reinforcement Learning [article]

Susan Amin, Maziar Gomrokchi, Harsh Satija, Herke van Hoof, Doina Precup
2021 arXiv   pre-print
Reinforcement learning agents depend crucially on exploration to obtain informative data for the learning process as the lack of enough information could hinder effective learning.  ...  Exploration is an essential component of reinforcement learning algorithms, where agents need to learn how to predict and control unknown and often stochastic environments.  ...  can be added to the curious reinforcement.  ... 
arXiv:2109.00157v2 fatcat:dlqhzwxscnfbxpt2i6rp7ovp6i

The strategic student approach for life-long exploration and learning

Manuel Lopes, Pierre-Yves Oudeyer
2012 2012 IEEE International Conference on Development and Learning and Epigenetic Robotics (ICDL)  
Recent developments in machine learning, mainly from active learning and multi-armed bandits, have started to contribute to a formal view on the complexity of learning agents that choose their own samples:  ...  Then, we show an algorithm, based on multi-armed bandit techniques, that allows empirical online evaluation of learning progress and approximates the optimal solution under more general conditions.  ...  This algorithm was motivated by well-established algorithms from the adversarial multi-armed bandit setting [13], [29], as well as by experimental investigations of intrinsic motivation systems  ... 
doi:10.1109/devlrn.2012.6400807 dblp:conf/icdl-epirob/LopesO12 fatcat:tftcf42uqnhnfc4w6oh2jm33iq
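The bandit-based task selection described here can be sketched as a simple epsilon-greedy rule over per-task learning-progress estimates (a deliberately simplified stand-in for the adversarial-bandit weight updates the paper actually builds on; all names are illustrative):

```python
import random

def choose_task(progress, epsilon=0.1, rng=random):
    """Pick a task index: explore uniformly with probability epsilon,
    otherwise exploit the task with the largest absolute learning
    progress (progress may be negative when performance degrades)."""
    if rng.random() < epsilon:
        return rng.randrange(len(progress))
    return max(range(len(progress)), key=lambda i: abs(progress[i]))
```

With epsilon = 0 the rule is purely greedy over learning progress; the adversarial-bandit formulation replaces this fixed exploration rate with adaptive sampling weights.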

Semantic Curiosity for Active Visual Learning [article]

Devendra Singh Chaplot, Helen Jiang, Saurabh Gupta, Abhinav Gupta
2020 arXiv   pre-print
Given a set of environments (and some labeling budget), our goal is to learn an object detector by having an agent select what data to obtain labels for.  ...  In this paper, we study the task of embodied interactive learning for object detection.  ...  Inspired by recent work in intrinsic motivation and curiosity for training policies without external rewards [37, 38], we propose a new intrinsic reward called semantic curiosity that can be used for  ... 
arXiv:2006.09367v1 fatcat:gw7hu3rje5cjppw7tfkhyxj3lq

Building Intelligent Autonomous Navigation Agents [article]

Devendra Singh Chaplot
2021 arXiv   pre-print
In the second part, we present a new class of navigation methods based on modular learning and structured explicit map representations, which leverage the strengths of both classical and end-to-end learning  ...  The goal of this thesis is to make progress towards designing algorithms capable of 'physical intelligence', i.e. building intelligent autonomous navigation agents capable of learning to perform complex  ...  of multi-task and zero-shot reinforcement learning.  ... 
arXiv:2106.13415v1 fatcat:5x5g64rd2rfvnmttcz5y7qvium

An evolutionary cognitive architecture made of a bag of networks

Alexander W. Churchill, Chrisantha Fernando
2014 Evolutionary Intelligence  
A cognitive architecture is presented for modelling some properties of sensorimotor learning in infants, namely the ability to accumulate adaptations and skills over multiple tasks in a manner which allows  ...  The nodes used consist of dynamical systems such as dynamic movement primitives, continuous time recurrent neural networks and high-level supervised and unsupervised learning algorithms.  ...  However, in all these examples, the goals or tasks are set by the external user; even curious robots' goals are set, i.e. go and learn.  ... 
doi:10.1007/s12065-014-0121-7 fatcat:5pp7z22psrhrlcmiqu6vavwqnq