
A Robust Approach for Continuous Interactive Actor-Critic Algorithms

Cristian Millan-Arias, Bruno Fernandes, Francisco Cruz, Richard Dazeley, Sergio Fernandes
2021 IEEE Access  
INDEX TERMS: continuous interactive reinforcement learning, interactive robust reinforcement learning, reinforcement learning, robust reinforcement learning.  ...  Other approaches, such as robust reinforcement learning, allow the agent to learn the task while acting in a disturbed environment.  ...  INTERACTIVE REINFORCEMENT LEARNING APPROACH FOR CONTINUOUS ACTION SPACE AND DYNAMIC ENVIRONMENTS: In this section, we describe interactive reinforcement learning for continuous spaces.  ... 
doi:10.1109/access.2021.3099071 fatcat:qqp547i2r5ajrhvltfdnkovsne
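Several of the results in this listing build on actor-critic methods for continuous action spaces. As a generic point of reference (this is not the algorithm from the paper above), a minimal sketch of a Gaussian-policy actor-critic on a one-step continuous-action task, where the actor learns the mean of a Gaussian policy and the critic is a running baseline:

```python
import random

def reward(action):
    """One-step continuous-action task: the best action is 2.0."""
    return -(action - 2.0) ** 2

def train(steps=20000, lr=0.01, sigma=0.3, seed=0):
    rng = random.Random(seed)
    mu = 0.0          # actor: mean of a Gaussian policy N(mu, sigma)
    baseline = 0.0    # critic: running estimate of expected reward
    for _ in range(steps):
        a = rng.gauss(mu, sigma)
        r = reward(a)
        # actor update: score-function policy gradient with the critic
        # as a variance-reducing baseline
        mu += lr * (r - baseline) * (a - mu) / sigma**2
        # critic update: exponential moving average of observed reward
        baseline += 0.1 * (r - baseline)
    return mu
```

After training, `mu` settles near the optimal action 2.0; the same score-function update generalizes to state-conditioned policies and learned value-function critics.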

Guest Editorial Introduction to the Special Issue of the IEEE L-CSS on Learning and Control

Giovanni Cherubini, Martin Guay, Sophie Tarbouriech, Kartik Ariyur, Mireille E. Broucke, Subhrakanti Dey, Christian Ebenbauer, Paolo Frasca, Bahman Gharesifard, Antoine Girard, Joao Manoel Gomes da Silva, Lars Grune (+5 others)
2020 IEEE Control Systems Letters  
"Chance-Constrained Control With Lexicographic Deep Reinforcement Learning": Giuseppi and Pietrabissa introduce a lexicographic approach to deep reinforcement learning for chance-constrained control, where  ...  The trend towards a higher level of interaction between the fields of control theory and machine learning, targeting applications to dynamical systems, will continue, leading to further innovations.  ... 
doi:10.1109/lcsys.2020.2986590 fatcat:wx42r4h6ond3dkjntcwsdrmojy

From Multi-agent to Multi-robot: A Scalable Training and Evaluation Platform for Multi-robot Reinforcement Learning [article]

Zhixuan Liang, Jiannong Cao, Shan Jiang, Divya Saxena, Jinlin Chen, Huafeng Xu
2022 arXiv   pre-print
This paper introduces a scalable emulation platform for multi-robot reinforcement learning (MRRL) called SMART to meet this need.  ...  Precisely, SMART consists of two components: 1) a simulation environment that provides a variety of complex interaction scenarios for training and 2) a real-world multi-robot system for realistic performance  ...  Furthermore, we propose three different reinforcement learning-based approaches to learn a cooperative policy for high-level action selection.  ... 
arXiv:2206.09590v1 fatcat:v4axkokrurhlhonipgqz6hcerq

Safe adaptation in multiagent competition [article]

Macheng Shen, Jonathan P. How
2022 arXiv   pre-print
the robustness of the ego-agent's policy.  ...  In multiagent competitive scenarios, agents may have to adapt to new opponents with previously unseen behaviors by learning from the interaction experiences between the ego-agent and the opponent.  ...  with continuous dynamics, within the model-free reinforcement learning setting.  ... 
arXiv:2203.07562v1 fatcat:jc23hg567jerboxajlqcaz74pu

Robust Knowledge Adaptation for Dynamic Graph Neural Networks [article]

Hanjie Li, Changsheng Li, Kaituo Feng, Ye Yuan, Guoren Wang, Hongyuan Zha
2022 arXiv   pre-print
In this paper, we propose AdaNet: a robust knowledge Adaptation framework via reinforcement learning for dynamic graph neural Networks.  ...  To the best of our knowledge, our approach constitutes the first attempt to explore robust knowledge adaptation via reinforcement learning for dynamic graph neural networks.  ...  To address these challenges, we propose AdaNet: a reinforcement learning based robust knowledge Adaptation framework for dynamic graph neural Networks.  ... 
arXiv:2207.10839v1 fatcat:hj6rmxcbxzfmrdbmzr4jiiwzc4

Learning to Explore in Motion and Interaction Tasks [article]

Miroslav Bogdanovic, Ludovic Righetti
2019 arXiv   pre-print
In this paper we present a novel approach for efficient exploration that leverages previously learned tasks.  ...  The approach also enables continuous learning of improved exploration strategies as novel tasks are learned.  ...  CONCLUSION In this paper, we presented a novel approach to learn an exploration process for reinforcement learning using previously learned tasks.  ... 
arXiv:1908.03731v1 fatcat:fdukvgxfvrainnjm3u2vf6re64

A REINFORCEMENT LEARNING ALGORITHM WITH EVOLVING FUZZY NEURAL NETWORKS

Hitesh Shah, M. Gopal
2014 IFAC Proceedings Volumes  
In this paper, a novel on-line sequential learning evolving neuro-fuzzy model design for RL is proposed.  ...  Simulation results have demonstrated that the proposed approach performs well in reinforcement learning problems.  ...  To analyze the DENFIS algorithm for computational cost, accuracy, and robustness, we compare the proposed approach with dynamic fuzzy reinforcement learning approach.  ... 
doi:10.3182/20140313-3-in-3024.00058 fatcat:y4ptlzdqkrfshhal7hwu4biwbu

Godot Reinforcement Learning Agents [article]

Edward Beeching, Jilles Debangoye, Olivier Simonin, Christian Wolf
2021 arXiv   pre-print
We present Godot Reinforcement Learning (RL) Agents, an open-source interface for developing environments and agents in the Godot Game Engine.  ...  We provide a standard Gym interface, with wrappers for learning in the Ray RLlib and Stable Baselines RL frameworks.  ...  Deep Reinforcement Learning Reinforcement Learning approaches provide the ability to learn in sequential decision making problems, where the objective is to maximize accumulated reward.  ... 
arXiv:2112.03636v1 fatcat:ekc7xvtmdzddjaklg5ffumxtde
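The Godot RL Agents entry above mentions a standard Gym interface with wrappers for Ray RLlib and Stable Baselines. The interaction loop such an interface supports can be sketched as follows; `ToyEnv` and `RandomAgent` are stand-ins for illustration, not classes from the Godot RL Agents package:

```python
import random

class ToyEnv:
    """Tiny stand-in environment exposing the classic Gym reset/step API."""
    def __init__(self, horizon=5):
        self.horizon = horizon
        self.t = 0

    def reset(self):
        self.t = 0
        return 0  # initial observation

    def step(self, action):
        self.t += 1
        done = self.t >= self.horizon
        # (observation, reward, done, info) -- the classic Gym contract
        return self.t, 1.0, done, {}

class RandomAgent:
    """Picks a random action; stands in for a trained RL policy."""
    def __init__(self, n_actions):
        self.n_actions = n_actions

    def act(self, observation):
        return random.randrange(self.n_actions)

def run_episode(env, agent):
    """Roll out one episode and accumulate reward."""
    obs = env.reset()
    total_reward, done = 0.0, False
    while not done:
        obs, reward, done, info = env.step(agent.act(obs))
        total_reward += reward
    return total_reward
```

Because the environment honors the Gym contract, any framework that consumes Gym environments (RLlib, Stable Baselines) can drive it without modification.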

Robotic Learning from Advisory and Adversarial Interactions using a Soft Wrist

Masashi Hamaya, Kazutoshi Tanaka, Yoshiya Shibata, Felix Wolf Hans Erich Von Drigalski, Chisato Nakashima, Yoshihisa Ijiri
2021 IEEE Robotics and Automation Letters  
To address this problem, we propose formulating this as a model-based reinforcement learning problem to reduce errors during training and increase robustness.  ...  Index Terms: physical human-robot interaction, reinforcement learning for robotic control, soft robot applications.  ...  Finally, we apply a model-based reinforcement learning approach. A. Problem Formulation: We formulated this task as model-based reinforcement learning.  ... 
doi:10.1109/lra.2021.3067232 fatcat:274y5xachbb2jibks26f7jqaqm
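The entry above formulates manipulation as model-based reinforcement learning: learn a dynamics model from interaction data, then use it for control. A deliberately tiny, noiseless 1-D sketch of that loop (not the paper's soft-wrist implementation): collect transitions, fit a linear model by least squares, then invert the model to pick an action:

```python
import random

def true_step(s, a):
    """Environment dynamics, unknown to the agent (toy linear system)."""
    return 0.9 * s + 0.5 * a

def fit_linear_model(transitions):
    """Least-squares fit of s' ~ w1*s + w2*a via the 2x2 normal equations."""
    sss = sum(s * s for s, a, sn in transitions)
    saa = sum(a * a for s, a, sn in transitions)
    ssa = sum(s * a for s, a, sn in transitions)
    ssn = sum(s * sn for s, a, sn in transitions)
    san = sum(a * sn for s, a, sn in transitions)
    det = sss * saa - ssa * ssa
    w1 = (ssn * saa - san * ssa) / det
    w2 = (sss * san - ssa * ssn) / det
    return w1, w2

def plan_action(w1, w2, s, target):
    """One-step model-based control: pick a so the model predicts target."""
    return (target - w1 * s) / w2

# 1) collect transitions with random exploratory actions
random.seed(0)
data, s = [], 0.0
for _ in range(50):
    a = random.uniform(-1, 1)
    sn = true_step(s, a)
    data.append((s, a, sn))
    s = sn

# 2) fit the model, 3) plan toward a target state
w1, w2 = fit_linear_model(data)
a = plan_action(w1, w2, s=1.0, target=0.0)
```

With noiseless linear data the fit recovers the true coefficients, so the planned action drives the system onto the target in one step; real systems need noise handling and uncertainty-aware models.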

Overcoming Model Bias for Robust Offline Deep Reinforcement Learning [article]

Phillip Swazinna, Steffen Udluft, Thomas Runkler
2021 arXiv   pre-print
State-of-the-art reinforcement learning algorithms mostly rely on being allowed to directly interact with their environment to collect millions of observations.  ...  However, the robustness of the training process is still comparatively low, a problem known from methods using value functions.  ...  Acknowledgements The project this paper is based on was supported with funds from the German Federal Ministry of Education and Research under project number 01 IS 18049 A.  ... 
arXiv:2008.05533v4 fatcat:r2dw73ki7jdklarbxtsjvxdd2e

Data-driven Approaches for Formal Synthesis of Dynamical Systems

Milad Kazemi
2022 International Joint Conference on Autonomous Agents & Multiagent Systems  
Moreover, in my research I provide correctness for satisfying specifications using different approaches including abstraction-based techniques, game-theoretic techniques, and model-free reinforcement learning  ...  In this way, I analyze the satisfaction of properties in both episodic and continual settings.  ...  In [3] , I introduced a novel reinforcement learning (RL) scheme to synthesize policies for networks of continuous-space stochastic control systems with unknown dynamics.  ... 
dblp:conf/atal/Kazemi22 fatcat:2ofgtrd46vanflj2hmfbh3vmp4

From Semantics to Execution: Integrating Action Planning With Reinforcement Learning for Robotic Causal Problem-Solving

Manfred Eppe, Phuong D H Nguyen, Stefan Wermter
2019 Frontiers in Robotics and AI  
A problem with the integration of both approaches is that action planning is based on discrete high-level action- and state spaces, whereas reinforcement learning is usually driven by a continuous reward  ...  Reinforcement learning is generally accepted to be an appropriate and successful method to learn robot control.  ...  We also thank Andrew Levy for providing the code of his hierarchical actorcritic reinforcement learning approach (Levy et al., 2019) .  ... 
doi:10.3389/frobt.2019.00123 pmid:33501138 pmcid:PMC7805615 fatcat:zfuealfbgfce7jedo5e27yvtsu

Robust Adversarial Reinforcement Learning [article]

Lerrel Pinto, James Davidson, Rahul Sukthankar, Abhinav Gupta
2017 arXiv   pre-print
However, most current RL-based approaches fail to generalize since: (a) the gap between simulation and real world is so large that policy-learning approaches fail to transfer; (b) even if policy learning  ...  This paper proposes the idea of robust adversarial reinforcement learning (RARL), where we train an agent to operate in the presence of a destabilizing adversary that applies disturbance forces to the  ...  Standard reinforcement learning on MDPs: In this paper we examine continuous-space MDPs that are represented by the tuple (S, A, P, r, γ, s₀), where S is a set of continuous states and A is a set of  ... 
arXiv:1703.02702v1 fatcat:rfpsswxagnftblyfaf5nv6khm4
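RARL trains a protagonist against a destabilizing adversary, i.e. it solves a minimax problem in which the protagonist maximizes reward while the adversary's disturbances minimize it. A toy 1-D stand-in for that game (the actual method trains both players with policy gradients in simulation): the protagonist picks an action to reach a target, the adversary injects a bounded disturbance, and training alternates protagonist gradient steps against the adversary's best response:

```python
def reward(p, d):
    """Protagonist reward: reach target 1.0 despite disturbance d."""
    return -(p + d - 1.0) ** 2

def adversary_best_response(p, bound=0.3):
    """Worst-case disturbance in [-bound, bound]: minimizes the reward.
    For this quadratic it always lies at one of the two extremes."""
    return min((-bound, bound), key=lambda d: reward(p, d))

def train_robust(p=0.0, lr=0.1, steps=200):
    """Alternate: adversary best-responds, protagonist ascends reward."""
    for _ in range(steps):
        d = adversary_best_response(p)
        grad = -2.0 * (p + d - 1.0)  # d reward / d p, adversary fixed
        p += lr * grad
    return p
```

The robust solution is p = 1.0: aiming at the target directly equalizes the damage either extreme disturbance can do, which is exactly the worst-case (minimax) optimum the adversarial training converges toward.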

Deep Adversarial Reinforcement Learning for Object Disentangling [article]

Melvin Laux, Oleg Arenz, Jan Peters, Joni Pajarinen
2021 arXiv   pre-print
To solve this problem, we present a novel adversarial reinforcement learning (ARL) framework.  ...  Deep learning in combination with improved training techniques and high computational power has led to recent advances in the field of reinforcement learning (RL) and to successful robotic RL applications  ...  We present a novel adversarial learning framework for reinforcement learning algorithms: Adversarial reinforcement learning (ARL).  ... 
arXiv:2003.03779v2 fatcat:ekewmbpqi5bitiw2y34tkppnui

Robotic self-representation improves manipulation skills and transfer learning [article]

Phuong D.H. Nguyen, Manfred Eppe, Stefan Wermter
2020 arXiv   pre-print
However, there is a lack of computational methods that relate this claim to cognitively plausible robots and reinforcement learning.  ...  Cognitive science suggests that the self-representation is critical for learning and problem-solving.  ...  ACKNOWLEDGMENT We thank Nicolas Frick for the help on the NICOL design and part of the NICOL simulation used in this paper.  ... 
arXiv:2011.06985v1 fatcat:5kyven7c75fevfwpyux7lv5lfq
Showing results 1–15 of 142,434