Stochastic Grounded Action Transformation for Robot Learning in Simulation
[article]
2020
arXiv
pre-print
In response, we introduce the Stochastic Grounded Action Transformation (SGAT) algorithm, which models this stochasticity when grounding the simulator. ...
Robot control policies learned in simulation often do not transfer well to the real world. ...
To address real-world stochasticity, we introduce Stochastic Grounded Action Transformation (SGAT), which learns a stochastic model of the forward dynamics ...
arXiv:2008.01281v1
fatcat:cjrhkrlkgjca7kl6in6i4dgvcu
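The SGAT entry above describes grounding a simulator with a learned stochastic forward model of the real robot's dynamics. A minimal sketch of that idea, assuming toy linear dynamics; the names `real_dynamics_sample`, `sim_dynamics`, and `grounded_action` are hypothetical stand-ins, not the paper's code:

```python
import numpy as np

rng = np.random.default_rng(0)

def real_dynamics_sample(state, action):
    # Stand-in for a learned *stochastic* forward model of the real robot:
    # returns a sampled next state (mean prediction plus learned noise).
    mean = state + 0.1 * action
    return mean + rng.normal(0.0, 0.01, size=state.shape)

def sim_dynamics(state, action):
    # Deterministic simulator step (toy linear dynamics for illustration).
    return state + 0.12 * action

def grounded_action(state, action, candidates=64):
    # SGAT-style grounding (sketch): choose the simulator action whose next
    # state best matches a *sample* from the stochastic real-dynamics model,
    # so the grounded simulator reproduces real-world stochasticity.
    target = real_dynamics_sample(state, action)
    grid = np.linspace(-1.0, 1.0, candidates)
    errs = [np.linalg.norm(sim_dynamics(state, np.full_like(state, a)) - target)
            for a in grid]
    return float(grid[int(np.argmin(errs))])
```

Because the target is sampled rather than a mean prediction, repeated calls with the same state and action yield different grounded actions, which is the stochasticity the abstract refers to.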
Stochastic Grounded Action Transformation for Robot Learning in Simulation
[article]
2020
In response, we introduce the Stochastic Grounded Action Transformation (SGAT) algorithm, which models this stochasticity when grounding the simulator. ...
Robot control policies learned in simulation often do not transfer well to the real world. ...
This work has taken place in the Learning Agents Research Group (LARG) at UT Austin. ...
doi:10.26153/tsw/10201
fatcat:ncw5gfe5fjezhgmsfa4wrlrjli
Grounded action transformation for sim-to-real reinforcement learning
2021
Machine Learning
This article introduces a new algorithm for grounded simulation learning (GSL), Grounded Action Transformation (GAT), and applies it to learning control policies for a humanoid robot. ...
Our results contribute to a deeper understanding of grounded simulation learning and demonstrate its effectiveness for applying reinforcement learning to learn robot control policies entirely in simulation ...
Funding This work has taken place in the Learning Agents Research Group (LARG) at the Artificial Intelligence Laboratory, The University of Texas at Austin. ...
doi:10.1007/s10994-021-05982-z
fatcat:cednowwd7vbl7hjdszowmeccse
Realeasy: Real-Time capable Simulation to Reality Domain Adaptation
2021
2021 IEEE 17th International Conference on Automation Science and Engineering (CASE)
Realistic simulations of sensor readings are particularly important for real time robot control laws and for data intensive Reinforcement Learning of robot movements in simulation. ...
We address the problem of insufficient quality of robot simulators to produce precise sensor readings for joint positions, velocities and torques. ...
ACKNOWLEDGMENT We thank Todor Stoyanov for help with the robot setup and evaluation of our approach on another Panda individual. We thank Matthias C. Mayr, Mathias Haage, Björn Olofsson, Johannes A. ...
doi:10.1109/case49439.2021.9551626
fatcat:tegn2mejdjhlzgogo7byqjbttm
Robot learning with a spatial, temporal, and causal and-or graph
2016
2016 IEEE International Conference on Robotics and Automation (ICRA)
We propose a stochastic graph-based framework for a robot to understand tasks from human demonstrations and perform them with feedback control. ...
The learning system can watch human demonstrations, generalize learned concepts, and perform tasks in new environments, across different robotic platforms. ...
In addition, we would like to thank SRI International and OSRF for their support. ...
doi:10.1109/icra.2016.7487364
dblp:conf/icra/XiongSXZ16
fatcat:agdxx2bwbvcf5l43dkldvmt42m
Learning with Stochastic Guidance for Navigation
[article]
2018
arXiv
pre-print
In this paper, we demonstrate the power of the framework in a navigation task, where the robot can dynamically choose to learn through exploration, or to use the output of a heuristic controller as guidance ...
The stochastic switch can be jointly trained with the original DDPG in the same framework. ...
Fig. 3(a) compares the models, demonstrating the benefits brought by learning with the stochastic switch. ...
arXiv:1811.10756v1
fatcat:5ddfdzc5mbh2rfkygbrlb46pey
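The entry above describes a stochastic switch that decides, per step, whether the robot acts on the learned policy's output or on a heuristic controller's guidance. A minimal sketch under strong simplifications: the switch probability is a fixed argument here, whereas the paper trains the switch jointly with DDPG, and all function names are hypothetical:

```python
import random

def heuristic_controller(state):
    # Hypothetical hand-coded guidance (e.g., steer back toward a goal).
    return -0.5 * state

def learned_policy(state):
    # Stand-in for the learned DDPG actor's output.
    return 0.8 * state

def stochastic_switch(state, p_policy):
    # Bernoulli gate: with probability p_policy act on the learned policy,
    # otherwise fall back to the heuristic controller's guidance.
    if random.random() < p_policy:
        return learned_policy(state)
    return heuristic_controller(state)
```

Early in training, a low `p_policy` lets the heuristic keep the robot in reasonable regions of the state space; as the policy improves, shifting probability mass to it recovers pure exploration-driven learning.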
Learning for Multi-robot Cooperation in Partially Observable Stochastic Environments with Macro-actions
[article]
2017
arXiv
pre-print
This paper presents a data-driven approach for multi-robot coordination in partially-observable domains based on Decentralized Partially Observable Markov Decision Processes (Dec-POMDPs) and macro-actions ...
to cooperate in a partially observable stochastic environment. ...
arXiv:1707.07399v2
fatcat:g53gsvb7nvhttoxh6dg5sna5re
Explainable robotic systems: Understanding goal-driven actions in a reinforcement learning scenario
[article]
2020
arXiv
pre-print
To increase action understanding, users demand more explainability about the decisions by the robot in particular situations. ...
In this work, we focus on the decision-making process of a reinforcement learning agent performing a simple navigation task in a robotic scenario. ...
The simulated scenario is shown in Fig. 2 (the simulated robot navigation task) using the CoppeliaSim robot simulator [44]. Furthermore, two variations are considered for the proposed ...
arXiv:2006.13615v2
fatcat:hvei7t3p5bfjtopjwxo4qcoyim
Learning to Imagine Manipulation Goals for Robot Task Planning
[article]
2017
arXiv
pre-print
In this work, we propose a method for learning a model encoding just such a representation for task planning. ...
We learn a neural net that encodes the k most likely outcomes of high-level actions in a given world. ...
make such learning feasible in a stochastic world. • Experimental results from a simulated navigation and a block-stacking domain. ...
arXiv:1711.02783v2
fatcat:o5aidaggtbecvc3yb7h3jz332a
Learning for multi-robot cooperation in partially observable stochastic environments with macro-actions
2017
2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
This paper presents a data-driven approach for multi-robot coordination in partially-observable domains based on Decentralized Partially Observable Markov Decision Processes (Dec-POMDPs) and macro-actions ...
to cooperate in a partially observable stochastic environment. ...
doi:10.1109/iros.2017.8206001
dblp:conf/iros/LiuSOAH17
fatcat:gsamvt7tfrhrhhxigocaa2qo5q
Online learning and integration of complex action and word lexicons for language grounding
2012
2012 IEEE International Conference on Development and Learning and Epigenetic Robotics (ICDL)
These sensory representations are paired with a basic model of association, allowing for the grounding of linguistic symbols directly in action knowledge, a grounding which is then exploited to bootstrap ...
This paper presents a computational framework for the development and integration of action and language capabilities through symbol grounding. ...
Another popular class of approaches for online learning are stochastic gradient-based techniques [15] . ...
doi:10.1109/devlrn.2012.6400812
dblp:conf/icdl-epirob/NiehausL12
fatcat:uilx7g3bynawpofforug7nwc3i
Gait Balance and Acceleration of a Biped Robot Based on Q-Learning
2016
IEEE Access
The simulation shows that the proposed method can allow the robot to learn to improve its behavior in terms of walking speed. ...
The learning architecture developed is aimed at solving complex control problems in robotic actuation control by mapping the action space from a discretized domain to a continuous one. ...
In these simulations, the robot runs for 2500 episodes and 100 steps in an episode. ...
doi:10.1109/access.2016.2570255
fatcat:t2mbwefsrbcuzj4bxe2tbhvor4
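The entry above mentions mapping a discretized action domain to a continuous one. One way such a mapping can work, sketched here as a softmax-weighted blend of discrete candidate actions; this is an illustrative simplification, not necessarily the paper's exact scheme:

```python
import numpy as np

def continuous_action(q_values, actions):
    # Map a discretized action domain to a continuous command: blend the
    # discrete candidate actions with softmax weights over their Q-values,
    # so similar Q-values produce an intermediate (continuous) action.
    q = np.asarray(q_values, dtype=float)
    w = np.exp(q - q.max())   # shift by max for numerical stability
    w /= w.sum()
    return float(np.dot(w, np.asarray(actions, dtype=float)))
```

With equal Q-values the output is the midpoint of the candidates; as one Q-value dominates, the output converges to that discrete action, recovering the greedy choice.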
Learning Locomotion Skills for Cassie: Iterative Design and Sim-to-Real
2019
Conference on Robot Learning
We demonstrate the transfer of policies learned in simulation to the physical robot without dynamics randomization. ...
Throughout the process, transfer learning is achieved via Deterministic Action Stochastic State (DASS) tuples, representing the deterministic policy actions associated with states visited by the stochastic ...
Acknowledgments This research was funded in part by NSERC RGPIN-2015-04843 and NSF award 1849343 S&AS:INT:Learning and Planning for Dynamic Locomotion. ...
dblp:conf/corl/XieCDMHP19
fatcat:ev3cpaosdrd5bmqscqf7r47evu
Learning a Structured Neural Network Policy for a Hopping Task
2018
IEEE Robotics and Automation Letters
Finally, we show that the learned policy can be robustly transferred on a real robot. ...
We learn the contact-rich dynamics for our underactuated systems along these trajectories in a sample efficient manner. ...
learning systems. ...
doi:10.1109/lra.2018.2861466
dblp:journals/ral/ViereckKHR18
fatcat:o6w5xqal3vhfbhtu7pbv5pmkea
Mobile Robot Applications Grounded in Deep Learning Theories: A Review
2017
International Robotics & Automation Journal
The ambition of this survey is to present a global overview of deliberation functions in mobile robots and to discuss the state of the art in deep learning theories. ...
Mobile robots facing a diversity of open environments and performing a variety of tasks and interactions need explicit deliberation in order to fulfill their missions. ...
Conclusion: In this survey, the utility of the deep reinforcement learning framework for mobile robot exploration is discussed. ...
doi:10.15406/iratj.2017.03.00067
fatcat:2k2z7xqtinfhrca573uyddtfea
Showing results 1 — 15 out of 6,275 results