37 Hits in 3.2 sec

Deep Reinforcement Learning for Tensegrity Robot Locomotion [article]

Marvin Zhang, Xinyang Geng, Jonathan Bruce, Ken Caluwaerts, Massimo Vespignani, Vytas SunSpiral, Pieter Abbeel, Sergey Levine
2017 arXiv   pre-print
...  the effectiveness of our approach on tensegrity robot locomotion.  ...  We compare the learned feedback policies to learned open-loop policies and hand-engineered controllers, and demonstrate that the learned policy enables the first continuous, reliable locomotion gait for  ...  We appreciate the support, ideas, and feedback from members of the Berkeley Artificial Intelligence Research Lab and the Dynamic Tensegrity Robotics Lab.  ... 
arXiv:1609.09049v3 fatcat:3xpdqdabgjbx5krke35kbxpnxu
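
The entry above compares learned feedback policies against open-loop controllers and hand-engineered gaits. As a rough illustration of that distinction only (not the paper's implementation; all dimensions, weights, and sensor choices below are invented), a feedback policy can be sketched as a small network mapping sensed state to actuator commands, while an open-loop gait ignores the state entirely:

```python
# Minimal sketch (not the authors' implementation): a feedback policy as a small
# MLP that maps sensed state (e.g. cable lengths, bar orientations) to motor
# commands, contrasted with an open-loop controller that ignores the state.
# All dimensions and names here are illustrative assumptions.
import numpy as np

OBS_DIM, ACT_DIM, HIDDEN = 18, 6, 32  # hypothetical sensor/actuator sizes

rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.1, size=(HIDDEN, OBS_DIM))
W2 = rng.normal(scale=0.1, size=(ACT_DIM, HIDDEN))

def feedback_policy(obs):
    """Closed-loop: the action depends on the current sensor reading."""
    return np.tanh(W2 @ np.tanh(W1 @ obs))

def open_loop_policy(t, freq=1.0):
    """Open-loop: a fixed periodic command, independent of the state."""
    phases = np.linspace(0, np.pi, ACT_DIM, endpoint=False)
    return np.sin(2 * np.pi * freq * t + phases)

# Example step with a fake observation
obs = rng.normal(size=OBS_DIM)
print(feedback_policy(obs), open_loop_policy(t=0.3))
```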

Inclined Surface Locomotion Strategies for Spherical Tensegrity Robots [article]

Lee-Huang Chen, Brian Cera, Edward L. Zhu, Riley Edmunds, Franklin Rice, Antonia Bronars, Ellande Tang, Saunon R. Malekshahi, Osvaldo Romero, Adrian K. Agogino, Alice M. Agogino
2017 arXiv   pre-print
This paper presents a new teleoperated spherical tensegrity robot capable of performing locomotion on steep inclined surfaces.  ...  This robot is an improvement over other iterations in the TT-series and the first tensegrity to achieve reliable locomotion on inclined surfaces of up to 24°.  ...  ACKNOWLEDGEMENT The authors are grateful for funding support from NASA's Early Stage Innovation grant NNX15AD74G.  ... 
arXiv:1708.08150v1 fatcat:h5ivccieanhkrafjbpo3kgwwti

Adaptive and Resilient Soft Tensegrity Robots [article]

John Rieffel, Jean-Baptiste Mouret
2018 arXiv   pre-print
This manuscript describes an easy-to-assemble tensegrity-based soft robot capable of highly dynamic locomotive gaits and demonstrating structural and behavioral resilience in the face of physical damage  ...  The nature of soft materials, however, presents considerable challenges to aspects of design, construction, and control -- and up until now, the vast majority of gaits for soft robots have been hand-designed  ...  The authors would also like to thank Bill Keat for his help and insight into the design of the robot, and all the undergraduates of Union College's Evolutionary Robotics Group. The pictures of Fig.1  ... 
arXiv:1702.03258v2 fatcat:cx7myvckrrdm5heykluaq5ktmu
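
The abstract above motivates replacing hand-designed gaits with automatically discovered ones for a vibration-driven soft tensegrity. A generic sketch of such gait discovery is shown below as plain random search over per-motor vibration frequencies; the paper's actual optimizer and evaluation procedure may differ, and the evaluate() function here is a toy placeholder:

```python
# Generic sketch of automated gait discovery for a vibration-driven soft
# tensegrity: search over per-motor vibration frequencies and keep the setting
# that scores best on a (stand-in) rollout. Not the paper's method; evaluate()
# is a toy placeholder and all bounds are illustrative.
import numpy as np

rng = np.random.default_rng(3)
N_MOTORS, N_TRIALS = 3, 50
FREQ_RANGE = (10.0, 40.0)                      # Hz, illustrative bounds

def evaluate(freqs):
    """Toy stand-in for a rollout: pretend some frequency mix resonates well."""
    sweet_spot = np.array([18.0, 27.0, 33.0])  # hypothetical resonant frequencies
    return float(np.exp(-np.sum((freqs - sweet_spot) ** 2) / 50.0))

best_freqs, best_score = None, -np.inf
for _ in range(N_TRIALS):
    freqs = rng.uniform(*FREQ_RANGE, size=N_MOTORS)
    score = evaluate(freqs)
    if score > best_score:
        best_freqs, best_score = freqs, score

print(best_freqs, best_score)
```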

Adaptive and Resilient Soft Tensegrity Robots

John Rieffel, Jean-Baptiste Mouret
2018 Soft Robotics  
This article describes an easy-to-assemble tensegrity-based soft robot capable of highly dynamic locomotive gaits and demonstrating structural and behavioral resilience in the face of physical damage.  ...  The nature of soft materials, however, presents considerable challenges to aspects of design, construction, and control -- and up until now, the vast majority of gaits for soft robots have been hand-designed  ...  The authors also thank Bill Keat for his help and insight into the design of the robot, and all the undergraduates of Union College's Evolutionary Robotics Group. Computer code  ... 
doi:10.1089/soro.2017.0066 pmid:29664708 pmcid:PMC6001847 fatcat:cyfpgadib5hylca6cc56amkcby

Adaptive Tensegrity Locomotion on Rough Terrain via Reinforcement Learning [article]

David Surovik, Kun Wang, Kostas E. Bekris
2018 arXiv   pre-print
Guided Policy Search (GPS), a sample-efficient and model-free hybrid framework for optimization and reinforcement learning, has recently been used to produce periodic locomotion for a spherical 6-bar tensegrity  ...  The dynamical properties of tensegrity robots give them appealing ruggedness and adaptability, but present major challenges with respect to locomotion control.  ...  Contributions and Outline This paper extends the line of work on adapting and employing the GPS reinforcement learning framework for any-axis planar locomotion with a tensegrity robot [20] , as an example  ... 
arXiv:1809.10710v1 fatcat:d2c5wafmrvb25ho6makdmy5x7i
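
The snippet above references Guided Policy Search (GPS), which alternates between optimizing local controllers by trajectory optimization and distilling them into a global policy by supervised learning. The sketch below shows only that outer alternation in schematic form; the local optimization, dynamics fitting, and KL constraints of the real algorithm are replaced by toy placeholders:

```python
# Schematic sketch of the Guided Policy Search outer loop (heavily simplified;
# not the paper's code). The function bodies are illustrative placeholders.
import numpy as np

def optimize_local_controller(rollouts, global_policy):
    """Placeholder for trajectory optimization (e.g. iLQG on fitted local
    dynamics), kept close to the current global policy in the real algorithm."""
    return {"K": np.zeros((2, 4)), "k": np.zeros(2)}  # toy time-invariant gains

def supervised_fit(samples):
    """Placeholder for regressing the global policy onto the local controllers'
    state-action pairs (a neural-network fit in GPS)."""
    states, actions = samples
    W, *_ = np.linalg.lstsq(states, actions, rcond=None)  # actions ~= states @ W
    return W.T

def gps_outer_loop(n_iters=3):
    global_policy = np.zeros((2, 4))
    for _ in range(n_iters):
        local = optimize_local_controller(rollouts=None, global_policy=global_policy)
        # roll out the local controller to collect (state, action) pairs (faked here)
        states = np.random.randn(100, 4)
        actions = states @ local["K"].T + local["k"]
        global_policy = supervised_fit((states, actions))
    return global_policy

print(gps_outer_loop())
```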

Recent Advances in Deep Reinforcement Learning Applications for Solving Partially Observable Markov Decision Processes (POMDP) Problems: Part 1—Fundamentals and Applications in Games, Robotics and Natural Language Processing

Xuanchen Xiang, Simon Foo
2021 Machine Learning and Knowledge Extraction  
The first part of a two-part series of papers provides a survey on recent advances in Deep Reinforcement Learning (DRL) applications for solving partially observable Markov decision processes (POMDP) problems  ...  In this overview, we introduce Markov Decision Processes (MDP) problems and Reinforcement Learning and applications of DRL for solving POMDP problems in games, robotics, and natural language processing  ...  With their unique properties, tensegrity robots are appealing for planetary exploration rovers. The primary locomotion of tensegrity robots is rolling. Zhang et al.  ... 
doi:10.3390/make3030029 fatcat:u3y7bqkoljac5not2eq5konnnm
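
Since the survey above centers on POMDPs, a minimal example of the extra machinery they require, the belief update, may help fix ideas. The two-state transition and observation matrices below are invented purely for illustration:

```python
# Minimal discrete POMDP belief update (Bayes filter) for a toy two-state model.
# The transition/observation matrices are invented for illustration only.
import numpy as np

T = np.array([[0.9, 0.1],   # T[s, s'] = P(s' | s, a) for a single fixed action
              [0.2, 0.8]])
O = np.array([[0.8, 0.2],   # O[s', o] = P(o | s', a)
              [0.3, 0.7]])

def belief_update(b, obs):
    """b'(s') is proportional to P(obs | s') * sum_s P(s' | s) b(s)."""
    predicted = b @ T                 # prediction step
    updated = predicted * O[:, obs]   # correction step
    return updated / updated.sum()

b = np.array([0.5, 0.5])
for obs in [0, 0, 1]:
    b = belief_update(b, obs)
    print(b)
```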

2020 Index IEEE Robotics and Automation Letters Vol. 5

2020 IEEE Robotics and Automation Letters  
., +, LRA April 2020 867-874 Invariant Transform Experience Replay: Data Augmentation for Deep Reinforcement Learning.  ...  ., +, LRA July 2020 4399-4406 Invariant Transform Experience Replay: Data Augmentation for Deep Reinforcement Learning.  ... 
doi:10.1109/lra.2020.3032821 fatcat:qrnouccm7jb47ipq6w3erf3cja

Table of Contents

2020 IEEE Robotics and Automation Letters  
Skelton, 1239.  High-Speed Autonomous Drifting With Deep Reinforcement Learning, P. Cai, X. Mei, L. Tai, Y. Sun, and M.  ...  Sun, 1167.  Deep Reinforcement Learning for Instruction Following Visual Navigation in 3D Maze-Like Environments  ... 
doi:10.1109/lra.2020.2987582 fatcat:3qafzip5xrg5jliyngq4xxvjha

A First Principles Approach for Data-Efficient System Identification of Spring-Rod Systems via Differentiable Physics Engines [article]

Kun Wang, Mridul Aanjaneya, Kostas Bekris
2020 arXiv   pre-print
We further reduce the dimension from 3D to 1D for each module, which allows efficient learning of system parameters using linear regression.  ...  Unlike black-box data-driven methods for learning the evolution of a dynamical system and its parameters, we modularize the design of our engine using a discrete form of the governing equations of motion  ...  Adaptive Tensegrity Locomotion: Controlling a Compliant Icosahedron with Symmetry-Reduced Reinforcement Learning. Inter- national Journal of Robotics Research (IJRR), 2019.  ... 
arXiv:2004.13859v1 fatcat:q2rxiudpyfdpzo5o2xcuxqthw4
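
The abstract above mentions reducing each spring-rod module to 1D so that its parameters can be learned by linear regression. A minimal sketch of that idea, assuming a linear spring-damper model f = -k (x - x0) - c v and synthetic measurements (not the paper's data or engine), is:

```python
# Sketch of 1D system identification for a single spring-rod module by linear
# regression, assuming a linear spring-damper model f = -k (x - x0) - c v.
# The "measurements" are synthetic; all parameter values are illustrative.
import numpy as np

rng = np.random.default_rng(1)
k_true, c_true, x0 = 120.0, 3.5, 0.25   # ground-truth stiffness, damping, rest length

x = rng.uniform(0.1, 0.5, size=200)     # measured elongations
v = rng.normal(0.0, 0.2, size=200)      # measured extension rates
f = -k_true * (x - x0) - c_true * v + rng.normal(0, 0.05, size=200)  # noisy forces

# f = -k*x - c*v + k*x0  ->  linear in the unknowns [k, c, k*x0]
A = np.column_stack([-x, -v, np.ones_like(x)])
(k_est, c_est, kx0_est), *_ = np.linalg.lstsq(A, f, rcond=None)
print(k_est, c_est, kx0_est / k_est)    # recovers k, c, and x0
```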

Model-Predictive Control with Inverse Statics Optimization for Tensegrity Spine Robots [article]

Andrew P. Sabelhaus, Huajing Zhao, Edward L. Zhu, Adrian K. Agogino, Alice M. Agogino
2019 arXiv   pre-print
This work presents two controllers for tensegrity spine robots, using model-predictive control (MPC) and inverse statics optimization.  ...  The second uses a new inverse statics optimization algorithm, which gives the first feasible solutions to the problem for certain tensegrity robots, to generate reference input trajectories in combination  ...  Ahmad, and Vytas SunSpiral for their contributions to the earlier conference version of this paper.  ... 
arXiv:1806.08868v2 fatcat:qvco46sbjffztagi4xldn5dgha
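
The paper above combines model-predictive control with inverse statics for reference generation. As a generic illustration of the receding-horizon part only (the dynamics, weights, and horizon below are toy placeholders, not the spine robot's model), an unconstrained linear MPC step can be written as a finite-horizon Riccati recursion:

```python
# Minimal receding-horizon MPC sketch on a generic linear model x+ = A x + B u,
# tracking a reference state. Model, horizon, and weights are illustrative.
import numpy as np

A = np.array([[1.0, 0.1], [0.0, 1.0]])   # toy double-integrator-like model
B = np.array([[0.0], [0.1]])
Q, R = np.diag([10.0, 1.0]), np.array([[0.1]])
H = 20                                    # prediction horizon

def mpc_step(x, x_ref):
    """Finite-horizon LQ tracking via backward Riccati recursion; apply the
    gain for the current step only (receding horizon)."""
    P = Q.copy()
    K = None
    for _ in range(H):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return -K @ (x - x_ref)

x, x_ref = np.array([1.0, 0.0]), np.array([0.0, 0.0])
for t in range(50):
    u = mpc_step(x, x_ref)
    x = A @ x + B @ u
print(x)   # driven close to the reference
```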

A robotic ecosystem with evolvable minds and bodies

Berend Weel, Emanuele Crosato, Jacqueline Heinerman, Evert Haasdijk, A.E. Eiben
2014 2014 IEEE International Conference on Evolvable Systems  
This paper presents a proof of concept demonstration of a novel evolutionary robotic system where robots can self-reproduce.  ...  Our system can be perceived as an Artificial Life habitat, where robots with evolvable bodies and minds live in an arena and actively induce an evolutionary process 'from within', without a central evolutionary  ...  The second research objective is to investigate whether the robots, endowed with reinforcement learning capabilities, learn to locomote efficiently during their lifetime and how this learning ability evolves  ... 
doi:10.1109/ices.2014.7008736 dblp:conf/ices/WeelCHHE14 fatcat:tl5p4rp7gzh6dftywgjlwfq4qi

Table of Contents

2020 2020 IEEE Symposium Series on Computational Intelligence (SSCI)  
in Deep Reinforcement Learning for Robotics: a Survey.  Composing Algorithm Portfolio with Problem Set of Unknown Distribution, Wenwen Liu, Shiu Yin Yuen and Chi Wan Sung, 814.  Discovering Action  ...  of Adaptive Control on Learning Directed Locomotion, Fuda van Diggelen, Robert Babuska and Aguston E.
doi:10.1109/ssci47803.2020.9308155 fatcat:hyargfnk4vevpnooatlovxm4li

Morphological Properties of Mass–Spring Networks for Optimal Locomotion Learning

Gabriel Urbain, Jonas Degrave, Benonie Carette, Joni Dambre, Francis Wyffels
2017 Frontiers in Neurorobotics  
This can significantly simplify the additional resources required for locomotion control.  ...  Robots have proven very useful in automating industrial processes.  ...  By contrast, robotic control is intrinsically a reinforcement learning problem, in which the optimal desired actuator signals are not known a priori.  ... 
doi:10.3389/fnbot.2017.00016 pmid:28396634 pmcid:PMC5366341 fatcat:iz2jxwlxgfaxrcldq23gvilvvy
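
The article above studies how mass-spring body dynamics can take over part of the computation needed for locomotion control, in the spirit of physical reservoir computing. The sketch below substitutes a random recurrent network for the mass-spring body and trains only a linear ridge-regression readout; all sizes and signals are illustrative assumptions, not the paper's setup:

```python
# Sketch of the physical-reservoir-computing idea: drive a fixed random
# dynamical system (a stand-in for the mass-spring network) with an input
# signal and train only a linear readout by ridge regression to produce a
# target actuation pattern. All sizes and signals are illustrative.
import numpy as np

rng = np.random.default_rng(2)
N, T = 50, 1000
W = rng.normal(scale=1.0 / np.sqrt(N), size=(N, N)) * 0.9   # fixed recurrent weights
w_in = rng.normal(size=N)

u = np.sin(np.linspace(0, 20 * np.pi, T))                   # driving input
target = np.sin(np.linspace(0, 20 * np.pi, T) + 0.5)        # desired actuator signal

# collect reservoir states
states = np.zeros((T, N))
x = np.zeros(N)
for t in range(T):
    x = np.tanh(W @ x + w_in * u[t])
    states[t] = x

# ridge-regression readout: target is approximated by states @ w_out
lam = 1e-3
w_out = np.linalg.solve(states.T @ states + lam * np.eye(N), states.T @ target)
print(np.mean((states @ w_out - target) ** 2))              # training error
```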

2021 Index IEEE Transactions on Robotics Vol. 37

2021 IEEE Transactions on robotics  
The Author Index contains the primary entry for each item, listed under the first author's name.  ...  Wing Hummingbird Robot Via Reinforcement Learning.  ...  Via Reinforcement Learning.  ... 
doi:10.1109/tro.2022.3141270 fatcat:wbcpmrap6ndprec2gtu7m2yhmy

SOLAR: Deep Structured Representations for Model-Based Reinforcement Learning [article]

Marvin Zhang, Sharad Vikram, Laura Smith, Pieter Abbeel, Matthew J. Johnson, Sergey Levine
2019 arXiv   pre-print
Model-based reinforcement learning (RL) has proven to be a data efficient approach for learning control tasks but is difficult to utilize in domains with complex observations such as images.  ...  In this paper, we present a method for learning representations that are suitable for iterative model-based policy improvement, even when the underlying dynamical system has complex dynamics and image  ...  Introduction Model-based reinforcement learning (RL) methods use known or learned models in a variety of ways, such as planning through the model and generating synthetic experience (Sutton, 1990; Kober  ... 
arXiv:1808.09105v4 fatcat:jpgdhn6b35ec5k3erhgdhp4ofy
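
The introduction snippet above mentions using learned models to generate synthetic experience (Sutton, 1990). A minimal tabular Dyna-Q sketch of that idea on a toy chain environment is given below; the environment and hyperparameters are invented for illustration and are unrelated to SOLAR itself:

```python
# Minimal tabular Dyna-Q sketch of "generating synthetic experience" from a
# learned model (Sutton, 1990). Toy chain environment; values are illustrative.
import random

N_STATES, ACTIONS = 6, [0, 1]            # move left / right on a chain
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
model = {}                               # (s, a) -> (reward, next_state)
alpha, gamma, eps, planning_steps = 0.1, 0.95, 0.1, 10

def step(s, a):
    s2 = max(0, min(N_STATES - 1, s + (1 if a == 1 else -1)))
    return (1.0 if s2 == N_STATES - 1 else 0.0), s2

def greedy(s):
    return max(ACTIONS, key=lambda a: Q[(s, a)])

s = 0
for _ in range(2000):
    a = random.choice(ACTIONS) if random.random() < eps else greedy(s)
    r, s2 = step(s, a)
    # direct RL update from real experience
    Q[(s, a)] += alpha * (r + gamma * Q[(s2, greedy(s2))] - Q[(s, a)])
    model[(s, a)] = (r, s2)              # learn the (deterministic) model
    # planning: replay synthetic transitions sampled from the model
    for _ in range(planning_steps):
        (ps, pa), (pr, ps2) = random.choice(list(model.items()))
        Q[(ps, pa)] += alpha * (pr + gamma * Q[(ps2, greedy(ps2))] - Q[(ps, pa)])
    s = 0 if s2 == N_STATES - 1 else s2

print(max(Q.items(), key=lambda kv: kv[1]))
```
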
Showing results 1 — 15 out of 37 results