205 Hits in 6.5 sec

A review on manipulation skill acquisition through teleoperation‐based learning from demonstration

Weiyong Si, Ning Wang, Chenguang Yang
2021 Cognitive Computation and Systems  
To this end, the key technologies (for example, manipulation skill learning, multimodal interfacing for teleoperation, and telerobotic control) are introduced.  ...  Thus, it is a promising way to transfer manipulation skills from humans to robots by combining learning methods with teleoperation and adapting the learned skills to different tasks in new situations.  ...  Control issues exist in bilateral teleoperation for assisting human-like manipulation skill learning, for example teleoperation control and manipulation control.  ... 
doi:10.1049/ccs2.12005 fatcat:wxyourkvrvcqlh6aht6g5fi3sy

Recent Advances in Robot Learning from Demonstration

Harish Ravichandar, Athanasios S. Polydoros, Sonia Chernova, Aude Billard
2019 Annual Review of Control, Robotics, and Autonomous Systems  
In the context of robotics and automation, learning from demonstration (LfD) is the paradigm in which robots acquire new skills by learning to imitate an expert.  ...  This review aims to provide an overview of the collection of machine-learning methods used to enable a robot to learn from and imitate a teacher.  ...  Mobile Robots In addition to manipulators, LfD has enjoyed considerable success in a variety of mobile robots.  ... 
doi:10.1146/annurev-control-100819-063206 fatcat:gz56an5s6zh7da7my4ix7gdj4q

Table of contents

2006 Proceedings of the IEEE International Conference on Robotics and Automation (ICRA 2006)  
... Burdick. Feature Extraction from Laser Scan Data based on Curvature Estimation for Mobile Robotics  ...  Atkeson. Self-Organizing Approach for Robot's Behavior Imitation, p. 3350 (Sathit Wanitchaikit, Poj Tangamchit, Thavida Maneewarn). Th-AM2-04 Mobile  ...  Tarn. High-Speed Focusing of Cells Using Depth-From-Diffraction Method, p. 3636 (Hiromasa Oku, Theodorus, Koichi Hashimoto, Masatoshi Ishikawa). Learning Interaction Force  ... 
doi:10.1109/robot.2006.1641151 fatcat:f5zhr6x7trhazlcmsvfdhkpqm4

An Algorithmic Perspective on Imitation Learning

Takayuki Osa, Joni Pajarinen, Gerhard Neumann, J. Andrew Bagnell, Pieter Abbeel, Jan Peters
2018 Foundations and Trends in Robotics  
This process of learning from demonstrations, and the study of algorithms to do so, is called imitation learning. This work provides an introduction to imitation learning.  ...  Second, we want to give roboticists and experts in applied artificial intelligence a broader appreciation for the frameworks and tools available for imitation learning.  ...  For example, motion capture systems and teleoperated robotic systems record data from expert behavior.  ... 
doi:10.1561/2300000053 fatcat:4v52sabhnze5ddnuy7sd3vj2ym

Table of Contents

2020 IEEE Robotics and Automation Letters  
... Amato, 6161. Imitation Learning Based on Bilateral Control for Human-Robot Cooperation  ...  Valdastri, 6528. Simultaneously Learning Corrections and Error Models for Geometry-Based Visual Odometry Methods  ... 
doi:10.1109/lra.2020.3030731 fatcat:kwx4xyitfbfuzgugbi5vavx2xu

Learning from Demonstration for Hydraulic Manipulators

Markku Suomalainen, Janne Koivumaki, Santeri Lampinen, Ville Kyrki, Jouni Mattila
2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)  
However, to learn how to deal with uncertainties we require data from non-perfect demonstrations.  ...  We assume to have data from only one source; thus we will not consider who-to-imitate, where the objective would be, for example, to decide whether the Cartesian or joint trajectory is more important to  ... 
doi:10.1109/iros.2018.8594285 dblp:conf/iros/SuomalainenKLKM18 fatcat:lb6be3llrjfmpmnwqbsirioine

2020 Index IEEE Robotics and Automation Letters Vol. 5

2020 IEEE Robotics and Automation Letters  
... LRA Oct. 2020, pp. 5291-5298. Mean square error methods: Gated Recurrent Fusion to Learn Driving Behavior from Temporal Multimodal Data.  ...  LRA April 2020, pp. 2357-2364. Federated Imitation Learning: A Novel Framework for Cloud Robotic Systems With Heterogeneous Sensor Data.  ... 
doi:10.1109/lra.2020.3032821 fatcat:qrnouccm7jb47ipq6w3erf3cja

Human - Robot Interfacing by the Aid of Cognition Based Interaction [chapter]

Aarne Halme
2008 Advances in Service Robotics  
Almost any mobile service robot with manipulation capability and a similar subsystem infrastructure could be used as the test robot as well.  ...  The data from interface devices and detected gestures are unambiguous and can be forwarded to the manager.  ...  ISBN: 978-953-7619-02-2, InTech, Available from: http://www.intechopen.com/books/advances_in_service_robotics/human_robot_interfacing_by_the_aid_of_cognition_based_interaction  ... 
doi:10.5772/5957 fatcat:yfpbrvexnfb4rdjub5xbkr6gz4

2019 Index IEEE Robotics and Automation Letters Vol. 4

2019 IEEE Robotics and Automation Letters  
... LRA Oct. 2019, pp. 3161-3168. A Teleoperation Interface for Loco-Manipulation Control of Mobile Collaborative Robotic Assistant.  ...  LRA July 2019, pp. 2289-2295. A Teleoperation Interface for Loco-Manipulation Control of Mobile Collaborative Robotic Assistant.  ...  Permanent magnets: Adaptive Dynamic Control for Magnetically Actuated Medical Robots.  ... 
doi:10.1109/lra.2019.2955867 fatcat:ckastwefh5chhamsravandtnx4

Towards open and expandable cognitive AI architectures for large-scale multi-agent human-robot collaborative learning

Georgios Th. Papadopoulos, Margherita Antona, Constantine Stephanidis
2021 IEEE Access  
from the data.  ...  In particular, the most dominant learning paradigm in this area, the so-called Learning from Demonstration (LfD), relies on the fundamental principle of robots acquiring new skills by learning to imitate  ... 
doi:10.1109/access.2021.3080517 fatcat:nzxzaxbx2jaf5owuihlxxfizuq

User-oriented Natural Human-Robot Control with Thin-Plate Splines and LRCN [article]

Bruno Lima, Lucas Amaral, Givanildo Nascimento-Jr, Victor Mafra, Bruno Georgevich Ferreira, Tiago Vieira, Thales Vieira
2021 arXiv   pre-print
We propose a real-time vision-based teleoperation approach for robotic arms that employs a single depth-based camera, exempting the user from the need for any wearable devices.  ...  The second is a Deep Neural Network hand-state classifier based on Long-term Recurrent Convolutional Networks (LRCN) that exploits the temporal coherence of the acquired depth data.  ...  for the development of this project.  ... 
arXiv:2105.11056v1 fatcat:eosbqmnssrcknelhfipvsk3nni

Table of Contents

2020 IEEE Robotics and Automation Letters  
... Beltrame, 1656. Learning Transformable and Plannable se(3) Features for Scene Imitation of a Mobile Service Robot  ...  Yip, 2349. Faster Confined Space Manufacturing Teleoperation Through Dynamic Autonomy With Task Dynamics Imitation Learning  ... 
doi:10.1109/lra.2020.2987582 fatcat:3qafzip5xrg5jliyngq4xxvjha

Haptic-enabled Mixed Reality System for Mixed-initiative Remote Robot Control [article]

Yuan Tian, Lianjun Li, Andrea Fumagalli, Yonas Tadesse, Balakrishnan Prabhakaran
2021 arXiv   pre-print
... robot teleoperation and robot-human collaboration, and enhanced feedback for mixed-initiative control.  ...  In recent years, benefiting from high-quality data from Light Detection and Ranging (LIDAR) and RGBD cameras, mixed reality has been widely used to build networked platforms to improve the performance of  ...  A big error means the newly planned position sent from the server to the robot is very far from the real position.  ... 
arXiv:2102.03521v2 fatcat:3e7h6h6fhzh3nmra2f2ijcxkzq

Autonomously learning to visually detect where manipulation will succeed

Hai Nguyen, Charles C. Kemp
2013 Autonomous Robots  
We present methods that enable a mobile manipulator to autonomously learn a function that takes an RGB image and a registered 3D point cloud as input and returns a 3D location at which a manipulation behavior  ...  Visual features can help predict if a manipulation behavior will succeed at a given location. For example, the success of a behavior that flips light switches depends on the location of the switch.  ...  Acknowledgments We thank Aaron Bobick, Jim Rehg, and Tucker Hermans for their input. We thank Willow Garage for the use of a PR2 robot, financial support, and other assistance.  ... 
doi:10.1007/s10514-013-9363-y fatcat:p4sgxavczbhypilmmaclrpvome

Learning and Comfort in Human–Robot Interaction: A Review

Weitian Wang, Yi Chen, Rui Li, Yunyi Jia
2019 Applied Sciences  
In this paper, we present a comprehensive review of two significant topics in human–robot interaction: robots learning from demonstrations and human comfort.  ...  The collaboration quality between the human and the robot has been improved largely by taking advantage of robots learning from demonstrations.  ...  For example, the force-sensing glove can be used for acquiring data from pressure sensors.  ... 
doi:10.3390/app9235152 fatcat:67n52vkggbhtzlfz53bz5bglna
Showing results 1 — 15 out of 205 results