5,553 Hits in 11.0 sec

Learning Task-Oriented Grasping for Tool Manipulation from Simulated Self-Supervision [article]

Kuan Fang, Yuke Zhu, Animesh Garg, Andrey Kurenkov, Viraj Mehta, Li Fei-Fei, Silvio Savarese
2018 arXiv   pre-print
In this paper, we propose the Task-Oriented Grasping Network (TOG-Net) to jointly optimize both task-oriented grasping of a tool and the manipulation policy for that tool.  ...  The training process of the model is based on large-scale simulated self-supervision with procedurally generated tool objects.  ...  We thank Erwin Coumans for helping with the Bullet simulator.  ... 
arXiv:1806.09266v1 fatcat:oidrtb63kfbhbnummkgbt2a5jq

Learning Task-Oriented Grasping for Tool Manipulation from Simulated Self-Supervision

Kuan Fang, Yuke Zhu, Animesh Garg, Andrey Kurenkov, Viraj Mehta, Li Fei-Fei, Silvio Savarese
2018 Robotics: Science and Systems XIV  
In this paper, we propose the Task-Oriented Grasping Network (TOG-Net) to jointly optimize both task-oriented grasping of a tool and the manipulation policy for that tool.  ...  The training process of the model is based on large-scale simulated self-supervision with procedurally generated tool objects.  ...  We thank Erwin Coumans for helping with the Bullet simulator.  ... 
doi:10.15607/rss.2018.xiv.012 dblp:conf/rss/FangZGKMFS18 fatcat:2t4kddmxs5d2rlbav6oxoswqe4

KETO: Learning Keypoint Representations for Tool Manipulation [article]

Zengyi Qin, Kuan Fang, Yuke Zhu, Li Fei-Fei, Silvio Savarese
2019 arXiv   pre-print
The model is learned from self-supervised robot interactions in the task environment without the need for explicit human annotations.  ...  We aim to develop an algorithm for robots to manipulate novel objects as tools for completing different task goals.  ...  We thank Ming Luo for her time and effort in co-organizing UGVR. We thank Roberto Martín-Martín for his constructive advice in problem formulation.  ... 
arXiv:1910.11977v2 fatcat:fjdcrdmbu5g5lmvdf534d6diu4

Active Perception and Representation for Robotic Manipulation [article]

Youssef Zaky, Gaurav Paruthi, Bryan Tripp, James Bergstra
2020 arXiv   pre-print
We apply our model to a simulated grasping task with a 6-DoF action space. Compared to its passive, fixed-camera counterpart, the active model achieves 8% better performance in targeted grasping.  ...  Our agent uses viewpoint changes to localize objects, to learn state representations in a self-supervised manner, and to perform goal-directed actions.  ...  A third way we exploit viewpoint changes is for multiple-view self-supervised representation learning.  ... 
arXiv:2003.06734v1 fatcat:c7itiu3d7zgzvgdyqoer23fzfi

Multi-Modal Learning of Keypoint Predictive Models for Visual Object Manipulation [article]

Sarah Bechtle, Neha Das, Franziska Meier
2021 arXiv   pre-print
Finally, we show that this extended kinematic chain lends itself to object manipulation tasks such as placing a grasped object, and present experiments in simulation and on hardware.  ...  In this work, we develop a self-supervised approach that can extend a robot's kinematic model when grasping an object from visual latent representations.  ...  Our method builds upon a self-supervised keypoint representation learning approach in [2], [3], which [4], [16] utilize for object manipulation tasks.  ... 
arXiv:2011.03882v2 fatcat:t75e26o25zc6dk7q6yv5fjhgim

Towards Robotic Assembly by Predicting Robust, Precise and Task-oriented Grasps [article]

Jialiang Zhao, Daniel Troniak, Oliver Kroemer
2020 arXiv   pre-print
Robust task-oriented grasp planning is vital for autonomous robotic precision assembly tasks.  ...  Our policies are trained using a curriculum based on large-scale self-supervised grasp simulations with procedurally generated objects.  ...  Self-Supervised Data Collection in Simulation: To generate grasps for training the networks, we first procedurally generate 5,000 object models for each of the tasks.  ... 
arXiv:2011.02462v1 fatcat:nz2wu56jnbgfdhdpwwvbngztzu

Review of Deep Reinforcement Learning-based Object Grasping: Techniques, Open Challenges and Recommendations

Marwan Qaid Mohammed, Kwek Lee Chung, Chua Shing Chyi
2020 IEEE Access  
Object grasping requires detection systems, methods, and tools to facilitate efficient and fast agent training.  ...  This review refers to all relevant articles on deep reinforcement learning-based object manipulation and solutions. The object grasping issue is a major manipulation challenge.  ...  Various feasible frameworks based on pixel input have been well studied, such as adversarial learning based on ConvNets after AlexNet for effective supervised learning [273], and tool use based on task-oriented  ... 
doi:10.1109/access.2020.3027923 fatcat:44xyylmy7fhirjept76wm6uaeq

KOVIS: Keypoint-based Visual Servoing with Zero-Shot Sim-to-Real Transfer for Robotics Manipulation [article]

En Yen Puang and Keng Peng Tee and Wei Jing
2020 arXiv   pre-print
The two networks are trained end-to-end in the simulated environment by self-supervised learning without manual data labeling.  ...  We present KOVIS, a novel learning-based, calibration-free visual servoing method for fine robotic manipulation tasks with eye-in-hand stereo camera system.  ...  ACKNOWLEDGEMENT This research is supported by the Agency for Science, Technology and Research (A*STAR), Singapore, under its AME Programmatic Funding Scheme (Project #A18A2b0046).  ... 
arXiv:2007.13960v1 fatcat:nfyc54hmgvcgxl7bvxeablc2mm

A Survey: Robot Grasping [article]

Kenechi Dukor, Tejumade Afonja
2021 Zenodo  
when executing a grasping task.  ...  Today, we can observe the successful application of classical machine learning, computer vision, and reinforcement learning in various robotic tasks like path planning, perception, locomotion, grasping  ...  It provides a framework and set of tools for learning dexterous manipulations from start to finish, directly from raw sensory input.  ... 
doi:10.5281/zenodo.5559125 fatcat:zw4zokcjzzchhc6dqvlg5ftohq

Learning Arbitrary-Goal Fabric Folding with One Hour of Real Robot Experience [article]

Robert Lee, Daniel Ward, Akansel Cosgun, Vibhavari Dasagi, Peter Corke, Jurgen Leitner
2020 arXiv   pre-print
In this paper, we show that it is possible to learn fabric folding skills in only an hour of self-supervised real robot experience, without human supervision or simulation.  ...  We demonstrate our method on a set of towel-folding tasks, and show that our approach is able to discover sequential folding strategies, purely from trial-and-error.  ...  In this paper we present a method for a robot to learn to manipulate deformable objects directly in the real world, without reward function engineering, human supervision, human demonstration or simulation  ... 
arXiv:2010.03209v1 fatcat:wzajwisuizhbzla6wbohudqm7u

Making Sense of Vision and Touch: Self-Supervised Learning of Multimodal Representations for Contact-Rich Tasks [article]

Michelle A. Lee, Yuke Zhu, Krishnan Srinivasan, Parth Shah, Silvio Savarese, Li Fei-Fei, Animesh Garg, Jeannette Bohg
2019 arXiv   pre-print
We use self-supervision to learn a compact and multimodal representation of our sensory inputs, which can then be used to improve the sample efficiency of our policy learning.  ...  Results for simulated and real robot experiments are presented.  ...  Fig. 2: Neural network architecture for multimodal representation learning with self-supervision.  ... 
arXiv:1810.10191v2 fatcat:uj3cpmgk7rdw7csyc52blkd6tq

Toward Sim-to-Real Directional Semantic Grasping [article]

Shariq Iqbal, Jonathan Tremblay, Thang To, Jia Cheng, Erik Leitch, Andy Campbell, Kirby Leung, Duncan McKay, Stan Birchfield
2020 arXiv   pre-print
The system is an example of end-to-end (mapping input monocular RGB images to output Cartesian motor commands) grasping of objects from multiple pre-defined object-centric orientations, such as from the  ...  We address the problem of directional semantic grasping, that is, grasping a specific object from a specific direction.  ...  ACKNOWLEDGMENTS We thank Stephen Tyree, Iuri Frosio, Ryan Oldja, and Abhishek Raj Dutta for their help with the project.  ... 
arXiv:1909.02075v3 fatcat:b77jaqiv75ckphtjyhd5dy6gou

Robotics Dexterous Grasping: The Methods Based on Point Cloud and Deep Learning

Haonan Duan, Peng Wang, Yayu Huang, Guangyun Xu, Wei Wei, Xiaofei Shen
2021 Frontiers in Neurorobotics  
A comprehensive review of the methods based on point cloud and deep learning for robotic dexterous grasping is given in this paper from three perspectives.  ...  This review aims to provide a guideline for robotic dexterous grasping researchers and developers.  ...  Reinforcement learning is currently the most powerful tool for self-exploration.  ... 
doi:10.3389/fnbot.2021.658280 pmid:34177509 pmcid:PMC8221534 fatcat:b33jc3vjsnd2taidrlznl3l3sq

Recognizing Object Affordances to Support Scene Reasoning for Manipulation Tasks [article]

Fu-Jen Chu, Ruinian Xu, Chao Tang, Patricio A. Vela
2020 arXiv   pre-print
Additionally, task-oriented grasping for cutting and pounding actions demonstrate the exploitation of multiple affordances for a given object to complete specified tasks.  ...  AffContext is linked to the Planning Domain Definition Language (PDDL) with an augmented state keeper for action planning across temporally spaced goal-oriented tasks.  ...  Several robotic manipulation experiments ranging from simple movements, to task-oriented grasping, to goal-oriented tasks demonstrate that the approach translates to actual manipulation for an embodied  ... 
arXiv:1909.05770v2 fatcat:qzinrh63srg2pdhfuu42liiqlm

droidlet: modular, heterogenous, multi-modal agents [article]

Anurag Pratik, Soumith Chintala, Kavya Srinet, Dhiraj Gandhi, Rebecca Qian, Yuxuan Sun, Ryan Drew, Sara Elkafrawy, Anoushka Tiwari, Tucker Hart, Mary Williamson, Abhinav Gupta (+1 others)
2021 arXiv   pre-print
It allows us to exploit both large-scale static datasets in perception and language and sophisticated heuristics often used in robotics; and provides tools for interactive annotation.  ...  On the other hand, in the field of robotics, large-scale learning has always been difficult. Supervision is hard to gather and real world physical interactions are expensive.  ...  Simulations allow researchers to explore self-supervised and self-directed learning agents that have access to large data, and so build on recent advances in ML that have demonstrated great success in  ... 
arXiv:2101.10384v1 fatcat:llxpj5s2vvdvzd5ecx54g7amza
Showing results 1 — 15 out of 5,553 results