138,570 Hits in 3.0 sec

Functional object-oriented network for manipulation learning

David Paulius, Yongqiang Huang, Roger Milton, William D. Buchanan, Jeanine Sam, Yu Sun
2016 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)  
This paper presents a novel structured knowledge representation called the functional object-oriented network (FOON) to model the connectivity of functionally related objects and their motions in manipulation  ...  The graphical model FOON is learned by observing object state changes and human manipulations with the objects.  ...  FUNCTIONAL OBJECT-ORIENTED NETWORK The proposed FOON is a bipartite network that contains motion nodes and object state nodes.  ... 
doi:10.1109/iros.2016.7759413 dblp:conf/iros/PauliusHMBSS16 fatcat:i6emxaixs5br7bftpoabzjmbhq
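
The snippet above describes FOON as a bipartite graph of object-state nodes and motion nodes. The sketch below is a minimal, hypothetical illustration of that structure in Python using networkx; the node labels, the add_functional_unit helper, and the slicing example are assumptions for illustration, not the authors' data format.

```python
# Minimal sketch of a FOON-style bipartite graph: object-state nodes
# connected through motion nodes, as stated in the abstract snippet.
import networkx as nx

foon = nx.DiGraph()

def add_functional_unit(graph, inputs, motion, outputs):
    """Link input object states -> motion node -> output object states."""
    motion_id = ("motion", motion, graph.number_of_nodes())
    graph.add_node(motion_id, kind="motion", label=motion)
    for obj, state in inputs:
        graph.add_node((obj, state), kind="object")
        graph.add_edge((obj, state), motion_id)
    for obj, state in outputs:
        graph.add_node((obj, state), kind="object")
        graph.add_edge(motion_id, (obj, state))

# Illustrative unit: slicing a whole tomato with a knife yields a sliced tomato.
add_functional_unit(
    foon,
    inputs=[("tomato", "whole"), ("knife", "clean")],
    motion="slice",
    outputs=[("tomato", "sliced"), ("knife", "dirty")],
)
print(foon.number_of_nodes(), "nodes,", foon.number_of_edges(), "edges")
```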

AI Meets Physical World -- Exploring Robot Cooking [article]

Yu Sun
2018 arXiv   pre-print
After "watching" over 200 instructional videos, a functional object-oriented network (FOON) is constructed to represent the observed manipulation skills.  ...  Using the network, robots can take a high-level task command such as "I want BBQ Ribs for dinner," decipher the task goal, seek the correct objects to operate on, and then generate and execute a sequence  ...  From the video, we construct a functional object-oriented network (FOON) to represent manipulation knowledge ( Figure 1 ) [18, 20, 25] .  ... 
arXiv:1804.07974v1 fatcat:m37e6wxzq5gv7m26aunyb4tuxq
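
The abstract says the robot takes a high-level goal and derives a sequence of manipulations from the network. One plausible way to sketch that retrieval, under the same hypothetical graph encoding as above, is a backward search from the goal object state; retrieve_task_tree below is an illustrative reconstruction, not the authors' algorithm.

```python
# Hedged sketch: walk backwards from a goal object state through motion nodes
# to collect the manipulations needed to reach it.
from collections import deque
import networkx as nx

def retrieve_task_tree(graph, goal):
    """Backward BFS from a goal object-state node; returns motion labels."""
    motions, queue, seen = [], deque([goal]), {goal}
    while queue:
        node = queue.popleft()
        for pred in graph.predecessors(node):
            if pred in seen:
                continue
            seen.add(pred)
            if graph.nodes[pred].get("kind") == "motion":
                motions.append(graph.nodes[pred]["label"])
            queue.append(pred)
    return list(reversed(motions))  # earliest required motion first

# Tiny example graph: whole tomato --slice--> sliced tomato.
g = nx.DiGraph()
g.add_node(("tomato", "whole"), kind="object")
g.add_node("slice", kind="motion", label="slice")
g.add_node(("tomato", "sliced"), kind="object")
g.add_edges_from([(("tomato", "whole"), "slice"), ("slice", ("tomato", "sliced"))])
print(retrieve_task_tree(g, ("tomato", "sliced")))  # ['slice']
```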

Robotic Grasp Manipulation Using Evolutionary Computing and Deep Reinforcement Learning [article]

Priya Shukla, Hitesh Kumar, G. C. Nandi
2020 arXiv   pre-print
Intelligent object manipulation for grasping is a challenging problem for robots. Unlike robots, humans almost immediately know how to manipulate objects for grasping due to years of learning.  ...  Further, for grasp orientation learning, we develop a deep reinforcement learning (DRL) model, which we name Grasp Deep Q-Network (GDQN), and benchmark our results against Modified VGG16 (MVGG16).  ...  Figure 1: Grasp Pose Estimation and Manipulation Details. Figure 2: Grasp Cycle for Orientation Learning. Figure 3: Vision: Object Detection by YOLO.  ... 
arXiv:2001.05443v1 fatcat:l4nbiqhetrgo3ij5r4ha264suy

Learning Task-Oriented Grasping for Tool Manipulation from Simulated Self-Supervision

Kuan Fang, Yuke Zhu, Animesh Garg, Andrey Kurenkov, Viraj Mehta, Li Fei-Fei, Silvio Savarese
2018 Robotics: Science and Systems XIV  
In this paper, we propose the Task-Oriented Grasping Network (TOG-Net) to jointly optimize both task-oriented grasping of a tool and the manipulation policy for that tool.  ...  Tool manipulation is vital for enabling robots to complete challenging task goals.  ...  We thank Erwin Coumans for helping with the Bullet simulator.  ... 
doi:10.15607/rss.2018.xiv.012 dblp:conf/rss/FangZGKMFS18 fatcat:2t4kddmxs5d2rlbav6oxoswqe4

Learning Task-Oriented Grasping for Tool Manipulation from Simulated Self-Supervision [article]

Kuan Fang, Yuke Zhu, Animesh Garg, Andrey Kurenkov, Viraj Mehta, Li Fei-Fei, Silvio Savarese
2018 arXiv   pre-print
In this paper, we propose the Task-Oriented Grasping Network (TOG-Net) to jointly optimize both task-oriented grasping of a tool and the manipulation policy for that tool.  ...  Tool manipulation is vital for enabling robots to complete challenging task goals.  ...  We thank Erwin Coumans for helping with the Bullet simulator.  ... 
arXiv:1806.09266v1 fatcat:oidrtb63kfbhbnummkgbt2a5jq

Recognizing Object Affordances to Support Scene Reasoning for Manipulation Tasks [article]

Fu-Jen Chu, Ruinian Xu, Chao Tang, Patricio A. Vela
2020 arXiv   pre-print
Additionally, task-oriented grasping for cutting and pounding actions demonstrates the exploitation of multiple affordances for a given object to complete specified tasks.  ...  Visual benchmarking shows that the trained network, called AffContext, reduces the performance gap between object-agnostic and object-informed affordance recognition.  ...  The deep network learns to generalize affordance segmentation across unseen object categories in support of robotic manipulation.  ... 
arXiv:1909.05770v2 fatcat:qzinrh63srg2pdhfuu42liiqlm

Learning Dexterous Manipulation Policies from Experience and Imitation [article]

Vikash Kumar, Abhishek Gupta, Emanuel Todorov, Sergey Levine
2016 arXiv   pre-print
We explore learning-based approaches for feedback control of a dexterous five-finger hand performing non-prehensile manipulation.  ...  Nevertheless, the neural network has its advantages: it uses only tactile and proprioceptive feedback but no visual feedback about the object (i.e. it performs the task blind) and learns a time-invariant  ...  Here, the cost function included an extra term for desired object position and orientation. The final cost was scaled by a factor of 2 relative to the running cost.  ... 
arXiv:1611.05095v1 fatcat:eshs7ar6ybfsfm5ni43f36jipe
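
The excerpt mentions a cost function with an extra term for the desired object position and orientation, with the final cost scaled by a factor of 2 relative to the running cost. The sketch below shows one plausible form of such a cost; the weights, the quaternion distance, and the function names are assumptions, not the paper's definitions.

```python
# Hedged sketch of a trajectory cost with an object-pose term and a terminal
# cost weighted 2x relative to the running cost, as described in the excerpt.
import numpy as np

def object_pose_cost(pos, quat, pos_des, quat_des, w_pos=1.0, w_ori=0.5):
    pos_err = np.sum((pos - pos_des) ** 2)
    # Quaternion distance: 1 - |<q1, q2>| is 0 when the orientations match.
    ori_err = 1.0 - abs(np.dot(quat / np.linalg.norm(quat),
                               quat_des / np.linalg.norm(quat_des)))
    return w_pos * pos_err + w_ori * ori_err

def trajectory_cost(running_costs, final_state, pos_des, quat_des):
    pos, quat = final_state
    final_cost = 2.0 * object_pose_cost(pos, quat, pos_des, quat_des)
    return sum(running_costs) + final_cost

# Example: three running-cost samples plus a terminal pose penalty.
final = (np.array([0.1, 0.0, 0.3]), np.array([1.0, 0.0, 0.0, 0.0]))
print(trajectory_cost([0.2, 0.15, 0.1], final,
                      pos_des=np.array([0.0, 0.0, 0.3]),
                      quat_des=np.array([1.0, 0.0, 0.0, 0.0])))
```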

Artificial Neural Networks in Robotic Applications

Şeref Sağıroğlu
1998 Mathematical and Computational Applications  
This paper examines the use of Artificial Neural Networks (ANNs) as a new technique to solve such problems in the field of robotics.  ...  Fig. 2 shows how an ANN learns forward kinematics. For both kinematics, ANNs learn the robot function without requiring any a priori information about the manipulator.  ...  Suzuki, Feedback-Error-Learning Neural Network for Trajectory Control of a Robotic Manipulator, Neural Networks, 251-265, 1988. 41] M Y Kawato, Y Uno, M. Isobe and R.  ... 
doi:10.3390/mca3020067 fatcat:drs6pwbncne3phf7czgqnyixly
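
The excerpt states that ANNs can learn a robot's kinematics without any a priori model of the manipulator. As a hedged illustration, the sketch below fits a small multilayer perceptron to sampled forward-kinematics data of an assumed planar 2-link arm; the link lengths, network size, and sample counts are arbitrary choices, not the paper's setup.

```python
# Sketch: an MLP learns forward kinematics purely from (joint angle,
# end-effector position) samples of a planar 2-link arm.
import numpy as np
from sklearn.neural_network import MLPRegressor

L1, L2 = 1.0, 0.7  # assumed link lengths

def forward_kinematics(q):  # q: (N, 2) joint angles
    x = L1 * np.cos(q[:, 0]) + L2 * np.cos(q[:, 0] + q[:, 1])
    y = L1 * np.sin(q[:, 0]) + L2 * np.sin(q[:, 0] + q[:, 1])
    return np.stack([x, y], axis=1)

rng = np.random.default_rng(0)
q_train = rng.uniform(-np.pi, np.pi, size=(5000, 2))
x_train = forward_kinematics(q_train)

net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500)
net.fit(q_train, x_train)

q_test = rng.uniform(-np.pi, np.pi, size=(200, 2))
err = np.linalg.norm(net.predict(q_test) - forward_kinematics(q_test), axis=1)
print("mean position error:", err.mean())
```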

A Road-map to Robot Task Execution with the Functional Object-Oriented Network [article]

David Paulius, Alejandro Agostini, Yu Sun, Dongheui Lee
2021 arXiv   pre-print
Following work on joint object-action representations, the functional object-oriented network (FOON) was introduced as a knowledge graph representation for robots.  ...  In this work, we outline a road-map for future development of FOON and its application in robotic systems for task planning as well as knowledge acquisition from demonstration.  ...  FUNCTIONAL OBJECT-ORIENTED NETWORK A. Basics of a FOON A FOON consists of two types of nodes in its bipartite structure: object nodes and motion nodes.  ... 
arXiv:2106.00158v1 fatcat:7xu4oesa2jgnxfs67rumqlw57m

Active Perception and Representation for Robotic Manipulation [article]

Youssef Zaky, Gaurav Paruthi, Bryan Tripp, James Bergstra
2020 arXiv   pre-print
Our agent uses viewpoint changes to localize objects, to learn state representations in a self-supervised manner, and to perform goal-directed actions.  ...  In contrast, recent applications of reinforcement learning in robotic manipulation employ cameras as passive sensors. These are carefully placed to view a scene from a fixed pose.  ...  For detailed derivations of the loss functions for policy and value learning we refer the reader to [28] .  ... 
arXiv:2003.06734v1 fatcat:c7itiu3d7zgzvgdyqoer23fzfi

Learning to Move an Object by the Humanoid Robots by Using Deep Reinforcement Learning [chapter]

Simge Nur Aslan, Burak Taşçı, Ayşegül Uçar, Cüneyt Güzeliş
2021 Ambient Intelligence and Smart Environments  
This paper proposes an algorithm for humanoid robots to learn to move a desired object.  ...  A Deep Deterministic Policy Gradient (DDPG) network is used for grasping by means of continuous actions.  ...  DQN is applied for the robot to walk to the target. DDPG is applied for the robot to manipulate the target object.  ... 
doi:10.3233/aise210092 fatcat:ivxlgkkggbbi3cortv5mzkxwii

Real-time Human-Robot Collaborative Manipulations of Cylindrical and Cubic Objects via Geometric Primitives and Depth Information [article]

Huixu Dong, Jiadong Zhou, Haoyong Yu
2021 arXiv   pre-print
Thus, robots can manipulate them through the real-time detection of elliptic and rectangular shape primitives formed by the circular and rectangular tops of these objects.  ...  We devise a robust grasping system that enables a robot to manipulate cylindrical and cubic objects in collaboration scenarios via the proposed perception strategy, including the detection of elliptic and  ...  is an end-to-end learning model, we just need to formulate a global loss function for ellipse or rectangle detection.  ... 
arXiv:2106.14461v1 fatcat:m3fcikbgprc4ji6lb64afvjxsi

Modeling and Control of 5DOF Robot Arm Using Fuzzy Logic Supervisory Control

Mohammad Amin Rashidifar, Ali Amin Rashidifar, Darvish Ahmadi
2013 IAES International Journal of Robotics and Automation  
The key objective of the paper is to model the robotic arm using D-H parameters.  ...  This paper aims to model the forward and inverse kinematics of a 5-DOF robotic arm for an easy pick-and-place application. An overall D-H representation of the forward and inverse matrices is obtained.  ...  This learning method works similarly to that of neural networks. The Fuzzy Logic Toolbox function that accomplishes this membership function parameter adjustment is called ANFIS.  ... 
doi:10.11591/ijra.v2i2.2974 fatcat:grnazcnrdbbfrgzykw6bwp25iq
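
Since the paper's key step is modeling the arm with D-H parameters, the sketch below shows the standard D-H homogeneous transform and how chaining per-joint transforms yields forward kinematics; the 5-DOF parameter table is a placeholder, not the paper's arm.

```python
# Classic D-H convention: one homogeneous transform per link from
# (theta, d, a, alpha), chained to give the end-effector pose.
import numpy as np

def dh_transform(theta, d, a, alpha):
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def forward_kinematics(joint_angles, dh_table):
    T = np.eye(4)
    for theta, (d, a, alpha) in zip(joint_angles, dh_table):
        T = T @ dh_transform(theta, d, a, alpha)
    return T  # 4x4 pose of the end effector in the base frame

# Placeholder 5-DOF table: (d, a, alpha) per joint.
dh_table = [(0.1, 0.0, np.pi / 2), (0.0, 0.3, 0.0), (0.0, 0.25, 0.0),
            (0.0, 0.0, np.pi / 2), (0.08, 0.0, 0.0)]
print(forward_kinematics([0.1, 0.2, -0.3, 0.4, 0.0], dh_table)[:3, 3])
```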

Full Workspace Generation of Serial-link Manipulators by Deep Learning based Jacobian Estimation [article]

Peiyuan Liao, Jiajun Mao
2018 arXiv   pre-print
Apart from solving complicated problems that require a certain level of intelligence, fine-tuned deep neural networks can also create fast algorithms for slow, numerical tasks.  ...  In this paper, we introduce an improved version of [1]'s work, a fast, deep-learning framework capable of generating the full workspace of serial-link manipulators.  ...  From an object-oriented programming perspective, for a Manipulator object we would have two methods: Manipulator.forwardKine(q), which corresponds to K(q), and Manipulator.inverseKine(xi), which corresponds to  ... 
arXiv:1809.05020v2 fatcat:fc7dhhxxr5cydkplhdsl7qxxr4
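
The excerpt frames kinematics in object-oriented terms, with Manipulator.forwardKine(q) and Manipulator.inverseKine(xi) methods. The sketch below fills in those two methods for an assumed planar 2-link arm, using a finite-difference Jacobian and damped least-squares IK as stand-ins for the paper's learned Jacobian estimator, and ends with the kind of brute-force workspace sampling that such an estimator is meant to accelerate.

```python
# Sketch of the object-oriented framing from the excerpt: forwardKine(q),
# inverseKine(xi), and a brute-force workspace sample for a 2-link arm.
import numpy as np

class Manipulator:
    def __init__(self, link_lengths):
        self.L = np.asarray(link_lengths)

    def forwardKine(self, q):
        angles = np.cumsum(q)
        return np.array([np.sum(self.L * np.cos(angles)),
                         np.sum(self.L * np.sin(angles))])

    def jacobian(self, q, eps=1e-6):
        # Finite-difference Jacobian; the paper learns this with a network.
        J = np.zeros((2, len(q)))
        for i in range(len(q)):
            dq = np.zeros_like(q, dtype=float)
            dq[i] = eps
            J[:, i] = (self.forwardKine(q + dq) - self.forwardKine(q - dq)) / (2 * eps)
        return J

    def inverseKine(self, xi, q0=None, iters=100, lam=0.1):
        q = np.zeros(len(self.L)) if q0 is None else np.array(q0, float)
        for _ in range(iters):
            e = xi - self.forwardKine(q)
            J = self.jacobian(q)
            q += J.T @ np.linalg.solve(J @ J.T + lam * np.eye(2), e)
        return q

arm = Manipulator([1.0, 0.7])
q_sol = arm.inverseKine(np.array([1.2, 0.5]))
print(arm.forwardKine(q_sol))  # should be close to [1.2, 0.5]

# Brute-force workspace sample (what a fast learned model would replace).
qs = np.random.uniform(-np.pi, np.pi, size=(2000, 2))
points = np.array([arm.forwardKine(q) for q in qs])
print("approx. reach:", np.linalg.norm(points, axis=1).max())
```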

Learning Visual Affordances with Target-Orientated Deep Q-Network to Grasp Objects by Harnessing Environmental Fixtures [article]

Hengyue Liang, Xibai Lou, Yang Yang, Changhyun Choi
2021 arXiv   pre-print
We formulate the problem as visual affordance learning, for which a Target-Oriented Deep Q-Network (TO-DQN) is proposed to efficiently learn visual affordance maps (i.e., Q-maps) to guide robot actions.  ...  This paper introduces a challenging object grasping task and proposes a self-supervised learning approach.  ...  Deep Q-Network: DQN is a model-free RL algorithm that features a deep neural network as a Q-function approximator, a discrete action space, and experience replay for stable learning convergence [9].  ... 
arXiv:1910.03781v2 fatcat:pqvt7a74rjg6hiy566nxbgslca
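
The excerpt lists the core DQN ingredients: a deep network as Q-function approximator, a discrete action space, and experience replay. The PyTorch sketch below wires those three pieces together in a minimal form; the dimensions and hyperparameters are arbitrary assumptions, and this is not the paper's TO-DQN architecture.

```python
# Minimal DQN ingredients: Q-network over discrete actions, epsilon-greedy
# action selection, and an experience-replay buffer with a TD update.
import random
from collections import deque
import torch
import torch.nn as nn

STATE_DIM, N_ACTIONS = 16, 4

q_net = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(), nn.Linear(64, N_ACTIONS))
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
replay = deque(maxlen=10000)

def select_action(state, epsilon=0.1):
    if random.random() < epsilon:
        return random.randrange(N_ACTIONS)
    with torch.no_grad():
        return int(q_net(state).argmax())

def train_step(batch_size=32, gamma=0.99):
    if len(replay) < batch_size:
        return
    s, a, r, s2 = zip(*random.sample(replay, batch_size))
    s, s2 = torch.stack(s), torch.stack(s2)
    a, r = torch.tensor(a), torch.tensor(r)
    q = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        target = r + gamma * q_net(s2).max(1).values
    loss = nn.functional.mse_loss(q, target)
    optimizer.zero_grad(); loss.backward(); optimizer.step()

# Fill the buffer with random transitions just to exercise one update.
for _ in range(200):
    s = torch.randn(STATE_DIM)
    replay.append((s, select_action(s), random.random(), torch.randn(STATE_DIM)))
train_step()
```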
Showing results 1 — 15 out of 138,570 results