757 Hits in 4.5 sec

Learning Visual Servoing with Deep Features and Fitted Q-Iteration [article]

Alex X. Lee, Sergey Levine, Pieter Abbeel
2017 arXiv   pre-print
A key component of our approach is to use a sample-efficient fitted Q-iteration algorithm to learn which features are best suited for the task at hand.  ...  We demonstrate that standard deep features, in our case taken from a model trained for object classification, can be used together with a bilinear predictive model to learn an effective visual servo that  ...  ACKNOWLEDGEMENTS This research was funded in part by the Army Research Office through the MAST program, the Berkeley DeepDrive consortium, and NVIDIA. Alex Lee was also supported by the NSF GRFP.  ... 
arXiv:1703.11000v2 fatcat:34qmpaeipbdmpmpd3rac24lbpi
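The fitted Q-iteration algorithm mentioned in this abstract can be illustrated on a toy problem. The sketch below is purely illustrative and is not the paper's method (which learns over deep visual features): it uses a tiny deterministic chain MDP and tabular averaging as the "regression" step, but it shows the defining structure of fitted Q-iteration — collect a fixed batch of transitions once, then repeatedly regress Q onto the Bellman targets computed from that batch.

```python
import numpy as np

# Toy 1-D chain MDP: states 0..4, actions {0: left, 1: right};
# arriving at state 4 yields reward 1. All names here are illustrative.
n_states, n_actions, gamma = 5, 2, 0.9

def step(s, a):
    s2 = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
    return s2, (1.0 if s2 == n_states - 1 else 0.0)

# Collect a batch of transitions (s, a, r, s') once, offline.
rng = np.random.default_rng(0)
batch = []
for _ in range(500):
    s, a = rng.integers(n_states), rng.integers(n_actions)
    s2, r = step(s, a)
    batch.append((s, a, r, s2))

# Fitted Q-iteration: repeatedly regress Q onto Bellman targets
# r + gamma * max_a' Q(s', a') computed from the fixed batch.
Q = np.zeros((n_states, n_actions))
for _ in range(50):
    targets = np.zeros_like(Q)
    counts = np.zeros_like(Q)
    for s, a, r, s2 in batch:
        targets[s, a] += r + gamma * Q[s2].max()
        counts[s, a] += 1
    Q = targets / np.maximum(counts, 1)  # tabular "regression" = averaging

policy = Q.argmax(axis=1)  # greedy policy: move right toward the reward
```

In the deep-feature setting of the paper, the averaging step is replaced by fitting a parametric Q-function to the same Bellman targets; the batch-then-fit loop is what makes the method sample-efficient.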

Deep Direct Visual Servoing of Tendon-Driven Continuum Robots [article]

Ibrahim Abdulhafiz, Ali A. Nazari, Taha Abbasi-Hashemi, Amir Jalali, Kourosh Zareinia, Sajad Saeedi, Farrokh Janabi-Sharifi
2022 arXiv   pre-print
This paper presents the control of a single-section tendon-driven continuum robot using a modified VGG-16 deep learning network and an eye-in-hand direct visual servoing approach.  ...  We hypothesize that employing deep learning models and implementing direct visual servoing can effectively resolve the issue by eliminating such intermediate steps, enabling control of a continuum robot  ...  The success of feature-based visual servoing, in fact, depends on the tracking success and performance, i.e., the speed, accuracy, robustness, and redundancy of the visual features [19].  ... 
arXiv:2111.02580v3 fatcat:svuxpjyb4vfejby45zl4a342bq

Robotic Grasping using Deep Reinforcement Learning [article]

Shirin Joshi, Sulabh Kumra, Ferat Sahin
2020 arXiv   pre-print
We use the double deep Q-learning framework along with a novel Grasp-Q-Network to output grasp probabilities used to learn grasps that maximize the pick success.  ...  The use of a deep learning based approach reduces the complexity caused by the use of hand-designed features.  ...  CONCLUSIONS A method for learning robust grasps is presented using a deep reinforcement learning framework that consists of a Grasp-Q-Network which produces grasp probabilities and a visual servoing mechanism  ... 
arXiv:2007.04499v1 fatcat:l636nzcmqbd2dfek7djyrchlq4
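The double deep Q-learning target used by approaches like this can be sketched compactly. The snippet below is a stand-in, not the paper's Grasp-Q-Network: it uses two small tabular arrays in place of the online and target networks, but the target computation is the characteristic double-DQN step — the online network selects the next action, the target network evaluates it, which reduces the overestimation bias of vanilla Q-learning.

```python
import numpy as np

# Two random tabular Q approximations stand in for the online and
# target networks; shapes and values here are placeholders.
rng = np.random.default_rng(1)
n_states, n_actions, gamma = 4, 3, 0.99
q_online = rng.normal(size=(n_states, n_actions))
q_target = rng.normal(size=(n_states, n_actions))

def double_q_target(r, s_next, done):
    """Double-DQN bootstrap target for one transition."""
    if done:
        return r
    a_star = q_online[s_next].argmax()           # online net picks the action
    return r + gamma * q_target[s_next, a_star]  # target net evaluates it

y = double_q_target(r=1.0, s_next=2, done=False)
```

In the grasping setting, the "state" would be the camera image plus gripper pose and the reward would be pick success; training minimizes the squared error between the network's Q estimate and targets like `y`.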

Visual Servoing for Pose Control of Soft Continuum Arm in a Structured Environment [article]

Shivani Kamtikar, Samhita Marri, Benjamin Walt, Naveen Kumar Uppalapati, Girish Krishnan, Girish Chowdhary
2022 arXiv   pre-print
However, robust visual servoing is challenging as it requires reliable feature extraction from the image, accurate control models and sensors to perceive the shape of the arm, both of which can be hard  ...  This letter circumvents these challenges by presenting a deep neural network-based method to perform smooth and robust 3D positioning tasks on a soft arm by visual servoing using a camera mounted at the  ...  Recent advances in visual servoing and deep learning in robots can be effectively used to overcome the limitations in both sensing and modeling of SCA.  ... 
arXiv:2202.05200v2 fatcat:sk7jptab7vdxhbnaty25osx2lu

Siamese Convolutional Neural Network for Sub-millimeter-accurate Camera Pose Estimation and Visual Servoing [article]

Cunjun Yu, Zhongang Cai, Hung Pham, Quang-Cuong Pham
2019 arXiv   pre-print
The key feature of our neural network is that it outputs the relative pose between any pair of images, and does so with sub-millimeter accuracy.  ...  Visual Servoing (VS), where images taken from a camera typically attached to the robot end-effector are used to guide the robot motions, is an important technique to tackle robotic tasks that require a  ...  Deep learning-based Visual Servoing Deep learning-based visual servoing often performs camera pose estimation iteratively while guiding the motion of the robot towards the target pose so as to achieve  ... 
arXiv:1903.04713v1 fatcat:clnmrldyx5gzvpwbjkajvk3ivm
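The defining Siamese property — one shared feature extractor applied to both images, with a head regressing the relative pose from the pair of embeddings — can be sketched in a few lines. This is not the paper's architecture; the weights below are random placeholders and the "network" is a single linear-plus-tanh layer, purely to show the weight sharing between branches.

```python
import numpy as np

# Random placeholder weights; a real model would be a trained CNN.
rng = np.random.default_rng(3)
W_feat = rng.normal(scale=0.1, size=(64, 32 * 32))  # shared by both branches
W_head = rng.normal(scale=0.1, size=(6, 2 * 64))    # regresses a 6-DoF pose

def features(img):
    # The SAME weights embed both images: this is the Siamese constraint.
    return np.tanh(W_feat @ img.ravel())

def relative_pose(img_a, img_b):
    fa, fb = features(img_a), features(img_b)
    return W_head @ np.concatenate([fa, fb])  # (tx, ty, tz, rx, ry, rz)

img_a = rng.normal(size=(32, 32))
img_b = rng.normal(size=(32, 32))
pose_ab = relative_pose(img_a, img_b)
```

Because the network outputs a relative pose for any image pair, the same model serves both one-shot pose estimation and iterative visual servoing toward a target view.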

Learning Hand-Eye Coordination for Robotic Grasping with Deep Learning and Large-Scale Data Collection [article]

Sergey Levine, Peter Pastor, Alex Krizhevsky, Deirdre Quillen
2016 arXiv   pre-print
To train our network, we collected over 800,000 grasp attempts over the course of two months, using between 6 and 14 robotic manipulators at any given time, with differences in camera placement and hardware  ...  This requires the network to observe the spatial relationship between the gripper and objects in the scene, thus learning hand-eye coordination.  ...  robots, Max Bajracharya and Nicolas Hudson for providing us with a baseline perception pipeline, and Vincent Vanhoucke and Jeff Dean for support and organization.  ... 
arXiv:1603.02199v4 fatcat:kppwik5y3vfx5bzsovt47udd3a

Learning hand-eye coordination for robotic grasping with deep learning and large-scale data collection

Sergey Levine, Peter Pastor, Alex Krizhevsky, Julian Ibarz, Deirdre Quillen
2017 The International Journal of Robotics Research  
To train our network, we collected over 800,000 grasp attempts over the course of two months, using between 6 and 14 robotic manipulators at any given time, with differences in camera placement and hardware  ...  This requires the network to observe the spatial relationship between the gripper and objects in the scene, thus learning hand-eye coordination.  ...  robots, Max Bajracharya and Nicolas Hudson for providing us with a baseline perception pipeline, and Vincent Vanhoucke and Jeff Dean for support and organization.  ... 
doi:10.1177/0278364917710318 fatcat:shisdrnqireejc2zp5zp2u5z4m

Learning a generative model for robot control using visual feedback [article]

Nishad Gothoskar, Miguel Lázaro-Gredilla, Abhishek Agarwal, Yasemin Bekiroglu, Dileep George
2020 arXiv   pre-print
This, in turn, guides motion of the robot and allows for matching the target locations of the features in significantly fewer steps than state-of-the-art visual servoing methods.  ...  We demonstrate the effectiveness of our method by executing grasping and tight-fit insertions on robots with inaccurate controllers.  ...  Fast Servoing and Comparison with VISP Visual servoing is the technique of using visual feedback to control a robot.  ... 
arXiv:2003.04474v1 fatcat:nbwuaxhnpzcqtchorlxpdwnuce
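The snippet above compares against classical visual servoing (VISP implements it), so the textbook baseline is worth sketching. The code below is the standard image-based visual servoing (IBVS) step for point features, not this paper's generative-model method: for a normalized image point (x, y) at depth Z, the interaction matrix L maps the 6-DoF camera twist to the feature velocity, and the control law v = -λ L⁺ e drives the feature error e toward zero.

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Standard IBVS interaction matrix for one point feature at depth Z."""
    return np.array([
        [-1 / Z, 0, x / Z, x * y, -(1 + x**2), y],
        [0, -1 / Z, y / Z, 1 + y**2, -x * y, -x],
    ])

def ibvs_step(features, targets, depths, lam=0.5):
    """One servoing step: camera twist v = -lam * pinv(L) @ e."""
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    e = (np.asarray(features) - np.asarray(targets)).ravel()
    return -lam * np.linalg.pinv(L) @ e  # (vx, vy, vz, wx, wy, wz)

v = ibvs_step(features=[(0.1, 0.0), (-0.1, 0.0), (0.0, 0.1)],
              targets=[(0.0, 0.0), (0.0, 0.0), (0.0, 0.0)],
              depths=[1.0, 1.0, 1.0])
```

The many small steps this law takes to converge are exactly what the generative-model approach above claims to reduce, by reasoning about where the features should land rather than descending the error gradient one gain-limited step at a time.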

Alignment Method of Combined Perception for Peg-in-Hole Assembly with Deep Reinforcement Learning

Yongzhi Wang, Lei Zhao, Qian Zhang, Ran Zhou, Liping Wu, Junqiao Ma, Bo Zhang, Yu Zhang, Kelvin Wong
2021 Journal of Sensors  
Finally, the agent learns the combined-perception alignment skill as iterative training progresses.  ...  Therefore, this paper proposes an alignment method of combined perception for peg-in-hole assembly with self-supervised deep reinforcement learning.  ...  In recent years, the field of visual perception has also made substantial research progress with the rapid development of deep learning and deep reinforcement learning.  ... 
doi:10.1155/2021/5073689 fatcat:2mlvcinutvfcrpzt6zhdty2ete

Visual Guidance and Automatic Control for Robotic Personalized Stent Graft Manufacturing

Yu Guo, Miao Sun, Frank Po Wen Lo, Benny Lo
2019 2019 International Conference on Robotics and Automation (ICRA)  
The scale-invariant feature transform (SIFT) method and color filtering are implemented for stereo matching and feature identifications for object localization.  ...  To maintain the clear view of the sewing process, a visual-servoing system is developed for guiding the stereo microscopes for tracking the needle movements.  ...  Deep Reinforcement Learning based Visual-servoing The experiments for validating the proposed deep learning framework for visual-servoing were conducted in real-world scenarios with a 3-DOF robotic arm  ... 
doi:10.1109/icra.2019.8794123 dblp:conf/icra/GuoSLL19 fatcat:bbvufgy3unftxdycsfud7cotnm

Model Predictive Manipulation of Compliant Objects with Multi-Objective Optimizer and Adversarial Network for Occlusion Compensation [article]

Jiaming Qi, Dongyu Li, Yufeng Gao, Peng Zhou, David Navarro-Alarcon
2022 arXiv   pre-print
A deep adversarial network is developed to robustly compensate for visual occlusions in the camera's field of view, which enables guiding the shaping task even with partial observations of the object.  ...  Our method uses an efficient online surface/curve fitting algorithm that quantifies the object's geometry with a compact vector of features; this feedback-like vector enables establishing an explicit shape  ...  (Eq. 14), where n_x, n_y ∈ ℕ* are the fitting orders along the x and y directions, and q_jl ∈ ℝ is the shape weight.  ... 
arXiv:2205.09987v1 fatcat:ebjk3klunvcgfdqkcib2qmr3li
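The shape weights q_jl and fitting orders n_x, n_y in the snippet suggest a 2-D polynomial surface fit. The sketch below is an assumption about that form, not the paper's exact algorithm: it fits z = Σ_{j,l} q_jl x^j y^l by linear least squares, which is the standard way such shape-weight vectors are obtained and shows why the result is a compact feedback feature.

```python
import numpy as np

def fit_surface(x, y, z, n_x=2, n_y=2):
    """Least-squares fit of z = sum_{j,l} q_jl * x^j * y^l.

    n_x, n_y are the fitting orders along x and y; the returned
    array q satisfies q[j, l] = q_jl (the shape weights).
    """
    # Design matrix: one column per monomial x^j * y^l.
    A = np.column_stack([x**j * y**l
                         for j in range(n_x + 1) for l in range(n_y + 1)])
    q, *_ = np.linalg.lstsq(A, z, rcond=None)
    return q.reshape(n_x + 1, n_y + 1)

# Synthetic check: recover a known surface z = 1 + 2x + 3xy.
rng = np.random.default_rng(2)
x, y = rng.uniform(-1, 1, 200), rng.uniform(-1, 1, 200)
z = 1.0 + 2.0 * x + 3.0 * x * y
q = fit_surface(x, y, z)
```

The (n_x+1)(n_y+1) weights summarize the whole observed surface, so the controller can servo on this short vector instead of on raw point clouds.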

Visual Reconstruction and Localization based Robust Robotic 6-DoF Grasping in the Wild

Ji Liang, Jiguang Zhang, Bingbing Pan, Shibiao Xu, Guangheng Zhao, Ge Yu, Xiaopeng Zhang
2021 IEEE Access  
Finally, our framework realizes the full functionality of visual 6-DoF robotic grasping, including two different visual servoing and grasp planning strategies for grasping different objects.  ...  Even for occluded, unusually shaped, or small-scale objects, our method can still maintain robust grasping.  ...  It takes the color image obtained by a binocular camera and the corresponding depth image as input, and applies model-free deep reinforcement learning (Q-learning) to calculate the expected Q value.  ... 
doi:10.1109/access.2021.3079245 fatcat:oyisfxi22zdfzi3ljrl7chorfa

Real-Time Deep Learning Approach to Visual Servo Control and Grasp Detection for Autonomous Robotic Manipulation [article]

E. G. Ribeiro, R. Q. Mendes, V. Grassi Jr
2020 arXiv   pre-print
Therefore, the second network is trained to perform visual servo control, ensuring that the object remains in the robot's field of view.  ...  To the best of our knowledge, we have not found in the literature other works that achieve such precision with a controller learned from scratch.  ...  In a different approach, some works explore deep learning in visual servoing through convolutional neural networks.  ... 
arXiv:2010.06544v1 fatcat:nrc3l7qpm5du3gsor7zw6y3lme

Model Reference Tracking Control Solutions for a Visual Servo System Based on a Virtual State from Unknown Dynamics

Timotei Lala, Darius-Pavel Chirla, Mircea-Bogdan Radac
2021 Energies  
This paper focuses on validating a model-free Value Iteration Reinforcement Learning (MFVI-RL) control solution on a visual servo tracking system in a comprehensive manner, starting from theoretical convergence  ...  practical trade-offs, such as I/O data exploration quality and control performance leverage with data volume, control goal and controller complexity.  ...  For the practical MFVI-RL implementation of learning visual servo tracking, the Q-function is a feedforward deep neural network with two hidden layers of 100 and 50 rectified linear units (ReLU  ... 
doi:10.3390/en15010267 fatcat:urhtuf5mx5he3noq5wx4imzbvi

DDCO: Discovery of Deep Continuous Options for Robot Learning from Demonstrations [article]

Sanjay Krishnan, Roy Fox, Ion Stoica, Ken Goldberg
2017 arXiv   pre-print
This paper studies an extension to robot imitation learning, called Discovery of Deep Continuous Options (DDCO), where low-level continuous control skills parametrized by deep neural networks are learned  ...  In prior work, we proposed an algorithm called Deep Discovery of Options (DDO) to discover options to accelerate reinforcement learning in Atari games.  ...  Acknowledgments This research was performed at the AUTOLAB at UC Berkeley and the Real-Time Intelligent Secure Execution (RISE) Lab in affiliation with the Berkeley AI Research (BAIR) Lab and the CITRIS  ... 
arXiv:1710.05421v2 fatcat:jhapbwfoazcrldkwkipqfn7jru
Showing results 1 — 15 out of 757 results