
Deep Learning Anthropomorphic 3D Point Clouds from a Single Depth Map Camera Viewpoint

Nolan Lunscher, John Zelek
2017 IEEE International Conference on Computer Vision Workshops (ICCVW)
We apply a deep learning approach to the domain of foot scanning, and present a method to reconstruct a 3D point cloud from a single input depth map.  ...  We train a view-synthesis-based network and show that our method can produce foot scans with accuracies of 1.55 mm from a single input depth map.  ...  Secondly, we are the first to apply deep learning to facilitate more efficient 3D scanning of anthropomorphic body parts; i.e., we learn other 3D viewpoints from a single viewpoint and thus are subsequently  ...
doi:10.1109/iccvw.2017.87 dblp:conf/iccvw/LunscherZ17 fatcat:ohbept76mjerxlaejnhczd2gsq
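For context on the single-depth-map setting these scanning papers work in, the sketch below shows the standard pinhole back-projection that turns one depth map into a partial point cloud. It is a generic illustration, not the authors' code; the intrinsics (fx, fy, cx, cy) and the depth_to_point_cloud helper are placeholders.

```python
# A minimal sketch of back-projecting a single depth map into a 3D point
# cloud with a pinhole camera model. The intrinsic values below are
# illustrative placeholders, not values from the paper.
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Convert a (H, W) depth map in metres to an (N, 3) point cloud."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    valid = depth > 0                      # skip missing-depth pixels
    z = depth[valid]
    x = (u[valid] - cx) * z / fx           # pinhole back-projection
    y = (v[valid] - cy) * z / fy
    return np.stack([x, y, z], axis=-1)

# Usage with a synthetic depth map:
depth = np.full((480, 640), 0.8)           # flat surface 0.8 m away
cloud = depth_to_point_cloud(depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5)
print(cloud.shape)                          # (307200, 3)
```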

Deep Learning Whole Body Point Cloud Scans from a Single Depth Map

Nolan Lunscher, John Zelek
2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
In this paper we demonstrate that by leveraging deep learning it is possible to create very simple whole body scanners that only require a single input depth map to operate.  ...  Deep learning models have emerged as the leading method of tackling visual tasks, including various aspects of 3D reconstruction.  ...  Methods: In order to produce a completed point cloud scan from only a single input depth map, we leverage the ideas of deep learning view synthesis.  ...
doi:10.1109/cvprw.2018.00157 dblp:conf/cvpr/LunscherZ18 fatcat:dlwczxxio5b2dbwalouzuioddu
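The snippet mentions a view-synthesis network that predicts unseen viewpoints from a single input depth map. Below is a minimal encoder-decoder sketch of that idea in PyTorch; the architecture, layer sizes, and the DepthViewSynthesis name are illustrative assumptions, not the model from the paper.

```python
# A minimal sketch of a depth-map view-synthesis network: an
# encoder-decoder CNN mapping the input depth map to a predicted depth
# map of an unseen (e.g. opposite) viewpoint. Layer sizes are assumptions.
import torch
import torch.nn as nn

class DepthViewSynthesis(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.ReLU(),   # H/2
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # H/4
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(), # H/8
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),
        )

    def forward(self, depth_in):             # (B, 1, H, W) input view
        return self.decoder(self.encoder(depth_in))  # predicted novel view

pred = DepthViewSynthesis()(torch.rand(1, 1, 128, 128))
print(pred.shape)                             # torch.Size([1, 1, 128, 128])
```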

Point Cloud Completion of Foot Shape from a Single Depth Map for Fit Matching Using Deep Learning View Synthesis

John Zelek, Nolan Lunscher
2017 IEEE International Conference on Computer Vision Workshops (ICCVW)
We use a deep learning approach to allow for whole foot shape reconstruction from a single input depth map view by synthesizing a view containing the remaining information about the foot not seen from  ...  Ideally, in order to reduce the cost and complexity of scanning systems as much as possible, only a single image from a single camera would be needed.  ...  We follow a deep learning view synthesis approach to capture full anthropomorphic body part shape from a single depth map input viewpoint.  ... 
doi:10.1109/iccvw.2017.271 dblp:conf/iccvw/ZelekL17 fatcat:ydudt4uiiff3bbriun3brt5xfe
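Once the unseen view has been synthesized and back-projected, completing the scan amounts to expressing both partial clouds in one camera frame and concatenating them. A minimal sketch follows, assuming a known 180-degree extrinsic between the input and synthesized cameras; fuse_views and the specific R, t values are hypothetical.

```python
# A minimal sketch of fusing the input view's point cloud with the cloud
# back-projected from a synthesized opposite view. The 180-degree
# extrinsic transform is an illustrative assumption.
import numpy as np

def fuse_views(cloud_input, cloud_synth, R, t):
    """Express the synthesized view's points in the input camera frame
    and concatenate the two partial scans into one completed cloud."""
    return np.concatenate([cloud_input, cloud_synth @ R.T + t], axis=0)

# Opposite camera: rotated 180 degrees about y, displaced 1 m along z.
R = np.diag([-1.0, 1.0, -1.0])
t = np.array([0.0, 0.0, 1.0])
completed = fuse_views(np.random.rand(1000, 3), np.random.rand(1000, 3), R, t)
```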

Deep Learning Approaches to Grasp Synthesis: A Review [article]

Rhys Newbury, Morris Gu, Lachlan Chumbley, Arsalan Mousavian, Clemens Eppner, Jürgen Leitner, Jeannette Bohg, Antonio Morales, Tamim Asfour, Danica Kragic, Dieter Fox, Akansel Cosgun
2022 arXiv pre-print
Grasping is the process of picking an object by applying forces and torques at a set of contacts. Recent advances in deep-learning methods have allowed rapid progress in robotic object grasping.  ...  Furthermore, we found two 'supporting methods' around grasping that use deep learning to support the grasping process: shape approximation and affordances.  ...  Munoz [62] and Kasaei and Kasaei [60] both generate multiple views of the object from virtual cameras using a captured point cloud from a single viewpoint.  ...
arXiv:2207.02556v1 fatcat:xw77crimuzhu5dvxpoiizjhpx4

Dense Object Nets: Learning Dense Visual Object Descriptors By and For Robotic Manipulation [article]

Peter R. Florence, Lucas Manuelli, Russ Tedrake
2018 arXiv pre-print
We would like robots to visually perceive scenes and learn an understanding of the objects in them that (i) is task-agnostic and can be used as a building block for a variety of manipulation tasks, (ii) is generally applicable to both rigid and non-rigid objects, (iii) takes advantage of the strong priors provided by 3D vision, and (iv) is entirely learned from self-supervision.  ...  To achieve this we first use the depth images and camera poses to fuse a point cloud of the scene. We then randomly sample many grasps on the point cloud and prune those that are in collision.  ...
arXiv:1806.08756v2 fatcat:fa7265v2vvbfznijifddccbc5i
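The last excerpt describes fusing a point cloud and sampling collision-free grasps on it. The sketch below illustrates that pipeline step with a deliberately simplified gripper model (a sphere clearance check and a fixed top-down approach); sample_and_prune_grasps and all geometry parameters are stand-ins, not the authors' sampler.

```python
# A minimal sketch of grasp sampling with collision pruning on a point
# cloud. The sphere-shaped gripper volume and fixed -z approach direction
# are simplifying assumptions for illustration only.
import numpy as np

def sample_and_prune_grasps(cloud, n_samples=100, standoff=0.10,
                            gripper_radius=0.04, rng=None):
    """Sample top-down grasp centers above random cloud points and keep
    those whose (sphere-approximated) gripper volume is collision-free."""
    if rng is None:
        rng = np.random.default_rng(0)
    contacts = cloud[rng.choice(len(cloud), size=n_samples)]
    centers = contacts + np.array([0.0, 0.0, standoff])  # fixed approach
    keep = [c for c in centers
            if np.min(np.linalg.norm(cloud - c, axis=1)) > gripper_radius]
    return np.array(keep)

grasps = sample_and_prune_grasps(np.random.rand(5000, 3))
```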

3DPeople: Modeling the Geometry of Dressed Humans

Albert Pumarola, Jordi Sanchez, Gary P. T. Choi, Alberto Sanfeliu, Francesc Moreno-Noguer
2019 IEEE/CVF International Conference on Computer Vision (ICCV)
In this paper, we present an approach to model dressed humans and predict their geometry from single images.  ...  Besides providing textured 3D meshes for clothes and body, we annotated the dataset with segmentation masks, skeletons, depth, normal maps and optical flow.  ...  Introduction: With the advent of deep learning, the problem of predicting the geometry of the human body from single images has experienced a tremendous boost.  ...
doi:10.1109/iccv.2019.00233 dblp:conf/iccv/PumarolaSCSM19 fatcat:oohkwoeexna2tky5nbjfjmrfai
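Among the annotations listed (depth, normal maps, optical flow), normals can be derived from depth by finite differences on back-projected points. Below is a minimal sketch of that derivation, with placeholder intrinsics; this is a generic technique, not necessarily how the dataset was annotated.

```python
# A minimal sketch of deriving a normal map from a depth map: finite
# differences on the back-projected camera-space points give per-pixel
# surface normals. Intrinsics are illustrative placeholders.
import numpy as np

def normals_from_depth(depth, fx, fy, cx, cy):
    """Per-pixel surface normals from a (H, W) depth map."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    points = np.dstack([(u - cx) * depth / fx,
                        (v - cy) * depth / fy,
                        depth])                    # (H, W, 3) camera space
    du = np.gradient(points, axis=1)               # change along image x
    dv = np.gradient(points, axis=0)               # change along image y
    n = np.cross(du, dv)                           # may need a sign flip
    return n / (np.linalg.norm(n, axis=2, keepdims=True) + 1e-8)

normals = normals_from_depth(np.full((480, 640), 0.8),
                             525.0, 525.0, 319.5, 239.5)
```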

3DPeople: Modeling the Geometry of Dressed Humans [article]

Albert Pumarola, Jordi Sanchez, Gary P. T. Choi, Alberto Sanfeliu, Francesc Moreno-Noguer
2019 arXiv pre-print
Besides providing textured 3D meshes for clothes and body, we annotate the dataset with segmentation masks, skeletons, depth, normal maps and optical flow.  ...  In this paper, we present an approach to model dressed humans and predict their geometry from single images.  ...  Introduction: With the advent of deep learning, the problem of predicting the geometry of the human body from single images has experienced a tremendous boost.  ...
arXiv:1904.04571v1 fatcat:ce6br7a7ejfy5heujciawrrmtu

CHARMIE: A Collaborative Healthcare and Home Service and Assistant Robot for Elderly Care

Tiago Ribeiro, Fernando Gonçalves, Inês S. Garcia, Gil Lopes, António F. Ribeiro
2021 Applied Sciences  
number of single-person households.  ...  CHARMIE is an anthropomorphic collaborative healthcare and domestic assistant robot capable of performing generic service tasks in non-standardised healthcare and domestic environment settings.  ...  Appendix A. Video Link: Link to the MinhoTeam qualification video for RoboCup@Home 2017 in Nagoya, Japan.  ...
doi:10.3390/app11167248 fatcat:y62w3jernzfc3i2pw64ev7smsy

NeuralGrasps: Learning Implicit Representations for Grasps of Multiple Robotic Hands [article]

Ninad Khargonkar, Neil Song, Zesheng Xu, Balakrishnan Prabhakaran, Yu Xiang
2022 arXiv pre-print
Each latent vector is learned to decode to the 3D shape of an object and the 3D shape of a robotic hand in a grasping pose in terms of the signed distance functions of the two 3D shapes.  ...  learn grasping skills from humans.  ...  They cannot work with partial observations of objects, e.g., point clouds from RGB-D cameras.  ... 
arXiv:2207.02959v1 fatcat:swecyaqba5aljls5pchoxdxcea
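The abstract describes decoding one latent vector into two signed distance functions, one for the object and one for the grasping hand. Below is a minimal sketch of such a decoder; the MLP depth, widths, and the GraspSDFDecoder name are assumptions rather than the paper's architecture.

```python
# A minimal sketch of an implicit grasp representation: an MLP that
# decodes a shared latent code plus a 3D query point into two signed
# distances. All sizes are illustrative assumptions.
import torch
import torch.nn as nn

class GraspSDFDecoder(nn.Module):
    """Decode a latent code and query points into signed distances to
    the object surface and to the hand surface."""
    def __init__(self, latent_dim=128, hidden=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(latent_dim + 3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 2),                 # (sdf_object, sdf_hand)
        )

    def forward(self, latent, xyz):               # (B, D), (B, N, 3)
        z = latent.unsqueeze(1).expand(-1, xyz.shape[1], -1)
        return self.mlp(torch.cat([z, xyz], dim=-1))

sdf = GraspSDFDecoder()(torch.randn(4, 128), torch.randn(4, 1000, 3))
# sdf[..., 0]: distance to object; sdf[..., 1]: distance to hand
```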

Resolving 3D Human Pose Ambiguities with 3D Scene Constraints [article]

Mohamed Hassan, Vasileios Choutas, Dimitrios Tzionas, Michael J. Black
2019 arXiv pre-print
Our key contribution is to exploit static 3D scene structure to better estimate human pose from monocular images. The method enforces Proximal Relationships with Object eXclusion and is called PROX.  ...  To test this, we collect a new dataset composed of 12 different 3D scenes and RGB sequences of 20 subjects moving in and interacting with the scenes.  ...  There are now good methods to infer 3D depth maps from a single image [15], as well as methods that do more semantic analysis and estimate 3D CAD models of the objects in the scene [45].  ...
arXiv:1908.06963v1 fatcat:g7ex2txu3faw3fie54jqu4fdry
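Exploiting static scene structure as in PROX typically involves penalizing body vertices that penetrate scene geometry. The sketch below shows one common way to express such a term, sampling a precomputed scene SDF volume with trilinear interpolation; the function name and all shapes are illustrative, not the paper's exact loss.

```python
# A minimal sketch of a scene-penetration penalty: body vertices with a
# negative scene-SDF value (inside geometry) are penalized. The SDF
# volume, extents, and vertex count are illustrative assumptions.
import torch
import torch.nn.functional as F

def penetration_loss(vertices, scene_sdf, grid_min, grid_max):
    """vertices: (N, 3); scene_sdf: (1, 1, D, H, W) SDF volume;
    grid_min/grid_max: (3,) world-space extents of the SDF grid."""
    # Map positions into grid_sample's [-1, 1] range. Note grid_sample
    # expects coordinates ordered (x, y, z) to index (W, H, D).
    coords = 2.0 * (vertices - grid_min) / (grid_max - grid_min) - 1.0
    coords = coords.view(1, 1, 1, -1, 3)          # (1, 1, 1, N, 3)
    sdf = F.grid_sample(scene_sdf, coords, align_corners=True)
    return F.relu(-sdf).sum()                     # penalize points inside

loss = penetration_loss(torch.rand(6890, 3),
                        torch.randn(1, 1, 64, 64, 64),
                        torch.zeros(3), torch.ones(3))
```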

A Mobile Robot Hand-Arm Teleoperation System by Vision and IMU [article]

Shuang Li, Jiaxi Jiang, Philipp Ruppel, Hongzhuo Liang, Xiaojian Ma, Norman Hendrich, Fuchun Sun, Jianwei Zhang
2020 arXiv pre-print
Transteleop observes the human hand through a low-cost depth camera and generates not only joint angles but also depth images of paired robot hand poses through an image-to-image translation process.  ...  A wearable camera holder enables simultaneous hand-arm control and facilitates the mobility of the whole teleoperation system.  ...  ACKNOWLEDGMENT This research was funded jointly by the German Research Foundation (DFG) and the National Science Foundation of China (NSFC) in project Cross Modal Learning, NSFC 61621136008/DFG TRR-169  ... 
arXiv:2003.05212v1 fatcat:vf2auxtcxzhdfjwwpaiaabpqmq
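The snippet describes an image-to-image translation model that outputs both joint angles and robot-hand depth images from a human-hand depth image. Below is a minimal two-headed encoder-decoder sketch of that idea; HandPoseTranslator, the layer sizes, and the 20-joint output are assumptions, not the Transteleop architecture.

```python
# A minimal sketch of a hand-teleoperation translator: a shared encoder
# with a joint-angle head and a depth-image decoder head. All sizes are
# illustrative assumptions.
import torch
import torch.nn as nn

class HandPoseTranslator(nn.Module):
    def __init__(self, n_joints=20):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),                  # (B, 64, 4, 4)
        )
        self.joint_head = nn.Linear(64 * 4 * 4, n_joints)
        self.decoder = nn.Sequential(                 # robot-hand depth map
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),
        )

    def forward(self, human_depth):                   # (B, 1, H, W)
        feat = self.encoder(human_depth)
        joints = self.joint_head(feat.flatten(1))     # (B, n_joints)
        robot_depth = self.decoder(feat)              # (B, 1, 16, 16)
        return joints, robot_depth

joints, robot_depth = HandPoseTranslator()(torch.rand(2, 1, 96, 96))
```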

Adversarial Inverse Graphics Networks: Learning 2D-to-3D Lifting and Image-to-Image Translation from Unpaired Supervision [article]

Hsiao-Yu Fish Tung, Adam W. Harley, William Seto, Katerina Fragkiadaki
2017 arXiv pre-print
Learning such mappings from unlabelled data, or improving upon supervised models by exploiting unlabelled data, remains elusive.  ...  Researchers have developed excellent feed-forward models that learn to map images to desired outputs, such as to the images' latent factors, or to other images, using supervised learning.  ...  Structure from Motion: Simultaneous Localization and Mapping (SLAM) methods have shown impressive results on estimating camera pose and 3D point clouds from monocular, stereo, or RGB-D video sequences  ...
arXiv:1705.11166v3 fatcat:xe4mr3ajqbd35c23xyiyxac3ym
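2D-to-3D lifting from unpaired supervision generally rests on a differentiable projection: predicted 3D keypoints must reproject onto the observed 2D ones, while an adversary (omitted here) keeps the lifted poses plausible. Below is a minimal sketch of the reprojection term; project and reprojection_loss are illustrative names, not the paper's code.

```python
# A minimal sketch of the reprojection term in unpaired 2D-to-3D lifting:
# lift 2D keypoints to 3D, project back with a pinhole camera, and
# penalize the mismatch. Shapes and the focal length are assumptions.
import torch

def project(points3d, focal=1.0):
    """Pinhole projection of (B, J, 3) keypoints onto the image plane."""
    z = points3d[..., 2:3].clamp(min=1e-4)            # avoid divide-by-zero
    return focal * points3d[..., :2] / z              # (B, J, 2)

def reprojection_loss(lifter, keypoints2d):
    """Self-supervised term: lifted 3D keypoints must reproject onto
    the observed 2D keypoints."""
    points3d = lifter(keypoints2d)                    # (B, J, 3)
    return ((project(points3d) - keypoints2d) ** 2).mean()

lifter = torch.nn.Linear(2, 3)      # stand-in for a real lifting network
loss = reprojection_loss(lifter, torch.rand(8, 17, 2))
```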

SMPLpix: Neural Avatars from 3D Human Models [article]

Sergey Prokudin, Michael J. Black, Javier Romero
2020 arXiv pre-print
incorporated into deep learning frameworks.  ...  We train a network that directly converts a sparse set of 3D mesh vertices into photorealistic images, alleviating the need for a traditional rasterization mechanism.  ...  Rendering from deep 3D descriptors. Another promising direction for geometry-aware image synthesis aims to learn some form of deep 3D descriptors from 2D or 3D inputs [5, 33, 48, 49].  ...
arXiv:2008.06872v2 fatcat:gk5z5p3h7jhetkgjnlu7mq74eu
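Converting a sparse set of mesh vertices into an image, as the abstract describes, starts by projecting the vertices and splatting their features into a pixel grid that a rendering network then translates into a photo. Below is a minimal nearest-pixel splat without z-buffering or the network itself; splat_vertices and its parameters are assumptions (6890 is the SMPL vertex count).

```python
# A minimal sketch of vertex splatting: project camera-space vertices
# with a pinhole model and write their features into an image tensor.
# Intrinsics and resolution are illustrative assumptions.
import torch

def splat_vertices(verts, feats, fx, fy, cx, cy, h, w):
    """verts: (N, 3) camera-space vertices; feats: (N, C) per-vertex
    features (e.g. RGB). Returns a (C, H, W) feature image."""
    z = verts[:, 2].clamp(min=1e-4)
    u = (fx * verts[:, 0] / z + cx).long().clamp(0, w - 1)
    v = (fy * verts[:, 1] / z + cy).long().clamp(0, h - 1)
    image = torch.zeros(feats.shape[1], h, w)
    image[:, v, u] = feats.t()      # nearest-pixel splat; later vertices
    return image                    # overwrite earlier ones (no z-buffer)

verts = torch.rand(6890, 3) + torch.tensor([0.0, 0.0, 2.0])  # 2 m away
img = splat_vertices(verts, torch.rand(6890, 3),
                     500.0, 500.0, 128.0, 128.0, 256, 256)
```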

The State of Lifelong Learning in Service Robots: An Initial Survey

S. Hamidreza Kasaei, Jorik Melsen, Floris van Beers, Christiaan Steenkist, Klemen Voncina
2021 Journal of Intelligent and Robotic Systems  
Therefore, apart from batch learning, the robot should be able to continually learn about new object categories and grasp affordances from very few training examples on-site.  ...  In such environments, no matter how extensive the training data used for batch learning, a robot will always face new objects.  ...  grippers; both are mobile wheeled robots; able to learn new tasks from instructions on the internet. TORO [35]: 3D vision based on a stereo camera and depth sensor; two manipulators; two anthropomorphic  ...
doi:10.1007/s10846-021-01458-3 fatcat:eeunivdvmrcg3piyx3r3pdsviq

Table of Contents

2021 IEEE Robotics and Automation Letters  
Deep Compression for Dense Point Cloud Maps . . . L. Wiesmann, A. Milioto, X. Chen, C. Stachniss, and J.  ...  ERASOR: Egocentric Ratio of Pseudo Occupancy-Based Dynamic Object Removal for Static 3D Point Cloud Map Building  ...
doi:10.1109/lra.2021.3072707 fatcat:qyphyzqxfrgg7dxdol4qamrdqu
Showing results 1–15 of 177.