317 Hits in 3.3 sec

3D scanning deformable objects with a single RGBD sensor

Mingsong Dou, Jonathan Taylor, Henry Fuchs, Andrew Fitzgibbon, Shahram Izadi
2015 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)  
We present a 3D scanning system for deformable objects that uses only a single Kinect sensor.  ...  We do not rely on any prior shape knowledge, enabling general object scanning with freeform deformations.  ...  Comparison with 3D Self-portraits: 3D Self-portraits [11] is among the first systems capable of scanning a dynamic object with a single consumer sensor.  ... 
doi:10.1109/cvpr.2015.7298647 dblp:conf/cvpr/DouTFFI15 fatcat:7jlql4dphvf2fl6j2omlejzqfq

Robust 3D Self-portraits in Seconds [article]

Zhe Li, Tao Yu, Chuanyu Pan, Zerong Zheng, Yebin Liu
2020 arXiv   pre-print
In this paper, we propose an efficient method for robust 3D self-portraits using a single RGBD camera.  ...  Finally, a lightweight bundle adjustment algorithm is proposed to guarantee that all the partial scans can not only "loop" with each other but also remain consistent with the selected live key observations  ...  self-portraits using a single RGBD sensor.  ... 
arXiv:2004.02460v1 fatcat:jjl2cuxor5g2na4b254j7svs3u

Function4D: Real-time Human Volumetric Capture from Very Sparse Consumer RGBD Sensors [article]

Tao Yu, Zerong Zheng, Kaiwen Guo, Pengpeng Liu, Qionghai Dai, Yebin Liu
2021 arXiv   pre-print
Human volumetric capture is a long-standing topic in computer vision and computer graphics.  ...  To achieve high-quality and temporal-continuous reconstruction, we propose dynamic sliding fusion to fuse neighboring depth observations together with topology consistency.  ...  Recently, capturing 3D dense human body deformation with coarse-to-fine registration from a single RGB camera has been enabled [53] and improved for realtime performance [17] .  ... 
arXiv:2105.01859v2 fatcat:e7atvbmzqrbs3jbhyflkdpscdq

LabelFusion: A Pipeline for Generating Ground Truth Labels for Real RGBD Data of Cluttered Scenes [article]

Pat Marion, Peter R. Florence, Lucas Manuelli, Russ Tedrake
2017 arXiv   pre-print
In this paper we develop a pipeline to rapidly generate high quality RGBD data with pixelwise labels and object poses.  ...  We label the 3D reconstruction using a human-assisted ICP fitting of object meshes. By reprojecting the results of labeling the 3D scene we can produce labels for each RGBD image of the scene (a sketch of this reprojection step follows the entry).  ...  We also thank Allison Fastman and Sammy Creasey of Toyota Research Institute for their help with hardware, including object scanning and robot arm automation.  ... 
arXiv:1707.04796v3 fatcat:frl7kt77rjdotlmvtrzg66mw2a
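
The labeling-by-reprojection step mentioned in the snippet above can be made concrete with a small sketch: once the dense reconstruction has been labeled by ICP-fitting object meshes, each object's geometry is projected back into every RGBD frame using the known camera pose, and depth-consistent pixels receive that object's label. The function below is a hedged illustration, not the LabelFusion code; the input names (`mesh_vertices_by_object`, `K`, `T_world_cam`, `depth`) are hypothetical.

```python
import numpy as np

def reproject_labels(mesh_vertices_by_object, K, T_world_cam, depth, depth_tol=0.01):
    """Render a per-pixel object-label image for one RGBD frame.

    mesh_vertices_by_object: {object_id: (N, 3) vertices in world coordinates}
    K: (3, 3) camera intrinsics; T_world_cam: (4, 4) camera-to-world pose
    depth: (H, W) depth image in meters, used to reject occluded points.
    """
    H, W = depth.shape
    labels = np.zeros((H, W), dtype=np.uint16)        # 0 = background
    T_cam_world = np.linalg.inv(T_world_cam)          # world -> camera
    for obj_id, verts in mesh_vertices_by_object.items():
        v_h = np.c_[verts, np.ones(len(verts))]       # homogeneous coordinates
        v_cam = (T_cam_world @ v_h.T).T[:, :3]
        v_cam = v_cam[v_cam[:, 2] > 0]                # keep points in front of camera
        uvw = (K @ v_cam.T).T
        uv = (uvw[:, :2] / uvw[:, 2:3]).round().astype(int)
        z = uvw[:, 2]
        ok = (uv[:, 0] >= 0) & (uv[:, 0] < W) & (uv[:, 1] >= 0) & (uv[:, 1] < H)
        uv, z = uv[ok], z[ok]
        # keep only points consistent with the measured depth (crude visibility test)
        visible = np.abs(depth[uv[:, 1], uv[:, 0]] - z) < depth_tol
        labels[uv[visible, 1], uv[visible, 0]] = obj_id
    return labels
```

In practice a z-buffered mesh render would replace this sparse vertex splatting, but the projection plus depth-consistency test captures the reprojection idea.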

SparseFusion: Dynamic Human Avatar Modeling from Sparse RGBD Images [article]

Xinxin Zuo, Sen Wang, Jiangbin Zheng, Weiwei Yu, Minglun Gong, Ruigang Yang, Li Cheng
2020 arXiv   pre-print
In this paper, we propose a novel approach to reconstruct 3D human body shapes based on a sparse set of RGBD frames using a single RGBD camera.  ...  which partial results from RGBD frames are collected into a unified 3D shape, under the guidance of correspondences from the pairwise alignment (the rigid alignment step is sketched after the entry); Finally, the texture map of the reconstructed human model  ...  The problem of recovering 3D models of deformable objects from a single depth camera has recently been studied.  ... 
arXiv:2006.03630v1 fatcat:afganymonzhx5i4i6ttmi5uw6a
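
The pairwise alignment mentioned above, registering one partial scan to another given point correspondences, reduces to a least-squares rigid fit (the Kabsch/Procrustes solution). The sketch below illustrates only that generic step under the assumption of known correspondences; it is not the SparseFusion pipeline itself, which additionally handles non-rigid deformation and texture fusion.

```python
import numpy as np

def rigid_align(src, dst):
    """Least-squares rigid transform (R, t) mapping src -> dst (Kabsch algorithm).

    src, dst: (N, 3) arrays of corresponding 3D points.
    Returns R (3x3 rotation) and t (3,) translation with dst ≈ src @ R.T + t.
    """
    src_c, dst_c = src.mean(0), dst.mean(0)
    H = (src - src_c).T @ (dst - dst_c)          # cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                      # fix a reflection if present
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t
```

Collecting all partial scans into a unified shape then amounts to chaining such transforms into a common frame and merging the transformed points, with the paper's non-rigid refinement applied on top.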

A comparative study of breast surface reconstruction for aesthetic outcome assessment [article]

Rene Lacher, Francisco Vasconcelos, David Bishop, Norman Williams, Mohammed Keshtgar, David Hawkes, John Hipwell, Danail Stoyanov
2017 arXiv   pre-print
This paper aims at comparing the accuracy of low-cost 3D scanning technologies with the significantly more expensive state-of-the-art 3D commercial scanners in the context of breast 3D reconstruction.  ...  scanning and reconstruction techniques offer a flexible tool for building detailed and accurate 3D breast models that can be used both pre-operatively for surgical planning and post-operatively for aesthetic  ...  We compare 3D breast reconstruction using two high-precision scanning solutions, a structured-light handheld Artec Eva scanner for the phantom and a single-shot 3dMD stereophotogrammetry system for patients  ... 
arXiv:1706.06531v1 fatcat:stmeyetqmzcbjebgba477jqo4y

A Comparative Study of Breast Surface Reconstruction for Aesthetic Outcome Assessment [chapter]

René M. Lacher, Francisco Vasconcelos, David C. Bishop, Norman R. Williams, Mohammed Keshtgar, David J. Hawkes, John H. Hipwell, Danail Stoyanov
2017 Lecture Notes in Computer Science  
This paper aims at comparing the accuracy of low-cost 3D scanning technologies with the significantly more expensive state-of-the-art 3D commercial scanners in the context of breast 3D reconstruction.  ...  scanning and reconstruction techniques offer a flexible tool for building detailed and accurate 3D breast models that can be used both pre-operatively for surgical planning and post-operatively for aesthetic  ...  We compare 3D breast reconstruction using two high-precision scanning solutions, a structured-light handheld Artec Eva scanner for the phantom and a single-shot 3dMD stereophotogrammetry system for patients  ... 
doi:10.1007/978-3-319-66185-8_58 fatcat:mpmvjnq2sfhythd66uvn22s5ci

FlyCap: Markerless Motion Capture Using Multiple Autonomous Flying Cameras [article]

Lan Xu, Lu Fang, Wei Cheng, Kaiwen Guo, Guyue Zhou, Qionghai Dai, and Yebin Liu
2016 arXiv   pre-print
using multiple autonomous flying cameras (autonomous unmanned aerial vehicles (UAVs), each integrated with an RGBD video camera).  ...  We leverage the visual-odometry information provided by the UAV platform, and formulate the surface tracking problem as a non-linear objective function that can be linearized and effectively minimized  ...  Note that UAV1 is equipped with 2 RGBD sensors, while UAV2 and UAV3 only have a single RGBD sensor. (c) The occupancy.  ... 
arXiv:1610.09534v3 fatcat:mxtzspgpt5bepld2pfv4hz73fm

RGBD Datasets: Past, Present and Future [article]

Michael Firman
2016 arXiv   pre-print
Finally, we examine the future of RGBD datasets.  ...  In this paper we explore the field, reviewing datasets across eight categories: semantics, object pose estimation, camera tracking, scene reconstruction, object tracking, human actions, faces and identification  ...  A big thanks also goes out to everyone who has released their datasets. Keep them coming!  ... 
arXiv:1604.00999v2 fatcat:mwr4g7y7trhspclecq7ftrftxu

RGBD Datasets: Past, Present and Future

Michael Firman
2016 2016 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)  
Finally, we examine the future of RGBD datasets.  ...  In this paper we explore the field, reviewing datasets across eight categories: semantics, object pose estimation, camera tracking, scene reconstruction, object tracking, human actions, faces and identification  ...  A big thanks also goes out to everyone who has released their datasets. Keep them coming!  ... 
doi:10.1109/cvprw.2016.88 dblp:conf/cvpr/Firman16 fatcat:l5ygpbegobgffkdpotbkdbisqu

Highly accurate 3D surface models by sparse surface adjustment

Michael Ruhnke, Rainer Kümmerle, Giorgio Grisetti, Wolfram Burgard
2012 2012 IEEE International Conference on Robotics and Automation  
The key idea of our method is to jointly optimize the poses of the sensor and the positions of the surface points measured with a range scanning device (a simplified sketch of this joint adjustment follows the entry).  ...  We present our approach and evaluate it on data recorded in different real-world environments with an RGBD camera and a laser range scanner.  ...  Sensor Model for RGBD Cameras: A detailed description of a laser sensor model can be obtained by an appropriate extension of the model proposed in our previous work [18] to 3D.  ... 
doi:10.1109/icra.2012.6225077 dblp:conf/icra/RuhnkeKGB12 fatcat:jsto2yyr7zdmzmzj4tuijrkkfa
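
The key idea quoted above, jointly optimizing sensor poses and measured surface points, can be illustrated with a toy least-squares problem. The sketch below is deliberately simplified (translation-only poses, no sensor noise model) and uses hypothetical names; the paper's actual formulation works with full 6-DoF poses and a detailed range-sensor model.

```python
import numpy as np
from scipy.optimize import least_squares

def joint_adjust(sensor_t0, points0, observations):
    """Jointly refine sensor positions and surface points (translation-only sketch).

    sensor_t0: (S, 3) initial sensor positions; points0: (P, 3) initial surface points.
    observations: list of (sensor_idx, point_idx, z) where z ≈ p_j - t_i is the
    offset to point j measured from sensor i. The first sensor is held fixed to
    remove the global-translation ambiguity (gauge freedom).
    """
    sensor_t0, points0 = np.asarray(sensor_t0, float), np.asarray(points0, float)
    S, P = len(sensor_t0), len(points0)
    t_fixed = sensor_t0[0]

    def unpack(x):
        t = np.vstack([t_fixed, x[: 3 * (S - 1)].reshape(S - 1, 3)])
        p = x[3 * (S - 1):].reshape(P, 3)
        return t, p

    def residuals(x):
        t, p = unpack(x)
        return np.concatenate([p[j] - t[i] - z for i, j, z in observations])

    x0 = np.concatenate([sensor_t0[1:].ravel(), points0.ravel()])
    return unpack(least_squares(residuals, x0).x)
```

Anchoring the first sensor removes the gauge freedom that would otherwise leave the joint problem under-constrained, a choice this sketch makes for simplicity.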

Estimation of human body shape and cloth field in front of a kinect

Ming Zeng, Liujuan Cao, Huailin Dong, Kunhui Lin, Meihong Wang, Jing Tong
2015 Neurocomputing  
Given this RGBD data, the initial pose, and the skin constraint, we estimate the shape and pose parameters of a statistical human model (SCAPE [15]), which results in an estimated mesh X of the user's  ...  To account for the clothes, we take a non-rigid deformation scheme to deform the estimated mesh X to fit the captured depth data, leading to a dressed mesh X′ (the subtraction of X from X′ is sketched after the entry). At the final step, we subtract X from  ...  In this stage, depth sensors capture scans of a human turning round before the sensors. During the capture, the human is asked to roughly keep a standard pose.  ... 
doi:10.1016/j.neucom.2014.06.087 fatcat:ppouqqj4frh2jgixsoqoj627km
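
The final subtraction step described above, recovering the cloth layer as the difference between the dressed mesh X′ and the estimated body mesh X, can be sketched as per-vertex offsets. This is a minimal illustration assuming the two meshes share vertex topology (as after non-rigid deformation of X towards the depth data); the names are hypothetical and the paper's actual cloth-field representation may differ.

```python
import numpy as np

def cloth_field(body_verts, dressed_verts, body_normals):
    """Per-vertex cloth offsets: how far the dressed mesh X' lies off the body X.

    body_verts, dressed_verts: (N, 3) vertices of X and X' with shared topology.
    body_normals: (N, 3) unit normals of X. Returns signed offsets along the
    normals plus the raw displacement vectors X' - X.
    """
    disp = dressed_verts - body_verts                   # X' - X
    signed = np.einsum('ij,ij->i', disp, body_normals)  # projection onto normals
    return signed, disp
```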

Deep Textured 3D Reconstruction of Human Bodies [article]

Abbhinav Venkat, Sai Sagar Jinka, Avinash Sharma
2018 arXiv   pre-print
We propose to co-learn the depth information readily available with affordable RGBD sensors (e.g., Kinect) while showing multiple views of the same object during the training phase.  ...  In this paper, we propose a deep learning based solution for textured 3D reconstruction of human body shapes from a single view RGB image.  ...  DynamicFusion [11] proposed the first dense SLAM system capable of reconstructing non-rigidly deforming scenes in real-time, by fusing together RGBD scans captured from commodity sensors.  ... 
arXiv:1809.06547v1 fatcat:cysa6kojvrd6zeqvxcmwf3ntva

Pose-aware C-arm for automatic re-initialization of interventional 2D/3D image registration

Javad Fotouhi, Bernhard Fuerst, Alex Johnson, Sing Chun Lee, Russell Taylor, Greg Osgood, Nassir Navab, Mehran Armand
2017 International Journal of Computer Assisted Radiology and Surgery  
Purpose: In minimally invasive interventions assisted by C-arm imaging, there is a demand to fuse the intra-interventional 2D C-arm image with pre-interventional 3D patient data to enable surgical guidance  ...  A highly accurate multi-view calibration between RGBD and C-arm imaging devices is achieved using a custom-made multimodal calibration target.  ...  SIEMENS ARCADIS Orbic 3D available.  ... 
doi:10.1007/s11548-017-1611-8 pmid:28527025 pmcid:PMC5898215 fatcat:zlajbd3dx5hnbfzcpglkhezjce

KeyPose: Multi-View 3D Labeling and Keypoint Estimation for Transparent Objects [article]

Xingyu Liu, Rico Jonschkowski, Anelia Angelova, Kurt Konolige
2020 arXiv   pre-print
Many existing approaches to this problem require a depth map of the object for both training and prediction, which restricts them to opaque, Lambertian objects that produce good returns in an RGBD sensor  ...  To evaluate the performance of our method, we create a dataset of 15 clear objects in five classes, with 48K 3D-keypoint labeled images.  ...  While some of these methods predict 3D keypoints from a single RGB image, others use RGBD data collected by a depth sensor [32, 18, 2] to achieve better accuracy.  ... 
arXiv:1912.02805v2 fatcat:ocknxyfefrcr5dfvh7nq3pdhsy
Showing results 1 — 15 out of 317 results