Fusion4D
2016
ACM Transactions on Graphics
motion and topology changes, allowing reconstruction of extremely challenging scenes; (3) We scale to multi-view capture from multiple RGBD cameras, allowing for performance capture at qualities ...
interact in a scene and then expect hours of processing time before seeing the final result. What if this processing could happen live in real-time directly as the performance is happening? ...
Figure 2: This work, Fusion4D, attempts to bring aspects inherent in multi-view performance capture systems to real-time scenarios. ...
doi:10.1145/2897824.2925969
fatcat:ysrm6f3zhngxfeexvm7u2sqme4
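The multi-view RGBD integration that Fusion4D and its follow-ups build on can be illustrated with a minimal projective TSDF fusion sketch. This is not Fusion4D's actual pipeline (which adds key volumes, non-rigid alignment, and learned correspondences); the function name, camera convention (world-to-camera poses), and truncation value below are illustrative assumptions.

```python
# Minimal projective TSDF fusion sketch (illustrative, not Fusion4D's pipeline).
# Assumes calibrated depth cameras: intrinsics K (3x3), world-to-camera poses T (4x4).
import numpy as np

def fuse_depth_maps(depth_maps, Ks, Ts, grid_origin, voxel_size, dims, trunc=0.04):
    """Integrate several depth maps into one truncated signed distance volume."""
    tsdf = np.ones(dims, dtype=np.float32)      # truncated signed distances
    weight = np.zeros(dims, dtype=np.float32)   # per-voxel fusion weights

    # World coordinates of all voxel centers.
    ii, jj, kk = np.meshgrid(*[np.arange(d) for d in dims], indexing="ij")
    pts = grid_origin + (np.stack([ii, jj, kk], -1) + 0.5) * voxel_size

    for depth, K, T in zip(depth_maps, Ks, Ts):
        # Transform voxel centers into the camera frame and project to pixels.
        cam = pts @ T[:3, :3].T + T[:3, 3]
        z = cam[..., 2]
        uv = cam @ K.T
        u = np.round(uv[..., 0] / np.maximum(z, 1e-6)).astype(int)
        v = np.round(uv[..., 1] / np.maximum(z, 1e-6)).astype(int)

        h, w = depth.shape
        valid = (z > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
        d = np.where(valid, depth[np.clip(v, 0, h - 1), np.clip(u, 0, w - 1)], 0.0)
        valid &= d > 0

        # Signed distance along the viewing ray, truncated to [-1, 1] in units of `trunc`.
        sdf = np.clip((d - z) / trunc, -1.0, 1.0)
        update = valid & (sdf > -1.0)           # skip voxels far behind the surface

        # Running weighted average, as in classic volumetric fusion.
        w_new = weight + update
        tsdf = np.where(update, (tsdf * weight + sdf) / np.maximum(w_new, 1e-6), tsdf)
        weight = w_new
    return tsdf, weight
```

The running weighted average is the standard volumetric fusion update; per-view confidence weights, color fusion, and non-rigid warping are omitted.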
ArticulatedFusion: Real-Time Reconstruction of Motion, Geometry and Segmentation Using a Single Depth Camera
[chapter]
2018
Lecture Notes in Computer Science
This paper proposes a real-time dynamic scene reconstruction method capable of reproducing the motion, geometry, and segmentation simultaneously given a live depth stream from a single RGB-D camera. ...
The optimization space of node motions and the range of physically-plausible deformations are largely reduced by taking advantage of the articulated motion prior, which is solved by an efficient node graph ...
We are grateful to Matthias Innmann for the help on comparison results of VolumeDeform, Tao Yu for providing their Vicon-based ground-truth marker data in BodyFusion, and Dimitrios Tzionas for providing ...
doi:10.1007/978-3-030-01237-3_20
fatcat:nka3vwn2jjdfnose5socd2bndu
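The node-graph parameterization referred to above can be illustrated with a minimal embedded-deformation warp, assuming a sparse set of graph nodes carrying rigid transforms. This is only a sketch of the reduced motion space such methods optimize over, not ArticulatedFusion's articulated solver; every name and constant below is illustrative.

```python
# Illustrative embedded-deformation warp: each surface vertex is deformed by a
# weighted blend of rigid transforms attached to sparse graph nodes.
import numpy as np

def warp_vertices(vertices, node_pos, node_R, node_t, k=4, sigma=0.05):
    """vertices: (N,3); node_pos: (M,3); node_R: (M,3,3); node_t: (M,3)."""
    # Distances from every vertex to every node.
    d = np.linalg.norm(vertices[:, None, :] - node_pos[None, :, :], axis=-1)   # (N,M)
    # Keep the k nearest nodes per vertex and weight them with a Gaussian falloff.
    nn = np.argsort(d, axis=1)[:, :k]                                          # (N,k)
    w = np.exp(-np.take_along_axis(d, nn, axis=1) ** 2 / (2 * sigma ** 2))
    w /= w.sum(axis=1, keepdims=True)

    warped = np.zeros_like(vertices)
    for j in range(k):
        idx = nn[:, j]
        # Each node applies its rigid transform about its own position.
        local = vertices - node_pos[idx]
        moved = np.einsum("nij,nj->ni", node_R[idx], local) + node_pos[idx] + node_t[idx]
        warped += w[:, j:j + 1] * moved
    return warped
```

Optimizing a few hundred node transforms instead of per-vertex motion is what keeps such solvers real-time; the articulated prior further constrains which node motions are admissible.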
A VR System for Immersive Teleoperation and Live Exploration with a Mobile Robot
[article]
2019
arXiv
pre-print
Here, a global 3D model of the already captured scene parts is reconstructed and streamed to the individual remote user clients, where the rendering, e.g. for head-mounted displays (HMDs), is performed ...
While being operated through the scene, a robot captures RGB-D data that is streamed to a SLAM-based live multi-client telepresence system. ...
reality and real-time 3D scene capture (see Fig. 1). ...
arXiv:1908.02949v1
fatcat:nydhcf2pe5hnvhzf7bk4v6i6ei
A low-cost, practical acquisition and rendering pipeline for real-time free-viewpoint video communication
2020
The Visual Computer
We present a semiautomatic real-time pipeline for capturing and rendering free-viewpoint video using passive stereo matching. ...
The pipeline is simple and achieves agreeable quality in real time on a system of commodity web cameras and a single desktop computer. ...
He is co-author of the book "Real-Time Shadows." Fig. 2: An overview of our real-time pipeline. ...
doi:10.1007/s00371-020-01823-7
fatcat:yx2jci3asjcp3k6tfjlvfhygem
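The passive stereo-matching step such pipelines start from can be sketched with OpenCV's block matcher; the paper's pipeline involves considerably more (calibration, filtering, view synthesis and rendering), and the file names and calibration constants below are placeholders.

```python
# Minimal passive stereo-matching sketch with OpenCV block matching.
import cv2
import numpy as np

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)     # rectified stereo pair
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Block matcher: disparity search range and matching window size.
matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0  # fixed-point -> pixels

# Convert disparity to metric depth: depth = f * B / d (pinhole, rectified cameras).
focal_px, baseline_m = 700.0, 0.12   # placeholder calibration values
valid = disparity > 0
depth = np.zeros_like(disparity)
depth[valid] = focal_px * baseline_m / disparity[valid]
```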
High-quality indoor scene 3D reconstruction with RGB-D cameras: A brief review
2022
Computational Visual Media
The performance of state-of-the-art methods is also compared and analyzed. ...
The advent of consumer RGB-D cameras has made a profound advance in indoor scene reconstruction. ...
Declaration of competing interest: The authors have no competing interests to declare that are relevant to the content of this article. ...
doi:10.1007/s41095-021-0250-8
fatcat:z6ywcn4zujbptjaqrwkwhoyquu
DoubleFusion: Real-Time Capture of Human Performances with Inner Body Shapes from a Single Depth Sensor
2018
2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition
Overall, our method enables increasingly denoised, detailed and complete surface reconstructions, fast motion tracking performance and plausible inner body shape reconstruction in real-time. ...
In particular, experiments show improved fast motion tracking and loop closure performance on more challenging scenarios. ...
Fusion4D [11] set up a rig with 8 depth cameras to capture dynamic scenes with challenging motions in real time. ...
doi:10.1109/cvpr.2018.00761
dblp:conf/cvpr/YuZGZDLPL18
fatcat:udcyrt6tqnewffcxofiskcpn3e
DoubleFusion: Real-time Capture of Human Performances with Inner Body Shapes from a Single Depth Sensor
[article]
2018
arXiv
pre-print
Overall, our method enables increasingly denoised, detailed and complete surface reconstructions, fast motion tracking performance and plausible inner body shape reconstruction in real-time. ...
In particular, experiments show improved fast motion tracking and loop closure performance on more challenging scenarios. ...
Fusion4D [11] set up a rig with 8 depth cameras to capture dynamic scenes with challenging motions in real time. ...
arXiv:1804.06023v1
fatcat:qhxsifv575h5lcc7of6l4h2myq
ArticulatedFusion: Real-time Reconstruction of Motion, Geometry and Segmentation Using a Single Depth Camera
[article]
2018
arXiv
pre-print
This paper proposes a real-time dynamic scene reconstruction method capable of reproducing the motion, geometry, and segmentation simultaneously given a live depth stream from a single RGB-D camera. ...
The optimization space of node motions and the range of physically-plausible deformations are largely reduced by taking advantage of the articulated motion prior, which is solved by an efficient node graph ...
We are grateful to Matthias Innmann for the help on comparison results of VolumeDeform, Tao Yu for providing their Vicon-based ground-truth marker data in BodyFusion, and Dimitrios Tzionas for providing ...
arXiv:1807.07243v1
fatcat:zoxzmuandbfzto4vlv2xvsh6ye
SLAMBench 3.0: Systematic Automated Reproducible Evaluation of SLAM Systems for Robot Vision Challenges and Scene Understanding
2019
2019 International Conference on Robotics and Automation (ICRA)
This new version of SLAMBench moves beyond traditional visual SLAM, and provides new support for scene understanding and non-rigid environments (dynamic SLAM). ...
In addition, we include two SLAM systems (one dense, one sparse) augmented with convolutional neural networks for scene understanding, together with datasets and appropriate metrics. ...
Recent developments include BundleFusion [10], which performs on-the-fly surface reintegration in real-time. ...
doi:10.1109/icra.2019.8794369
dblp:conf/icra/BujancaGSNBODKR19
fatcat:vrvsh7pxxjg4takh2egvjgppxu
FullFusion: A Framework for Semantic Reconstruction of Dynamic Scenes
2019
2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW)
Mobile robots operating in a variety of environments in real-life scenarios require an advanced level of understanding of their surroundings. ...
Our method is the first to perform semantic reconstruction of non-rigidly deforming objects along with a static background. ...
attain excellent results in performing both tasks in real-time. ...
doi:10.1109/iccvw.2019.00272
dblp:conf/iccvw/BujancaLL19
fatcat:uqnzvmidpjgevgb7kslbjlt46i
Efficient 3D Reconstruction and Streaming for Group-Scale Multi-Client Live Telepresence
[article]
2019
arXiv
pre-print
Whereas impressive telepresence systems have been proposed on top of on-the-fly scene capture, data transmission and visualization, these systems are restricted to the immersion of single or up to a low ...
We demonstrate that our optimized system is capable of generating high-quality scene reconstructions as well as providing an immersive viewing experience to a large group of people within these live-captured ...
based on the Fusion4D system [3] as well as real-time data transmission. ...
arXiv:1908.03118v1
fatcat:xa6x63xzsjcwhi3ondwcxuh6ee
SimulCap : Single-View Human Performance Capture with Cloth Simulation
[article]
2019
arXiv
pre-print
Our main contributions are: (i) a multi-layer representation of garments and body, and (ii) a physics-based performance capture procedure. ...
For performance capture, we perform skeleton tracking, cloth simulation, and iterative depth fitting sequentially for the incoming frame. ...
Fusion4D [16] and Motion2Fusion [15] set up a rig with several RGBD cameras to capture dynamic scenes with challenging motions in real-time. ...
arXiv:1903.06323v2
fatcat:q4lospsevvfl7pttfpvx3ocu7y
SLAMCast: Large-Scale, Real-Time 3D Reconstruction and Streaming for Immersive Multi-Client Live Telepresence
2019
IEEE Transactions on Visualization and Computer Graphics
Here, we introduce what we believe is the first practical client-server system for real-time capture and many-user exploration of static 3D scenes. ...
Real-time 3D scene reconstruction from RGB-D sensor data, as well as the exploration of such data in VR/AR settings, has seen tremendous progress in recent years. ...
doi:10.1109/tvcg.2019.2899231
pmid:30794183
fatcat:uag36bqzyzehljvcdnnvlevcge
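The server-side bookkeeping behind streaming an evolving voxel-block model to many clients can be sketched as follows. This only illustrates the general idea (one queue per client, sending only blocks changed since that client's last update, each client consuming at its own pace); it is not SLAMCast's actual protocol, data format, or class structure.

```python
# Illustrative per-client block-streaming bookkeeping for a capture server.
from collections import defaultdict

class BlockStreamer:
    def __init__(self):
        self.blocks = {}                      # block key (bx, by, bz) -> payload bytes
        self.pending = defaultdict(set)       # client id -> keys not yet sent

    def integrate(self, key, payload):
        """Called after the reconstruction updates a voxel block."""
        self.blocks[key] = payload
        for queue in self.pending.values():   # mark the block dirty for every client
            queue.add(key)

    def add_client(self, client_id):
        # A newly joined client starts with the full current model queued.
        self.pending[client_id] = set(self.blocks)

    def next_batch(self, client_id, max_blocks=64):
        """Pop up to max_blocks updated blocks for one client (its own pace)."""
        n = min(max_blocks, len(self.pending[client_id]))
        keys = [self.pending[client_id].pop() for _ in range(n)]
        return [(k, self.blocks[k]) for k in keys]
```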
NRMVS: Non-Rigid Multi-View Stereo
[article]
2019
arXiv
pre-print
In this paper, we open up a new challenging direction: dense 3D reconstruction of scenes with non-rigid changes observed from arbitrary, sparse, and wide-baseline views. ...
The static scene assumption, however, limits the general applicability of MVS algorithms, as many day-to-day scenes undergo non-rigid motion, e.g., clothes, faces, or human bodies. ...
We also captured several real-world scenes containing deforming surfaces from different views at different times. ...
arXiv:1901.03910v1
fatcat:tg4vchqqyjhsjlebnk6qwuw2w4
KillingFusion: Non-rigid 3D Reconstruction without Correspondences
2017
2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
We introduce a geometry-driven approach for real-time 3D reconstruction of deforming surfaces from a single RGB-D stream without any templates or shape priors. ...
Given a pair of signed distance fields (SDFs) representing the shapes of interest, we estimate a dense deformation field that aligns them. ...
Baust acknowledges the support of DFG-funded Collaborative Research Centre SFB824-Z2 Imaging for Selection, Monitoring and Individualization of Cancer Therapies. ...
doi:10.1109/cvpr.2017.581
dblp:conf/cvpr/SlavchevaBCI17
fatcat:kh6imn6qlrefxljzfjwroa5tfi
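The SDF-to-SDF alignment described above can be sketched as gradient descent on the data term alone, assuming two co-located SDF grids. KillingFusion additionally regularizes the field with approximately-Killing and level-set terms, which are omitted here; the step size and iteration count are arbitrary.

```python
# Bare-bones SDF-to-SDF alignment: gradient descent on the data term
# 0.5 * (phi_src(x + u(x)) - phi_tgt(x))^2 for a dense displacement field u.
import numpy as np
from scipy.ndimage import map_coordinates

def align_sdfs(phi_src, phi_tgt, iters=100, step=0.1):
    dims = phi_src.shape
    grid = np.stack(np.meshgrid(*[np.arange(d) for d in dims], indexing="ij"), axis=0)
    u = np.zeros((3,) + dims)                            # displacement field (voxels)

    for _ in range(iters):
        coords = grid + u
        # Source SDF and its spatial gradient, sampled at the warped positions.
        warped = map_coordinates(phi_src, coords, order=1, mode="nearest")
        grads = [map_coordinates(g, coords, order=1, mode="nearest")
                 for g in np.gradient(phi_src)]
        residual = warped - phi_tgt                      # data-term residual
        for axis in range(3):
            u[axis] -= step * residual * grads[axis]     # gradient descent step
    return u
```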
Showing results 1 — 15 out of 33 results