
MonoPerfCap: Human Performance Capture from Monocular Video [article]

Weipeng Xu, Avishek Chatterjee, Michael Zollhöfer, Helge Rhodin, Dushyant Mehta, Hans-Peter Seidel, Christian Theobalt
2018 arXiv   pre-print
We present the first marker-less approach for temporally coherent 3D performance capture of a human with general clothing from monocular video.  ...  We demonstrate state-of-the-art performance capture results that enable exciting applications such as video editing and free viewpoint video, previously infeasible from monocular video.  ...
arXiv:1708.02136v2 fatcat:toewmmbynnbppmxsop43d4xk3e

LiveCap: Real-time Human Performance Capture from Monocular Video [article]

Marc Habermann, Weipeng Xu, Michael Zollhoefer, Gerard Pons-Moll, Christian Theobalt
2019 arXiv   pre-print
We present the first real-time human performance capture approach that reconstructs dense, space-time coherent deforming geometry of entire humans in general everyday clothing from just a single RGB video  ...  Our method is the first real-time monocular approach for full-body performance capture.  ...  Capturing 3D non-rigid deformations from monocular video is very hard.  ... 
arXiv:1810.02648v3 fatcat:ijfxnvdczzc6jh2gpywnmde47y

EventCap: Monocular 3D Capture of High-Speed Human Motions using an Event Camera [article]

Lan Xu, Weipeng Xu, Vladislav Golyanik, Marc Habermann, Lu Fang and Christian Theobalt
2019 arXiv   pre-print
The high frame rate is a critical requirement for capturing fast human motions.  ...  As a result, we can capture fast motions at millisecond resolution with significantly higher data efficiency than using high frame rate videos.  ...  Figure 1: We present the first monocular event-based 3D human motion capture approach.  ...
arXiv:1908.11505v1 fatcat:wr36edgfuncpxkl3spr4pbiqcu

EventCap: Monocular 3D Capture of High-Speed Human Motions Using an Event Camera

Lan Xu, Weipeng Xu, Vladislav Golyanik, Marc Habermann, Lu Fang, Christian Theobalt
2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
The high frame rate is a critical requirement for capturing fast human motions.  ...  As a result, we can capture fast motions at millisecond resolution with significantly higher data efficiency than using high frame rate videos.  ...  Figure 1: We present the first monocular event-based 3D human motion capture approach.  ...
doi:10.1109/cvpr42600.2020.00502 dblp:conf/cvpr/XuXGHFT20 fatcat:v3j6xqdmcjei5gd6tm7mov7kje

MonoClothCap: Towards Temporally Coherent Clothing Capture from Monocular RGB Video [article]

Donglai Xiang, Fabian Prada, Chenglei Wu, Jessica Hodgins
2020 arXiv   pre-print
We present a method to capture temporally coherent dynamic clothing deformation from a monocular RGB video input.  ...  Our method produces temporally coherent reconstruction of body and clothing from monocular video. We demonstrate successful clothing capture results from a variety of challenging videos.  ...  By contrast, in this paper, we address the challenging problem of capturing clothing dynamics from a monocular video. Monocular Human Performance Capture.  ... 
arXiv:2009.10711v2 fatcat:qctat6zyxvaetkwrqpkm2hjci4

A-NeRF: Articulated Neural Radiance Fields for Learning Human Shape, Appearance, and Pose [article]

Shih-Yang Su, Frank Yu, Michael Zollhoefer, Helge Rhodin
2021 arXiv   pre-print
We propose a method to learn a generative neural body model from unlabelled monocular videos by extending Neural Radiance Fields (NeRFs).  ...  This enables learning volumetric body shape and appearance from scratch while jointly refining the articulated pose; all without ground truth labels for appearance, pose, or 3D shape on the input videos  ...  The number of frames per subject ranges from 276 to 603. MonoPerfCap [74]: this dataset consists of human performance video captured with a monocular camera in both indoor and outdoor settings.  ...
arXiv:2102.06199v3 fatcat:usxlnp6wfrazdp66xrsvfqmdne

A Deeper Look into DeepCap [article]

Marc Habermann, Weipeng Xu, Michael Zollhoefer, Gerard Pons-Moll, Christian Theobalt
2021 arXiv   pre-print
We propose a novel deep learning approach for monocular dense human performance capture.  ...  Human performance capture is a highly important computer vision problem with many applications in movie production and virtual/augmented reality.  ...  ACKNOWLEDGMENT This work was funded by the ERC Consolidator Grant 4DRepLy (770784) and the Deutsche Forschungsgemeinschaft (Project Nr. 409792180, Emmy Noether Programme, project: Real Virtual Humans).  ... 
arXiv:2111.10563v1 fatcat:pkwf5736rje43ihu7pmkjjggly

SelfRecon: Self Reconstruction Your Digital Avatar from Monocular Video [article]

Boyi Jiang, Yang Hong, Hujun Bao, Juyong Zhang
2022 arXiv   pre-print
We propose SelfRecon, a clothed human body reconstruction method that combines implicit and explicit representations to recover space-time coherent geometries from a monocular self-rotating human video  ...  Extensive experimental results demonstrate its effectiveness on real captured monocular videos. The source code is available at https://github.com/jby1993/SelfReconCode.  ...  human performance capture approaches [17, 18, 52] are mainly designed based on explicit mesh representation.  ... 
arXiv:2201.12792v2 fatcat:ocubpi7ug5glxonubsl7crluhe

Contact and Human Dynamics from Monocular Video [article]

Davis Rempe, Leonidas J. Guibas, Aaron Hertzmann, Bryan Russell, Ruben Villegas, Jimei Yang
2020 arXiv   pre-print
In this paper, we present a physics-based method for inferring 3D human motion from video sequences that takes initial 2D and 3D pose estimates as input.  ...  Existing deep models predict 2D and 3D kinematic poses from video that are approximately accurate, but contain visible errors that violate physical constraints, such as feet penetrating the ground and  ...  This work was in part supported by NSF grant IIS-1763268, grants from the Samsung GRO program and the Stanford SAIL Toyota Research Center, and a gift from Adobe Corporation.  ... 
arXiv:2007.11678v2 fatcat:5z6bxjz2szgdzkoiee5cwdj3ea

Tex2Shape: Detailed Full Human Body Geometry From a Single Image

Thiemo Alldieck, Gerard Pons-Moll, Christian Theobalt, Marcus Magnor
2019 IEEE/CVF International Conference on Computer Vision (ICCV)
Figure 1: We present an image-to-image translation model for detailed full human body geometry reconstruction from a single image.  ...  We present a simple yet effective method to infer detailed full human body shape from only a single photograph.  ...  This work is partly funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - 409792180 (Emmy Noether Programme, project: Real Virtual Humans) and project MA2555/12-1.  ...
doi:10.1109/iccv.2019.00238 dblp:conf/iccv/AlldieckPTM19 fatcat:zllzubdigfgnvf3tfjk7tlaj2y

Tex2Shape: Detailed Full Human Body Geometry From a Single Image [article]

Thiemo Alldieck, Gerard Pons-Moll, Christian Theobalt, Marcus Magnor
2019 arXiv   pre-print
We present a simple yet effective method to infer detailed full human body shape from only a single photograph.  ...  The input to our method is a partial texture map of the visible region obtained from off-the-shelf methods.  ...  This work is partly funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - 409792180 (Emmy Noether Programme, project: Real Virtual Humans) and project MA2555/12-1.  ...
arXiv:1904.08645v2 fatcat:u3uyayp3abdatjezilsa6mxnjm

MVP-Human Dataset for 3D Human Avatar Reconstruction from Unconstrained Frames [article]

Xiangyu Zhu, Tingting Liao, Jiangjing Lyu, Xiang Yan, Yunfeng Wang, Kan Guo, Qiong Cao, Stan Z. Li, Zhen Lei
2022 arXiv   pre-print
In this paper, we consider a novel problem of reconstructing a 3D human avatar from multiple unconstrained frames, independent of assumptions on camera calibration, capture space, and constrained actions  ...  Overall, benefiting from the specific network architecture and the diverse data, the trained model enables 3D avatar reconstruction from unconstrained frames and achieves state-of-the-art performance.  ...  Recently, some methods try to recover a clothed 3D avatar from a monocular video in which a person is moving.  ...
arXiv:2204.11184v1 fatcat:etohs6biivcsvjgzwwrjbxj4qi

Structure from Articulated Motion: An Accurate and Stable Monocular 3D Reconstruction Approach without Training Data [article]

Onorina Kovalenko, Vladislav Golyanik, Jameel Malik, Ahmed Elhayek, Didier Stricker
2019 arXiv   pre-print
In this paper, we introduce a new model-based method called Structure from Articulated Motion (SfAM).  ...  It achieves state-of-the-art accuracy and scales across different scenarios, as shown in extensive experiments on public benchmarks and real video sequences.  ...  SfAM achieves the most consistent performance among all compared algorithms. Real-World Videos: Our algorithm is capable of recovering human motion from challenging real-world videos.  ...
arXiv:1905.04789v1 fatcat:o4k2h3spzbcfxdtjesbvz6m52i

Structure from Articulated Motion: Accurate and Stable Monocular 3D Reconstruction without Training Data

Onorina Kovalenko, Vladislav Golyanik, Jameel Malik, Ahmed Elhayek, Didier Stricker
2019 Sensors  
We believe that it brings a new perspective on the domain of monocular 3D recovery of articulated structures, including human motion capture.  ...  At the same time, it performs on par with learning-based state-of-the-art approaches on public benchmarks and outperforms previous non-rigid structure from motion (NRSfM) methods.  ...  SfAM achieves the most consistent performance among all compared algorithms. Real-World Videos: Our algorithm is capable of recovering human motion from challenging real-world videos.  ...
doi:10.3390/s19204603 fatcat:7ul4fn7hu5babl6hhi5wipyo24

ChallenCap: Monocular 3D Capture of Challenging Human Performances using Multi-Modal References [article]

Yannan He, Anqi Pang, Xin Chen, Han Liang, Minye Wu, Yuexin Ma, Lan Xu
2021 arXiv   pre-print
Capturing challenging human motions is critical for numerous applications, but it suffers from complex motion patterns and severe self-occlusion under the monocular setting.  ...  Extensive experiments on our new challenging motion dataset demonstrate the effectiveness and robustness of our approach to capture challenging human motions.  ...  Recent learning-based techniques enable robust human attribute prediction from monocular RGB video [31, 35, 2, 80, 55].  ...
doi:10.48550/arxiv.2103.06747 fatcat:fj4riahq2ndudpxytxbe677zte
Showing results 1 — 15 out of 20 results