9 Hits in 1.6 sec

LookinGood: Enhancing Performance Capture with Real-time Neural Re-Rendering [article]

Ricardo Martin-Brualla, Rohit Pandey, Shuoran Yang, Pavel Pidlypenskyi, Jonathan Taylor, Julien Valentin, Sameh Khamis, Philip Davidson, Anastasia Tkach, Peter Lincoln, Adarsh Kowdle, Christoph Rhemann, Dan B Goldman (+4 others)
2018 arXiv   pre-print
We call this approach neural (re-)rendering, and our live system "LookinGood". ... We take the novel approach to augment such real-time performance capture systems with a deep architecture that takes a rendering from an arbitrary viewpoint, and jointly performs completion, super resolution ... LOOKINGOOD WITH NEURAL RE-RENDERING Existing real-time single- and multi-view performance capture pipelines [Dou et al. 2016, 2017; Newcombe et al. 2015] estimate the geometry and texture ...
arXiv:1811.05029v1 fatcat:lxmoanmk75ez7dyg6xwnvcm7wm
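
A minimal sketch of the neural re-rendering idea described in the entry above, assuming a PyTorch-style encoder-decoder: the network consumes an imperfect real-time render and jointly performs completion and super-resolution. The layer sizes and the 2x upsampling factor are illustrative assumptions, not the paper's actual architecture.

```python
# Hedged sketch: enhance a low-quality geometry-proxy render in one forward pass.
import torch
import torch.nn as nn

class ReRenderNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: downsample the raw render to a compact feature map.
        self.enc = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Decoder: upsample past the input resolution, so the network jointly
        # fills holes (completion) and increases resolution (super-resolution).
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, render):
        return self.dec(self.enc(render))

# A 256x256 input render becomes a 512x512 enhanced image.
out = ReRenderNet()(torch.rand(1, 3, 256, 256))
print(out.shape)  # torch.Size([1, 3, 512, 512])
```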

LookinGood^π: Real-time Person-independent Neural Re-rendering for High-quality Human Performance Capture [article]

Xiqi Yang, Kewei Yang, Kang Chen, Weidong Zhang, Weiwei Xu
2021 arXiv   pre-print
We propose LookinGood^π, a novel neural re-rendering approach that aims to (1) improve the rendering quality of the low-quality reconstructed results from a human performance capture system in real time ... Our key idea is to utilize the rendered image of the reconstructed geometry as guidance to assist the prediction of person-specific details from a few reference images, thus enhancing the re-rendered result ... We summarize our contributions as follows: • We present LookinGood^π, a real-time person-independent neural re-rendering approach to enhance human performance capture, especially with sparse multi-view ...
arXiv:2112.08037v1 fatcat:pttx4etzfrfizmjowx3doyy2ji
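
A hedged sketch of the guidance idea in the entry above: features of the rendered geometry image are fused with features pooled from a few reference images of the person, and a decoder predicts the enhanced frame. The fuse-by-concatenation scheme and all layer sizes are assumptions, not the paper's network.

```python
# Hedged sketch: rendered-geometry guidance plus few-shot reference features.
import torch
import torch.nn as nn

feat = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())  # shared feature extractor
decode = nn.Conv2d(32, 3, 3, padding=1)                          # fused features -> RGB

guidance = torch.rand(1, 3, 128, 128)    # render of the reconstructed geometry
references = torch.rand(4, 3, 128, 128)  # a few reference images of the person

g = feat(guidance)                              # (1, 16, 128, 128)
r = feat(references).mean(dim=0, keepdim=True)  # pool person-specific detail features
enhanced = torch.sigmoid(decode(torch.cat([g, r], dim=1)))
print(enhanced.shape)  # torch.Size([1, 3, 128, 128])
```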

Few-shot Neural Human Performance Rendering from Sparse RGBD Videos [article]

Anqi Pang, Xin Chen, Haimin Luo, Minye Wu, Jingyi Yu, Lan Xu
2021 arXiv   pre-print
Recent neural rendering approaches for human activities achieve remarkable view synthesis results, but still rely on dense input views or dense training with all the capture frames, leading to deployment  ...  We introduce a two-branch neural blending to combine the neural point render and classical graphics texturing pipeline, which integrates reliable observations over sparse key-frames.  ...  In our neural renderer training, we set k to be 20 for a typical motion sequence with about 500 frames, leading to 4% sparsity of capture view sampling.  ... 
arXiv:2107.06505v1 fatcat:ilzslk4kvvgrri7svtvjhqwfze
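
Two concrete points from the entry above, sketched below: the reported key-frame budget (k = 20 frames of a roughly 500-frame sequence, i.e. 20/500 = 4% of capture views) and a two-branch blend mixing the neural point render with the classical textured render via a per-pixel weight. The weight-prediction layer is an illustrative stand-in, not the paper's blending network.

```python
# Hedged sketch: key-frame sparsity arithmetic and a two-branch per-pixel blend.
import torch
import torch.nn as nn

k, n_frames = 20, 500
print(f"key-frame sparsity: {k / n_frames:.0%}")  # 4%

neural = torch.rand(1, 3, 128, 128)   # neural point-render branch
classic = torch.rand(1, 3, 128, 128)  # classical graphics texturing branch

# Predict a per-pixel blend weight in [0, 1] from both branches.
weight_net = nn.Sequential(nn.Conv2d(6, 1, 3, padding=1), nn.Sigmoid())
w = weight_net(torch.cat([neural, classic], dim=1))
blended = w * neural + (1 - w) * classic
```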

Human View Synthesis using a Single Sparse RGB-D Input [article]

Phong Nguyen, Nikolaos Sarafianos, Christoph Lassner, Janne Heikkila, Tony Tung
2021 arXiv   pre-print
Additionally, an enhancer network improves the overall fidelity, even in occluded areas from the original view, producing crisp renders with fine details. ... Aiming to address these limitations, we present a novel view synthesis framework to generate realistic renders from unseen views of any human captured from a single-view sensor with sparse RGB-D, similar ... Unsupervised learning of shape and pose with differentiable point clouds. ... LookinGood: Enhancing performance capture with real-time neural re-rendering. ACM Trans. ...
arXiv:2112.13889v2 fatcat:u6e2uuinxra2lnrknjmrcsd6yq

Learning Dynamic View Synthesis With Few RGBD Cameras [article]

Shengze Wang, YoungJoong Kwon, Yuan Shen, Qian Zhang, Andrei State, Jia-Bin Huang, Henry Fuchs
2022 arXiv   pre-print
We generate feature point clouds from RGBD frames and then render them into free-viewpoint videos via a neural renderer. ... The dataset consists of 43 multi-view RGBD video sequences of everyday activities, capturing complex interactions between human subjects and their surroundings. ... We perform neural rendering with point clouds instead of feature-map warping. ...
arXiv:2204.10477v2 fatcat:tfortvxrwrcthkcrnbxjdslf7y
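
A sketch of the first step named in the entry above: lifting an RGBD frame to a point cloud with the pinhole camera model. The intrinsics are made-up values, and a real pipeline would also attach learned features to each point before passing the cloud to the neural renderer.

```python
# Hedged sketch: back-project every RGBD pixel into a 3D point cloud.
import numpy as np

h, w = 480, 640
fx = fy = 500.0            # assumed focal lengths (pixels)
cx, cy = w / 2.0, h / 2.0  # assumed principal point

depth = np.random.uniform(0.5, 3.0, size=(h, w))  # per-pixel depth (meters)
u, v = np.meshgrid(np.arange(w), np.arange(h))

# Pinhole back-projection: x = (u - cx) * z / fx, y = (v - cy) * z / fy.
x = (u - cx) * depth / fx
y = (v - cy) * depth / fy
points = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
print(points.shape)  # (307200, 3): one 3D point per pixel
```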

State of the Art on Neural Rendering [article]

Ayush Tewari, Ohad Fried, Justus Thies, Vincent Sitzmann, Stephen Lombardi, Kalyan Sunkavalli, Ricardo Martin-Brualla, Tomas Simon, Jason Saragih, Matthias Nießner, Rohit Pandey, Sean Fanello (+7 others)
2020 arXiv   pre-print
Starting with an overview of the underlying computer graphics and machine learning concepts, we discuss critical aspects of neural rendering approaches. ... Neural rendering is a new and rapidly emerging field that combines generative machine learning techniques with physical knowledge from computer graphics, e.g., by the integration of differentiable rendering ... Figure 8: The LookinGood system [MBPY*18] uses real-time neural re-rendering to enhance performance capture systems. Images taken from Martin-Brualla et al. [MBPY*18]. ...
arXiv:2004.03805v1 fatcat:6qs7ddftkfbotdlfd4ks7llovq

Neural Re-Rendering of Humans from a Single Image [article]

Kripasindhu Sarkar, Dushyant Mehta, Weipeng Xu, Vladislav Golyanik, Christian Theobalt
2021 arXiv   pre-print
The body model with the rendered feature maps is fed through a neural image-translation network that creates the final rendered colour image. ... To address these challenges, we propose a new method for neural re-rendering of a human under a novel user-defined pose and viewpoint, given one input image. ...
arXiv:2101.04104v1 fatcat:6snujhlbpbgd7nc4gcw62soivy
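
A minimal sketch of the pipeline in the entry above: per-pixel features rendered from a posed body model are translated to RGB by an image-translation network. The mesh rasterizer is simulated here by a random per-pixel vertex-ID map, and all dimensions except SMPL's 6890-vertex count are assumptions.

```python
# Hedged sketch: body-model feature rendering followed by image translation.
import torch
import torch.nn as nn

n_vertices, feat_dim = 6890, 16  # 6890 matches SMPL's vertex count
vertex_features = nn.Embedding(n_vertices, feat_dim)

# Stand-in for rasterization: which body-model vertex covers each pixel.
vertex_ids = torch.randint(0, n_vertices, (1, 64, 64))
feature_image = vertex_features(vertex_ids).permute(0, 3, 1, 2)  # (1, 16, 64, 64)

# Image-translation network: feature image -> final colour image.
translate = nn.Sequential(
    nn.Conv2d(feat_dim, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),
)
rgb = translate(feature_image)
print(rgb.shape)  # torch.Size([1, 3, 64, 64])
```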

Neural Body: Implicit Neural Representations with Structured Latent Codes for Novel View Synthesis of Dynamic Humans [article]

Sida Peng, Yuanqing Zhang, Yinghao Xu, Qianqian Wang, Qing Shuai, Hujun Bao, Xiaowei Zhou
2021 arXiv   pre-print
To evaluate our approach, we create a multi-view dataset named ZJU-MoCap that captures performers with complex motions. ... This paper addresses the challenge of novel view synthesis for a human performer from a very sparse set of camera views. ... Lookingood: Enhancing performance capture with real-time neural re-rendering. In SIGGRAPH Asia, 2018. ... [53] Vincent Sitzmann, Michael Zollhöfer, and Gordon Wetzstein. ...
arXiv:2012.15838v2 fatcat:cehzk5zuwvespkjwhbgp3tnolu
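
A sketch of the structured-latent-code idea named in the title above: one latent code is attached to each body-model vertex, query points along camera rays gather a nearby code, and an MLP decodes code plus position into density and colour. The nearest-vertex gathering is a simplification (the paper diffuses the codes with a sparse 3D convolutional network), and all sizes are assumptions.

```python
# Hedged sketch: per-vertex structured latent codes decoded at ray samples.
import torch
import torch.nn as nn

n_vertices, code_dim = 6890, 16
codes = nn.Parameter(torch.randn(n_vertices, code_dim))  # one learned code per vertex
vertices = torch.rand(n_vertices, 3)                     # posed body-model vertices

mlp = nn.Sequential(nn.Linear(code_dim + 3, 64), nn.ReLU(), nn.Linear(64, 4))

query = torch.rand(1024, 3)                           # sample points along camera rays
nearest = torch.cdist(query, vertices).argmin(dim=1)  # nearest vertex per query point
out = mlp(torch.cat([codes[nearest], query], dim=1))  # (1024, 4): density + RGB
```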

Few-shot Neural Human Performance Rendering from Sparse RGBD Videos

Anqi Pang, Xin Chen, Haimin Luo, Minye Wu, Jingyi Yu, Lan Xu
2021 Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence   unpublished
Recent neural rendering approaches for human activities achieve remarkable view synthesis results, but still rely on dense input views or dense training with all the capture frames, leading to deployment  ...  We introduce a two-branch neural blending to combine the neural point render and classical graphics texturing pipeline, which integrates reliable observations over sparse key-frames.  ...  In our neural renderer training, we set k to be 20 for a typical motion sequence with about 500 frames, leading to 4% sparsity of capture view sampling.  ... 
doi:10.24963/ijcai.2021/130 fatcat:cxxo23s4knc6ln3tkngnc2ys5u