
Robust Pose Transfer with Dynamic Details using Neural Video Rendering [article]

Yang-tian Sun, Hao-zhi Huang, Xuan Wang, Yu-kun Lai, Wei Liu, Lin Gao
2021 arXiv   pre-print
Through extensive comparisons, we demonstrate that our neural human video renderer is capable of achieving both clearer dynamic details and more robust performance even on accessible short videos with  ...  Pose transfer of human videos aims to generate a high fidelity video of a target person imitating actions of a source person.  ...  Here, we summarize the technical contributions as follows: • A novel end-to-end neural rendering framework for human video generation with dynamic details using accessible monocular video as training data  ... 
arXiv:2106.14132v2 fatcat:dslhnmaborfstasn2qh6lwejly

Video-driven Neural Physically-based Facial Asset for Production [article]

Longwen Zhang, Chuxiao Zeng, Qixuan Zhang, Hongyang Lin, Ruixiang Cao, Wei Yang, Lan Xu, Jingyi Yu
2022 arXiv   pre-print
In addition, our neural asset along with fast adaptation schemes can also be deployed to handle in-the-wild videos.  ...  In this paper, we present a new learning-based, video-driven approach for generating dynamic facial geometries with high-quality physically-based assets.  ...  Our method achieves detailed video-driven results from different identities with dynamic textures, which leads to photo-realistic rendering.  ...
arXiv:2202.05592v3 fatcat:tfbmwzburfh7hicvdgwzd4erga

Neural Human Video Rendering by Learning Dynamic Textures and Rendering-to-Video Translation [article]

Lingjie Liu, Weipeng Xu, Marc Habermann, Michael Zollhoefer, Florian Bernard, Hyeongwoo Kim, Wenping Wang, Christian Theobalt
2021 arXiv   pre-print
Synthesizing realistic videos of humans using neural networks has been a popular alternative to the conventional graphics-based rendering pipeline due to its high efficiency.  ...  Given the pose information, the first CNN predicts a dynamic texture map that contains time-coherent high-frequency details, and the second CNN conditions the generation of the final video on the temporally  ...  These benefits come from a well-designed three-stage pipeline that first generates a dynamic texture with time-coherent high-frequency details and then renders the mesh with the dynamic texture, which  ... 
arXiv:2001.04947v3 fatcat:ppii2ilexze7nkejshrohlky4u
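The two-stage design in this entry (a first CNN predicting a pose-dependent dynamic texture map, a second CNN translating the textured render into the final frame) can be sketched as follows. This is a minimal PyTorch illustration under assumed module names (`TexturePredictor`, `RenderToVideo`) and a stand-in texture-sampling renderer, not the authors' implementation:

```python
# Minimal sketch of the two-stage pipeline described above (hypothetical
# module names and toy sizes; the paper's architectures are more elaborate).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TexturePredictor(nn.Module):
    """Stage 1: map a pose encoding to a dynamic texture map."""
    def __init__(self, pose_ch=3, tex_ch=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(pose_ch, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, tex_ch, 3, padding=1),
        )
    def forward(self, pose_map):
        return self.net(pose_map)

class RenderToVideo(nn.Module):
    """Stage 2: translate the textured render into the final frame."""
    def __init__(self, in_ch=3, out_ch=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, out_ch, 3, padding=1),
        )
    def forward(self, render):
        return torch.sigmoid(self.net(render))

def render_mesh(texture, uv):
    """Stand-in renderer: sample the texture at per-pixel UV coordinates
    (a rasterizer would supply `uv` from the posed body mesh)."""
    return F.grid_sample(texture, uv, align_corners=False)

pose_map = torch.randn(1, 3, 128, 128)    # pose conditioning image
uv = torch.rand(1, 128, 128, 2) * 2 - 1   # per-pixel UVs in [-1, 1]
texture = TexturePredictor()(pose_map)    # dynamic, pose-dependent texture
frame = RenderToVideo()(render_mesh(texture, uv))
print(frame.shape)  # torch.Size([1, 3, 128, 128])
```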

A New Dimension in Testimony: Relighting Video with Reflectance Field Exemplars [article]

Loc Huynh, Bipin Kishore, Paul Debevec
2021 arXiv   pre-print
We also use a differentiable renderer to provide feedback for the network by matching the relit images with the input video frames.  ...  This semi-supervised training scheme allows the neural network to handle unseen poses in the dataset as well as compensate for the lighting estimation error.  ...  But instead of recording OLATs for every moment of the video, our neural network infers OLATs for each video frame based on exemplars from static poses, enabling dynamic performance relighting.  ...
arXiv:2104.02773v1 fatcat:4pynaycd5rb4zgkbt3i4sq357q
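Relighting from reflectance field exemplars rests on the standard image-based relighting identity: a relit frame is a weighted sum of one-light-at-a-time (OLAT) basis images, with weights read off the target lighting environment. A small NumPy sketch of that sum (toy sizes; in the paper the per-frame OLATs themselves are inferred by the network):

```python
# Image-based relighting from a reflectance field: the relit frame is a
# weighted sum of OLAT basis images (standard formulation; exact network
# details differ in the paper).
import numpy as np

def relight(olat_images, light_weights):
    """olat_images: (L, H, W, 3) basis images, one per light direction.
    light_weights: (L, 3) RGB intensity of each light in the new environment."""
    return np.einsum("lhwc,lc->hwc", olat_images, light_weights)

L, H, W = 16, 4, 4                  # toy sizes
olat = np.random.rand(L, H, W, 3)   # inferred per-frame OLATs
env = np.random.rand(L, 3)          # target environment sampled at the lights
relit_frame = np.clip(relight(olat, env), 0.0, 1.0)
print(relit_frame.shape)            # (4, 4, 3)
```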

Animatable Neural Radiance Fields from Monocular RGB Videos [article]

Jianchuan Chen, Ying Zhang, Di Kang, Xuefei Zhe, Linchao Bao, Xu Jia, Huchuan Lu
2021 arXiv   pre-print
We present animatable neural radiance fields (animatable NeRF) for detailed human avatar creation from monocular videos.  ...  Our approach extends neural radiance fields (NeRF) to dynamic scenes with human movements by introducing an explicit pose-guided deformation while learning the scene representation network.  ...  We can use volumetric rendering (Section 3.3) to render our neural radiance field.  ...
arXiv:2106.13629v2 fatcat:tg3f5jyyhbbcjhpisay5pe6b5e
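The volumetric rendering step the snippet refers to is the standard NeRF quadrature, accumulating per-sample color along each ray with density-derived weights. A minimal NumPy version of that formula (toy inputs, independent of the paper's code):

```python
# Standard NeRF volume-rendering quadrature:
#   C = sum_i T_i * (1 - exp(-sigma_i * delta_i)) * c_i,
#   T_i = exp(-sum_{j<i} sigma_j * delta_j)
import numpy as np

def volume_render(sigmas, colors, deltas):
    """sigmas: (N,) densities; colors: (N, 3); deltas: (N,) sample spacings.
    Returns the composited color of one ray."""
    alphas = 1.0 - np.exp(-sigmas * deltas)
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    weights = trans * alphas
    return (weights[:, None] * colors).sum(axis=0)

N = 64
print(volume_render(np.random.rand(N), np.random.rand(N, 3), np.full(N, 0.01)))
```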

Head2Head: Video-based Neural Head Synthesis [article]

Mohammad Rami Koujan, Michail Christos Doukas, Anastasios Roussos, Stefanos Zafeiriou
2020 arXiv   pre-print
We demonstrate that the proposed method can transfer facial expressions, pose and gaze of a source actor to a target video in a photo-realistic fashion more accurately than state-of-the-art methods.  ...  In particular, contrary to the model-based approaches or recent frame-based methods that use Deep Convolutional Neural Networks (DCNNs) to generate individual frames, we propose a novel method that (a)  ...  In this method, texture features are learned from the target video and translated to RGB images with a neural renderer.  ... 
arXiv:2005.10954v1 fatcat:kw43v65lindf3oapvw7rnjxgbu

Neural Re-Rendering of Humans from a Single Image [article]

Kripasindhu Sarkar, Dushyant Mehta, Weipeng Xu, Vladislav Golyanik, Christian Theobalt
2021 arXiv   pre-print
The body model with the rendered feature maps is fed through a neural image-translation network that creates the final rendered colour image.  ...  To address these challenges, we propose a new method for neural re-rendering of a human under a novel user-defined pose and viewpoint, given one input image.  ...
arXiv:2101.04104v1 fatcat:6snujhlbpbgd7nc4gcw62soivy

Real-Time Neural Character Rendering with Pose-Guided Multiplane Images [article]

Hao Ouyang, Bo Zhang, Pan Zhang, Hao Yang, Jiaolong Yang, Dong Chen, Qifeng Chen, Fang Wen
2022 arXiv   pre-print
We propose pose-guided multiplane image (MPI) synthesis which can render an animatable character in real scenes with photorealistic quality.  ...  Our method generalizes the image-to-image translation paradigm, which translates the human pose to a 3D scene representation -- MPIs that can be rendered from free viewpoints, using multi-view captures  ...  Neural scene representation: instead of rendering with a black-box image translation process, recent works turn to using neural networks to model some intrinsic aspects of the scene followed by a physics-based  ...
arXiv:2204.11820v1 fatcat:svrgg7sr2vgdfpua74rgo6cqge
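Once the pose-guided network has produced the MPI, rendering reduces to compositing the RGBA planes back to front with the classic "over" operator. A generic sketch of that compositing step (assumed shapes, not the authors' renderer):

```python
# Multiplane-image (MPI) compositing: stack of fronto-parallel RGBA planes,
# blended back to front with the "over" operator.
import numpy as np

def composite_mpi(rgba_planes):
    """rgba_planes: (D, H, W, 4), ordered back (index 0) to front (index D-1)."""
    out = np.zeros(rgba_planes.shape[1:3] + (3,))
    for plane in rgba_planes:
        rgb, alpha = plane[..., :3], plane[..., 3:4]
        out = alpha * rgb + (1.0 - alpha) * out
    return out

planes = np.random.rand(32, 8, 8, 4)  # D=32 planes predicted from the pose
print(composite_mpi(planes).shape)    # (8, 8, 3)
```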

Artemis: Articulated Neural Pets with Appearance and Motion synthesis [article]

Haimin Luo, Teng Xu, Yuheng Jiang, Chenglin Zhou, Qiwei Qiu, Yingliang Zhang, Wei Yang, Lan Xu, Jingyi Yu
2022 arXiv   pre-print
In this paper, we present ARTEMIS, a novel neural modeling and rendering pipeline for generating ARTiculated neural pets with appEarance and Motion synthesIS.  ...  Finally, we propose a novel shading network to generate high-fidelity details of appearance and opacity under novel poses from appearance and density feature maps.  ...  Besides, we thank Zhenxiao Yu from Shang-haiTech University for producing a supplementary video.  ... 
arXiv:2202.05628v2 fatcat:v7f6tg64fvfohmtl7riiecn5fu

ShineOn: Illuminating Design Choices for Practical Video-based Virtual Clothing Try-on [article]

Gaurav Kuppa, Andrew Jong, Vera Liu, Ziwei Liu, Teng-Sheng Moh
2021 arXiv   pre-print
Virtual try-on has garnered interest as a neural rendering benchmark task to evaluate complex object transfer and scene composition.  ...  Specifically, we investigate the effect of different pose annotations, self-attention layer placement, and activation functions on the quantitative and qualitative performance of video virtual try-on.  ...  detail, exhibit smooth temporal dynamics, establish temporal consistency, and blend well with the scene's lighting.  ... 
arXiv:2012.10495v2 fatcat:agqqtz4xercwld4ou7ed4ksfdy

Layered Neural Rendering for Retiming People in Video [article]

Erika Lu, Forrester Cole, Tali Dekel, Weidi Xie, Andrew Zisserman, David Salesin, William T. Freeman, Michael Rubinstein
2021 arXiv   pre-print
The layers can be individually retimed and recombined into a new video, allowing us to achieve realistic, high-quality renderings of retiming effects for real-world videos depicting complex actions and  ...  A key property of our model is that it not only disentangles the direct motions of each person in the input video, but also correlates each person automatically with the scene changes they generate --  ...  ACKNOWLEDGMENTS We thank the friends and family that appeared in our videos. The original Ballroom video belongs to Desert Classic Dance.  ... 
arXiv:2009.07833v2 fatcat:zgg5nx6ykrhmzjta5a27rpvvgy
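The retiming operation described here amounts to shifting each person's RGBA layer independently in time and re-compositing the layers back to front. A schematic sketch of that recombination (the layer decomposition itself is the learned part and is not shown):

```python
# Retime per-person RGBA layers and re-composite them with the "over"
# operator (schematic; assumed shapes and integer frame shifts).
import numpy as np

def retime_and_composite(layers, offsets):
    """layers: list of (T, H, W, 4) RGBA layer videos, ordered back to front.
    offsets: per-layer integer frame shifts."""
    T = layers[0].shape[0]
    out = np.zeros((T,) + layers[0].shape[1:3] + (3,))
    for layer, dt in zip(layers, offsets):
        shifted = layer[np.clip(np.arange(T) - dt, 0, T - 1)]  # hold at ends
        rgb, a = shifted[..., :3], shifted[..., 3:4]
        out = a * rgb + (1.0 - a) * out
    return out

layers = [np.random.rand(10, 4, 4, 4) for _ in range(3)]
video = retime_and_composite(layers, offsets=[0, 2, -1])
print(video.shape)  # (10, 4, 4, 3)
```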

Everybody's Talkin': Let Me Talk as You Want [article]

Linsen Song, Wayne Wu, Chen Qian, Ran He, Chen Change Loy
2020 arXiv   pre-print
Finally, we introduce a novel video rendering network and a dynamic programming method to construct a temporally coherent and photo-realistic video.  ...  The audio-translated expression parameters are then used to synthesize a photo-realistic human subject in each video frame, with the movement of the mouth regions precisely mapped to the source audio.  ...  Tongue: in our method, the Neural Video Rendering Network produces lip fiducials and the teeth proxy adds high-frequency teeth details.  ...
arXiv:2001.05201v1 fatcat:wes6abhwinghfohufyh46dacwy
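The "dynamic programming method" for temporal coherence can be read as a Viterbi-style search: for each output frame, choose one of K candidate renders so that the sum of per-frame fit costs and frame-to-frame transition costs is minimized. A generic sketch under that assumption (the paper's exact cost terms are not specified in the snippet):

```python
# Viterbi-style DP: pick one candidate per frame minimizing unary cost plus
# transition cost between consecutive choices (generic, assumed costs).
import numpy as np

def dp_select(unary, pairwise):
    """unary: (T, K) per-frame candidate costs; pairwise: (K, K) transition
    costs. Returns the minimizing candidate index for each frame."""
    T, K = unary.shape
    cost = unary[0].copy()
    back = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        total = cost[:, None] + pairwise     # (K_prev, K_cur)
        back[t] = total.argmin(axis=0)       # best predecessor per candidate
        cost = total.min(axis=0) + unary[t]
    path = [int(cost.argmin())]
    for t in range(T - 1, 0, -1):            # backtrack from the last frame
        path.append(int(back[t, path[-1]]))
    return path[::-1]

print(dp_select(np.random.rand(5, 4), np.random.rand(4, 4)))
```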

Human View Synthesis using a Single Sparse RGB-D Input [article]

Phong Nguyen, Nikolaos Sarafianos, Christoph Lassner, Janne Heikkila, Tony Tung
2021 arXiv   pre-print
We propose an architecture to learn dense features in novel views obtained by sphere-based neural rendering, and create complete renders using a global context inpainting model.  ...  Additionally, an enhancer network improves the overall fidelity, even in areas occluded from the original view, producing crisp renders with fine details.  ...  Instead, we use the dynamic per-  ...  a dynamic Neural Radiance Field.  ...
arXiv:2112.13889v2 fatcat:u6e2uuinxra2lnrknjmrcsd6yq

HVTR: Hybrid Volumetric-Textural Rendering for Human Avatars [article]

Tao Hu, Tao Yu, Zerong Zheng, He Zhang, Yebin Liu, Matthias Zwicker
2021 arXiv   pre-print
While this allows us to represent 3D geometry with changing topology, volumetric rendering is computationally heavy.  ...  Hence we employ only a rough volumetric representation using a pose-conditioned downsampled neural radiance field (PD-NeRF), which we can render efficiently at low resolutions.  ...  Neural rendering and reenactment of human actor videos. Dense pose transfer. European Conference on Computer Vision.  ...
arXiv:2112.10203v1 fatcat:fypyninknbfopb73m343dmeoua
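The hybrid trade-off described here, rendering the volumetric representation only at low resolution and letting a 2D network produce the full-resolution image, can be sketched as follows (a schematic PyTorch example with made-up channel counts; the PD-NeRF render is replaced by a random feature map):

```python
# Hybrid volumetric-textural idea: cheap low-res volumetric features,
# upsampled to a full image by a 2D network (schematic, assumed sizes).
import torch
import torch.nn as nn

class TexturalUpsampler(nn.Module):
    """2D network turning low-res volumetric features into a full image."""
    def __init__(self, feat_ch=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Upsample(scale_factor=4, mode="bilinear", align_corners=False),
            nn.Conv2d(feat_ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),
        )
    def forward(self, feats):
        return self.net(feats)

low_res_feats = torch.randn(1, 16, 32, 32)  # stand-in for the PD-NeRF render
image = TexturalUpsampler()(low_res_feats)
print(image.shape)  # torch.Size([1, 3, 128, 128])
```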

Head2Head++: Deep Facial Attributes Re-Targeting [article]

Michail Christos Doukas, Mohammad Rami Koujan, Viktoriia Sharmanska, Anastasios Roussos
2020 arXiv   pre-print
We manage to capture the complex non-rigid facial motion from the driving monocular performances and synthesise temporally consistent videos, with the aid of a sequential Generator and an ad-hoc Dynamics  ...  Our method is different from purely 3D model-based approaches, or recent image-based methods that use Deep Convolutional Neural Networks (DCNNs) to generate individual frames.  ...  neural renderer.  ...
arXiv:2006.10199v1 fatcat:ylapnwbkzjes5gejb4zmnou6ym
Showing results 1 — 15 out of 5,101 results