37 Hits in 1.7 sec

Neural Rerendering in the Wild [article]

Moustafa Meshry, Dan B. Goldman, Sameh Khamis, Hugues Hoppe, Rohit Pandey, Noah Snavely, Ricardo Martin-Brualla
<span title="2019-04-08">2019</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
For each photo, we render the scene points into a deep framebuffer, and train a neural network to learn the mapping of these initial renderings to the actual photos.  ...  This rerendering network also takes as input a latent appearance vector and a semantic mask indicating the location of transient objects like pedestrians.  ...  Acknowledgements: We thank Gregory Blascovich for his help in conducting the user study, and Johannes Schönberger and True Price for their help generating datasets.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1904.04290v1">arXiv:1904.04290v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/ozl2dzw52jabbeupetot62f5me">fatcat:ozl2dzw52jabbeupetot62f5me</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200928035245/https://arxiv.org/pdf/1904.04290v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/c8/62/c86219691bf85229bc0b018581ecf0acded92356.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1904.04290v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Neural Rerendering in the Wild

Moustafa Meshry, Dan B. Goldman, Sameh Khamis, Hugues Hoppe, Rohit Pandey, Noah Snavely, Ricardo Martin-Brualla
<span title="">2019</span> <i title="IEEE"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/ilwxppn4d5hizekyd3ndvy2mii" style="color: black;">2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)</a> </i> &nbsp;
For each photo, we render the scene points into a deep framebuffer, and train a neural network to learn the mapping of these initial renderings to the actual photos.  ...  This rerendering network also takes as input a latent appearance vector and a semantic mask indicating the location of transient objects like pedestrians.  ...  In our paper, we demonstrate an approach for training a neural rerendering framework in the wild, i.e., with uncontrolled data instead of captures under constant lighting conditions.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1109/cvpr.2019.00704">doi:10.1109/cvpr.2019.00704</a> <a target="_blank" rel="external noopener" href="https://dblp.org/rec/conf/cvpr/MeshryGKHPSM19.html">dblp:conf/cvpr/MeshryGKHPSM19</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/oim43sxz2fezlicvdptrfmn4ay">fatcat:oim43sxz2fezlicvdptrfmn4ay</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20190711110701/http://openaccess.thecvf.com:80/content_CVPR_2019/papers/Meshry_Neural_Rerendering_in_the_Wild_CVPR_2019_paper.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/c8/cc/c8ccefc4e60dea2755cd23f709fc684395391980.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1109/cvpr.2019.00704"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="external alternate icon"></i> ieee.com </button> </a>

Neural Point-Based Graphics [article]

Kara-Ali Aliev, Artem Sevastopolsky, Maria Kolos, Dmitry Ulyanov, Victor Lempitsky
<span title="2020-04-05">2020</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
The approach uses a raw point cloud as the geometric representation of a scene, and augments each point with a learnable neural descriptor that encodes local geometry and appearance.  ...  In particular, compelling results are obtained for scenes scanned using hand-held commodity RGB-D sensors as well as standard RGB cameras, even in the presence of objects that are challenging for standard  ...  Neural Rerendering in the Wild.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1906.08240v3">arXiv:1906.08240v3</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/wzts45a3nbdw3h2cwbdmgggtkm">fatcat:wzts45a3nbdw3h2cwbdmgggtkm</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200408000920/https://arxiv.org/pdf/1906.08240v3.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1906.08240v3" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

NeRF in the Wild: Neural Radiance Fields for Unconstrained Photo Collections [article]

Ricardo Martin-Brualla, Noha Radwan, Mehdi S. M. Sajjadi, Jonathan T. Barron, Alexey Dosovitskiy, Daniel Duckworth
<span title="2021-01-06">2021</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
We present a learning-based method for synthesizing novel views of complex scenes using only unstructured collections of in-the-wild photographs.  ...  We build on Neural Radiance Fields (NeRF), which uses the weights of a multilayer perceptron to model the density and color of a scene as a function of 3D coordinates.  ...  Most similar in application to our work is Neural Rerendering in the Wild (NRW) [22] which synthesizes realistic novel views of tourist sites from point cloud renderings by learning a neural re-rendering  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2008.02268v3">arXiv:2008.02268v3</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/z5gq57fovvekteczkegiwrrimq">fatcat:z5gq57fovvekteczkegiwrrimq</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20210114053613/https://arxiv.org/pdf/2008.02268v3.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/f0/61/f0613474eeba74b3bd0dce4de160570ebda24b42.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2008.02268v3" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

State of the Art on Neural Rendering [article]

Ayush Tewari, Ohad Fried, Justus Thies, Vincent Sitzmann, Stephen Lombardi, Kalyan Sunkavalli, Ricardo Martin-Brualla, Tomas Simon, Jason Saragih, Matthias Nießner, Rohit Pandey, Sean Fanello (+7 others)
<span title="2020-04-08">2020</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
With a plethora of applications in computer graphics and vision, neural rendering is poised to become a new area in the graphics community, yet no survey of this emerging field exists.  ...  This state-of-the-art report summarizes the recent trends and applications of neural rendering.  ...  Neural Rerendering in the Wild [MGK * 19] uses neural rerendering to synthesize realistic views of tourist landmarks under various lighting conditions, see Figure 4.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2004.03805v1">arXiv:2004.03805v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/6qs7ddftkfbotdlfd4ks7llovq">fatcat:6qs7ddftkfbotdlfd4ks7llovq</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200410225440/https://arxiv.org/pdf/2004.03805v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2004.03805v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

StyleRig: Rigging StyleGAN for 3D Control over Portrait Images [article]

Ayush Tewari, Mohamed Elgharib, Gaurav Bharaj, Florian Bernard, Hans-Peter Seidel, Patrick Pérez, Michael Zollhöfer, Christian Theobalt
<span title="2020-06-13">2020</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
A new rigging network, RigNet, is trained between the 3DMM's semantic parameters and StyleGAN's input. The network is trained in a self-supervised manner, without the need for manual annotations.  ...  StyleGAN generates photorealistic portrait images of faces with eyes, teeth, hair and context (neck, shoulders, background), but lacks a rig-like control over semantic face parameters that are interpretable in  ...  This work was supported by the ERC Consolidator Grant 4DReply (770784), the Max Planck Center for Visual Computing and Communications (MPC-VCC), and by Technicolor.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2004.00121v2">arXiv:2004.00121v2</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/4bpgows3czdjvkbeu2l6autmou">fatcat:4bpgows3czdjvkbeu2l6autmou</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200623140257/https://arxiv.org/pdf/2004.00121v2.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2004.00121v2" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

FaceDet3D: Facial Expressions with 3D Geometric Detail Prediction [article]

ShahRukh Athar, Albert Pumarola, Francesc Moreno-Noguer, Dimitris Samaras
<span title="2020-12-23">2020</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
The facial details are represented as a vertex displacement map and are then used by a Neural Renderer to photo-realistically render novel images of any single image in any desired expression and view.  ...  3D Morphable Models (3DMMs) of the human face fail to capture such fine details in their PCA-based representations and consequently cannot generate such details when used to edit expressions.  ...  Acknowledgements This work is supported in part by a Google Daydream Research award, by the Spanish government with projects HuMoUR TIN2017-90086-R and María de Maeztu Seal of Excellence MDM-2016-0656,  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2012.07999v3">arXiv:2012.07999v3</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/byexpip5ijhubhdcdn3jlziq4y">fatcat:byexpip5ijhubhdcdn3jlziq4y</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20201228102413/https://arxiv.org/pdf/2012.07999v3.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/7e/28/7e28c32327c85fabff3923c934dda9cf146b7c85.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2012.07999v3" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Crowdsampling the Plenoptic Function [article]

Zhengqi Li, Wenqi Xian, Abe Davis, Noah Snavely
<span title="2020-07-30">2020</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
neural rendering.  ...  in both space and across changes in lighting.  ...  This research was supported in part by the generosity of Eric and Wendy Schmidt by recommendation of the Schmidt Futures program.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2007.15194v1">arXiv:2007.15194v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/onmrsqliw5hbhcg4fqa2rhmbra">fatcat:onmrsqliw5hbhcg4fqa2rhmbra</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200902121224/https://arxiv.org/pdf/2007.15194v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/37/d2/37d29d649e47065e5b3f6e5a5151dd077fe85cd0.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2007.15194v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

NeReF: Neural Refractive Field for Fluid Surface Reconstruction and Implicit Representation [article]

Ziyu Wang, Wei Yang, Junming Cao, Lan Xu, Junqing Yu, Jingyi Yu
<span title="2022-03-08">2022</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
We present a novel neural refractive field (NeReF) to recover the wavefront of transparent fluids by simultaneously estimating the surface position and normal of the fluid front.  ...  Existing neural reconstruction schemes such as Neural Radiance Field (NeRF) are largely focused on modeling opaque objects.  ...  The remarkable work of Neural Radiance Field is a milestone in the novel view synthesis area.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2203.04130v1">arXiv:2203.04130v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/pqnngji3jzen3mukmaxic7zlri">fatcat:pqnngji3jzen3mukmaxic7zlri</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20220311034203/https://arxiv.org/pdf/2203.04130v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/fd/2c/fd2c0e45a95933cacc2a69c79ec1f74553d0cb39.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2203.04130v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

A Review of Deep Learning-based Approaches for Deepfake Content Detection [article]

Leandro A. Passos, Danilo Jodas, Kelton A. P. da Costa, Luis A. Souza Júnior, Danilo Colombo, João Paulo Papa
<span title="2022-02-12">2022</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
Detection of counterfeit content has drawn increasing attention in the last few years owing to advances in deepfake generation.  ...  The rapid growth of machine learning techniques, particularly deep learning, makes it possible to predict fake content in several application domains, including fake image and video manipulation.  ...  The Fake Face in the Wild (FFW) dataset [26] tries to simulate the performance of fake face detection methods in the wild.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2202.06095v1">arXiv:2202.06095v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/27ogp2kj4jayvmoe5xbhhi33li">fatcat:27ogp2kj4jayvmoe5xbhhi33li</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20220216223501/https://arxiv.org/pdf/2202.06095v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/4f/3f/4f3f9a787bf0f5f650e75fefd9f9a7025e280b4d.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2202.06095v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Neural Radiance Fields for Outdoor Scene Relighting [article]

Viktor Rudnev, Mohamed Elgharib, William Smith, Lingjie Liu, Vladislav Golyanik, Christian Theobalt
<span title="2021-12-09">2021</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
the first approach for outdoor scene relighting based on neural radiance fields.  ...  In contrast to the prior art, our technique allows simultaneous editing of both scene illumination and camera viewpoint using only a collection of outdoor photos shot in uncontrolled settings.  ...  Neural rerendering in the wild.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2112.05140v1">arXiv:2112.05140v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/kgsqa4x73rcbziejajqkzrajqe">fatcat:kgsqa4x73rcbziejajqkzrajqe</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20211211102333/https://arxiv.org/pdf/2112.05140v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/f6/c4/f6c44dec3edc03fa3813745847e4150c72674db7.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2112.05140v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

TRANSPR: Transparency Ray-Accumulating Neural 3D Scene Point Renderer [article]

Maria Kolos, Artem Sevastopolsky, Victor Lempitsky
<span title="2020-09-06">2020</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
Additionally, a learnable transparency value is introduced in our approach for each point. Our neural rendering procedure consists of two steps.  ...  This is followed by the neural rendering step that "translates" the rasterized image into an RGB output using a learnable convolutional network.  ...  The most related to ours are works that use point clouds as geometric representations such as neural splatting [3], differential surface splatting [20], neural rerendering in the wild [10], and neural  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2009.02819v1">arXiv:2009.02819v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/6tnksnrg3neincyinznt47yrki">fatcat:6tnksnrg3neincyinznt47yrki</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200918051047/https://arxiv.org/pdf/2009.02819v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2009.02819v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Neural Body: Implicit Neural Representations with Structured Latent Codes for Novel View Synthesis of Dynamic Humans [article]

Sida Peng, Yuanqing Zhang, Yinghao Xu, Qianqian Wang, Qing Shuai, Hujun Bao, Xiaowei Zhou
<span title="2021-03-29">2021</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
To this end, we propose Neural Body, a new human body representation which assumes that the learned neural representations at different frames share the same set of latent codes anchored to a deformable  ...  Experiments on ZJU-MoCap show that our approach outperforms prior works by a large margin in terms of novel view synthesis quality.  ...  Neural rerendering in the wild. In CVPR, 2019.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2012.15838v2">arXiv:2012.15838v2</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/cehzk5zuwvespkjwhbgp3tnolu">fatcat:cehzk5zuwvespkjwhbgp3tnolu</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20210416130231/https://arxiv.org/pdf/2012.15838v1.pdf" title="fulltext PDF download [not primary version]" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <span style="color: #f43e3e;">&#10033;</span> <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/af/8f/af8faec7c0b8f4b2a28d42a86e0e7d499016c560.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2012.15838v2" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Neural Voice Puppetry: Audio-driven Facial Reenactment [article]

Justus Thies, Mohamed Elgharib, Ayush Tewari, Christian Theobalt, Matthias Nießner
<span title="2020-07-29">2020</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
Given an audio sequence of a source person or digital assistant, we generate a photo-realistic output video of a target person that is in sync with the audio of the source input.  ...  We demonstrate the capabilities of our method in a series of audio- and text-based puppetry examples, including comparisons to state-of-the-art techniques and a user study.  ...  A neural texture in conjunction with a novel neural rendering network is used to store and to rerender the appearance of the face of an individual person.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1912.05566v2">arXiv:1912.05566v2</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/sazmvejurrbu7kadsdjgejc2am">fatcat:sazmvejurrbu7kadsdjgejc2am</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200929180657/https://arxiv.org/pdf/1912.05566v2.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/4a/7e/4a7e446cf43ad5d7e9cd1acfa519a60a9068a322.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1912.05566v2" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Human View Synthesis using a Single Sparse RGB-D Input [article]

Phong Nguyen, Nikolaos Sarafianos, Christoph Lassner, Janne Heikkila, Tony Tung
<span title="2021-12-30">2021</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
Additionally, an enhancer network improves the overall fidelity, even in areas occluded from the original view, producing crisp renders with fine details.  ...  We propose an architecture to learn dense features in novel views obtained by sphere-based neural rendering, and create complete renders using a global context inpainting model.  ...  Neural rerendering in the wild.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2112.13889v2">arXiv:2112.13889v2</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/u6e2uuinxra2lnrknjmrcsd6yq">fatcat:u6e2uuinxra2lnrknjmrcsd6yq</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20220106022621/https://arxiv.org/pdf/2112.13889v1.pdf" title="fulltext PDF download [not primary version]" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <span style="color: #f43e3e;">&#10033;</span> <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/85/a1/85a11b5c1b13c3b1d859abb0464afe1b9ea17f9b.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2112.13889v2" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>
Showing results 1–15 out of 37 results