11 Hits in 2.8 sec

NeRFactor: Neural Factorization of Shape and Reflectance Under an Unknown Illumination [article]

Xiuming Zhang, Pratul P. Srinivasan, Boyang Deng, Paul Debevec, William T. Freeman, Jonathan T. Barron
2021 arXiv   pre-print
We address the problem of recovering the shape and spatially-varying reflectance of an object from posed multi-view images of the object illuminated by one unknown lighting condition.  ...  The key to our approach, which we call Neural Radiance Factorization (NeRFactor), is to distill the volumetric geometry of a Neural Radiance Field (NeRF) [Mildenhall et al. 2020] representation of the  ...  , Zhoutong Zhang, Xuaner (Cecilia) Zhang, Yun-Ta Tsai, Jiawen Chen, Tzu-Mao Li, Yonglong Tian, and Noah Snavely for fruitful discussions, Noa Glaser and David Salesin for their constructive comments on  ... 
arXiv:2106.01970v1 fatcat:kl53bqcgn5hg3mkd2j3pqyx4ty

Neural Radiance Transfer Fields for Relightable Novel-view Synthesis with Global Illumination [article]

Linjie Lyu, Ayush Tewari, Thomas Leimkuehler, Marc Habermann, Christian Theobalt
2022 arXiv   pre-print
Our method can be solely supervised on a set of real images of the scene under a single unknown lighting condition.  ...  Given a set of images of a scene, the re-rendering of this scene from novel views and lighting conditions is an important and challenging problem in Computer Vision and Graphics.  ...  We would like to thank Xiuming Zhang for his help with the NeRFactor comparisons. Authors from MPII were supported by the ERC Consolidator Grant 4DRepLy (770784).  ... 
arXiv:2207.13607v1 fatcat:bktow7ydnjhftimswdmwwtplna

Extracting Triangular 3D Models, Materials, and Lighting From Images [article]

Jacob Munkberg, Jon Hasselgren, Tianchang Shen, Jun Gao, Wenzheng Chen, Alex Evans, Thomas Müller, Sanja Fidler
2022 arXiv   pre-print
We present an efficient method for joint optimization of topology, materials and lighting from multi-view image observations.  ...  Unlike recent multi-view reconstruction approaches, which typically produce entangled 3D representations encoded in neural networks, we output triangle meshes with spatially-varying materials and environment  ...  Our Approach We present a method for 3D reconstruction supervised by multi-view images of an object illuminated under one unknown environment lighting condition, together with known camera poses and background  ... 
arXiv:2111.12503v4 fatcat:zvcxx7txtnb3raqcdczwmu3n5i

Neural apparent BRDF fields for multiview photometric stereo [article]

Meghna Asthana, William A. P. Smith, Patrik Huber
2022 arXiv   pre-print
The appearance part of our neural representation is decomposed into a neural bidirectional reflectance function (BRDF), learnt as part of the fitting process, and a shadow prediction network (conditioned  ...  We propose to tackle the multiview photometric stereo problem using an extension of Neural Radiance Fields (NeRFs), conditioned on light source direction.  ...  Neural Radiance Factorization (NeRFactor) [40] addresses the issue of recovering the shape and spatially-varying reflectance of an object from multi-view images of an object illuminated by unknown lighting  ... 
arXiv:2207.06793v1 fatcat:63be62lpo5gvlkqrw53tm5saym

Relighting4D: Neural Relightable Human from Videos [article]

Zhaoxi Chen, Ziwei Liu
2022 arXiv   pre-print
In this work, we propose a principled framework, Relighting4D, that enables free-viewpoints relighting from only human videos under unknown illuminations.  ...  Our key insight is that the space-time varying geometry and reflectance of the human body can be decomposed as a set of neural fields of normal, occlusion, diffuse, and specular maps.  ...  , and under the RIE2020 Industry Alignment Fund -Industry Collaboration Projects (IAF-ICP) Funding Initiative, as well as cash and in-kind contribution from the industry partner(s).  ... 
arXiv:2207.07104v1 fatcat:zm23ewo3o5di7ixjl5vnsql2pm

Edge-preserving Near-light Photometric Stereo with Neural Surfaces [article]

Heng Guo, Hiroaki Santo, Boxin Shi, Yasuyuki Matsushita
2022 arXiv   pre-print
Experiments on both synthetic and real-world scenes demonstrate the effectiveness of our method for detailed shape recovery with edge preservation.  ...  Unlike previous methods that rely on finite differentiation for approximating depth partial derivatives and surface normals, we introduce an analytically differentiable neural surface in near-light photometric  ...  .: Nerfactor: Neural factorization of shape and reflectance under an unknown illumination.  ... 
arXiv:2207.04622v1 fatcat:tajl3tftxnd5fkeuiemoxnfwha

Advances in Neural Rendering [article]

Ayush Tewari, Justus Thies, Ben Mildenhall, Pratul Srinivasan, Edgar Tretschk, Yifan Wang, Christoph Lassner, Vincent Sitzmann, Ricardo Martin-Brualla, Stephen Lombardi, Tomas Simon, Christian Theobalt (+5 others)
2022 arXiv   pre-print
Neural rendering is a leap forward towards the goal of synthesizing photo-realistic image and video content.  ...  Synthesizing photo-realistic images and videos is at the heart of computer graphics and has been the focus of decades of research.  ...  Figure 13: NeRFactor [ZSD*21] decomposes a scene captured under an unknown illumination into 3D neural fields of surface normals, albedo, BRDF and shading.  ... 
arXiv:2111.05849v2 fatcat:nbvkfg2bjvgqdopdqwl33rt4ii

SNeS: Learning Probably Symmetric Neural Surfaces from Incomplete Data [article]

Eldar Insafutdinov, Dylan Campbell, João F. Henriques, Andrea Vedaldi
2022 arXiv   pre-print
We build on the strengths of recent advances in neural reconstruction and rendering such as Neural Radiance Fields (NeRF).  ...  To address this, we apply a soft symmetry constraint to the 3D geometry and material properties, having factored appearance into lighting, albedo colour and reflectivity.  ...  Of particular relevance to this work is the approach of Wu et al. [36, 35, 37] , who use reflective and rotational symmetries to recover shape, material properties and lighting from single images.  ... 
arXiv:2206.06340v1 fatcat:z36e4ay7mnarfggcxmawhgxu6u

Neural Fields in Visual Computing and Beyond [article]

Yiheng Xie, Towaki Takikawa, Shunsuke Saito, Or Litany, Shiqin Yan, Numair Khan, Federico Tombari, James Tompkin, Vincent Sitzmann, Srinath Sridhar
2022 arXiv   pre-print
These methods, which we call neural fields, have seen successful application in the synthesis of 3D shapes and images, animation of human bodies, 3D reconstruction, and pose estimation.  ...  In this report, we address this limitation by providing context, mathematical grounding, and an extensive review of literature on neural fields. This report covers research along two dimensions.  ...  Medial Fields [RLS*21] represent the local thickness of the geometry which can be derived from the medial axis.  ... 
arXiv:2111.11426v4 fatcat:yteqzbu6gvgdzobnfzuqohix2e

CoNeRF: Controllable Neural Radiance Fields [article]

Kacper Kania, Kwang Moo Yi, Marek Kowalski, Tomasz Trzciński, Andrea Tagliasacchi
2021 arXiv   pre-print
We extend neural 3D representations to allow for intuitive and interpretable user control beyond novel view rendering (i.e. camera control).  ...  Overall, we demonstrate, to the best of our knowledge, for the first time novel view and novel attribute re-rendering of scenes from a single video.  ...  NeRFactor: Neural Factorization of Shape and Reflectance Under an Unknown Illumination. Zhihui Li, Xiaojiang Chen, and Xin Wang. A Survey of Deep Active Learning. ACM Comput.  ... 
arXiv:2112.01983v2 fatcat:vzuy7zyhgfgtvfgkviqpw6zi6e

NeROIC: Neural Rendering of Objects from Online Image Collections [article]

Zhengfei Kuang, Kyle Olszewski, Menglei Chai, Zeng Huang, Panos Achlioptas, Sergey Tulyakov
2022
, illumination, and backgrounds.  ...  The union of these components results in a highly modular and efficient object acquisition framework.  ...  On the other hand, many works including Neural Reflectance Field [2], NeRFactor [49], NeRV [37], and PhySG [47] combine NeRF with physically-based rendering techniques, and estimate various material  ... 
doi:10.48550/arxiv.2201.02533 fatcat:ymyxxmpk2vfppfbzcuvhaozmda