21 Hits in 3.7 sec

Single-image SVBRDF capture with a rendering-aware deep network

Valentin Deschaintre, Miika Aittala, Fredo Durand, George Drettakis, Adrien Bousseau
2018 ACM Transactions on Graphics  
We tackle lightweight appearance capture by training a deep neural network to automatically extract and make sense of these visual cues. ... Yet, recovering spatially-varying bi-directional reflectance distribution functions (SVBRDFs) from a single image based on such cues has challenged researchers in computer graphics for decades. ...
doi:10.1145/3197517.3201378 fatcat:zgnw562v4faxdo6rv74a63rf7y
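The recurring idea across these papers is a rendering-aware (image-space) loss: rather than comparing predicted SVBRDF maps to ground truth per pixel, both are rendered under the same lights and the renderings are compared. Below is a minimal numpy sketch of that idea under stated simplifications; it is not the network or loss from any paper above. It uses plain Lambertian shading (`L = albedo * max(n·l, 0)`) instead of a full SVBRDF model, and the map/dict layout is illustrative.

```python
import numpy as np

def render_lambertian(albedo, normals, light_dir):
    """Render a flat patch under one distant light: L = albedo * max(n.l, 0)."""
    l = np.asarray(light_dir, dtype=float)
    l = l / np.linalg.norm(l)                      # normalize light direction
    ndotl = np.clip((normals * l).sum(axis=-1, keepdims=True), 0.0, None)
    return albedo * ndotl                          # (H, W, 3) radiance image

def rendering_loss(pred_maps, gt_maps, light_dirs):
    """Mean L1 difference between renderings of predicted and reference maps."""
    total = 0.0
    for l in light_dirs:
        a = render_lambertian(pred_maps["albedo"], pred_maps["normals"], l)
        b = render_lambertian(gt_maps["albedo"], gt_maps["normals"], l)
        total += np.abs(a - b).mean()
    return total / len(light_dirs)

# Tiny example: a 2x2 patch with flat normals and two hypothetical lights.
H = W = 2
flat_n = np.tile(np.array([0.0, 0.0, 1.0]), (H, W, 1))
gt = {"albedo": np.full((H, W, 3), 0.5), "normals": flat_n}
pred = {"albedo": np.full((H, W, 3), 0.4), "normals": flat_n}
lights = [(0.0, 0.0, 1.0), (0.3, 0.0, 1.0)]
loss = rendering_loss(pred, gt, lights)
```

In a learned pipeline this loss would be computed with a differentiable renderer so gradients flow back through the rendering into the network's predicted maps; the sketch only shows the forward comparison.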

SVBRDF Recovery From a Single Image With Highlights using a Pretrained Generative Adversarial Network [article]

Tao Wen and Beibei Wang and Lei Zhang and Jie Guo and Nicolas Holzschuch
2021 arXiv   pre-print
In this paper, we use an unsupervised generative adversarial neural network (GAN) to recover SVBRDF maps with a single image as input. ... We aim to recover SVBRDFs from a single image, without any datasets. A single image contains incomplete information about the SVBRDF, making the reconstruction task highly ill-posed. ... [BJK*20] designed a cascaded network for shape, illumination, and SVBRDF estimation, using two images captured by a cellphone with flash both on and off. ...
arXiv:2111.00943v1 fatcat:bwrf2azv25e35g7z6qxrv2yvhy

Diffuse Map Guiding Unsupervised Generative Adversarial Network for SVBRDF Estimation [article]

Zhiyao Luo, Hongnan Chen
2022 arXiv   pre-print
This method can predict plausible SVBRDF maps with global features using only a few pictures taken by a mobile phone. ... Traditionally, materials in computer graphics are created by an artist, mapped onto a geometric model via coordinate transformation, and finally rendered with a rendering engine to obtain realistic materials. ... Recently, deep learning-based methods have made significant progress in SVBRDF estimation from a single image. ...
arXiv:2205.11951v2 fatcat:s4orfhhxrzagznxyxxct46vgra

Generative Modelling of BRDF Textures from Flash Images [article]

Philipp Henzler, Valentin Deschaintre, Niloy J. Mitra, Tobias Ritschel
2021 arXiv   pre-print
... using a convolutional neural network (CNN). ... A user study compares our approach favorably to previous work, even those with access to BRDF supervision. ...
arXiv:2102.11861v2 fatcat:6me7cyt4jbhivbvpou3irpfuwi

An Inverse Procedural Modeling Pipeline for SVBRDF Maps [article]

Yiwei Hu, Chengan He, Valentin Deschaintre, Julie Dorsey, Holly Rushmeier
2021 arXiv   pre-print
Given Spatially-Varying Bidirectional Reflectance Distribution Functions (SVBRDFs) represented as sets of pixel maps, our pipeline decomposes them into a tree of sub-materials whose spatial distributions ... Each decomposed sub-material is proceduralized by a novel multi-layer noise model to capture local variations at different scales. ...
arXiv:2109.06395v2 fatcat:wf76533irncjvpc466lvgfqtka

Reflectance modeling by neural texture synthesis

Miika Aittala, Timo Aila, Jaakko Lehtinen
2016 ACM Transactions on Graphics  
To date, single-image capture of materials with rich spatial variation has remained elusive. ... We have described a method for capturing a rich appearance model of a textured surface based on a single input image. ... The leftmost column shows the input flash image, which was rendered using the ground-truth SVBRDF shown in the middle column. ...
doi:10.1145/2897824.2925917 fatcat:ir6jtl4ypvahnkodmdkwd5qlla

Neural-PIL: Neural Pre-Integrated Lighting for Reflectance Decomposition [article]

Mark Boss, Varun Jampani, Raphael Braun, Ce Liu, Jonathan T. Barron, Hendrik P.A. Lensch
2021 arXiv   pre-print
Our key technique is a novel illumination integration network called Neural-PIL that replaces a costly illumination integral operation in the rendering with a simple network query. ... We propose a novel reflectance decomposition network that can estimate shape, BRDF, and per-image illumination given a set of object images captured under varying illumination. ... We not only learn a deep prior but also a rendering-aware network capable of integrating the environment illumination for a specific surface roughness, enabling rendering over the entire hemisphere of incoming ...
arXiv:2110.14373v1 fatcat:vg5s62ydwnffzivbyrqdu2g5tu

PhotoScene: Photorealistic Material and Lighting Transfer for Indoor Scenes [article]

Yu-Ying Yeh, Zhengqin Li, Yannick Hold-Geoffroy, Rui Zhu, Zexiang Xu, Miloš Hašan, Kalyan Sunkavalli, Manmohan Chandraker
2022 arXiv   pre-print
In this work, we go beyond this to propose PhotoScene, a framework that takes input image(s) of a scene along with approximately aligned CAD geometry (either reconstructed automatically or manually specified) ... We optimize the parameters of these graphs and their texture scale and rotation, as well as the scene lighting, to best match the input image via a differentiable rendering layer. ...
arXiv:2207.00757v1 fatcat:xqreu3cg2nbytnqwo6mxp5bbhu

Deep Reflectance Volumes: Relightable Reconstructions from Multi-View Photometric Images [article]

Sai Bi, Zexiang Xu, Kalyan Sunkavalli, Miloš Hašan, Yannick Hold-Geoffroy, David Kriegman, Ravi Ramamoorthi
2020 arXiv   pre-print
We present a deep learning approach to reconstruct scene appearance from unstructured images captured under collocated point lighting. ... This allows us to optimize the scene volumes to minimize the error between their rendered images and the captured images. ...
arXiv:2007.09892v1 fatcat:6h642u5ktnel5nzp7gkozgvyae

A Survey on Intrinsic Images: Delving Deep Into Lambert and Beyond [article]

Elena Garces, Carlos Rodriguez-Pardo, Dan Casas, Jorge Lopez-Moreno
2021 arXiv   pre-print
Intrinsic imaging or intrinsic image decomposition has traditionally been described as the problem of decomposing an image into two layers: a reflectance, the albedo-invariant color of the material; and ... Although the Lambertian assumption is still a foundational basis for many methods, we show that there is increasing awareness of the potential of more sophisticated physically-principled components of ...
arXiv:2112.03842v1 fatcat:ciwwxoodq5fl7ma4jqjrgp7k5m

State of the Art on Neural Rendering [article]

Ayush Tewari, Ohad Fried, Justus Thies, Vincent Sitzmann, Stephen Lombardi, Kalyan Sunkavalli, Ricardo Martin-Brualla, Tomas Simon, Jason Saragih, Matthias Nießner, Rohit Pandey, Sean Fanello (+7 others)
2020 arXiv   pre-print
Concurrently, progress in computer vision and machine learning has given rise to a new approach to image synthesis and editing, namely deep generative models. ... Neural rendering brings the promise of addressing both reconstruction and rendering by using deep networks to learn complex mappings from captured images to novel images. ...
arXiv:2004.03805v1 fatcat:6qs7ddftkfbotdlfd4ks7llovq

Material Type Recognition of Indoor Scenes via Surface Reflectance Estimation

Seokyeong Lee, Dongjin Lee, Hyun-Cheol Kim, Seungkyu Lee
2021 IEEE Access  
In this work, we propose a material type recognition method based on both color and reflectance features using a deep neural network. ... A material type is characterized well by the relevant surface reflectance together with traditional visual appearance, providing a better description for material type recognition. ... [38] have suggested a neural inverse rendering method that enables scene-attribute estimation from a single indoor image. ...
doi:10.1109/access.2021.3137585 fatcat:p6pufdnphbbtneponapdovsxpe

Extracting Triangular 3D Models, Materials, and Lighting From Images [article]

Jacob Munkberg, Jon Hasselgren, Tianchang Shen, Jun Gao, Wenzheng Chen, Alex Evans, Thomas Müller, Sanja Fidler
2022 arXiv   pre-print
Unlike recent multi-view reconstruction approaches, which typically produce entangled 3D representations encoded in neural networks, we output triangle meshes with spatially-varying materials and environment ... We leverage recent work in differentiable rendering and coordinate-based networks to compactly represent volumetric texturing, alongside differentiable marching tetrahedra to enable gradient-based optimization ... Next, we render the extracted surface mesh in a differentiable rasterizer with deferred shading, and compute a loss in image space between the rendered image and a reference image. ...
arXiv:2111.12503v4 fatcat:zvcxx7txtnb3raqcdczwmu3n5i

Neural Light Field Estimation for Street Scenes with Differentiable Virtual Object Insertion [article]

Zian Wang, Wenzheng Chen, David Acuna, Jan Kautz, Sanja Fidler
2022 arXiv   pre-print
In this work, we propose a neural approach that estimates the 5D HDR light field from a single image, and a differentiable object insertion formulation that enables end-to-end training with image-based ... With the estimated lighting, our shadow-aware object insertion is fully differentiable, which enables adversarial training over the composited image to provide an additional supervisory signal to the lighting ...
arXiv:2208.09480v1 fatcat:u6zlcvbvwfbwtnqqvcixkjx3ui

Photo-to-Shape Material Transfer for Diverse Structures [article]

Ruizhen Hu, Xiangyu Su, Xiangkai Chen, Oliver Van Kaick, Hui Huang
2022 arXiv   pre-print
To accomplish this goal, our method combines an image translation neural network with a material assignment neural network.  ...  The image translation network translates the color from the exemplar to a projection of the 3D shape and the part segmentation from the projection to the exemplar.  ...  Training the networks with more images rendered under different illumination settings may improve the robustness of the method.  ... 
arXiv:2205.04018v1 fatcat:fxsq2p6fcvggdpxccfthuqwn5a
Showing results 1–15 of 21