1,853 Hits in 4.3 sec

Multi-View Consistent Generative Adversarial Networks for 3D-aware Image Synthesis [article]

Xuanmeng Zhang, Zhedong Zheng, Daiheng Gao, Bang Zhang, Pan Pan, Yi Yang
2022 arXiv   pre-print
To address this challenge, we propose Multi-View Consistent Generative Adversarial Networks (MVCGAN) for high-quality 3D-aware image synthesis with geometry constraints.  ...  3D-aware image synthesis aims to generate images of objects from multiple views by learning a 3D representation.  ...  We propose a multi-view consistent generative model (MVCGAN) for high-quality 3D-aware image synthesis.  ... 
arXiv:2204.06307v1 fatcat:xwq2oq34brhzlisvrjlj6dcs74

pi-GAN: Periodic Implicit Generative Adversarial Networks for 3D-Aware Image Synthesis [article]

Eric R. Chan, Marco Monteiro, Petr Kellnhofer, Jiajun Wu, Gordon Wetzstein
2021 arXiv   pre-print
We propose a novel generative model, named Periodic Implicit Generative Adversarial Networks (π-GAN or pi-GAN), for high-quality 3D-aware image synthesis. π-GAN leverages neural representations with periodic  ...  The proposed approach obtains state-of-the-art results for 3D-aware image synthesis with multiple real and synthetic datasets.  ...  Thanks to Matthew Chan for fruitful discussions and to Stanford HAI for AWS Cloud Credits. J.W. was supported by the Samsung Global Research Award and Autodesk.  ... 
arXiv:2012.00926v2 fatcat:scgeqv3a4ngh7n4qzu2tkst5aq

Transformation-Grounded Image Generation Network for Novel 3D View Synthesis

Eunbyung Park, Jimei Yang, Ersin Yumer, Duygu Ceylan, Alexander C. Berg
2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
We present a transformation-grounded image generation network for novel 3D view synthesis from a single image.  ...  In addition to the new network structure, training with a combination of adversarial and perceptual loss results in a reduction in common artifacts of novel view synthesis such as distortions and holes  ...  We would like to thank Weilin Sun, Guilin Liu, True Price, and Dinghuang Ji for helpful discussions. We thank NVIDIA for providing GPUs and acknowledge support from NSF 1452851, 1526367.  ... 
doi:10.1109/cvpr.2017.82 dblp:conf/cvpr/ParkYYCB17 fatcat:ltj2uqgrvje65lpo7h2fozbdze

VoLux-GAN: A Generative Model for 3D Face Synthesis with HDRI Relighting [article]

Feitong Tan, Sean Fanello, Abhimitra Meka, Sergio Orts-Escolano, Danhang Tang, Rohit Pandey, Jonathan Taylor, Ping Tan, Yinda Zhang
2022 arXiv   pre-print
We propose VoLux-GAN, a generative framework to synthesize 3D-aware faces with convincing relighting.  ...  Multiple experiments and comparisons with other generative frameworks show how our model is a step forward towards photorealistic relightable 3D generative models.  ...  Many recent approaches incorporated the use of geometry and its multi-view consistency to allow for 3D aware synthesis. [2, 11, 12, 21, 30, 40, 41, 47, 71, 72] .  ... 
arXiv:2201.04873v1 fatcat:hkxwjzghvzanhgtw7vardumrqe

GRAF: Generative Radiance Fields for 3D-Aware Image Synthesis [article]

Katja Schwarz, Yiyi Liao, Michael Niemeyer, Andreas Geiger
2021 arXiv   pre-print
While 2D generative adversarial networks have enabled high-resolution image synthesis, they largely lack an understanding of the 3D world and the image formation process.  ...  Our experiments reveal that radiance fields are a powerful representation for generative image synthesis, leading to 3D consistent models that render with high fidelity.  ...  We thank the International Max Planck Research School for Intelligent Systems (IMPRS-IS) for supporting Katja Schwarz and Michael Niemeyer. This work was supported by an NVIDIA research gift.  ... 
arXiv:2007.02442v4 fatcat:ml7ta2lkmvg6joiwetz7zh7hmy

Transformation-Grounded Image Generation Network for Novel 3D View Synthesis [article]

Eunbyung Park, Jimei Yang, Ersin Yumer, Duygu Ceylan, Alexander C. Berg
2017 arXiv   pre-print
We present a transformation-grounded image generation network for novel 3D view synthesis from a single image.  ...  In addition to the new network structure, training with a combination of adversarial and perceptual loss results in a reduction in common artifacts of novel view synthesis such as distortions and holes  ...  We would like to thank Weilin Sun, Guilin Liu, True Price, and Dinghuang Ji for helpful discussions. We thank NVIDIA for providing GPUs and acknowledge support from NSF 1452851, 1526367.  ... 
arXiv:1703.02921v1 fatcat:acvd3262zrg7ra7rwajo3tg65q

Unsupervised Novel View Synthesis from a Single Image [article]

Pierluigi Zama Ramirez, Diego Martin Arroyo, Alessio Tonioni, Federico Tombari
2021 arXiv   pre-print
Novel view synthesis from a single image has recently achieved remarkable results, although the requirement of some form of 3D, pose, or multi-view supervision at training time limits the deployment in  ...  We first pre-train a purely generative decoder model using a 3D-aware GAN formulation while at the same time train an encoder network to invert the mapping from latent space to images.  ...  We build on these recent findings and extend 3D aware generative models to perform novel view synthesis from natural images.  ... 
arXiv:2102.03285v2 fatcat:y46fnz6x4zfn7fv5zmjrlsuism

FENeRF: Face Editing in Neural Radiance Fields [article]

Jingxiang Sun, Xuan Wang, Yong Zhang, Xiaoyu Li, Qi Zhang, Yebin Liu, Jue Wang
2022 arXiv   pre-print
Previous portrait image generation methods roughly fall into two categories: 2D GANs and 3D-aware GANs. 2D GANs can generate high fidelity portraits but with low view consistency. 3D-aware GAN methods  ...  To overcome these limitations, we propose FENeRF, a 3D-aware generator that can produce view-consistent and locally-editable portrait images.  ...  Moreover, we show that our semantic rendering has better view consistency. 3D-Aware Image Synthesis.  ... 
arXiv:2111.15490v2 fatcat:qtqaijbwhncn7g3ncwjraehr6q

Table of Contents

2021 IEEE Transactions on Multimedia
Depth-Preserving Latent Generative Adversarial Network for 3D Reconstruction  ...  Jiang. Display Technology for Multimedia: Learning to Generate Multi-Exposure Stacks With Cycle Consistency for High Dynamic Range Imaging  ... 
doi:10.1109/tmm.2021.3132246 fatcat:el7u2udtybddrpbl5gxkvfricy

3D-aware Image Synthesis via Learning Structural and Textural Representations [article]

Yinghao Xu, Sida Peng, Ceyuan Yang, Yujun Shen, Bolei Zhou
2022 arXiv   pre-print
Recent attempts equip a Generative Adversarial Network (GAN) with a Neural Radiance Field (NeRF), which maps 3D coordinates to pixel values, as a 3D prior.  ...  Making generative models 3D-aware bridges the 2D image space and the 3D physical world yet remains challenging.  ...  However, these efforts control the generation only in 2D space and ignore the 3D nature of the physical world, resulting in a lack of consistency for view synthesis. 3D-Aware Image Synthesis. 2D GANs lack  ... 
arXiv:2112.10759v2 fatcat:h7palqu2ezaehp66cccypxhuka

2021 Index IEEE Transactions on Image Processing Vol. 30

2021 IEEE Transactions on Image Processing  
The Author Index contains the primary entry for each item, listed under the first author's name.  ...  ., +, TIP 2021 853-867 Generative Partial Multi-View Clustering With Adaptive Fusion and Cycle Consistency.  ...  ., +, TIP 2021 670-684 Multi-Sentence Auxiliary Adversarial Networks for Fine-Grained Text-to-Image Synthesis.  ... 
doi:10.1109/tip.2022.3142569 fatcat:z26yhwuecbgrnb2czhwjlf73qu

Realistic Image Synthesis with Configurable 3D Scene Layouts [article]

Jaebong Jeong, Janghun Jo, Jingdong Wang, Sunghyun Cho, Jaesik Park
2021 arXiv   pre-print
With the trained painting network, realistic-looking images for the input 3D scene can be rendered and manipulated.  ...  To train the painting network without 3D color supervision, we exploit an off-the-shelf 2D semantic image synthesis method.  ...  as novel view synthesis, texture reconstruction, and 3D-aware generative models.  ... 
arXiv:2108.10031v2 fatcat:j435aam32jfxtg46j7jmtaxfwe

2021 Index IEEE Transactions on Multimedia Vol. 23

2021 IEEE Transactions on Multimedia
The Author Index contains the primary entry for each item, listed under the first author's name.  ...  ., +, TMM 2021 1426-1441 Image sensors DLGAN: Depth-Preserving Latent Generative Adversarial Network for 3D Reconstruction.  ...  ., +, TMM 2021 3059-3072 DLGAN: Depth-Preserving Latent Generative Adversarial Network for 3D Reconstruction.  ... 
doi:10.1109/tmm.2022.3141947 fatcat:lil2nf3vd5ehbfgtslulu7y3lq

3D-Aware Video Generation [article]

Sherwin Bahmani, Jeong Joon Park, Despoina Paschalidou, Hao Tang, Gordon Wetzstein, Leonidas Guibas, Luc Van Gool, Radu Timofte
2022 arXiv   pre-print
With our work, we explore 4D generative adversarial networks (GANs) that learn unconditional generation of 3D-aware videos.  ...  Generative models have emerged as an essential building block for many image synthesis and editing tasks.  ...  GAN-based Image Synthesis Generative Adversarial Networks (GANs) [25] have demonstrated impressive results on multiple synthesis tasks such as image generation [7, 16, 35, 44, 45] , image editing [  ... 
arXiv:2206.14797v1 fatcat:66yji7u7gvbvfmnmnkb56g7fem

DeepVoxels: Learning Persistent 3D Feature Embeddings [article]

Vincent Sitzmann, Justus Thies, Felix Heide, Matthias Nießner, Gordon Wetzstein, Michael Zollhöfer
2019 arXiv   pre-print
In this work, we address the lack of 3D understanding of generative neural networks by introducing a persistent 3D feature embedding for view synthesis.  ...  Our approach combines insights from 3D geometric computer vision with recent advances in learning image-to-image mappings based on adversarial loss functions.  ...  A very prominent direction is generative adversarial networks [13] which achieve impressive results for image generation, even at high resolutions [26] or conditional generative tasks [20] .  ... 
arXiv:1812.01024v2 fatcat:swnaep7shfarladgcmmruma5bi
Showing results 1 — 15 out of 1,853 results