170 Hits in 5.1 sec

Improving Inversion and Generation Diversity in StyleGAN using a Gaussianized Latent Space [article]

Jonas Wulff, Antonio Torralba
2020 arXiv   pre-print
Furthermore, the Gaussian model of the distribution in latent space allows us to investigate the origins of artifacts in the generator output, and provides a method for reducing these artifacts while maintaining  ...  This yields a simple Gaussian prior, which we use to regularize the projection of images into the latent space.  ...  Improving image inversion using the Gaussian prior The goal of inversion is to find a point in latent space from which a given (real or generated) image can be reconstructed by the generator as accurately  ... 
arXiv:2009.06529v1 fatcat:d6uqys2ugzfp3b4kuphgljd5ca
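The snippet above describes inversion as finding a latent point whose reconstruction matches a given image, regularized by a Gaussian prior. A minimal sketch of that objective, with a toy linear generator standing in for StyleGAN (the generator, dimensions, learning rate, and prior weight λ here are illustrative assumptions, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a generator: a fixed linear map G(w) = A @ w.
# (A real StyleGAN generator is a deep network; this only illustrates
# the regularized inversion objective.)
A = rng.normal(size=(16, 4))

def G(w):
    return A @ w

def invert(x, lam=0.1, lr=0.01, steps=500):
    """Minimize ||G(w) - x||^2 + lam * ||w||^2 by gradient descent.
    The lam * ||w||^2 term acts as an isotropic Gaussian prior,
    keeping the recovered code near the mean of a standard normal."""
    w = np.zeros(4)
    for _ in range(steps):
        grad = 2 * A.T @ (G(w) - x) + 2 * lam * w
        w -= lr * grad
    return w

w_true = rng.normal(size=4)
x = G(w_true)            # "observed" image
w_hat = invert(x)        # recovered latent code
err = np.linalg.norm(G(w_hat) - x)  # reconstruction error
```

With a small prior weight the reconstruction error stays low while the code remains plausibly Gaussian; increasing λ trades reconstruction fidelity for prior fit.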

StyleGAN of All Trades: Image Manipulation with Only Pretrained StyleGAN [article]

Min Jin Chong, Hsin-Ying Lee, David Forsyth
2021 arXiv   pre-print
Recently, StyleGAN has enabled various image manipulation and editing tasks thanks to the high-quality generation and the disentangled latent space.  ...  In this work, we take a deeper look at the spatial properties of StyleGAN.  ...  Applying a Gaussian prior improves the stability of GAN inversion.  ... 
arXiv:2111.01619v1 fatcat:mh2bscey3vfzjarbljryektcka

AE-StyleGAN: Improved Training of Style-Based Auto-Encoders [article]

Ligong Han, Sri Harsha Musunuri, Martin Renqiang Min, Ruijiang Gao, Yu Tian, Dimitris Metaxas
2021 arXiv   pre-print
StyleGANs have shown impressive results on data generation and manipulation in recent years, thanks to their disentangled style latent spaces.  ...  In this paper, we focus on style-based generators, asking a scientific question: Does forcing such a generator to reconstruct real data lead to a more disentangled latent space and make the inversion process  ...  In-domain GAN inversion. In-domain GAN inversion [39] aims to learn a mapping from images to latent space.  ...
arXiv:2110.08718v1 fatcat:dseo36a3xjcg7ab6qo5j75skbq

StyleGAN-XL: Scaling StyleGAN to Large Diverse Datasets [article]

Axel Sauer, Katja Schwarz, Andreas Geiger
2022 arXiv   pre-print
StyleGAN in particular sets new standards for generative modeling regarding image quality and controllability.  ...  Our final model, StyleGAN-XL, sets a new state-of-the-art on large-scale image synthesis and is the first to generate images at a resolution of 1024^2 at such a dataset scale.  ...  We would like to thank Kashyap Chitta, Michael Niemeyer, and Božidar Antić for proofreading. Lastly, we would like to thank Vanessa Sauer for her general support.  ... 
arXiv:2202.00273v2 fatcat:ecxudyms7bf77d7jwwxro362tm

State-of-the-Art in the Architecture, Methods and Applications of StyleGAN [article]

Amit H. Bermano and Rinon Gal and Yuval Alaluf and Ron Mokady and Yotam Nitzan and Omer Tov and Or Patashnik and Daniel Cohen-Or
2022 arXiv   pre-print
Seeking to bring StyleGAN's latent control to real-world scenarios, the study of GAN inversion and latent space embedding has quickly gained in popularity.  ...  We further elaborate on the visual priors StyleGAN constructs, and discuss their use in downstream discriminative tasks.  ...  Using sampled latent codes and their corresponding images, the authors jointly learn a latent representation used for latent-space editing, and a Spatial Transformer Network to align the generated images  ... 
arXiv:2202.14020v1 fatcat:qu3plbdnszdujcwxwq3zizysje

InvGAN: Invertible GANs [article]

Partha Ghosh, Dominik Zietlow, Michael J. Black, Larry S. Davis, Xiaochen Hu
2021 arXiv   pre-print
Our InvGAN, short for Invertible GAN, successfully embeds real images into the latent space of a high-quality generative model.  ...  StyleGAN) specific. These methods are nontrivial to extend to novel datasets or architectures. We propose a general framework that is agnostic to both architecture and dataset.  ...  Acknowledgments and Disclosure of Funding We thank Alex Vorobiov, Javier Romero, Betty Mohler Tesch and Soubhik Sanyal for their insightful comments and intriguing discussions.  ...
arXiv:2112.04598v2 fatcat:ivzs54nkmzacxjwo66bmcrasuy

GAN Inversion for Out-of-Range Images with Geometric Transformations [article]

Kyoungkook Kang, Seongtae Kim, Sunghyun Cho
2021 arXiv   pre-print
We also propose a regularized inversion method to find a solution that supports semantic editing in the alternative space.  ...  To find a latent code that is semantically editable, BDInvert inverts an input out-of-range image into an alternative latent space rather than the original latent space.  ...  In addition, to enhance the diversity of synthesized images, both StyleGAN and StyleGAN2 utilize noise randomly sampled from a Gaussian distribution for each image generation.  ...
arXiv:2108.08998v1 fatcat:bfuxzhlgenfxraud4drmhhpd5m

Towards Universal Texture Synthesis by Combining Texton Broadcasting with Noise Injection in StyleGAN-2 [article]

Jue Lin, Gaurav Sharma, Thrasyvoulos N. Pappas
2022 arXiv   pre-print
We present a new approach for universal texture synthesis by incorporating a multi-scale texton broadcasting module in the StyleGAN-2 framework.  ...  To train and evaluate the proposed approach, we construct a comprehensive high-resolution dataset that captures the diversity of natural textures as well as stochastic variations within each perceptually  ...  We then applied the Wasserstein distance combined with adding Gaussian noise (σ = 0.01) to the discriminator input and found that it improves diversity as shown in Fig. 6.  ...
arXiv:2203.04221v1 fatcat:vgthg7rxazda7gw4lwsc5c66aa
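The noise-injection step mentioned in the snippet above (Gaussian noise with σ = 0.01 added to the discriminator input) can be sketched as follows; the batch shape and zero-valued dummy images are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

def noisy_disc_input(x, sigma=0.01):
    """Add i.i.d. Gaussian noise (sigma = 0.01, as in the snippet above)
    to images before they reach the discriminator -- a common trick to
    smooth the data distribution seen by the discriminator and
    stabilize adversarial training."""
    return x + sigma * rng.normal(size=x.shape)

x = np.zeros((4, 32, 32))        # a batch of dummy images
x_noisy = noisy_disc_input(x)
max_dev = np.abs(x_noisy).max()  # largest perturbation applied
```

Because σ is small relative to typical pixel ranges, the perturbation is imperceptible per image yet changes the gradients the generator receives.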

InterFaceGAN: Interpreting the Disentangled Face Representation Learned by GANs [article]

Yujun Shen, Ceyuan Yang, Xiaoou Tang, Bolei Zhou
2020 arXiv   pre-print
in the latent space.  ...  Although Generative Adversarial Networks (GANs) have made significant progress in face synthesis, there is insufficient understanding of what GANs have learned in the latent representation to map a random  ...  By contrast, StyleGAN proposed a style-based generator, which first maps the latent code from the latent space Z to a disentangled latent space W before applying it for generation.  ...
arXiv:2005.09635v2 fatcat:ahflrw2fkjafnp4fjjymo7cjae
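The snippet notes that StyleGAN first maps codes from the Gaussian Z space to a learned intermediate W space. A toy illustration of such a mapping network (the two-layer MLP and its random weights are stand-in assumptions; StyleGAN's actual mapping network is a learned 8-layer MLP):

```python
import numpy as np

rng = np.random.default_rng(1)

def leaky_relu(x, slope=0.2):
    return np.where(x > 0, x, slope * x)

# Hypothetical 2-layer stand-in for StyleGAN's mapping network:
# it warps the Gaussian Z space into a non-Gaussian W space.
W1 = rng.normal(size=(8, 8)); b1 = rng.normal(size=8)
W2 = rng.normal(size=(8, 8)); b2 = rng.normal(size=8)

def mapping(z):
    return leaky_relu(leaky_relu(z @ W1.T + b1) @ W2.T + b2)

z = rng.normal(size=(1000, 8))   # latent codes sampled from Z ~ N(0, I)
w = mapping(z)                   # corresponding codes in W

# Z is zero-mean by construction; the mapped W generally is not,
# which is one simple symptom of W's non-Gaussian shape.
z_mean_norm = np.linalg.norm(z.mean(axis=0))
w_mean_norm = np.linalg.norm(w.mean(axis=0))
```

The point of the learned W space is exactly this freedom: it need not match a fixed Gaussian, so it can follow the training data's distribution more faithfully.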

GAN Inversion: A Survey [article]

Weihao Xia, Yulun Zhang, Yujiu Yang, Jing-Hao Xue, Bolei Zhou, Ming-Hsuan Yang
2022 arXiv   pre-print
GAN inversion aims to invert a given image back into the latent space of a pretrained GAN model, so that the image can be faithfully reconstructed from the inverted code by the generator.  ...  Meanwhile, GAN inversion also provides insights into the interpretation of the GAN's latent space and how realistic images can be generated.  ...  In [150], Endo et al. assume that pixels sharing the same semantics have similar StyleGAN features, generate images and corresponding pseudo-semantic masks from random noise in the latent space, and use a  ...
arXiv:2101.05278v5 fatcat:ff3evb2nv5ezzaxju2cucbizde

StyleGAN-NADA: CLIP-Guided Domain Adaptation of Image Generators [article]

Rinon Gal, Or Patashnik, Haggai Maron, Gal Chechik, Daniel Cohen-Or
2021 arXiv   pre-print
These demonstrate the effectiveness of our approach and show that our shifted models maintain the latent-space properties that make generative models appealing for downstream tasks.  ...  We show that through natural language prompts and a few minutes of training, our method can adapt a generator across a multitude of domains characterized by diverse styles and shapes.  ...  Acknowledgments We thank Yuval Alaluf, Ron Mokady and Ethan Fetaya for reviewing early drafts and helpful suggestions.  ... 
arXiv:2108.00946v2 fatcat:lnn4ydsoenauxbpu6ijpm3ccn4

Designing an Encoder for StyleGAN Image Manipulation [article]

Omer Tov, Yuval Alaluf, Yotam Nitzan, Or Patashnik, Daniel Cohen-Or
2021 arXiv   pre-print
In this paper, we carefully study the latent space of StyleGAN, the state-of-the-art unconditional generator.  ...  We identify and analyze the existence of a distortion-editability tradeoff and a distortion-perception tradeoff within the StyleGAN latent space.  ...  Beyond its phenomenal realism, StyleGAN uses a learnt intermediate latent space, W, which more faithfully reflects the distribution of the training data compared to the standard Gaussian latent space.  ... 
arXiv:2102.02766v1 fatcat:d7znjrucrzfzhnovv2mvwuyy4i

Black-Box Diagnosis and Calibration on GAN Intra-Mode Collapse: A Pilot Study [article]

Zhenyu Wu, Zhaowen Wang, Ye Yuan, Jianming Zhang, Zhangyang Wang, Hailin Jin
2021 arXiv   pre-print
Our study reveals that intra-mode collapse is still a prevailing problem in state-of-the-art GANs, and that mode collapse is diagnosable and calibratable in black-box settings.  ...  Existing diversity tests of samples from GANs are usually conducted qualitatively on a small scale, and/or depend on access to the original training data as well as the trained model parameters.  ...  This assumption is mild and well observed in practice. Approach #1: Reshaping Latent Space via Gaussian Mixture Models.  ...
arXiv:2107.12202v1 fatcat:dsk7nfhxz5edzirhwhaalntwlq
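"Reshaping the latent space via Gaussian mixture models," as named in the snippet above, amounts to replacing the single Gaussian prior with a mixture when sampling latent codes. A minimal sampling sketch (the component weights, means, and standard deviation are hypothetical, not the paper's fitted values):

```python
import numpy as np

rng = np.random.default_rng(2)

# A hypothetical 2-component mixture over a 2-D latent space:
# pick a component by its weight, then sample a Gaussian around its mean.
weights = np.array([0.3, 0.7])
means = np.array([[-2.0, 0.0], [2.0, 0.0]])
std = 0.5

def sample_latents(n):
    comps = rng.choice(len(weights), size=n, p=weights)
    return means[comps] + std * rng.normal(size=(n, 2)), comps

z, comps = sample_latents(5000)
# Empirical component frequencies should match the mixture weights.
freq = np.bincount(comps) / len(comps)
```

In a calibration setting, the mixture parameters would be fit to observed latent codes (e.g. by EM) rather than fixed by hand as here.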

CtlGAN: Few-shot Artistic Portraits Generation with Contrastive Transfer Learning [article]

Yue Wang, Ran Yi, Ying Tai, Chengjie Wang, Lizhuang Ma
2022 arXiv   pre-print
We adapt a pretrained StyleGAN in the source domain to a target artistic domain with no more than 10 artistic faces.  ...  Generating artistic portraits is a challenging problem in computer vision.  ...  In this paper, we focus on StyleGAN, where a Z-space latent code is first translated into an intermediate W space by a mapping network, and then used to control the generator via AdaIN blocks [15] and  ...
arXiv:2203.08612v1 fatcat:eqrot5yew5hd5iftuqofdqftpe
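The AdaIN blocks mentioned in the snippet modulate generator features with a style code: each channel's feature map is instance-normalized, then rescaled and shifted by style-derived parameters. A minimal sketch (the feature shape and identity style parameters are illustrative assumptions):

```python
import numpy as np

def adain(x, style_scale, style_bias, eps=1e-5):
    """Adaptive instance normalization (AdaIN): normalize each channel's
    feature map to zero mean / unit std over its spatial extent, then
    modulate it with a per-channel style scale and bias."""
    mean = x.mean(axis=(1, 2), keepdims=True)
    std = x.std(axis=(1, 2), keepdims=True)
    normed = (x - mean) / (std + eps)
    return style_scale[:, None, None] * normed + style_bias[:, None, None]

rng = np.random.default_rng(4)
feat = rng.normal(loc=3.0, scale=2.0, size=(8, 16, 16))  # (channels, H, W)
# With unit scale and zero bias, each channel becomes ~zero-mean, unit-std.
out = adain(feat, style_scale=np.ones(8), style_bias=np.zeros(8))
```

In StyleGAN the scale and bias are produced per layer by an affine transform of the W-space code, which is how a single latent controls styles at every resolution.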

One Model to Reconstruct Them All: A Novel Way to Use the Stochastic Noise in StyleGAN [article]

Christian Bartz, Joseph Bethge, Haojin Yang, Christoph Meinel
2020 arXiv   pre-print
Several works have improved the limited understanding of the latent space of GANs by embedding images into specific GAN architectures to reconstruct the original images.  ...  Generative Adversarial Networks (GANs) have achieved state-of-the-art performance on several image generation and manipulation tasks.  ...  Acknowledgement We wish to thank the Wildenstein Plattner Institute for supplying us with data and a research objective that ultimately led to the results published in this paper.  ...
arXiv:2010.11113v1 fatcat:rwwygcfplfgypbrsfzfftqln5y
Showing results 1 — 15 out of 170 results