Unsupervised Discovery of Interpretable Directions in the GAN Latent Space
[article]
2020
arXiv
pre-print
In this paper, we introduce an unsupervised method to identify interpretable directions in the latent space of a pretrained GAN model. ...
The latent spaces of GAN models often have semantically meaningful directions. ...
We propose the first unsupervised approach for the discovery of semantically meaningful directions in the GAN latent space. ...
arXiv:2002.03754v3
fatcat:iq6pttabwvavlc5qbw6ntochli
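The snippet above does not spell out how the directions are found. Purely as an illustration of one common recipe for unsupervised direction discovery, the hypothetical PyTorch sketch below jointly trains a matrix of candidate directions and a "reconstructor" that must recover which direction (and how large a shift) was applied to a latent code, so directions producing distinguishable image changes are favored. `TinyGenerator` and `TinyReconstructor` are toy stand-ins, not the models used in the paper.

```python
# Hypothetical sketch: jointly learn candidate directions and a reconstructor
# that must recover which direction was applied and by how much.
# TinyGenerator / TinyReconstructor are toy stand-ins for a pretrained GAN
# generator and a CNN reconstructor.
import torch
import torch.nn as nn
import torch.nn.functional as F

LATENT_DIM, NUM_DIRS, IMG_DIM = 16, 8, 64            # toy sizes (assumed)

class TinyGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Linear(LATENT_DIM, IMG_DIM)
    def forward(self, z):
        return torch.tanh(self.net(z))

class TinyReconstructor(nn.Module):
    def __init__(self):
        super().__init__()
        self.body = nn.Linear(2 * IMG_DIM, 64)
        self.dir_head = nn.Linear(64, NUM_DIRS)       # which direction was used?
        self.shift_head = nn.Linear(64, 1)            # how far along it?
    def forward(self, img_a, img_b):
        h = F.relu(self.body(torch.cat([img_a, img_b], dim=1)))
        return self.dir_head(h), self.shift_head(h).squeeze(1)

G, R = TinyGenerator(), TinyReconstructor()
for p in G.parameters():                              # the generator stays frozen
    p.requires_grad_(False)
directions = nn.Parameter(torch.randn(NUM_DIRS, LATENT_DIM))
opt = torch.optim.Adam([directions] + list(R.parameters()), lr=1e-3)

for step in range(100):                               # toy training loop
    z = torch.randn(32, LATENT_DIM)
    k = torch.randint(0, NUM_DIRS, (32,))             # sampled direction index
    eps = torch.empty(32).uniform_(-3.0, 3.0)         # sampled shift magnitude
    shift = eps.unsqueeze(1) * F.normalize(directions, dim=1)[k]
    logits, eps_hat = R(G(z), G(z + shift))
    loss = F.cross_entropy(logits, k) + 0.25 * F.l1_loss(eps_hat, eps)
    opt.zero_grad()
    loss.backward()
    opt.step()
```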
LatentCLR: A Contrastive Learning Approach for Unsupervised Discovery of Interpretable Directions
[article]
2021
arXiv
pre-print
Recent research has shown that it is possible to find interpretable directions in the latent spaces of pre-trained Generative Adversarial Networks (GANs). ...
In this work, we propose a contrastive learning-based approach to discover semantic directions in the latent space of pre-trained GANs in a self-supervised manner. ...
We also acknowledge the support of NVIDIA Corporation through the donation of the TITAN X GPU and GCP research credits from Google. We thank Irem Simsar for proofreading our paper. ...
arXiv:2104.00820v2
fatcat:v7gcu5mjureyfa5o2uvvfjk6pm
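The record above describes a contrastive, self-supervised route to the same goal. The sketch below is a loose, hypothetical PyTorch rendering of that idea, not the paper's exact objective: edits produced by the same direction (across different latent codes) are treated as positive pairs and pulled together in a feature space, while edits produced by different directions are pushed apart. The linear `feat` map stands in for intermediate GAN features, and all sizes are toy values.

```python
# Loose sketch of a contrastive objective over latent directions: edits made by
# the same direction are positives, edits made by different directions are
# negatives. `feat` is a toy stand-in for intermediate GAN features.
import torch
import torch.nn.functional as F

LATENT_DIM, NUM_DIRS, BATCH, TAU = 16, 4, 8, 0.5      # toy sizes (assumed)
feat = torch.nn.Linear(LATENT_DIM, 32)                 # placeholder feature map
directions = torch.nn.Parameter(torch.randn(NUM_DIRS, LATENT_DIM))

def contrastive_direction_loss(z):
    base = feat(z)                                               # (B, 32)
    edited = feat(z.unsqueeze(1) + directions.unsqueeze(0))      # (B, K, 32)
    # "Divergence" features: how the features change under each direction.
    h = F.normalize(edited - base.unsqueeze(1), dim=-1).reshape(BATCH * NUM_DIRS, -1)
    labels = torch.arange(NUM_DIRS).repeat(BATCH)                # direction id per row
    sim = h @ h.t() / TAU                                        # pairwise similarities
    sim = sim - 1e9 * torch.eye(len(labels))                     # a row is not its own pair
    pos = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~torch.eye(len(labels), dtype=torch.bool)
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    # Pull same-direction edits together, push different-direction edits apart.
    return -(log_prob * pos.float()).sum(1).div(pos.sum(1)).mean()

z = torch.randn(BATCH, LATENT_DIM)
loss = contrastive_direction_loss(z)
loss.backward()   # gradients reach `directions` (and the placeholder `feat`)
```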
Closed-Form Factorization of Latent Semantics in GANs
[article]
2021
arXiv
pre-print
A rich set of interpretable dimensions has been shown to emerge in the latent space of the Generative Adversarial Networks (GANs) trained for synthesizing images. ...
In particular, we take a closer look into the generation mechanism of GANs and further propose a closed-form factorization algorithm for latent semantic discovery by directly decomposing the pre-trained ...
The crux of interpreting the latent space of GANs is to find the meaningful directions in the latent space corresponding to the human-understandable concepts [7, 15, 24, 22, 27] . ...
arXiv:2007.06600v4
fatcat:y2jtbcp345gsvad6dec7lbv6ou
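The abstract above points to a closed-form decomposition of pretrained generator weights. One way to realize that idea, sketched below under the assumption that the latent code is first multiplied by a weight matrix A, is to take the top eigenvectors of A^T A as candidate directions; A is random here, whereas in practice it would be read from the generator checkpoint.

```python
# Minimal sketch of closed-form direction discovery by decomposing a weight
# matrix: take the top eigenvectors of A^T A, where A is (assumed to be) the
# first layer that projects the latent code.
import torch

latent_dim, out_dim, k = 512, 4096, 5                  # assumed sizes
A = torch.randn(out_dim, latent_dim)                   # stand-in projection weight

# Directions along which the projection stretches latent perturbations the most.
eigvals, eigvecs = torch.linalg.eigh(A.t() @ A)        # eigenvalues in ascending order
directions = eigvecs[:, -k:].flip(-1).t()              # top-k directions, (k, latent_dim)

z = torch.randn(1, latent_dim)
z_edited = z + 3.0 * directions[0]                     # move along the strongest direction
```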
Interpreting Generative Adversarial Networks for Interactive Image Generation
[article]
2022
arXiv
pre-print
This chapter gives a summary of recent works on interpreting deep generative models. The methods are categorized into the supervised, the unsupervised, and the embedding-guided approaches. ...
Significant progress has been made by the advances in Generative Adversarial Networks (GANs) for image generation. ...
[7] perform PCA on the sampled data to find primary directions in the latent space. ...
arXiv:2108.04896v2
fatcat:4hudnsqpyzexhi66vcbzbwxyfi
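The chapter summary above mentions, among the unsupervised approaches, performing PCA on sampled data to obtain primary latent directions. A minimal sketch of that procedure follows, assuming a StyleGAN-like mapping network (replaced here by a toy stand-in): sample latent codes, map them to intermediate codes, and take the principal components of the samples as candidate edit directions.

```python
# Minimal sketch of PCA over sampled latent codes: sample many codes, map them
# through a (stand-in) mapping network, and use the principal components of the
# samples as candidate edit directions.
import torch

latent_dim, n_samples, k = 512, 5000, 10               # assumed sizes
mapping = torch.nn.Sequential(                          # toy stand-in mapping network
    torch.nn.Linear(latent_dim, latent_dim), torch.nn.LeakyReLU(0.2),
    torch.nn.Linear(latent_dim, latent_dim),
)

with torch.no_grad():
    w = mapping(torch.randn(n_samples, latent_dim))     # sampled intermediate codes

# PCA via SVD of the centered samples; right singular vectors = principal directions.
_, _, vh = torch.linalg.svd(w - w.mean(0, keepdim=True), full_matrices=False)
directions = vh[:k]                                     # (k, latent_dim)

w_edited = w[0:1] + 2.0 * directions[0]                 # move along the first component
```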
WarpedGANSpace: Finding non-linear RBF paths in GAN latent space
[article]
2021
arXiv
pre-print
This work addresses the problem of discovering, in an unsupervised manner, interpretable paths in the latent space of pretrained GANs, so as to provide an intuitive and easy way of controlling the underlying ...
In doing so, it addresses some of the limitations of the state-of-the-art works, namely, a) that they discover directions that are independent of the latent code, i.e., paths that are linear, and b) that ...
RPGAN: GANs interpretability via random routing. CoRR, abs/1912.10920, 2019.
[34] A. Voynov and A. Babenko. Unsupervised discovery of interpretable directions in the GAN latent space. ...
arXiv:2109.13357v1
fatcat:cnbdieg4rnaobfin4fmvyw2rpm
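The record above replaces fixed linear directions with non-linear paths parameterized by RBFs. As a rough illustration of what a latent-code-dependent path can look like, the sketch below defines a scalar RBF warping function and follows the normalized gradient of that function in small steps, so the step direction changes with the current latent code. Centers and weights are random placeholders rather than learned quantities.

```python
# Rough illustration of a latent-code-dependent path: a scalar RBF "warping"
# function whose normalized gradient gives the local step direction, so the
# path bends as the latent code moves. Centers and weights are placeholders.
import torch
import torch.nn.functional as F

latent_dim, n_centers, gamma = 16, 32, 0.5             # toy sizes (assumed)
centers = torch.randn(n_centers, latent_dim)           # RBF support centers (placeholder)
weights = torch.randn(n_centers)                        # RBF weights (placeholder)

def warp(z):
    """f(z) = sum_i w_i * exp(-gamma * ||z - c_i||^2), evaluated per batch row."""
    sq_dist = ((z.unsqueeze(1) - centers.unsqueeze(0)) ** 2).sum(-1)   # (B, n_centers)
    return (weights * torch.exp(-gamma * sq_dist)).sum(-1)             # (B,)

def step_direction(z):
    """Local path direction: normalized gradient of the warping function at z."""
    z = z.clone().requires_grad_(True)
    grad, = torch.autograd.grad(warp(z).sum(), z)
    return F.normalize(grad, dim=-1)

z = torch.randn(1, latent_dim)
for _ in range(10):                                     # follow the curved path
    z = z + 0.1 * step_direction(z)                     # re-evaluate direction each step
```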
Cluster-guided Image Synthesis with Unconditional Models
[article]
2021
arXiv
pre-print
Generative Adversarial Networks (GANs) are the driving force behind the state-of-the-art in image generation. ...
In this work, we focus on controllable image generation by leveraging GANs that are well-trained in an unsupervised fashion. ...
Unsupervised discovery of interpretable directions in the GAN latent space. arXiv preprint arXiv:2002.03754, 2020.
[40] Xueting Yan, Ishan Misra, Abhinav Gupta, Deepti Ghadiyaram, and ...
arXiv:2112.12911v1
fatcat:l3zhx7bxsjaohertladvyqbbna
Disentangled Representations from Non-Disentangled Models
[article]
2021
arXiv
pre-print
The dominating paradigm of unsupervised disentanglement is currently to train a generative model that separates different factors of variation in its latent space. ...
Constructing disentangled representations is known to be a difficult task, especially in the unsupervised scenario. ...
First, we search for a set of k orthogonal interpretable directions in the latent space of the pretrained GAN in an unsupervised manner. ...
arXiv:2102.06204v1
fatcat:yeg24kjkkzdnhhcxv4igm5mz3u
Unsupervised Representation Adversarial Learning Network: from Reconstruction to Generation
[article]
2019
arXiv
pre-print
In theory, we minimize the upper bounds of the two conditional entropy losses between the latent variables and the observations jointly to achieve cycle consistency. ...
This paper aims at learning a disentangled representation effective for all of them in an unsupervised way. ...
But it does not learn a disentangled latent space for semantic interpretation and knowledge discovery. ...
arXiv:1804.07353v2
fatcat:t3qolvtdbbhxhmk45cjwrbdxky
GAN "Steerability" without optimization
[article]
2021
arXiv
pre-print
Recent research has shown remarkable success in revealing "steering" directions in the latent spaces of pre-trained GANs. ...
This applies to user-prescribed geometric transformations, as well as to unsupervised discovery of more complex effects. ...
Unsupervised discovery of interpretable directions in the GAN latent space. arXiv preprint arXiv:2002.03754, 2020.
Tom White. Sampling generative networks. arXiv preprint arXiv:1609.04468, 2016. ...
arXiv:2012.05328v2
fatcat:azj2jj7zj5fhfpeqtjzf3juzd4
Enjoy Your Editing: Controllable GANs for Image Editing via Latent Space Navigation
[article]
2021
arXiv
pre-print
Classic approaches for this task use a Generative Adversarial Net (GAN) to learn a latent space and suitable latent-space transformations. ...
loss that encourages the maintenance of image identity and photo-realism. ...
Note that the work by Voynov & Babenko (2020) is unsupervised, i.e., the latent-space directions are human-interpreted. ...
arXiv:2102.01187v3
fatcat:jsprls7hcjcv7bu6i33vszpbia
Fantastic Style Channels and Where to Find Them: A Submodular Framework for Discovering Diverse Directions in GANs
[article]
2022
arXiv
pre-print
The discovery of interpretable directions in the latent spaces of pre-trained GAN models has recently become a popular topic. ...
In this study, we design a novel submodular framework that finds the most representative and diverse subset of directions in the latent space of StyleGAN2. ...
directions. [36] uses a self-supervised, contrastive-learning-based method to discover interpretable directions in the latent space of pre-trained BigGAN and StyleGAN2 models. ...
arXiv:2203.08516v2
fatcat:svome3s37vh3rluiv5yroxthrm
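The framework above selects a representative and diverse subset of candidate directions with a submodular objective. As a generic illustration (not the paper's exact objective), the sketch below greedily maximizes a facility-location function over cosine similarities between candidate directions; facility location is submodular, so the greedy choice carries the usual (1 - 1/e) approximation guarantee. The candidates are random placeholders.

```python
# Generic sketch of greedy subset selection under a facility-location objective,
# one standard submodular criterion for a representative yet diverse subset.
# The candidate directions are random placeholders.
import torch
import torch.nn.functional as F

n_candidates, latent_dim, budget = 200, 512, 10         # assumed sizes
candidates = F.normalize(torch.randn(n_candidates, latent_dim), dim=1)
sim = candidates @ candidates.t()                        # pairwise cosine similarity

def coverage(selected):
    """Facility location F(S) = sum_i max_{j in S} sim(i, j)."""
    return sim[:, selected].max(dim=1).values.sum().item()

selected = []
for _ in range(budget):
    current = coverage(selected) if selected else 0.0
    best_gain, best_j = -float('inf'), None
    for j in range(n_candidates):
        if j in selected:
            continue
        gain = coverage(selected + [j]) - current        # marginal coverage gain
        if gain > best_gain:
            best_gain, best_j = gain, j
    selected.append(best_j)                              # greedy step

representative_directions = candidates[selected]         # (budget, latent_dim)
```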
The Geometry of Deep Generative Image Models and its Applications
[article]
2021
arXiv
pre-print
GAN inversion) and facilitates unsupervised discovery of interpretable axes. ...
This geometric understanding unifies key previous results related to GAN interpretability. We show that the use of this metric allows for more efficient optimization in the latent space (e.g. ...
We thank Hao Sun (CUHK) in providing experience for the submission and rebuttal process. ...
arXiv:2101.06006v2
fatcat:ngg63aohlbc47elmt353f6elbu
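The record above studies the geometry a generator induces on its latent space. A common way to make that concrete, sketched below with a toy generator, is the pullback metric H = J^T J built from the generator's Jacobian at a latent point: eigenvectors of H with large eigenvalues are the locally most image-changing directions, while small-eigenvalue directions barely affect the output. This is an assumption-laden illustration, not necessarily the paper's exact construction.

```python
# Toy illustration of the pullback metric H = J^T J induced by a generator's
# Jacobian at a latent point; the tiny generator is a placeholder.
import torch

latent_dim, img_dim = 8, 64                              # toy sizes (assumed)
G = torch.nn.Sequential(
    torch.nn.Linear(latent_dim, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, img_dim),
)

z0 = torch.randn(latent_dim)
J = torch.autograd.functional.jacobian(G, z0)            # (img_dim, latent_dim)
H = J.t() @ J                                            # metric tensor at z0

eigvals, eigvecs = torch.linalg.eigh(H)                  # ascending eigenvalues
top_direction = eigvecs[:, -1]       # locally most image-changing latent direction
flat_direction = eigvecs[:, 0]       # direction the output barely responds to
```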
Unsupervised Primitive Discovery for Improved 3D Generative Modeling
2019
2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
3D shape generation is a challenging problem due to the high-dimensional output space and complex part configurations of real-world objects. ...
To this end, we introduce an unsupervised primitive discovery algorithm based on a higher-order conditional random field model. ...
, our model is jointly trained on all shape classes, (b) it provides better interpretability of the generator's latent space and can incorporate user input to generate desired shapes, (c) the learned model ...
doi:10.1109/cvpr.2019.00997
dblp:conf/cvpr/KhanGHB19
fatcat:bepx6sunpjduvhrgha5ubpvzki
Quantitative comparison of principal component analysis and unsupervised deep learning using variational autoencoders for shape analysis of motile cells
[article]
2020
bioRxiv
pre-print
Furthermore, by including cell speed into the training of the VAE-GAN, we were able to incorporate cell shape and speed into the same latent space. ...
Contrary to the conventional viewpoint that the latent space is a "black box", we demonstrated that the information learned and encoded within the latent space is consistent with PCA and is reproducible ...
latent space have no direct physical meaning and are not arranged in any meaningful order, interpreting the biological significance of positional variation within the latent space is much less ...
doi:10.1101/2020.06.26.174474
fatcat:ovabyy3vj5eoteyrr7m3wkhdza
Learning Disentangled Representation by Exploiting Pretrained Generative Models: A Contrastive Learning View
[article]
2022
arXiv
pre-print
For the generative models pretrained without any disentanglement term, the generated images show semantically meaningful variations when traversing along different directions in the latent space. ...
DisCo achieves the state-of-the-art disentangled representation learning and distinct direction discovering, given pretrained non-disentangled generative models including GAN, VAE, and Flow. ...
method to explore interpretable directions in the latent space of a pretrained GAN. ...
arXiv:2102.10543v2
fatcat:o5733u4egfhsjjocoj2gerayw4
Showing results 1 — 15 out of 891 results