10 Hits in 3.1 sec

InfoGAN-CR and ModelCentrality: Self-supervised Model Training and Selection for Disentangling GANs [article]

Zinan Lin, Kiran Koshy Thekumparampil, Giulia Fanti, Sewoong Oh
2020 arXiv   pre-print
Recent advances have been dominated by Variational AutoEncoder (VAE)-based methods, while training disentangled generative adversarial networks (GANs) remains challenging.  ...  Combining contrastive regularization with ModelCentrality, we improve upon the state-of-the-art disentanglement scores significantly, without accessing the supervised data.  ...  InfoGAN-CR adds a contrastive regularizer (CR) that combines self-supervision with the most natural measure of disentanglement: latent traversal.  ...
arXiv:1906.06034v3 fatcat:gnmweezs4nax7fuwqej6awrtxa
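
The abstract above describes a contrastive regularizer (CR) built on latent-traversal self-supervision: image pairs are generated with a single latent code dimension held fixed, and a critic must identify which dimension was shared. The NumPy sketch below illustrates only that idea; the `generate` and `cr_critic` stand-ins are assumptions made for runnability, not the authors' InfoGAN-CR code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "generator" and "critic": random linear maps used only so the
# sketch runs end to end; in InfoGAN-CR proper these would be the GAN
# generator and a small classifier network (an assumption of this sketch).
LATENT_DIM, IMG_DIM = 5, 64
G = rng.normal(size=(LATENT_DIM, IMG_DIM))       # stand-in generator weights
C = rng.normal(size=(2 * IMG_DIM, LATENT_DIM))   # stand-in critic weights

def generate(z):
    return np.tanh(z @ G)

def cr_critic(x1, x2):
    """Logits over which latent dimension the pair (x1, x2) shares."""
    return np.concatenate([x1, x2], axis=-1) @ C

def contrastive_regularizer(batch_size=8):
    """Cross-entropy for identifying the latent dimension held fixed in a pair.

    All latent entries are resampled except dimension k, which is shared
    across the pair; this mimics the latent-traversal-style self-supervision
    signal described in the abstract.
    """
    k = rng.integers(LATENT_DIM, size=batch_size)          # shared-dim labels
    z1 = rng.normal(size=(batch_size, LATENT_DIM))
    z2 = rng.normal(size=(batch_size, LATENT_DIM))
    z2[np.arange(batch_size), k] = z1[np.arange(batch_size), k]

    logits = cr_critic(generate(z1), generate(z2))
    logits -= logits.max(axis=1, keepdims=True)            # stable log-softmax
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(batch_size), k].mean()

print("CR loss with untrained stand-ins:", contrastive_regularizer())
```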

Inference-InfoGAN: Inference Independence via Embedding Orthogonal Basis Expansion [article]

Hongxiang Jiang, Jihao Yin, Xiaoyan Luo, Fuxiang Wang
2021 arXiv   pre-print
To explicitly infer latent variables with inter-independence, we propose a novel GAN-based disentanglement framework via embedding Orthogonal Basis Expansion (OBE) into the InfoGAN network (Inference-InfoGAN  ...  Disentanglement learning aims to construct independent and interpretable latent variables, for which generative models are a popular strategy.  ...  ), InfoGAN-CR and InfoGAN-CR (model selection)).  ...
arXiv:2110.00788v1 fatcat:yxrdtueg7fbo7dvcftt7t32zfm
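
The Inference-InfoGAN snippet mentions embedding an Orthogonal Basis Expansion (OBE) into the InfoGAN network. As a rough illustration only, the sketch below builds an orthonormal basis with a QR decomposition and projects feature vectors onto it; the random basis and the function names are assumptions of this sketch, not the paper's OBE module.

```python
import numpy as np

def orthonormal_basis(feature_dim: int, n_basis: int, seed: int = 0) -> np.ndarray:
    """Orthonormal basis (columns) via QR decomposition of a random matrix."""
    rng = np.random.default_rng(seed)
    q, _ = np.linalg.qr(rng.normal(size=(feature_dim, n_basis)))
    return q

def expand_on_basis(features: np.ndarray, basis: np.ndarray) -> np.ndarray:
    """Coefficients of each feature vector in the orthonormal basis."""
    return features @ basis

basis = orthonormal_basis(feature_dim=128, n_basis=10)
feats = np.random.default_rng(1).normal(size=(4, 128))
codes = expand_on_basis(feats, basis)
print(codes.shape)                                          # (4, 10)
print(np.allclose(basis.T @ basis, np.eye(10), atol=1e-6))  # True: basis is orthonormal
```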

Learning Disentangled Representation by Exploiting Pretrained Generative Models: A Contrastive Learning View [article]

Xuanchi Ren, Tao Yang, Yuwang Wang, Wenjun Zeng
2022 arXiv   pre-print
To discover the factors and learn disentangled representations, previous methods typically leverage an extra regularization term when learning to generate realistic images.  ...  with separate dimensions.  ...  InfoGAN-CR entangles Wall Color with Object Color and Pose.  ...
arXiv:2102.10543v2 fatcat:o5733u4egfhsjjocoj2gerayw4

Semi-Supervised StyleGAN for Disentanglement Learning [article]

Weili Nie, Tero Karras, Animesh Garg, Shoubhik Debnath, Anjul Patney, Ankit B. Patel, Anima Anandkumar
2020 arXiv   pre-print
Disentanglement learning is crucial for obtaining disentangled representations and controllable generation.  ...  Current disentanglement methods face several inherent limitations: difficulty with high-resolution images, a primary focus on learning disentangled representations, and non-identifiability due to the  ...  a stronger prior for disentanglement learning compared to different explicit loss regularizations in disentangled VAEs or InfoGAN-CR.  ...
arXiv:2003.03461v3 fatcat:qjavkgece5evbku3rfon55mqr4

Where and What? Examining Interpretable Disentangled Representations [article]

Xinqi Zhu, Chang Xu, Dacheng Tao
2021 arXiv   pre-print
disentanglement.  ...  A latent code is easy to interpret if it consistently impacts a certain subarea of the resulting generated image.  ...  [38] equip InfoGAN with a contrastive regularizer, which detects the shared dimension in the latent codes of generated image pairs.  ...
arXiv:2104.05622v1 fatcat:d3e7xxdojjbebdnd6h6tsl5c44
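
The interpretability criterion in the snippet, that a latent code should consistently impact a certain subarea of the generated image, can be probed with a crude consistency statistic. The sketch below, with a stand-in linear generator assumed purely for runnability, perturbs one latent dimension and checks how similar the resulting pixel-change maps are across samples; it is not the paper's actual metric.

```python
import numpy as np

rng = np.random.default_rng(0)
LATENT_DIM, H, W = 6, 16, 16
# Stand-in generator (random linear map to a 16x16 "image"); an assumption
# made purely so the sketch is executable.
G = rng.normal(size=(LATENT_DIM, H * W))

def generate(z):
    return np.tanh(z @ G).reshape(-1, H, W)

def impact_consistency(dim, n_samples=64, delta=1.0):
    """How consistently does perturbing latent `dim` affect the same pixels?

    Returns the mean cosine similarity between each sample's absolute
    pixel-change map and the average change map; values near 1 mean the
    code always touches the same subarea of the image.
    """
    z = rng.normal(size=(n_samples, LATENT_DIM))
    z_pert = z.copy()
    z_pert[:, dim] += delta
    change = np.abs(generate(z_pert) - generate(z)).reshape(n_samples, -1)
    mean_map = change.mean(axis=0)
    cos = (change @ mean_map) / (
        np.linalg.norm(change, axis=1) * np.linalg.norm(mean_map) + 1e-8)
    return cos.mean()

for d in range(LATENT_DIM):
    print(f"dim {d}: consistency = {impact_consistency(d):.3f}")
```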

Evaluating the Disentanglement of Deep Generative Models through Manifold Topology [article]

Sharon Zhou, Eric Zelikman, Fred Lu, Andrew Y. Ng, Gunnar Carlsson, Stefano Ermon
2021 arXiv   pre-print
Learning disentangled representations is regarded as a fundamental task for improving the generalization, robustness, and interpretability of generative models.  ...  To address this, we present a method for quantifying disentanglement that only uses the generative model, by measuring the topological similarity of conditional submanifolds in the learned representation  ...  InfoGAN-CR: Disentangling generative adversarial networks with contrastive regularizers. ICML, 2020. Z. Liu, P. Luo, X. Wang, and X. Tang. Deep learning face attributes in the wild.  ...
arXiv:2006.03680v5 fatcat:e76t4a5hnbhl7hyf3eyhygbega
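
The abstract above measures disentanglement via the topology of conditional submanifolds. A heavily simplified stand-in, assuming the `ripser` package is installed, is to compute persistence diagrams of two point clouds and compare their total persistence; the paper's actual similarity measure is more sophisticated than this sketch.

```python
import numpy as np
from ripser import ripser  # assumes the ripser package is available

def total_persistence(points: np.ndarray, homology_dim: int = 1) -> float:
    """Sum of (death - birth) over the persistence diagram of `points`."""
    dgm = ripser(points, maxdim=homology_dim)["dgms"][homology_dim]
    finite = dgm[np.isfinite(dgm[:, 1])]
    return float((finite[:, 1] - finite[:, 0]).sum())

def circle(n=200, noise=0.02, seed=0):
    """Noisy samples from a unit circle, used as a toy 'submanifold'."""
    rng = np.random.default_rng(seed)
    t = rng.uniform(0, 2 * np.pi, n)
    return np.stack([np.cos(t), np.sin(t)], axis=1) + rng.normal(scale=noise, size=(n, 2))

# Two samples of the same underlying 1-sphere should have similar total
# persistence in H1, a (very) rough proxy for topological similarity.
print(abs(total_persistence(circle(seed=0)) - total_persistence(circle(seed=1))))
```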

Disentangled Generative Causal Representation Learning [article]

Xinwei Shen, Furui Liu, Hanze Dong, Qing Lian, Zhitang Chen, Tong Zhang
2021 arXiv   pre-print
The prior is then trained jointly with a generator and an encoder using a suitable GAN loss that incorporates supervision.  ...  We show that previous methods with independent priors fail to disentangle causally correlated factors.  ...  Existing methods, including InfoGAN (Chen et al., 2016) and InfoGAN-CR (Lin et al., 2020), differ from our proposed formulation mainly in two respects.  ...
arXiv:2010.02637v2 fatcat:tbz4xlszizgefkgzh6va7gmzku
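
To make "causally correlated factors" concrete, the sketch below samples latents from a linear structural equation model, z = (I - A)^{-1} eps, instead of an independent Gaussian prior. The linear form and the toy adjacency matrix are simplifying assumptions for illustration, not the paper's actual causal prior layer.

```python
import numpy as np

def causal_prior_sample(n, adjacency, seed=0):
    """Sample latents from a linear SEM: z = (I - A)^{-1} eps.

    `adjacency[i, j] != 0` means factor j is a parent of factor i, so the
    resulting latent factors are correlated according to the causal graph,
    unlike the independent priors criticized in the abstract.
    """
    rng = np.random.default_rng(seed)
    d = adjacency.shape[0]
    eps = rng.normal(size=(n, d))
    return eps @ np.linalg.inv(np.eye(d) - adjacency).T

# Toy DAG over three hypothetical factors: 0 -> 1 -> 2.
A = np.array([[0.0, 0.0, 0.0],
              [0.8, 0.0, 0.0],
              [0.0, 0.5, 0.0]])
z = causal_prior_sample(10_000, A)
print(np.corrcoef(z, rowvar=False).round(2))   # off-diagonal correlations appear
```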

OOGAN: Disentangling GAN with One-Hot Sampling and Orthogonal Regularization

Bingchen Liu, Yizhe Zhu, Zuohui Fu, Gerard De Melo, Ahmed Elgammal
2020   Proceedings of the AAAI Conference on Artificial Intelligence (AAAI)
Exploring the potential of GANs for unsupervised disentanglement learning, this paper proposes a novel GAN-based disentanglement framework with One-Hot Sampling and Orthogonal Regularization (OOGAN).  ...  Furthermore, we provide a brand-new perspective on designing the structure of the generator and discriminator, demonstrating that a minor structural change and an orthogonal regularization on model weights  ...  InfoGAN-CR (Lin et al. 2019) introduces a contrastive regularizer focusing on forming more desirable latent traversals.  ...
doi:10.1609/aaai.v34i04.5919 fatcat:g23g5rsqijavrnn5nbo37svq5q
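
Orthogonal regularization on model weights is commonly implemented as a penalty on the deviation of the Gram matrix W W^T from the identity. The sketch below shows that generic form; whether OOGAN uses exactly this variant is an assumption of this sketch.

```python
import numpy as np

def orthogonal_regularizer(weight: np.ndarray, strength: float = 1e-4) -> float:
    """Penalty on how far W W^T is from the identity (squared Frobenius norm).

    A common form of orthogonal weight regularization; treating it as OOGAN's
    exact formulation is an assumption, not a claim from the paper.
    """
    w = weight.reshape(weight.shape[0], -1)     # flatten conv kernels to 2-D
    gram = w @ w.T
    return strength * float(np.square(gram - np.eye(w.shape[0])).sum())

W = np.random.default_rng(0).normal(size=(32, 64)) / np.sqrt(64)
print(orthogonal_regularizer(W))   # small positive penalty for a random matrix
```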

Cluster-guided Image Synthesis with Unconditional Models [article]

Markos Georgopoulos, James Oldfield, Grigorios G Chrysos, Yannis Panagakis
2021 arXiv   pre-print
Generative Adversarial Networks (GANs) are the driving force behind the state-of-the-art in image generation.  ...  Despite their ability to synthesize high-resolution photo-realistic images, generating content with on-demand conditioning of different granularity remains a challenge.  ...  ClusterGAN: Latent space clustering in generative adversarial networks.  Billion-scale similarity search with GPUs. arXiv preprint arXiv:1702.08734.  ...
arXiv:2112.12911v1 fatcat:l3zhx7bxsjaohertladvyqbbna

Why Spectral Normalization Stabilizes GANs: Analysis and Improvements [article]

Zinan Lin, Vyas Sekar, Giulia Fanti
2021 arXiv   pre-print
Spectral normalization (SN) is a widely-used technique for improving the stability and sample quality of Generative Adversarial Networks (GANs).  ...  Our proofs illustrate a (perhaps unintentional) connection with the successful LeCun initialization.  ...  InfoGAN-CR: Disentangling generative adversarial networks with contrastive regularizers. ICML, 2020. Miyato, T., Kataoka, T., Koyama, M., and Yoshida, Y.  ...
arXiv:2009.02773v2 fatcat:3c63b6rt5zee5ood7ta4ysax4a
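
Spectral normalization divides a layer's weight matrix by an estimate of its largest singular value, usually obtained with power iteration. The standalone NumPy sketch below shows the core computation; production implementations (for example, the PyTorch spectral-norm utility) keep a persistent power-iteration vector across training steps rather than re-running it from scratch.

```python
import numpy as np

def spectral_normalize(weight: np.ndarray, n_iters: int = 20, eps: float = 1e-12):
    """Divide `weight` by a power-iteration estimate of its top singular value.

    Bounding the largest singular value (and hence the layer's Lipschitz
    constant) is the mechanism behind spectral normalization (SN).
    """
    w = weight.reshape(weight.shape[0], -1)          # treat conv kernels as 2-D
    u = np.random.default_rng(0).normal(size=w.shape[0])
    for _ in range(n_iters):
        v = w.T @ u
        v /= (np.linalg.norm(v) + eps)
        u = w @ v
        u /= (np.linalg.norm(u) + eps)
    sigma = float(u @ w @ v)                         # top singular value estimate
    return weight / sigma, sigma

W = np.random.default_rng(1).normal(size=(16, 32))
W_sn, sigma = spectral_normalize(W)
print(sigma, np.linalg.svd(W_sn, compute_uv=False)[0])   # second value is ~1.0
```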