2,001 Hits in 3.7 sec

Many Paths to Equilibrium: GANs Do Not Need to Decrease a Divergence At Every Step [article]

William Fedus, Mihaela Rosca, Balaji Lakshminarayanan, Andrew M. Dai, Shakir Mohamed, Ian Goodfellow
2018 arXiv   pre-print
We provide empirical counterexamples to the view of GAN training as divergence minimization.  ...  This contributes to a growing body of evidence that GAN training may be more usefully viewed as approaching Nash equilibria via trajectories that do not necessarily minimize a specific divergence at each  ...  Divergence minimization is useful for understanding the outcome of training, but GAN training is not the same thing as running gradient descent on a divergence and GAN training may not encounter the same  ... 
arXiv:1710.08446v3 fatcat:hekwotzvprcrbdmyje3yyug6z4

GANs beyond divergence minimization [article]

Alexia Jolicoeur-Martineau
2018 arXiv   pre-print
This suggests that G does not need to minimize the same objective function as D maximizes, nor maximize the objective of D after swapping real data with fake data (non-saturating GAN), but can instead use  ...  We observe that most loss functions converge well and provide comparable data generation quality to the non-saturating GAN, LSGAN, and WGAN-GP generator loss functions, whether we use divergences or non-divergences  ...  Therefore, minimizing this loss, we have that h(D(x)) → 0 as D(x) → 1, just as in the saturating GAN.  ... 
arXiv:1809.02145v1 fatcat:xqovx2ehrfhmhdeumori4nmvhq
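For reference, a minimal sketch of the two standard generator objectives this entry contrasts, in the usual notation (D the discriminator, G the generator, p_z the latent prior); this is the textbook formulation from Goodfellow et al. (2014), not notation taken from the paper above:

\[
L_G^{\mathrm{sat}} = \mathbb{E}_{z \sim p_z}\!\big[\log\big(1 - D(G(z))\big)\big],
\qquad
L_G^{\mathrm{ns}} = -\,\mathbb{E}_{z \sim p_z}\!\big[\log D(G(z))\big].
\]

The saturating (minimax) form gives vanishing gradients once the discriminator confidently rejects generated samples, which is why the non-saturating reformulation is the common default that the paper's alternative losses are compared against.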

The relativistic discriminator: a key element missing from standard GAN [article]

Alexia Jolicoeur-Martineau
2018 arXiv   pre-print
We generalize both approaches to non-standard GAN loss functions and we refer to them respectively as Relativistic GANs (RGANs) and Relativistic average GANs (RaGANs).  ...  minimization, and 3) in optimal settings, SGAN would be equivalent to integral probability metric (IPM) GANs.  ...  We further generalized this approach to any GAN loss and introduced a generally more stable variant called RaD.  ... 
arXiv:1807.00734v3 fatcat:3dmk3h3iinhzlm3pqimxmw23sm
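As a rough sketch of the constructions named above (following my reading of the paper; C(x) denotes the non-transformed discriminator output, σ the sigmoid, and loss weightings are omitted), the standard, relativistic, and relativistic average discriminators are roughly:

\[
D_{\mathrm{SGAN}}(x) = \sigma\big(C(x)\big), \quad
D_{\mathrm{RGAN}}(x_r, x_f) = \sigma\big(C(x_r) - C(x_f)\big), \quad
D_{\mathrm{RaGAN}}(x_r) = \sigma\big(C(x_r) - \mathbb{E}_{x_f \sim p_g}[C(x_f)]\big),
\]

with a symmetric counterpart for fake samples in the RaGAN case. The relativistic discriminator thus estimates how much more realistic real data is than fake data, rather than the absolute probability that data is real.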

Regularizing Generative Adversarial Networks under Limited Data [article]

Hung-Yu Tseng, Lu Jiang, Ce Liu, Ming-Hsuan Yang, Weilong Yang
2021 arXiv   pre-print
We theoretically show a connection between the regularized loss and an f-divergence called the LeCam divergence, which we find is more robust under limited training data.  ...  This work proposes a regularization approach for training robust GAN models on limited data.  ...  [14] show theoretically that the saturating GAN loss minimizes the JS divergence. However, in practice, they use the non-saturating GAN for superior empirical results.  ... 
arXiv:2104.03310v1 fatcat:b5th6vdcafgc5ostnr64xdzpte
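For context, the LeCam divergence referenced above is, up to a constant factor depending on convention, the triangular discrimination between P and Q; this is a standard definition and only hints at the connection proved in the paper:

\[
\Delta_{\mathrm{LC}}(P, Q) \;=\; \tfrac{1}{2}\int \frac{\big(p(x) - q(x)\big)^2}{p(x) + q(x)}\, dx .
\]

It stays bounded even when P and Q barely overlap, unlike the KL divergence, which offers one intuition for the robustness claim in the abstract.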

On the Effectiveness of Least Squares Generative Adversarial Networks [article]

Xudong Mao, Qing Li, Haoran Xie, Raymond Y.K. Lau, Zhen Wang, Stephen Paul Smolley
2018 arXiv   pre-print
We show that minimizing the objective function of LSGAN is equivalent to minimizing the Pearson χ^2 divergence.  ...  Regular GANs treat the discriminator as a classifier with the sigmoid cross-entropy loss function.  ...  Based on the quantitative experiments, we find that the derived objective function, which minimizes the Pearson χ^2 divergence, performs better than the classical one that uses least squares for classification  ... 
arXiv:1712.06391v2 fatcat:6mxtnbafgragvozvjhpcpmgaq4
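For reference, a sketch of the LSGAN objectives behind the Pearson χ^2 claim, in the paper's a-b-c coding (a and b are the discriminator's target labels for fake and real data, c is the generator's target for fake data); the precise statement and conditions are in the paper:

\[
\min_D \; \tfrac{1}{2}\,\mathbb{E}_{x \sim p_{\mathrm{data}}}\!\big[(D(x) - b)^2\big]
+ \tfrac{1}{2}\,\mathbb{E}_{z \sim p_z}\!\big[(D(G(z)) - a)^2\big],
\qquad
\min_G \; \tfrac{1}{2}\,\mathbb{E}_{z \sim p_z}\!\big[(D(G(z)) - c)^2\big].
\]

Under the conditions b - c = 1 and b - a = 2 (and an optimal discriminator), minimizing the generator objective amounts to minimizing the Pearson χ^2 divergence between p_data + p_g and 2 p_g.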

Improved Training of Generative Adversarial Networks Using Representative Features [article]

Duhyeon Bang, Hyunjung Shim
2018 arXiv   pre-print
Because the AE learns to minimize the forward KL divergence, our GAN training with representative features is influenced by both the reverse and the forward KL divergence.  ...  Focusing on the fact that the standard GAN minimizes the reverse Kullback-Leibler (KL) divergence, we transfer the representative feature, which is extracted from the data distribution using a pre-trained autoencoder  ...  The proposed RFGAN optimizes the reverse KL divergence because the framework is built upon a non-saturating GAN.  ... 
arXiv:1801.09195v3 fatcat:k5yhell74badvcfexbe4qoy63a
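For readers comparing the two directions mentioned above, the forward and reverse KL divergences between the data distribution p_data and the generator distribution p_g are (standard definitions, not notation from the paper):

\[
\mathrm{KL}(p_{\mathrm{data}} \,\|\, p_g) = \mathbb{E}_{x \sim p_{\mathrm{data}}}\!\left[\log \frac{p_{\mathrm{data}}(x)}{p_g(x)}\right]
\;\; \text{(forward)},
\qquad
\mathrm{KL}(p_g \,\|\, p_{\mathrm{data}}) = \mathbb{E}_{x \sim p_g}\!\left[\log \frac{p_g(x)}{p_{\mathrm{data}}(x)}\right]
\;\; \text{(reverse)}.
\]

The reverse direction is commonly described as mode-seeking (it tolerates dropping modes of p_data), while the forward direction is mode-covering, which is the intuition behind mixing the two signals.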

MGGAN: Solving Mode Collapse using Manifold Guided Training [article]

Duhyeon Bang, Hyunjung Shim
2018 arXiv   pre-print
Experimental analysis justifies that the proposed algorithm is an effective and efficient tool for training GANs.  ...  Mode collapse is a critical problem in training generative adversarial networks.  ...  [5] point out that the reverse KL divergence is vulnerable to mode collapse in the non-saturating GAN.  ... 
arXiv:1804.04391v1 fatcat:fottf6doijgtxlwbkvuc6tstly

Sample weighting as an explanation for mode collapse in generative adversarial networks [article]

Aksel Wilhelm Wold Eide, Eilif Solberg, Ingebjørg Kåsen
2020 arXiv   pre-print
Generative adversarial networks were introduced with a logistic MiniMax cost formulation, which normally fails to train due to saturation, and a Non-Saturating reformulation.  ...  We design MM-nsat, which preserves MM-GAN sample weighting while avoiding saturation by rescaling the MM-GAN minibatch gradient such that its magnitude approximates NS-GAN's gradient magnitude.  ...  We see that our version of minimax non-saturation trains well.  ... 
arXiv:2010.02035v1 fatcat:yfbyz77bozg2pg22demd6oh53q
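One literal reading of the rescaling described in the abstract, with g_MM and g_NS denoting the generator's minibatch gradients under the minimax and non-saturating losses; this is an assumed form for illustration, not an equation taken from the paper:

\[
\tilde{g} \;=\; \frac{\lVert g_{\mathrm{NS}} \rVert}{\lVert g_{\mathrm{MM}} \rVert}\; g_{\mathrm{MM}},
\]

so the update keeps the MM-GAN direction, and hence its per-sample weighting, while approximately matching the NS-GAN gradient magnitude.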

Generative Adversarial Networks (GANs): What it can generate and What it cannot? [article]

P Manisha, Sujit Gujar
2019 arXiv   pre-print
These models suffer from issues like mode collapse, non-convergence, and instability during training.  ...  With a straightforward implementation and outstanding results, GANs have been used for numerous applications. Despite the success, GANs lack a proper theoretical explanation.  ...  On Convergence and Stability of GANs: the authors in [12] view the GAN objective as regret minimization as opposed to divergence minimization.  ... 
arXiv:1804.00140v2 fatcat:4l64cjgtenhl7dipkjz3ssygdy

Smoothness and Stability in GANs [article]

Casey Chu, Kentaro Minami, Kenji Fukumizu
2020 arXiv   pre-print
In particular, we derive conditions that guarantee eventual stationarity of the generator when it is trained with gradient descent, conditions that must be satisfied by the divergence that is minimized  ...  Generative adversarial networks, or GANs, commonly display unstable behavior during training.  ...  Meanwhile, the non-saturating GAN (Goodfellow et al., 2014) has been shown to minimize a certain Kullback-Leibler divergence.  ... 
arXiv:2002.04185v1 fatcat:u6haramkrfbbfd27ihom2kix7e
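One candidate for the "certain Kullback-Leibler divergence" mentioned in the last snippet is the decomposition of Arjovsky and Bottou (2017), which, assuming an optimal discriminator D^* for the current generator G_θ, relates the non-saturating generator gradient to KL and JSD terms; whether this is the exact result the authors cite is an assumption on my part:

\[
\mathbb{E}_{z \sim p_z}\!\big[-\nabla_\theta \log D^{*}(G_\theta(z))\big]
= \nabla_\theta\!\left[\,\mathrm{KL}\big(p_{g_\theta} \,\|\, p_{\mathrm{data}}\big) - 2\,\mathrm{JSD}\big(p_{g_\theta} \,\|\, p_{\mathrm{data}}\big)\right].
\]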

Parametric Adversarial Divergences are Good Task Losses for Generative Modeling [article]

Gabriel Huang, Hugo Berard, Ahmed Touati, Gauthier Gidel, Pascal Vincent, Simon Lacoste-Julien
2018 arXiv   pre-print
We use two common divergences to train a generator and show that the parametric divergence outperforms the nonparametric divergence on both the qualitative and the quantitative task.  ...  We refer to those task losses as parametric adversarial divergences and we give two main reasons why we think parametric divergences are good learning objectives for generative modeling.  ...  For that purpose, we adopt the view that training a GAN can be seen as training an implicit generator to minimize a special type of task loss, which is a parametric (adversarial) divergence: Div(p||q  ... 
arXiv:1708.02511v3 fatcat:zvdrmpat6nhvvkydnuwbhe2vam
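As a rough paraphrase of the notion (not the paper's exact definition), a parametric adversarial divergence restricts the discriminator to a parametric family F, such as a neural network class, inside a variational divergence bound:

\[
\mathrm{Div}_{\mathcal{F}}(p \,\|\, q_\theta)
= \sup_{f \in \mathcal{F}} \; \mathbb{E}_{x \sim p}\big[\phi_1(f(x))\big] + \mathbb{E}_{x \sim q_\theta}\big[\phi_2(f(x))\big],
\]

where the choice of φ1 and φ2 recovers particular GAN losses (log-sigmoid terms for the standard GAN, identity and negation for IPM-style critics). Restricting F is what makes the divergence parametric, as opposed to the nonparametric divergence it lower-bounds.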

A Review on Generative Adversarial Networks: Algorithms, Theory, and Applications [article]

Jie Gui, Zhenan Sun, Yonggang Wen, Dacheng Tao, Jieping Ye
2020 arXiv   pre-print
Furthermore, GANs have been combined with other machine learning algorithms for specific applications, such as semi-supervised learning, transfer learning, and reinforcement learning.  ...  This paper compares the commonalities and differences of these GAN methods. Secondly, theoretical issues related to GANs are investigated.  ...  ACKNOWLEDGMENTS The authors would like to thank the NetEase course taught by Shuang Yang, Ian Goodfellow's invited talk at AAAI 19, CVPR 2018 tutorial on GANs, Sebastian Nowozin's MLSS 2018 GAN lecture  ... 
arXiv:2001.06937v1 fatcat:4iqb2vnhezgjnphfv3taej7vbu

Least Squares Generative Adversarial Networks [article]

Xudong Mao, Qing Li, Haoran Xie, Raymond Y.K. Lau, Zhen Wang and Stephen Paul Smolley
2017 arXiv   pre-print
We show that minimizing the objective function of LSGAN is equivalent to minimizing the Pearson χ^2 divergence. There are two benefits of LSGANs over regular GANs.  ...  Regular GANs treat the discriminator as a classifier with the sigmoid cross-entropy loss function.  ...  Relation to f-divergence: in the original GAN paper [7], the authors have shown that minimizing Equation 1 is equivalent to minimizing the Jensen-Shannon divergence (Equation 4).  ... 
arXiv:1611.04076v3 fatcat:bdgwe37aabbhnfdpwqzcfg3uqm
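The Jensen-Shannon result cited in the last snippet (presumably the content of the paper's Equation 4) is the standard one from Goodfellow et al. (2014): with an optimal discriminator, the generator objective of the original GAN reduces to

\[
C(G) = -\log 4 + 2\,\mathrm{JSD}\big(p_{\mathrm{data}} \,\|\, p_g\big),
\]

so minimizing it is equivalent to minimizing the Jensen-Shannon divergence between p_data and p_g; LSGAN replaces the cross-entropy terms with least squares, which leads to the Pearson χ^2 connection sketched earlier in this listing.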

Evolutionary Generative Adversarial Networks [article]

Chaoyue Wang, Chang Xu, Xin Yao, Dacheng Tao
2018 arXiv   pre-print
However, existing GANs (GAN and its variants) tend to suffer from training problems such as instability and mode collapse.  ...  Unlike existing GANs, which employ a pre-defined adversarial objective function to alternately train a generator and a discriminator, we utilize different adversarial training objectives as mutation operations  ...  The original GAN uses the Jensen-Shannon divergence as the metric.  ... 
arXiv:1803.00657v1 fatcat:ngoz424hcrhtxc4yddyhw4sx2e

Properties of f-divergences and f-GAN training [article]

Matt Shannon
2020 arXiv   pre-print
In this technical report we describe some properties of f-divergences and f-GAN training. We present an elementary derivation of the f-divergence lower bounds which form the basis of f-GAN training.  ...  We derive informative but perhaps underappreciated properties of f-divergences and f-GAN training, including a gradient matching property and the fact that all f-divergences agree up to an overall scale  ...  This is the divergence approximately minimized by conventional non-saturating GAN training (Shannon et al., 2020). [Flattened table of f-divergences (Pearson χ^2, Neyman χ^2, softened reverse KL, ...) omitted]  ... 
arXiv:2009.00757v1 fatcat:xh3mkhck3vbuzgo6nmjzxtvybq
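For reference, the f-divergence definition and the variational lower bound underlying f-GAN training, in the standard form of Nowozin et al. (2016); the report's own notation may differ:

\[
D_f(P \,\|\, Q) = \int q(x)\, f\!\left(\frac{p(x)}{q(x)}\right) dx,
\qquad
D_f(P \,\|\, Q) \;\ge\; \sup_{T} \; \mathbb{E}_{x \sim P}\big[T(x)\big] - \mathbb{E}_{x \sim Q}\big[f^{*}(T(x))\big],
\]

where f is convex with f(1) = 0 and f^{*} is its convex conjugate. f-GAN training maximizes the right-hand side over a parametric discriminator T while the generator minimizes it, which is the sense in which the lower bounds "form the basis of f-GAN training".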
Showing results 1 — 15 out of 2,001 results