6 Hits in 2.4 sec

Coulomb GANs: Provably Optimal Nash Equilibria via Potential Fields [article]

Thomas Unterthiner, Bernhard Nessler, Calvin Seward, Günter Klambauer, Martin Heusel, Hubert Ramsauer, Sepp Hochreiter
2018 arXiv   pre-print
We prove that Coulomb GANs possess only one Nash equilibrium which is optimal in the sense that the model distribution equals the target distribution.  ...  We introduce Coulomb GANs, which pose the GAN learning problem as a potential field of charged particles, where generated samples are attracted to training set samples but repel each other.  ...  Ultimately, the Coulomb GAN aims to make the potential Φ zero everywhere via the field E(a), which is the negative gradient of Φ.  ... 
arXiv:1708.08819v3 fatcat:q7fwyayipnb3ro4lcrkjwab5ta
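The snippet above describes the core mechanism: each generated sample sits in a potential Φ that is positive near real samples (attraction) and negative near other generated samples (repulsion), and training drives Φ toward zero everywhere. A minimal NumPy sketch of that empirical potential follows; the Plummer-style kernel form, the `dim` and `eps` values, and all function names here are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def plummer_kernel(a, b, dim=3, eps=1.0):
    """Plummer-style kernel 1 / (||a-b||^2 + eps^2)^((dim-2)/2).

    `dim` and `eps` are illustrative choices, not values from the paper.
    """
    d2 = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
    return 1.0 / (d2 + eps ** 2) ** ((dim - 2) / 2.0)

def potential(points, real, fake, dim=3, eps=1.0):
    """Empirical potential Phi at `points`: attraction toward real
    samples minus repulsion from generated samples. When the generated
    and real empirical distributions coincide, Phi vanishes."""
    attract = plummer_kernel(points, real, dim, eps).mean(axis=1)
    repel = plummer_kernel(points, fake, dim, eps).mean(axis=1)
    return attract - repel
```

In this toy setting, feeding the same sample set as both `real` and `fake` yields a potential of exactly zero at every point, mirroring the optimality condition the abstract states; the field driving the generator would then be the negative gradient of this Φ.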

Smoothness and Stability in GANs [article]

Casey Chu, Kentaro Minami, Kenji Fukumizu
2020 arXiv   pre-print
Using tools from convex analysis, optimal transport, and reproducing kernels, we construct a GAN that fulfills these conditions simultaneously.  ...  Generative adversarial networks, or GANs, commonly display unstable behavior during training.  ...  Coulomb GANs: Provably optimal Nash equilibria via potential fields. In International Conference on Learning Representations, 2018. Xingyu Zhou.  ... 
arXiv:2002.04185v1 fatcat:u6haramkrfbbfd27ihom2kix7e

Improving MMD-GAN Training with Repulsive Loss Function

Wei Wang, Yuan Sun, Saman Halgamuge
2018 Zenodo  
This study revisits MMD-GAN that uses the maximum mean discrepancy (MMD) as the loss function for GAN and makes two contributions.  ...  Second, inspired by the hinge loss, we propose a bounded Gaussian kernel to stabilize the training of MMD-GAN with the repulsive loss function.  ...  Coulomb GANs: Provably optimal Nash equilibria via potential fields. In ICLR, 2018. Laurens van der Maaten. Accelerating t-SNE using tree-based algorithms. J. Mach. Learn.  ... 
doi:10.5281/zenodo.2557014 fatcat:zouysx24kbb4fbqi4gez65x3hm
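The two contributions this entry mentions are a repulsive reformulation of the MMD loss and a bounded Gaussian kernel that keeps kernel values (and gradients) from saturating. The sketch below illustrates the general shape of those ideas with a biased empirical MMD² estimator; the clipping bounds, `sigma`, and function names are my illustrative assumptions, not the specific bounds or loss rearrangement from the paper.

```python
import numpy as np

def bounded_gaussian_kernel(x, y, sigma=1.0, lower=0.25, upper=4.0):
    """Gaussian kernel with the squared distance clipped to [lower, upper]
    before exponentiation, so the kernel value and its gradient stay
    bounded away from 0 and saturation. Bounds here are illustrative."""
    d2 = np.sum((x[:, None, :] - y[None, :, :]) ** 2, axis=-1)
    d2 = np.clip(d2, lower, upper)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def mmd2(x, y, kernel=bounded_gaussian_kernel):
    """Biased empirical MMD^2 between sample sets x and y:
    mean k(x,x) + mean k(y,y) - 2 * mean k(x,y)."""
    return (kernel(x, x).mean() + kernel(y, y).mean()
            - 2.0 * kernel(x, y).mean())
```

With identical sample sets the estimator is exactly zero, and it grows as the two sets separate; a repulsive discriminator loss would reweight these attraction and repulsion terms rather than use the plain estimator shown here.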

Improving MMD-GAN Training with Repulsive Loss Function [article]

Wei Wang, Yuan Sun, Saman Halgamuge
2019 arXiv   pre-print
This study revisits MMD-GAN that uses the maximum mean discrepancy (MMD) as the loss function for GAN and makes two contributions.  ...  Second, inspired by the hinge loss, we propose a bounded Gaussian kernel to stabilize the training of MMD-GAN with the repulsive loss function.  ...  Coulomb GANs: Provably optimal Nash equilibria via potential fields. In ICLR, 2018. Laurens van der Maaten. Accelerating t-SNE using tree-based algorithms. J. Mach. Learn. Res.  ... 
arXiv:1812.09916v4 fatcat:xln6zen5ujco3c2vsbxoynbz3i

A Large-Scale Study on Regularization and Normalization in GANs [article]

Karol Kurach, Mario Lucic, Xiaohua Zhai, Marcin Michalski, Sylvain Gelly
2019 arXiv   pre-print
In this work we take a sober view of the current state of GANs from a practical perspective.  ...  Generative adversarial networks (GANs) are a class of deep generative models which aim to learn a target distribution in an unsupervised fashion.  ...  Coulomb GANs: Provably Optimal Nash Equilibria via Potential Fields. In International Conference on Learning Representations, 2018. Yu, F., Zhang, Y., Song, S., Seff, A., and Xiao, J.  ... 
arXiv:1807.04720v3 fatcat:hrfovoechnfg3bfxououq7zj2y

Improving MMD-GAN Training with Repulsive Loss Function

Wei Wang, Yuan Sun, Saman Halgamuge
2019
This study revisits MMD-GAN that uses the maximum mean discrepancy (MMD) as the loss function for GAN and makes two contributions.  ...  The code is available at https://github.com/richardwth/MMD-GAN.  ...  Coulomb GANs: Provably optimal Nash equilibria via potential fields. In ICLR, 2018. Laurens van der Maaten. Accelerating t-SNE using tree-based algorithms. J. Mach. Learn. Res.  ... 
doi:10.26188/5c382ae182a7d fatcat:dg23jap2ijh4ngss3cwxybznnq