4,236 Hits in 4.5 sec

Cost Ensemble with Gradient Selecting for GANs

Minghui Liu, Jiali Deng, Meiyi Yang, Xuan Cheng, Nianbo Liu, Ming Liu, Xiaomin Wang
2022 Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence   unpublished
In contrast, multiple discriminators with different cost functions can yield various gradients for the generator, which indicates we can use them to search for more transportation maps in the latent space  ...  Thus, a gradient selecting mechanism is also proposed to pick up proper gradients.  ...  [Fragment of Algorithm 1, "The gradient selecting mechanism for CES-GAN in each training iteration"; input: the prior noise fed to the generator follows a random Gaussian distribution with 128  ...]
doi:10.24963/ijcai.2022/164 fatcat:j4ti2t6rivcdjkb33qrtymygs4
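
As a hedged illustration of this snippet's idea, the sketch below lets several discriminators, each with its own cost function, propose a gradient for the generator and keeps one proposal. The keep-the-largest-norm rule, the function names, and the PyTorch setting are assumptions; the paper's actual selection mechanism (Algorithm 1) is not given in the snippet.

```python
import torch

def select_generator_gradient(generator, discriminators, loss_fns, z):
    # Each discriminator, paired with its own cost function, proposes a
    # gradient for the generator; one proposal is kept. The largest-norm
    # rule below is an assumed stand-in for the paper's selection mechanism.
    fake = generator(z)
    proposals = []
    for d, loss_fn in zip(discriminators, loss_fns):
        g_loss = loss_fn(d(fake))
        grads = torch.autograd.grad(g_loss, list(generator.parameters()),
                                    retain_graph=True)
        norm = torch.sqrt(sum(g.pow(2).sum() for g in grads)).item()
        proposals.append((norm, grads))
    _, best = max(proposals, key=lambda p: p[0])
    for p, g in zip(generator.parameters(), best):
        p.grad = g.clone()  # leave the optimizer step to the caller
```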

Analyzing the Components of Distributed Coevolutionary GAN Training [article]

Jamal Toutouh, Erik Hemberg, Una-May O'Reilly
2020 arXiv   pre-print
Distributed coevolutionary Generative Adversarial Network (GAN) training has empirically shown success in overcoming GAN training pathologies.  ...  In addition, migrating solutions without applying selection in the sub-populations achieves competitive results, while selection without communication between cells reduces performance.  ...  [Algorithm fragment: for g, d ∈ g × d do (evaluate all updated GAN pairs); L_{g,d} ← evaluate(g, d, B); L_g ← average(L_{·,d})] Lipizzaner Ablations Analyzed: we conduct an ablation analysis  ...
arXiv:2008.01124v1 fatcat:gnbjzilrp5bwvcq6bqq7d4yifm
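
The all-pairs evaluation step visible in the snippet can be sketched as follows; the `evaluate` callable, the dictionary layout, and the averaging direction are assumptions based only on the fragment above, not on the framework's implementation.

```python
import itertools
import statistics

def evaluate_cell(generators, discriminators, batch, evaluate):
    # L_{g,d} <- evaluate(g, d, B) for every updated GAN pair in the cell,
    # then average each generator's losses over all discriminators.
    loss = {}
    for g, d in itertools.product(generators, discriminators):
        loss[(g, d)] = evaluate(g, d, batch)
    return {g: statistics.mean(loss[(g, d)] for d in discriminators)
            for g in generators}
```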

Data Dieting in GAN Training [article]

Jamal Toutouh, Una-May O'Reilly, Erik Hemberg
2020 arXiv   pre-print
We investigate training Generative Adversarial Networks, GANs, with less data.  ...  In addition to considering stand-alone GAN training and ensembles of generator models, we also consider reduced data training on an evolutionary GAN training framework named Redux-Lipizzaner.  ...  [Algorithm fragment: for g, d ∈ g × d do (evaluate all GAN pairs); L_{g,d} ← evaluate(g, d, B); end for; g, d ← select(n, τ), tournament selection with minimum loss L as fitness]  ...
arXiv:2004.04642v1 fatcat:46sccshdrfgarebf77twxq2yvm
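
The tournament-selection step named in this snippet (minimum loss as fitness) might look like the sketch below; reading select(n, τ) as "n draws of τ contestants" is an assumption.

```python
import random

def tournament_select(population, losses, n, tau):
    # Repeat n times: draw tau contestants at random and keep the one with
    # minimum loss L as fitness (the select(n, tau) step in the snippet).
    winners = []
    for _ in range(n):
        contestants = random.sample(population, tau)
        winners.append(min(contestants, key=lambda ind: losses[ind]))
    return winners
```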

Progressive-Growing of Generative Adversarial Networks for Metasurface Optimization [article]

Fufang Wen, Jiaqi Jiang, Jonathan A. Fan
2019 arXiv   pre-print
Our results indicate that with this training methodology, the best generated devices have performances that compare well with the best devices produced by gradient-based topology optimization, thereby  ...  eliminating the need for additional design refinement.  ...  We employ Wasserstein loss with a gradient penalty (λ = 10) for the discriminator loss [31].  ...
arXiv:1911.13029v2 fatcat:cfghwdwvjjf4pimv5cgdjpn4he
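
The snippet pins down the discriminator objective as the standard Wasserstein loss with gradient penalty at λ = 10; a minimal PyTorch sketch of that penalty term, assuming 4-D image batches:

```python
import torch

def gradient_penalty(discriminator, real, fake, lam=10.0):
    # Penalize deviation of the critic's gradient norm from 1 at points
    # interpolated between real and generated samples (WGAN-GP, lambda=10).
    eps = torch.rand(real.size(0), 1, 1, 1, device=real.device)
    mixed = (eps * real + (1 - eps) * fake).requires_grad_(True)
    scores = discriminator(mixed)
    grads, = torch.autograd.grad(scores.sum(), mixed, create_graph=True)
    norms = grads.flatten(1).norm(2, dim=1)
    return lam * ((norms - 1) ** 2).mean()
```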

Proposing a novel Cascade Ensemble Super Resolution Generative Adversarial Network (CESR-GAN) method for the reconstruction of super-resolution skin lesion images

Ali Shahsavari, Sima Ranjbari, Toktam Khatibi
2021 Informatics in Medicine Unlocked  
Conclusion: The CESR-GAN method can be used to generate super-resolution images of skin lesions with notable performance.  ...  Materials and methods: In this paper, a novel Cascade Ensemble Super Resolution Generative Adversarial Network (CESR-GAN) method is proposed to reconstruct super-resolution skin lesion images using low-resolution  ...  One of the issues with a GAN-based model is its high computational cost. Therefore, to address this limitation, we have constructed an ensemble of a few networks with a small number of iterations.  ...
doi:10.1016/j.imu.2021.100628 fatcat:3hzhozogrzhthlqbecpgizd3ia
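
A hedged sketch of the ensembling idea the snippet describes (a few briefly-trained networks combined into one reconstruction); the output-averaging rule is an assumption, since the paper's cascade scheme is not visible in the snippet.

```python
import torch

def ensemble_super_resolve(generators, low_res):
    # Combine the reconstructions of several briefly-trained SR generators
    # by averaging; the combination rule is assumed, not the paper's.
    with torch.no_grad():
        outputs = [g(low_res) for g in generators]
    return torch.stack(outputs).mean(dim=0)
```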

PATE-AAE: Incorporating Adversarial Autoencoder into Private Aggregation of Teacher Ensembles for Spoken Command Classification [article]

Chao-Han Huck Yang, Sabato Marco Siniscalchi, Chin-Hui Lee
2021 arXiv   pre-print
We propose using an adversarial autoencoder (AAE) to replace the generative adversarial network (GAN) in the private aggregation of teacher ensembles (PATE), a solution for ensuring differential privacy in  ...  solutions, namely PATE-GAN and DP-GAN, while maintaining a strong privacy target of ε=0.01 with a fixed δ=10^-5.  ...  Neither the prediction outputs nor the trained student model's internal parameters are exposed to queries, so the privacy cost is associated only with acquiring the training data for the student  ...
arXiv:2104.01271v2 fatcat:jhcq47sqe5dzveorlvwhbjbfs4
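
PATE, which this entry builds on, privately aggregates teacher predictions; a minimal sketch of the generic noisy-max aggregation follows, with `gamma` an assumed noise scale that is unrelated to the paper's ε = 0.01 target.

```python
import numpy as np

def pate_aggregate(teacher_votes, num_classes, gamma=0.05):
    # Add Laplace noise to per-class teacher vote counts and return the
    # argmax as the privately aggregated label (generic PATE noisy-max).
    counts = np.bincount(teacher_votes, minlength=num_classes).astype(float)
    counts += np.random.laplace(loc=0.0, scale=1.0 / gamma, size=num_classes)
    return int(np.argmax(counts))
```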

Adversarial-Based Knowledge Distillation for Multi-Model Ensemble and Noisy Data Refinement [article]

Zhiqiang Shen and Zhankui He and Wanyun Cui and Jiahui Yu and Yutong Zheng and Chenchen Zhu and Marios Savvides
2019 arXiv   pre-print
In this paper, we focus on tackling the challenges that accompany two different image recognition problems: multi-model ensemble and noisy data recognition, within a unified framework.  ...  In this paper, we present a method for compressing large, complex trained ensembles into a single network, where the knowledge from a variety of trained deep neural networks (DNNs) is distilled and transferred  ...  Huang et al. [10] employ models at different local minima for ensembling, which incurs no additional training cost, but the computational FLOPs at test time increase linearly with more ensembles.  ...
arXiv:1908.08520v1 fatcat:amvxtn7wuva3jpjr2fn6brluoq
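
Compressing an ensemble into a single network is typically done with a temperature-scaled distillation loss; a minimal sketch of that generic objective (the paper's adversarial variant adds a discriminator, omitted here; `T` is an assumed temperature):

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, T=4.0):
    # KL divergence between temperature-softened teacher (ensemble) and
    # student distributions, scaled by T^2 as is conventional.
    soft_teacher = F.softmax(teacher_logits / T, dim=1)
    log_student = F.log_softmax(student_logits / T, dim=1)
    return F.kl_div(log_student, soft_teacher, reduction="batchmean") * (T * T)
```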

Adversarial Semantic Alignment for Improved Image Captions [article]

Pierre L. Dognin, Igor Melnyk, Youssef Mroueh, Jarret Ross, Tom Sercu
2019 arXiv   pre-print
The OOC set and our semantic score are the proposed new diagnosis tools for the captioning community.  ...  discrete GAN training.  ...  We would like to thank Chetan Mishra for his help in getting ground truth captions for our own OOC dataset using Amazon MTurk.  ...
arXiv:1805.00063v3 fatcat:s2jlvhismjhevh7seyzw4ayoci

Stochastic Deconvolutional Neural Network Ensemble Training on Generative Pseudo-Adversarial Networks [article]

Alexey Chaplygin, Joshua Chacksfield
2018 arXiv   pre-print
Stochastic ensembling is suggested as a method for improving stability while training GANs.  ...  Training GANs requires a careful selection of the architecture used, along with a variety of other methods to improve training.  ...  This leaves the loss of the generator effectively infinite, which results in undefined gradients and makes it impossible for training to progress further.  ...
arXiv:1802.02436v1 fatcat:xnb6mntpzrhwhax67d6uayg6mu

Signal propagation in a gradient-based and evolutionary learning system

Jamal Toutouh, Una-May O'Reilly
2021 Proceedings of the Genetic and Evolutionary Computation Conference  
Generative adversarial networks (GANs) exhibit training pathologies that can lead to convergence-related degenerative behaviors, whereas spatially-distributed, coevolutionary algorithms (CEAs) for GAN  ...  Thus, Lipi-Ring offers an alternative to Lipizzaner when the computational cost of training matters.  ...  Gradient-based learning is used to update the weights of the networks while evolutionary selection and variation are used for hyperparameter learning [1, 24].  ...
doi:10.1145/3449639.3459319 fatcat:fhn7wgfkrbfvjewcprfgw3xlkm
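
The division of labor in the last fragment (gradients update network weights, evolution handles hyperparameters) can be sketched as below; the single-offspring scheme, `sigma`, and the `cell`/`train_step` interface are assumptions, not Lipizzaner's or Lipi-Ring's API.

```python
import copy
import random

def evolve_hyperparameters(cell, train_step, batch, sigma=0.2):
    # Gradient-based learning updates the weights (inside train_step),
    # while evolutionary variation and selection act on the learning rate.
    child = copy.deepcopy(cell)
    child.learning_rate *= 1.0 + random.gauss(0.0, sigma)  # variation
    parent_loss = train_step(cell, batch)                  # gradient step
    child_loss = train_step(child, batch)
    return child if child_loss < parent_loss else cell     # selection
```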

Stochastic Super-Resolution for Downscaling Time-Evolving Atmospheric Fields With a Generative Adversarial Network

Jussi Leinonen, Daniele Nerini, Alexis Berne
2020 IEEE Transactions on Geoscience and Remote Sensing  
The ability of conditional GANs to generate an ensemble of solutions for a given input lends itself naturally to stochastic downscaling, but the stochastic nature of GANs is not usually considered in super-resolution  ...  Here, we introduce a recurrent, stochastic super-resolution GAN that can generate ensembles of time-evolving high-resolution atmospheric fields for an input consisting of a low-resolution sequence of images  ...  ACKNOWLEDGMENT The authors would like to thank MeteoSwiss for providing the MCH-RZC data and Dr. D. Wolfensberger for assisting us with using it.  ... 
doi:10.1109/tgrs.2020.3032790 fatcat:chndcvj47bfbrdin3dkje6hkba
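
The ensemble-of-solutions property the snippet highlights comes from resampling the conditional GAN's noise input while holding the low-resolution conditioning fixed; a minimal sketch, with the latent size and the generator's (input, noise) signature assumed:

```python
import torch

def stochastic_downscaling_ensemble(generator, low_res_seq, n_members=8):
    # Fix the low-resolution input and draw several noise realizations to
    # obtain an ensemble of plausible high-resolution fields.
    members = []
    with torch.no_grad():
        for _ in range(n_members):
            z = torch.randn(low_res_seq.size(0), 64)  # assumed latent size
            members.append(generator(low_res_seq, z))
    return torch.stack(members)  # (n_members, batch, ...) ensemble
```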

AdvGAN++: Harnessing Latent Layers for Adversary Generation

Surgan Jandial, Puneet Mangla, Sakshi Varshney, Vineeth Balasubramanian
2019 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW)  
Recently proposed AdvGAN, a GAN based approach, takes an input image as a prior for generating adversaries to target a model.  ...  In this work, we show how latent features can serve as better priors than input images for adversary generation by proposing AdvGAN++, a version of AdvGAN that achieves higher attack rates than AdvGAN  ...  Gradient-based attack methods like the Fast Gradient Sign Method (FGSM) obtain an optimal max-norm constrained perturbation η = ε · sign(∇_x J(θ, x, y)) (1), where J is the cost  ...
doi:10.1109/iccvw.2019.00257 dblp:conf/iccvw/JandialMVB19 fatcat:c4advuefhbbtzp5t4jfeieaouy
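
Equation (1) in the snippet is the standard FGSM perturbation; a minimal sketch, with `epsilon` an assumed budget (the snippet truncates before giving any value):

```python
import torch
import torch.nn.functional as F

def fgsm_perturbation(model, x, y, epsilon=0.03):
    # eta = epsilon * sign(grad_x J(theta, x, y)), the max-norm constrained
    # perturbation from Eq. (1).
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)   # J(theta, x, y)
    loss.backward()
    eta = epsilon * x.grad.sign()
    return (x + eta).detach()
```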

AdvGAN++ : Harnessing latent layers for adversary generation [article]

Puneet Mangla, Surgan Jandial, Sakshi Varshney, Vineeth N Balasubramanian
2019 arXiv   pre-print
Recently proposed AdvGAN, a GAN based approach, takes an input image as a prior for generating adversaries to target a model.  ...  In this work, we show how latent features can serve as better priors than input images for adversary generation by proposing AdvGAN++, a version of AdvGAN that achieves higher attack rates than AdvGAN  ...  Gradient-based attack methods like the Fast Gradient Sign Method (FGSM) obtain an optimal max-norm constrained perturbation η = ε · sign(∇_x J(θ, x, y)) (1), where J is the cost  ...
arXiv:1908.00706v2 fatcat:prlllert25b4zfoutqsgbyltm4

Unrolled Generative Adversarial Networks [article]

Luke Metz, Ben Poole, David Pfau, Jascha Sohl-Dickstein
2017 arXiv   pre-print
We introduce a method to stabilize Generative Adversarial Networks (GANs) by defining the generator objective with respect to an unrolled optimization of the discriminator.  ...  We show how this technique solves the common problem of mode collapse, stabilizes training of GANs with complex recurrent generators, and increases diversity and coverage of the data distribution by the  ...  We looked at 3 configurations: without stop gradients (the vanilla unrolled GAN); with stop gradients; and with stop gradients but taking the average over the k unrolling steps instead of the final value  ...
arXiv:1611.02163v4 fatcat:likkc2vogjghvpgfleg76ciafe
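
The unrolling idea in the abstract (defining the generator objective against a discriminator optimized for k extra steps) can be sketched as follows; this version stops gradients through the unrolling, one of the three configurations listed above, and k, d_lr, and the loss-function interface are assumptions.

```python
import copy
import torch

def unrolled_generator_loss(generator, discriminator, d_loss_fn, g_loss_fn,
                            real, z, k=5, d_lr=1e-3):
    # Copy the discriminator, apply k gradient steps to the copy, and define
    # the generator's objective against the unrolled copy (stop-gradient
    # variant: no backprop through the unrolling itself).
    d_copy = copy.deepcopy(discriminator)
    opt = torch.optim.SGD(d_copy.parameters(), lr=d_lr)
    for _ in range(k):
        opt.zero_grad()
        d_loss = d_loss_fn(d_copy(real), d_copy(generator(z).detach()))
        d_loss.backward()
        opt.step()
    return g_loss_fn(d_copy(generator(z)))  # generator loss vs. unrolled D
```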

Signal Propagation in a Gradient-Based and Evolutionary Learning System [article]

Jamal Toutouh, Una-May O'Reilly
2021 arXiv   pre-print
Generative adversarial networks (GANs) exhibit training pathologies that can lead to convergence-related degenerative behaviors, whereas spatially-distributed, coevolutionary algorithms (CEAs) for GAN  ...  Thus, Lipi-Ring offers an alternative to Lipizzaner when the computational cost of training matters.  ...  [Algorithm fragment, symbols lost in extraction: return the cell with the best generator mixture (Algorithm 2); evaluate the GAN; tournament selection; loop over the batches]  ...
arXiv:2102.08929v1 fatcat:m3pm4hvq7ngyrhdbbmp4ajrhkq
Showing results 1 — 15 out of 4,236 results