On Adversarial Mixup Resynthesis
[article]
2019
arXiv
pre-print
We explore models that are capable of combining the attributes of multiple inputs such that a resynthesised output is trained to fool an adversarial discriminator for real versus synthesised data. ...
Figure 2: The unsupervised version of adversarial mixup resynthesis (AMR). ...
AE+GAN = adversarial reconstruction auto-encoder (Equation 2); AMR = adversarial mixup resynthesis (ours); ACAI = adversarially constrained auto-encoder interpolation (Berthelot* et al., 2019). ...
arXiv:1903.02709v4
fatcat:5cpnsyxp75fnrfynxrayer5j5a
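As a gloss on the abstract above: a minimal PyTorch-style sketch of the latent mixing it describes, where tensor shapes and names are hypothetical and only the convex and binary mixing functions mirror the paper's mechanism.

    import torch

    def mix_convex(h1, h2, alpha):
        # Convex combination of two latent codes.
        return alpha * h1 + (1 - alpha) * h2

    def mix_binary(h1, h2, mask):
        # Binary selection: each latent unit is taken from one of the two codes.
        return mask * h1 + (1 - mask) * h2

    h1, h2 = torch.randn(8, 64), torch.randn(8, 64)  # stand-ins for encoder outputs
    alpha = torch.rand(8, 1)                         # per-example mixing weight
    mask = torch.randint(0, 2, (8, 64)).float()      # per-unit binary mask
    h_mix = mix_binary(h1, h2, mask)
    # A decoder would resynthesize h_mix, and a discriminator would score the
    # result against real data, as the abstract describes.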
On the benefits of defining vicinal distributions in latent space
[article]
2021
arXiv
pre-print
Our empirical studies on CIFAR-10, CIFAR-100, and Tiny-ImageNet demonstrate that models trained by performing mixup in the latent manifold learned by VAEs are inherently more robust to various input corruptions ...
We propose a new approach - VarMixup (Variational Mixup) - to better sample mixup images by using the latent manifold underlying the data. ...
Adversarial Mixup Resynthesis (Beckham et al., 2019) attempted mixing latent codes used by autoencoders through an arbitrary mixing mechanism that can recombine codes from different inputs to produce ...
arXiv:2003.06566v4
fatcat:c2jsf7bpb5cx3krvac3fxxs5bi
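A minimal sketch of the latent-space mixup this entry proposes, assuming a trained VAE that exposes encode/decode methods (a hypothetical interface; the paper's exact sampling procedure may differ):

    def latent_mixup(vae, x1, x2, y1, y2, lam):
        # Mix on the VAE's learned latent manifold rather than in pixel space.
        z1, z2 = vae.encode(x1), vae.encode(x2)
        x_mix = vae.decode(lam * z1 + (1 - lam) * z2)
        y_mix = lam * y1 + (1 - lam) * y2  # labels mixed as in standard mixup
        return x_mix, y_mix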
How Does Mixup Help With Robustness and Generalization?
[article]
2021
arXiv
pre-print
Mixup is a popular data augmentation technique based on taking convex combinations of pairs of examples and their labels. ...
For robustness, we show that minimizing the Mixup loss corresponds to approximately minimizing an upper bound of the adversarial loss. ...
The derived regularization terms are then used to demonstrate why Mixup has improved generalization and robustness against one-step adversarial examples. ...
arXiv:2010.04819v4
fatcat:ftapdhfseffffci2ztxcq546re
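For reference, the input-space mixup this entry analyzes reduces to a few lines, with the mixing weight drawn from a Beta(a, a) distribution as in Zhang et al. (2018):

    import numpy as np

    def mixup(x1, y1, x2, y2, a=1.0):
        lam = np.random.beta(a, a)         # mixing weight
        x_mix = lam * x1 + (1 - lam) * x2  # convex combination of the examples
        y_mix = lam * y1 + (1 - lam) * y2  # and of their (one-hot) labels
        return x_mix, y_mix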
Interpolation Consistency Training for Semi-supervised Learning
2019
Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence
Our experiments show that ICT achieves state-of-the-art performance when applied to standard neural network architectures on the CIFAR-10 and SVHN benchmark datasets. ...
In section 2.2, I briefly describe the working principle of auto-encoders, which is followed by the motivation and summary of the Adversarial Mixup Resynthesizer (Publication II). ...
In section 2.3, I describe how adversarial training leads to poor performance on unperturbed samples, followed by the justification and summary of Interpolated Adversarial Training (Publication III), which is a ...
doi:10.24963/ijcai.2019/504
dblp:conf/ijcai/VermaLKBL19
fatcat:bucjagfaybei5boup7b56yqngu
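A sketch of the interpolation consistency objective on unlabeled data, assuming a student network and a mean-teacher copy (module names are hypothetical): the student's prediction at an interpolated input is pushed toward the interpolation of the teacher's predictions.

    import torch
    import torch.nn.functional as F

    def ict_consistency(student, teacher, u1, u2, lam):
        # Targets come from the teacher and receive no gradient.
        with torch.no_grad():
            target = lam * teacher(u1) + (1 - lam) * teacher(u2)
        pred = student(lam * u1 + (1 - lam) * u2)
        return F.mse_loss(pred, target)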
Multi-SpectroGAN: High-Diversity and High-Fidelity Spectrogram Generation with Adversarial Style Combination for Speech Synthesis
[article]
2020
arXiv
pre-print
While generative adversarial network (GAN)-based neural text-to-speech (TTS) systems have shown significant improvement in neural speech synthesis, there is no TTS system that learns to synthesize speech from text sequences with only adversarial feedback. ...
Similar to (Beckham et al. 2019), which interpolates the hidden state of the autoencoder for adversarial mixup resynthesis, we use two types of mixing: binary selection between style embeddings, and manifold ...
arXiv:2012.07267v1
fatcat:cms2ugs23jhpvnneq57yjrk2fy
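The two mixing schemes named in the snippet above parallel the AMR sketch earlier in this list, here applied to style embeddings (tensor names are hypothetical): binary selection takes each unit from one of the two utterances' styles, while manifold mixing interpolates them.

    import torch

    def combine_styles(s1, s2, mode="manifold", lam=0.5):
        if mode == "binary":
            mask = torch.randint(0, 2, s1.shape).float()  # per-unit selection
            return mask * s1 + (1 - mask) * s2
        return lam * s1 + (1 - lam) * s2                  # convex interpolation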