
Dual Mixup Regularized Learning for Adversarial Domain Adaptation [article]

Yuan Wu, Diana Inkpen, Ahmed El-Roby
2020 arXiv   pre-print
but also enriches the intrinsic structures of the latent space.  ...  However, there are two issues with the existing methods.  ...  Generate to Adapt (GTA) provides an adversarial image generation approach that directly learns a joint feature space in which the distance between the source and target domains can be minimized [21].  ... 
arXiv:2007.03141v2 fatcat:fdmh32xdijerdiopgsbls3qvqi

Learning Flat Latent Manifolds with VAEs [article]

Nutan Chen, Alexej Klushyn, Francesco Ferroni, Justin Bayer, Patrick van der Smagt
2020 arXiv   pre-print
is estimated in a more compact latent space.  ...  This is achieved by defining the latent space as a Riemannian manifold and by regularising the metric tensor to be a scaled identity matrix.  ...  We extend mixup to the VAE framework (unsupervised learning) by applying it to encoded data in the latent space of generative models.  ... 
arXiv:2002.04881v3 fatcat:2ctes6k35rft3fc3m3775wz5nm
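
The last snippet describes applying mixup to encoded data in a VAE's latent space. A minimal sketch of that idea, assuming a toy linear encoder/decoder rather than the paper's actual VAE, could look like:

```python
import torch
import torch.nn as nn

# Placeholder encoder/decoder; the paper's VAE is far more elaborate.
encoder = nn.Linear(784, 32)
decoder = nn.Linear(32, 784)

def latent_mixup(x1, x2, alpha=0.4):
    """Mix two inputs in the latent space of a (placeholder) autoencoder."""
    lam = torch.distributions.Beta(alpha, alpha).sample()
    z_mix = lam * encoder(x1) + (1.0 - lam) * encoder(x2)  # interpolate encodings
    return decoder(z_mix), lam

x_mix, lam = latent_mixup(torch.randn(8, 784), torch.randn(8, 784))
```

Mixing in latent space rather than pixel space tends to yield interpolants that stay closer to the data manifold, which is the motivation given in the abstract above.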

On-manifold Adversarial Data Augmentation Improves Uncertainty Calibration [article]

Kanil Patel, William Beluch, Dan Zhang, Michael Pfeiffer, Bin Yang
2021 arXiv   pre-print
adversarial attack path in the latent space of an autoencoder-based generative model that closely approximates decision boundaries between two or more classes.  ...  Variants of OMADA can employ different sampling schemes for ambiguous on-manifold examples based on the entropy of their estimated soft labels, which exhibit specific strengths for generalization, calibration  ...  The proposed OMADA method trains an autoencoder-based generative model to approximate the data manifold and uses the adversarial attack in latent space to create ambiguous samples with soft labels.  ... 
arXiv:1912.07458v5 fatcat:dnebtyxpovc3jmlvc5h3esciey
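
A rough sketch of the mechanism the snippets describe: an adversarial walk in an autoencoder's latent space that yields ambiguous samples with soft labels. The decoder, classifier, step size, and entropy objective below are illustrative assumptions, not the paper's exact procedure:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

decoder = nn.Linear(16, 784)      # stand-in generative decoder
classifier = nn.Linear(784, 10)   # stand-in classifier

def ambiguous_sample(z, steps=10, step_size=0.05):
    """Perturb a latent code so the decoded sample becomes ambiguous
    (high predictive entropy), then use the prediction as a soft label."""
    z = z.clone().requires_grad_(True)
    for _ in range(steps):
        probs = F.softmax(classifier(decoder(z)), dim=-1)
        entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=-1).mean()
        grad, = torch.autograd.grad(entropy, z)
        z = (z + step_size * grad.sign()).detach().requires_grad_(True)
    x_adv = decoder(z).detach()
    soft_label = F.softmax(classifier(x_adv), dim=-1).detach()
    return x_adv, soft_label

x_adv, y_soft = ambiguous_sample(torch.randn(4, 16))
```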

PointMixup: Augmentation for Point Clouds [article]

Yunlu Chen, Vincent Tao Hu, Efstratios Gavves, Thomas Mensink, Pascal Mettes, Pengwan Yang, Cees G.M. Snoek
2020 arXiv   pre-print
With the definition of interpolation, PointMixup allows the introduction of strong interpolation-based regularizers such as mixup and manifold mixup to the point cloud domain.  ...  To that end, we introduce PointMixup, an interpolation method that generates new examples through an optimal assignment of the path function between two point clouds.  ...  Table fragment: Mix input from same class: -, 86.4; Mixup latent (layer 1): -, 86.9; Mixup latent (layer 2): -, 86.8; Label smoothing (0.1): 87.2, -; Label smoothing (0.2): 87.3, - (column headers truncated in the excerpt).  ... 
arXiv:2008.06374v1 fatcat:73nwb5vqazet3b2k2a5jlxastm
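
The abstract defines interpolation between point clouds through an optimal assignment of points. A minimal sketch of that recipe, using a Hungarian assignment on pairwise distances (the paper's shortest-path interpolation may differ in detail), is:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def point_mixup(pc_a, pc_b, lam=0.5):
    """Interpolate two point clouds of equal size (N, 3): match points with an
    optimal one-to-one assignment, then linearly mix the matched pairs."""
    cost = ((pc_a[:, None, :] - pc_b[None, :, :]) ** 2).sum(-1)  # pairwise squared distances
    row, col = linear_sum_assignment(cost)                       # optimal assignment
    return lam * pc_a[row] + (1.0 - lam) * pc_b[col]

a, b = np.random.randn(1024, 3), np.random.randn(1024, 3)
mixed = point_mixup(a, b, lam=0.3)
```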

Fair Mixup: Fairness via Interpolation [article]

Ching-Yao Chuang, Youssef Mroueh
2021 arXiv   pre-print
We use mixup, a powerful data augmentation strategy, to generate these interpolates.  ...  To improve the generalizability of fair classifiers, we propose fair mixup, a new data augmentation strategy for imposing the fairness constraint.  ...  Verma et al. (2019) propose Manifold Mixup, which generates mixup samples in a latent space.  ... 
arXiv:2103.06503v1 fatcat:upihwyscqna6dkamy2yy7atssq
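
For reference, the base mixup augmentation that Fair Mixup builds on mixes inputs and labels with a Beta-distributed weight; this generic NumPy sketch ignores the fairness-specific pairing across sensitive groups:

```python
import numpy as np

def mixup_batch(x, y, alpha=0.2, seed=None):
    """Standard input mixup: convex combination of a batch (n, d) with a shuffled
    copy of itself; labels y (one-hot or soft, shape (n, c)) are mixed identically."""
    rng = np.random.default_rng(seed)
    lam = rng.beta(alpha, alpha)
    idx = rng.permutation(len(x))
    return lam * x + (1.0 - lam) * x[idx], lam * y + (1.0 - lam) * y[idx]

x = np.random.randn(16, 10)
y = np.eye(3)[np.random.randint(0, 3, size=16)]   # one-hot labels
x_mix, y_mix = mixup_batch(x, y)
```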

Enhancing Mixup-based Semi-Supervised Learning with Explicit Lipschitz Regularization [article]

Prashnna Kumar Gyawali, Sandesh Ghimire, Linwei Wang
2020 arXiv   pre-print
In this paper, we offer a theoretically substantiated proposition that mixup improves the smoothness of the neural function by bounding the Lipschitz constant of the gradient function of the neural networks  ...  A successful example is the adoption of the mixup strategy in SSL, which enforces the global smoothness of the neural function by encouraging it to behave linearly when interpolating between training examples  ...  Different variants of mixup have since been presented in the literature, such as mixing in both data and latent space for further improving generalization in supervised learning [9] and SSL [10].  ... 
arXiv:2009.11416v1 fatcat:dynhstbqw5d7rnsxbrlcoje7yq
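
The "mixing in latent space" variant mentioned in the last snippet (Manifold Mixup-style) interpolates hidden activations at a randomly chosen layer; the tiny network below is purely illustrative:

```python
import random
import torch
import torch.nn as nn

# Purely illustrative network; real models are deeper.
layers = nn.ModuleList([nn.Linear(784, 256), nn.Linear(256, 128), nn.Linear(128, 10)])

def forward_slice(h, start, stop):
    """Apply layers[start:stop], with ReLU after every layer except the final one."""
    for i in range(start, stop):
        h = layers[i](h)
        if i < len(layers) - 1:
            h = torch.relu(h)
    return h

def manifold_mixup(x1, x2, lam):
    """Mix hidden representations at a random depth k (k = 0 is plain input mixup);
    the corresponding labels are mixed with the same lam outside this function."""
    k = random.randrange(len(layers))
    h1 = forward_slice(x1, 0, k)
    h2 = forward_slice(x2, 0, k)
    h_mix = lam * h1 + (1.0 - lam) * h2     # interpolate in latent space
    return forward_slice(h_mix, k, len(layers))

logits = manifold_mixup(torch.randn(8, 784), torch.randn(8, 784), lam=0.7)
```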

AlignMixup: Improving Representations By Interpolating Aligned Features [article]

Shashanka Venkataramanan, Ewa Kijak, Laurent Amsaleg, Yannis Avrithis
2022 arXiv   pre-print
Mixup is a powerful data augmentation method that interpolates between two or more examples in the input or feature space and between the corresponding target labels.  ...  More than that, we show that an autoencoder can still improve representation learning under mixup, without the classifier ever seeing decoded images.  ...  Our work is a compromise between a "good" handcrafted interpolation in the image space and a fully learned one in the latent space.  ... 
arXiv:2103.15375v2 fatcat:mm7r4hqvdfbjdmcpjjdla2z3ym

Interpolated Joint Space Adversarial Training for Robust and Generalizable Defenses [article]

Chun Pong Lau, Jiang Liu, Hossein Souri, Wei-An Lin, Soheil Feizi, Rama Chellappa
2021 arXiv   pre-print
The mixup strategy improves the standard accuracy of neural networks but sacrifices robustness when combined with AT.  ...  However, models trained with AT sacrifice standard accuracy and do not generalize well to novel attacks.  ...  Acknowledgement: This work was supported by the DARPA GARD Program HR001119S0026-GARD-FP-052.  ... 
arXiv:2112.06323v1 fatcat:zclgklsqxrbo7fnuj6c662k4ka

SHOT-VAE: Semi-supervised Deep Generative Models With Label-aware ELBO Approximations [article]

Hao-Zhe Feng, Kezhi Kong, Minghao Chen, Tianye Zhang, Minfeng Zhu, Wei Chen
2020 arXiv   pre-print
The SHOT-VAE offers two contributions: (1) A new ELBO approximation named smooth-ELBO that integrates the label predictive loss into ELBO. (2) An approximation based on optimal interpolation that breaks  ...  The SHOT-VAE achieves good performance with a 25.30% error rate on CIFAR-100 with 10k labels and reduces the error rate to 6.11% on CIFAR-10 with 4k labels.  ...  To create the pseudo distribution p(y|X) of X, it is a natural thought that the optimal interpolation in data space could be associated with the same interpolation in latent space, using the D_KL distance from the ELBO.  ... 
arXiv:2011.10684v4 fatcat:itsrhv5mfzdt5fzrp553vrxrdq

Jigsaw-VAE: Towards Balancing Features in Variational Autoencoders [article]

Saeid Asgari Taghanaki, Mohammad Havaei, Alex Lamb, Aditya Sanghi, Ara Danielyan, Tonya Custis
2020 arXiv   pre-print
Similarly, latent variables trained with imbalanced features induce the VAE to generate less diverse (i.e. biased towards dominant features) samples.  ...  There is a growing interest in the question of whether features learned on one environment will generalize across different environments.  ...  Besides measuring reconstruction error, sampling quality and diversity, and how realistic the generated images are, we proposed to measure the presence of train data features in the generated images using  ... 
arXiv:2005.05496v1 fatcat:lhbhuqkt25ahbcp5d5ik2nh2wq

Class-Similarity Based Label Smoothing for Confidence Calibration [article]

Chihuang Liu, Joseph JaJa
2021 arXiv   pre-print
This motivates the development of a new smooth label where the label values are based on similarities with the reference class.  ...  Generating confidence calibrated outputs is of utmost importance for the applications of deep neural networks in safety-critical decision-making systems.  ...  distances and the latent encoding distance.  ... 
arXiv:2006.14028v2 fatcat:hijs4z2gyzczlap2jeclxkk3p4
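
A small sketch of the idea in the first snippet, where the smoothing mass is spread over non-target classes in proportion to their similarity to the reference (true) class. The similarity matrix here is a made-up placeholder, not the paper's feature- or latent-distance construction:

```python
import numpy as np

def similarity_smoothed_label(target, similarity, eps=0.1):
    """Soft label: keep 1 - eps on the true class and distribute eps over the
    other classes in proportion to their similarity to the reference class."""
    sims = similarity[target].copy()
    sims[target] = 0.0                      # no smoothing mass on the true class itself
    label = eps * sims / sims.sum()
    label[target] = 1.0 - eps
    return label

# Placeholder 4-class similarity matrix (e.g. derived from feature distances).
S = np.array([[1.0, 0.8, 0.1, 0.1],
              [0.8, 1.0, 0.2, 0.1],
              [0.1, 0.2, 1.0, 0.7],
              [0.1, 0.1, 0.7, 1.0]])
print(similarity_smoothed_label(0, S))
```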

Synthesize-It-Classifier: Learning a Generative Classifier through RecurrentSelf-analysis [article]

Arghya Pal, Raphaël Phan, KokSheik Wong
2021 arXiv   pre-print
a mixup of classes) improves class interpolation.  ...  The overall methodology, called Synthesize-It-Classifier (STIC), does not require an explicit generator network to estimate the density of the data distribution and sample images from it, but instead  ...  However, the FID calculates the distance between feature vectors of real and generated images.  ... 
arXiv:2103.14212v1 fatcat:2owummubyrgynfckaqxij3dxqy
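
The FID mentioned in the last snippet is the Fréchet distance between Gaussian fits of real and generated feature vectors. A numerical sketch of that distance (the feature extraction with an Inception network is omitted here):

```python
import numpy as np
from scipy.linalg import sqrtm

def frechet_distance(feats_real, feats_fake):
    """FID between two sets of feature vectors (n_samples, dim):
    ||mu_r - mu_f||^2 + Tr(C_r + C_f - 2 (C_r C_f)^{1/2})."""
    mu_r, mu_f = feats_real.mean(0), feats_fake.mean(0)
    cov_r = np.cov(feats_real, rowvar=False)
    cov_f = np.cov(feats_fake, rowvar=False)
    covmean = sqrtm(cov_r @ cov_f)
    if np.iscomplexobj(covmean):            # numerical noise can leave tiny imaginary parts
        covmean = covmean.real
    return float(((mu_r - mu_f) ** 2).sum() + np.trace(cov_r + cov_f - 2.0 * covmean))

real = np.random.randn(500, 64)
fake = np.random.randn(500, 64) + 0.5
print(frechet_distance(real, fake))
```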

Graph Data Augmentation for Graph Machine Learning: A Survey [article]

Tong Zhao, Gang Liu, Stephan Günnemann, Meng Jiang
2022 arXiv   pre-print
Data augmentation has recently seen increased interest in graph machine learning given its ability of creating extra training data and improving model generalization.  ...  We first categorize graph data augmentation operations based on the components of graph data they modify or create.  ...  On the other hand, ifMixup [Guo and Mao, 2021] directly applies Mixup on the graph data instead of the latent space.  ... 
arXiv:2202.08871v1 fatcat:gjf7mgihkfbqdg6cqscflcw6ga
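
As a loose illustration of applying mixup directly to graph data rather than in a latent space (the ifMixup reference above), the sketch below zero-pads two graphs to a common node count and interpolates adjacency and node-feature matrices; node ordering and label mixing are glossed over:

```python
import numpy as np

def graph_mixup(adj_a, feat_a, adj_b, feat_b, lam=0.5):
    """Mix two graphs in data space: pad to a common size, then take a convex
    combination of adjacency matrices (soft, weighted edges) and node features."""
    n = max(adj_a.shape[0], adj_b.shape[0])
    d = feat_a.shape[1]

    def pad(adj, feat):
        adj_p = np.zeros((n, n)); adj_p[:adj.shape[0], :adj.shape[1]] = adj
        feat_p = np.zeros((n, d)); feat_p[:feat.shape[0]] = feat
        return adj_p, feat_p

    adj_a, feat_a = pad(adj_a, feat_a)
    adj_b, feat_b = pad(adj_b, feat_b)
    return lam * adj_a + (1.0 - lam) * adj_b, lam * feat_a + (1.0 - lam) * feat_b

adj_a, feat_a = np.random.binomial(1, 0.2, (5, 5)), np.random.randn(5, 8)
adj_b, feat_b = np.random.binomial(1, 0.2, (7, 7)), np.random.randn(7, 8)
adj_mix, feat_mix = graph_mixup(adj_a, feat_a, adj_b, feat_b, lam=0.6)
```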

Meta Dropout: Learning to Perturb Features for Generalization [article]

Hae Beom Lee, Taewook Nam, Eunho Yang, Sung Ju Hwang
2022 arXiv   pre-print
To tackle this challenge, we propose a novel regularization method, meta-dropout, which learns to perturb the latent features of training examples for generalization in a meta-learning framework.  ...  Then, the learned noise generator can perturb the training examples of unseen tasks at the meta-test time for improved generalization.  ...  Learning to perturb latent features: We now describe our problem setting and the meta-learning framework for learning to perturb training instances in the latent feature space, for improved generalization  ... 
arXiv:1905.12914v3 fatcat:chp6vlxhjrcrnhs6tximmqzns4
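
A loose sketch of "learning to perturb latent features": a small learned noise generator injects input-dependent multiplicative noise into hidden features. The architecture and noise form are assumptions for illustration, not the paper's parameterization or its meta-learning loop:

```python
import torch
import torch.nn as nn

class NoisyFeatures(nn.Module):
    """Perturb hidden features with learned, input-dependent multiplicative noise."""
    def __init__(self, dim):
        super().__init__()
        self.encoder = nn.Linear(784, dim)
        self.noise_gen = nn.Linear(dim, dim)    # learned noise generator

    def forward(self, x, perturb=True):
        h = torch.relu(self.encoder(x))
        if perturb:
            log_sigma = self.noise_gen(h)                 # input-dependent noise scale
            noise = torch.randn_like(h) * log_sigma.exp()
            h = h * torch.relu(1.0 + noise)               # multiplicative, non-negative perturbation
        return h

model = NoisyFeatures(64)
h = model(torch.randn(8, 784))
```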

Epsilon Consistent Mixup: Structural Regularization with an Adaptive Consistency-Interpolation Tradeoff [article]

Vincent Pisztora, Yanglan Ou, Xiaolei Huang, Francesca Chiaromonte, Jia Li
2021 arXiv   pre-print
In this paper we propose ϵ-Consistent Mixup (ϵmu). ϵmu is a data-based structural regularization technique that combines Mixup's linear interpolation with consistency regularization in the Mixup direction  ...  This learnable combination of consistency and interpolation induces a more flexible structure on the evolution of the response across the feature space and is shown to improve semi-supervised classification  ...  space than Mixup.  ... 
arXiv:2104.09452v2 fatcat:dwrh2jvt2bc47pz6zhkkzopjme
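
A rough sketch of combining mixup's supervised interpolation loss with a consistency term in the mixup direction, read here as matching the prediction at the interpolated input to the interpolation of the endpoint predictions; the adaptive ϵ tradeoff that the paper introduces is not reproduced:

```python
import torch
import torch.nn.functional as F

def emu_style_loss(model, x1, y1, x2, y2, lam, consistency_weight=1.0):
    """Mixup cross-entropy plus a consistency penalty that pulls the prediction
    at the interpolated input toward the interpolation of the two predictions."""
    x_mix = lam * x1 + (1.0 - lam) * x2
    logits_mix = model(x_mix)

    # Standard mixup supervised term (y1, y2 are integer class labels).
    sup = lam * F.cross_entropy(logits_mix, y1) + (1.0 - lam) * F.cross_entropy(logits_mix, y2)

    # Consistency in the mixup direction: match the mixed prediction.
    with torch.no_grad():
        p_interp = lam * F.softmax(model(x1), -1) + (1.0 - lam) * F.softmax(model(x2), -1)
    cons = F.mse_loss(F.softmax(logits_mix, -1), p_interp)

    return sup + consistency_weight * cons

model = torch.nn.Linear(20, 5)                      # stand-in classifier
x1, x2 = torch.randn(8, 20), torch.randn(8, 20)
y1, y2 = torch.randint(0, 5, (8,)), torch.randint(0, 5, (8,))
loss = emu_style_loss(model, x1, y1, x2, y2, lam=0.7)
```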
Showing results 1 — 15 out of 189 results