A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2020; you can also visit the original URL.
The file type is application/pdf.
Robustness of Conditional GANs to Noisy Labels
[article]
2018
arXiv
pre-print
We study the problem of learning conditional generators from noisy labeled samples, where the labels are corrupted by random noise. ...
When the distribution of the noise is known, we introduce a novel architecture which we call Robust Conditional GAN (RCGAN). ...
We introduce two architectures to train conditional GANs with noisy samples. First, when we have the knowledge of the confusion matrix C, we propose RCGAN (Robust Conditional GAN) in Section 2. ...
arXiv:1811.03205v1
fatcat:yl2j5bjxnvbwdkemqqyrnm5tnm
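The RCGAN construction summarized above passes the generator's conditioning labels through the known confusion matrix C before the discriminator compares generated pairs to real noisy samples. A minimal NumPy sketch of that corruption step, with an illustrative 3-class matrix that is not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative confusion matrix: C[i, j] = P(observed label j | clean label i).
C = np.array([
    [0.8, 0.1, 0.1],
    [0.1, 0.8, 0.1],
    [0.1, 0.1, 0.8],
])

def corrupt_labels(clean_labels, C, rng):
    """Sample a noisy label for each clean label from the row C[y]."""
    return np.array([rng.choice(C.shape[1], p=C[y]) for y in clean_labels])

clean = rng.integers(0, 3, size=10_000)
noisy = corrupt_labels(clean, C, rng)
# Each label is flipped with probability 0.2, so roughly 20% should differ.
print((clean != noisy).mean())
```

Matching the distribution of (sample, corrupted label) pairs against the real noisy data then forces the generator's clean conditional distribution to be correct, which is the intuition the abstract sketches.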
Label-Noise Robust Generative Adversarial Networks
[article]
2019
arXiv
pre-print
To remedy this, we propose a novel family of GANs called label-noise robust GANs (rGANs), which, by incorporating a noise transition model, can learn a clean label conditional generative distribution even when training labels are noisy. ...
This work was supported by JSPS KAKENHI Grant Number JP17H06100, partially supported by JST CREST Grant Number JPMJCR1403, Japan, and partially supported by the Ministry of Education, Culture, Sports, ...
arXiv:1811.11165v2
fatcat:6ybdr2ar2vap5cvr42qjibbkoq
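The noise transition model in rGANs plays the same role as the forward correction used in noise-robust classification: probabilities over clean classes are pushed through the transition matrix to obtain probabilities over observed noisy labels. A hedged sketch with made-up numbers (the matrix and posterior are illustrative, not from the paper):

```python
import numpy as np

# Illustrative transition matrix: T[i, j] = P(noisy label j | clean label i).
T = np.array([
    [0.90, 0.05, 0.05],
    [0.05, 0.90, 0.05],
    [0.05, 0.05, 0.90],
])

def forward_correct(clean_posterior, T):
    """p(noisy = j | x) = sum_i p(clean = i | x) * T[i, j]."""
    return clean_posterior @ T

p_clean = np.array([0.7, 0.2, 0.1])  # hypothetical clean-label posterior
p_noisy = forward_correct(p_clean, T)
print(p_noisy)  # remains a valid probability distribution
```

Composing the generator's clean conditional with such a transition model lets the discriminator be trained directly on noisy labels while the generator still learns the clean distribution.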
Generative Pseudo-label Refinement for Unsupervised Domain Adaptation
2020
2020 IEEE Winter Conference on Applications of Computer Vision (WACV)
We show that cGANs are, to some extent, robust against such "shift noise". Indeed, cGANs trained with noisy pseudo-labels are able to filter such noise and generate cleaner target samples. ...
We investigate and characterize the inherent resilience of conditional Generative Adversarial Networks (cGANs) against noise in their conditioning labels, and exploit this fact in the context of Unsupervised ...
However, although cGANs are to some extent resistant to noisy labels, they are not robust enough to generate target samples that allow training competitive models. ...
doi:10.1109/wacv45572.2020.9093579
dblp:conf/wacv/MorerioVRM20
fatcat:ttq4wc3wfbgavkljtmolxahv7e
Generative Pseudo-label Refinement for Unsupervised Domain Adaptation
[article]
2020
arXiv
pre-print
We show that cGANs are, to some extent, robust against such "shift noise". Indeed, cGANs trained with noisy pseudo-labels are able to filter such noise and generate cleaner target samples. ...
We investigate and characterize the inherent resilience of conditional Generative Adversarial Networks (cGANs) against noise in their conditioning labels, and exploit this fact in the context of Unsupervised ...
However, although cGANs are to some extent resistant to noisy labels, they are not robust enough to generate target samples that allow training competitive models. ...
arXiv:2001.02950v1
fatcat:nqo5q4jv2bblhojbtcw7purhdi
Label-Noise Robust Multi-Domain Image-to-Image Translation
[article]
2019
arXiv
pre-print
To overcome this limitation, we propose a novel model called the label-noise robust image-to-image translation model (RMIT) that can learn a clean label conditional generator even when noisy labeled data ...
In particular, we propose a novel loss called the virtual cycle consistency loss that is able to regularize cyclic reconstruction independently of noisy labeled data, and we introduce advanced techniques ...
Acknowledgement This work was partially supported by JST CREST Grant Number JPMJCR1403, Japan, and partially supported by the Ministry of Education, Culture, Sports, Science and Technology (MEXT) as "Seminal ...
arXiv:1905.02185v1
fatcat:vbyukorjyrhanmfnm5usfilkxy
Data Augmentation Methods for End-to-end Speech Recognition on Distant-Talk Scenarios
[article]
2021
arXiv
pre-print
to map between two different audio characteristics, those of clean speech and of noisy recordings, to match the testing condition, and 3) pseudo-label augmentation provided by the pretrained ASR module for ...
E2E ASR models are trained on the series of CHiME challenge datasets, which are suitable tasks for studying robustness against noisy and spontaneous speech. ...
Therefore, instead, we use Cycle-GAN to map the clean training data to the speech-enhanced noisy test speech. ...
arXiv:2106.03419v1
fatcat:w4x6uf2k6rcchdr3rpowxjss7q
Preparation of Papers for IEEE ACCESS
2020
IEEE Access
However, for some scenarios, if there exist several labels, both our joint GAN and a conditional GAN can be used together to achieve controllability on clean labels and robustness on noisy labels, possibly ...
FIGURE 4 : 4 Comparison of images generated by (top row) conditional GAN and (bottom row) joint GAN on CIFAR10 with noisy labels. ...
doi:10.1109/access.2020.3031292
fatcat:dwrgb6e6wbgezk3ahgdmd4o4le
Invariant Representations for Noisy Speech Recognition
[article]
2016
arXiv
pre-print
Ensuring such robustness to variability is a challenge in modern day neural network-based ASR systems, especially when all types of variability are not seen during training. ...
Modern automatic speech recognition (ASR) systems need to be robust under acoustic variability arising from environmental, speaker, channel, and recording conditions. ...
Acknowledgments We would like to thank Yaroslav Ganin and David Warde-Farley for insightful discussions, and the developers of Theano (Theano Development Team, 2016) ...
arXiv:1612.01928v1
fatcat:fg2lggapbbfdzjftym2ujwl7la
Multi-Task Multi-Network Joint-Learning of Deep Residual Networks and Cycle-Consistency Generative Adversarial Networks for Robust Speech Recognition
2019
Interspeech 2019
Despite the fast development of GANs for computer vision, only regular GANs have been adopted for robust ASR. ...
In this work, we adopt a more advanced cycle-consistency GAN (CycleGAN) to address the training failure problem due to mode collapse of regular GANs. ...
Alternately, GANs can also be used to map both noisy and clean features into a common feature domain for robust ASR as done in [20] . ...
doi:10.21437/interspeech.2019-2078
dblp:conf/interspeech/ZhaoNTM19
fatcat:3bfy4hfeybgrtfqjo5buk4f43q
Comparison of Unsupervised Modulation Filter Learning Methods for ASR
2018
Interspeech 2018
The experimental results obtained from the modulation filtered representations show considerable robustness to noise, channel distortions and reverberant conditions compared to other feature extraction ...
One approach to this problem is to achieve the desired robustness in speech representations used in the ASR. ...
Introduction: The robustness of speech recognition systems to noise and reverberation remains a challenging task in spite of recent advances in performance. ...
doi:10.21437/interspeech.2018-1972
dblp:conf/interspeech/AgrawalG18
fatcat:ucpddiv4hfa5rps52pkjah3ncq
PCGAN-CHAR: Progressively Trained Classifier Generative Adversarial Networks for Classification of Noisy Handwritten Bangla Characters
[article]
2019
arXiv
pre-print
Due to the sparsity of features, noise has proven to be a great inhibitor in the classification of handwritten characters. ...
We experimentally demonstrate the effectiveness of our approach by classifying noisy versions of MNIST, handwritten Bangla Numeral, and Basic Character datasets. ...
We used the discriminator in the Classifier GAN as our Classification Network for the noisy handwritten characters, but to make it more robust to noise and resolution we adopted the innovative ...
arXiv:1908.08987v1
fatcat:fcpiai5pcne3lcaujjd2ymzxki
Classify and Generate Reciprocally: Simultaneous Positive-Unlabelled Learning and Conditional Generation with Extra Data
[article]
2020
arXiv
pre-print
the interplay between them: 1) enhancing the performance of PU classifiers with the assistance of a novel Conditional Generative Adversarial Network (CGAN) that is robust to noisy labels, 2) leveraging ...
Our key contribution is a Classifier-Noise-Invariant Conditional GAN (CNI-CGAN) that can learn the clean data distribution from noisy labels predicted by a PU classifier. ...
Robust Conditional GANs [25, 15] were proposed to handle class-dependent noisy labels. ...
arXiv:2006.07841v1
fatcat:cmr3ufn2bzcs7ctnlvz5x756le
Noise Modeling to Build Training Sets for Robust Speech Enhancement
2022
Applied Sciences
To solve this problem, we propose a new Generative Adversarial Network framework for Noise Modeling (NM-GAN) that creates realistic paired training sets by imitating real noise distribution. ...
NM-GAN generates enough recall (diversity) and precision (noise quality) in its samples through adversarial and alternate training, effectively simulating real noise, which is then utilized to compose ...
Such conditioning could be based on class labels on some parts of the data for inpainting or different modalities. ...
doi:10.3390/app12041905
fatcat:wdqfjzhctnesvnowlxatnbynvm
GANs for learning from very high class conditional noisy labels
[article]
2020
arXiv
pre-print
We use Generative Adversarial Networks (GANs) to design a class conditional label noise (CCN) robust scheme for binary classification. ...
This implies that our schemes perform well due to the adversarial nature of GANs. ...
In this paper, using a small set of clean labels along with noisy labels, we propose a GAN based class conditional label noise robust binary classification framework. ...
arXiv:2010.09577v1
fatcat:yez3uzgzone3xgr7vgzpvyym3e
Noise Robust Generative Adversarial Networks
[article]
2020
arXiv
pre-print
As an alternative, we propose a novel family of GANs called noise robust GANs (NR-GANs), which can learn a clean image generator even when training images are noisy. ...
On three benchmark datasets, we demonstrate the effectiveness of NR-GANs in noise robust image generation. Furthermore, we show the applicability of NR-GANs in image denoising. ...
arXiv:1911.11776v2
fatcat:rfgqwn3wbbebjmaxrq3empt7fu
Showing results 1 — 15 out of 4,643 results