On the Sample Complexity of Adversarial Multi-Source PAC Learning
[article]
2020
arXiv
pre-print
In this work we show that, surprisingly, the same is not true in the multi-source setting, where the adversary can arbitrarily corrupt a fixed fraction of the data sources. ...
It is known that in the single-source case, an adversary with the power to corrupt a fixed fraction of the training data can prevent PAC-learnability, that is, even in the limit of infinitely much training ...
This research was supported by the Scientific Service Units (SSU) of IST Austria through resources provided by Scientific Computing (SciComp). ...
arXiv:2002.10384v2
fatcat:sd4daitwevhnxmcgyra3kravhu
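The corruption model in this entry is easy to simulate. Below is a minimal sketch, assuming a toy linear-classification task and a label-flipping adversary (both are hypothetical stand-ins; the paper's adversary may corrupt the chosen sources arbitrarily):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_sources(n_sources=20, m=100, corrupt_frac=0.3):
    """Multi-source model: each of n_sources provides m i.i.d. labeled
    points from the true distribution; the adversary then replaces a
    corrupt_frac fraction of the sources with arbitrary data (here,
    label-flipped for illustration)."""
    true_w = np.array([1.0, -1.0])
    sources = []
    for _ in range(n_sources):
        X = rng.normal(size=(m, 2))
        y = np.sign(X @ true_w)
        sources.append((X, y))
    corrupted = rng.choice(n_sources, size=int(corrupt_frac * n_sources), replace=False)
    for i in corrupted:
        X, y = sources[i]
        sources[i] = (X, -y)
    return sources, set(corrupted.tolist())
```

The question the paper studies is what remains learnable from such a collection as m grows, given that the learner never finds out which sources were corrupted.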
Improving Diversity with Adversarially Learned Transformations for Domain Generalization
[article]
2022
arXiv
pre-print
To be successful in single source domain generalization, maximizing diversity of synthesized domains has emerged as one of the most effective strategies. ...
To address this issue, we present a novel framework that uses adversarially learned transformations (ALT) using a neural network to model plausible, yet hard image transformations that fool the classifier ...
Acknowledgments and Disclosure of Funding This work was performed under the auspices of the U.S. Department of Energy by the Lawrence Livermore National Laboratory under Contract No. ...
arXiv:2206.07736v1
fatcat:u5rjgsy4ovdhdfc6reqvtpmxfe
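The inner loop described in this snippet, a transformation network trained to fool the classifier, can be sketched as follows. The tiny networks, step count, and learning rate are placeholders rather than the paper's ALT architecture:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Placeholder networks; the paper's ALT uses a richer image-to-image model.
classifier = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
transform = nn.Conv2d(3, 3, kernel_size=3, padding=1)

def alt_step(x, y, steps=5, lr=0.1):
    """Update the transformation network to *maximize* classifier loss,
    producing hard but plausible training images; the classifier is
    then trained on the transformed batch as usual."""
    opt = torch.optim.SGD(transform.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = -F.cross_entropy(classifier(transform(x)), y)  # ascend on CE
        loss.backward()
        opt.step()
    return transform(x).detach()
```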
Adversarially Adaptive Normalization for Single Domain Generalization
[article]
2021
arXiv
pre-print
... on the target domain with large discrepancy from the source domain. ...
Existing works focus on studying the adversarial domain augmentation (ADA) to improve the model's generalization capability. ...
Table 4 shows the results on PACS for the multi-source domain setting, where we do not utilize domain labels during training. ...
arXiv:2106.01899v1
fatcat:7o4bilfpnvh6pfrqcw6mbuuxcm
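For context, adversarial domain augmentation (ADA), which this entry builds on, is usually an ascent step on the task loss with a penalty keeping augmented samples close to the source; a rough sketch, not the paper's exact objective:

```python
import torch
import torch.nn.functional as F

def ada_augment(model, x, y, steps=3, lr=1.0, gamma=0.1):
    """Ascend on the classification loss while penalizing distance from
    the original inputs, so the synthesized 'domain' stays semantically
    close to the source (gamma trades off the two terms)."""
    x_adv = x.clone().requires_grad_(True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x_adv), y) - gamma * (x_adv - x).pow(2).mean()
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = (x_adv + lr * grad).detach().requires_grad_(True)
    return x_adv.detach()
```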
Multi-Task Generative Adversarial Nets with Shared Memory for Cross-Domain Coordination Control
[article]
2018
arXiv
pre-print
This paper proposes the multi-task generative adversarial nets with shared memory for cross-domain coordination control, which can generate sequential decision policy directly from raw sensory input of ...
Results on three groups of discrete-time nonlinear control tasks show that our proposed model can effectively improve the performance of a task with the help of other related tasks. ...
ACKNOWLEDGMENT The authors would also like to thank the anonymous editor and reviewers, whose valuable suggestions have helped to improve the quality of the manuscript. ...
arXiv:1807.00298v1
fatcat:we2c24rbnrfkbeanpjdqihxljq
Adversarial Branch Architecture Search for Unsupervised Domain Adaptation
[article]
2021
arXiv
pre-print
To the best of our knowledge, no prior work has addressed these aspects in the context of NAS for UDA. ...
This dependency on handcrafted designs limits the applicability of a given approach in time, as old methods need to be constantly adapted to novel backbones. ...
Acknowledgments We acknowledge the CINECA award under the ISCRA initiative, for the availability of high performance computing resources and support. ...
arXiv:2102.06679v3
fatcat:drpyhfm5ofaypohhfu53ggtlza
Towards Practical Robustness Analysis for DNNs based on PAC-Model Learning
[article]
2022
arXiv
pre-print
The innovation of our work is the integration of model learning into PAC robustness analysis: that is, we construct a PAC guarantee on the model level instead of sample distribution, which induces a more ...
Based on black-box model learning with scenario optimisation, we abstract the local behaviour of a DNN via an affine model with the probably approximately correct (PAC) guarantee. ...
We use samples to learn a relatively simple model of the DNN with the PAC guarantee via scenario optimisation and gain more insight into the analysis of adversarial robustness. ...
arXiv:2101.10102v2
fatcat:uqglb3i7vbfibmejskta3ngcbq
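One way to picture the model-learning step: query the DNN as a black box around an input and fit an affine surrogate, with scenario optimisation turning the worst sample residual into a PAC-style error bound. The least-squares fit below is an illustrative stand-in for the paper's optimisation:

```python
import numpy as np

def fit_affine_surrogate(dnn, x0, radius=0.1, n_samples=2000, seed=0):
    """Fit f(x) ~= A x + b to a black-box `dnn` on a box around x0.
    `eps_hat`, the worst residual over the samples, is the quantity a
    scenario-optimisation argument would convert into a PAC guarantee."""
    rng = np.random.default_rng(seed)
    d = x0.size
    X = x0 + radius * rng.uniform(-1.0, 1.0, size=(n_samples, d))
    Y = np.stack([dnn(x) for x in X])              # black-box queries
    Phi = np.hstack([X, np.ones((n_samples, 1))])  # affine design matrix
    W, *_ = np.linalg.lstsq(Phi, Y, rcond=None)
    A, b = W[:-1].T, W[-1]
    eps_hat = np.max(np.abs(Y - (X @ A.T + b)))
    return A, b, eps_hat
```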
Content Preserving Image Translation with Texture Co-occurrence and Spatial Self-Similarity for Texture Debiasing and Domain Adaptation
[article]
2022
arXiv
pre-print
Models trained on datasets with texture bias usually perform poorly on out-of-distribution samples since biased representations are embedded into the model. ...
In this paper, we propose a novel framework that leverages image translation to generate additional training images using the content of a source image and the texture of a target image with a different ...
In general, prior methods focused only on the generalization performance for inaccessible-domain samples, and thus designed models to learn common object features from multiple source domains. ...
arXiv:2110.07920v4
fatcat:2qmzsyfbkja3xdacigfdkkvmgq
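A common way to realize "content of a source image, texture of a target image" is to pair a content-preservation term with a texture-statistics term. The Gram-matrix texture loss below is a standard stand-in, not the paper's co-occurrence and spatial self-similarity losses:

```python
import torch

def gram(feat):
    """Channel co-occurrence (Gram) matrix over spatial positions."""
    b, c, h, w = feat.shape
    f = feat.reshape(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def translation_losses(feat_out, feat_content, feat_texture):
    """Match the target image's texture statistics while keeping the
    source image's content features."""
    texture_loss = (gram(feat_out) - gram(feat_texture)).pow(2).mean()
    content_loss = (feat_out - feat_content).pow(2).mean()
    return content_loss, texture_loss
```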
Domain Consistency Regularization for Unsupervised Multi-source Domain Adaptive Classification
[article]
2021
arXiv
pre-print
Deep learning-based multi-source unsupervised domain adaptation (MUDA) has been actively studied in recent years. ...
CRMA aligns not only the distributions of each pair of source and target domains but also that of all domains. ...
Table 4: Comparing CRMA with the state-of-the-art on PACS (classification accuracy, %). ...
arXiv:2106.08590v1
fatcat:5rujqvspjjcj3ku4hfr7nbzpje
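The pairwise-plus-global alignment in this snippet can be instantiated with any distribution discrepancy; the first-moment distance below is a deliberately crude placeholder for whatever divergence CRMA actually minimizes:

```python
import torch

def mean_discrepancy(fa, fb):
    """First-moment gap between two feature batches (placeholder metric)."""
    return (fa.mean(0) - fb.mean(0)).pow(2).sum()

def consistency_regularizer(source_feats, target_feats):
    """Align each source domain with the target, and every domain with
    the pooled features of all domains, mirroring the pairwise and
    global alignment described above."""
    pairwise = sum(mean_discrepancy(f, target_feats) for f in source_feats)
    pooled = torch.cat(list(source_feats) + [target_feats])
    global_term = sum(mean_discrepancy(f, pooled)
                      for f in list(source_feats) + [target_feats])
    return pairwise + global_term
```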
Rethinking Domain Generalization Baselines
[article]
2021
arXiv
pre-print
Despite being very powerful in standard learning settings, deep learning models can be extremely brittle when deployed in scenarios different from those on which they were trained. ...
This issue opens new scenarios for domain generalization research, highlighting the need for novel methods able to take advantage of the introduced data variability. ...
However, their performance tends to grow together with the complexity of the learning procedure, which may involve one or multiple generator modules and adversarial training. ...
arXiv:2101.09060v2
fatcat:a2ob243uvbcbjj2edy7g3xlf3y
Discriminative Adversarial Domain Generalization with Meta-learning based Cross-domain Validation
[article]
2020
arXiv
pre-print
The generalization capability of machine learning models, which refers to generalizing the knowledge for an "unseen" domain via learning from one or multiple seen domain(s), is of great importance to develop ...
... representation on multiple "seen" domains, and (ii) meta-learning based cross-domain validation, which simulates train/test domain shift by applying meta-learning techniques in the training process. ...
The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation thereon. ...
arXiv:2011.00444v1
fatcat:odyqjjadrbdwvd46qbel4itoxa
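Meta-learning based cross-domain validation typically means holding one seen domain out per step and validating the inner-updated model on it. A first-order, MLDG-style sketch (hypothetical, not the paper's exact procedure):

```python
import copy
import torch
import torch.nn.functional as F

def meta_validation_loss(model, domains, inner_lr=0.01):
    """`domains` is a list of (x, y) batches, one per seen domain.
    Hold the last one out as meta-test, take an inner gradient step on
    the rest, and score the adapted copy on the held-out domain."""
    (x_te, y_te), meta_train = domains[-1], domains[:-1]
    fast = copy.deepcopy(model)          # first-order approximation
    params = list(fast.parameters())
    for x, y in meta_train:
        grads = torch.autograd.grad(F.cross_entropy(fast(x), y), params)
        with torch.no_grad():
            for p, g in zip(params, grads):
                p -= inner_lr * g
    return F.cross_entropy(fast(x_te), y_te)  # simulated domain shift
```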
Do Outliers Ruin Collaboration?
[article]
2018
arXiv
pre-print
The overhead is defined as the ratio between the sample complexity of learning in this setting and that of learning the same hypothesis class on a single data distribution. ...
We consider the problem of learning a binary classifier from n different data sources, among which at most an η fraction are adversarial. ...
Unfortunately, Zuckerman [13] proved that even if the graph is known to contain a hidden clique of size Ω(n), it is still NP-hard to find a clique of size Ω(n^{1−β}) for any β < 1. ...
arXiv:1805.04720v1
fatcat:sye5spgs4nh6ne27vhj3rm3goy
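The overhead this entry defines can be written out explicitly; the notation below is a plausible rendering of the definition, not copied from the paper:

```latex
% m_H(\varepsilon, \delta): samples needed to (\varepsilon, \delta)-PAC learn H
% from a single distribution; the collaborative variant uses n sources,
% of which at most an \eta fraction are adversarial.
\mathrm{overhead}(n, \eta)
  = \frac{m_H^{\mathrm{coll}}(\varepsilon, \delta, n, \eta)}
         {m_H(\varepsilon, \delta)}
```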
Unsupervised Robust Domain Adaptation without Source Data
[article]
2021
arXiv
pre-print
The proposed method of using non-robust pseudo-labels performs surprisingly well on both clean and adversarial samples, for the task of image classification. ...
We study the problem of robust domain adaptation in the context of unavailable target labels and source data. The considered robustness is against adversarial perturbations. ...
In the future, we will explore single source models that perform both robust and non-robust predictions, in a multi-tasking fashion. This will avoid sharing two models trained on the source data. ...
arXiv:2103.14577v1
fatcat:oegfplkkqbf3dnq7lwtqv6djxa
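The pipeline hinted at above, non-robust pseudo-labels supervising adversarial training on the target, can be sketched as follows (the PGD parameters are illustrative):

```python
import torch
import torch.nn.functional as F

def robust_self_training_loss(source_model, target_model, x,
                              eps=8 / 255, step=2 / 255, iters=3):
    """A source-trained (non-robust) model supplies pseudo-labels for
    unlabeled target inputs; the target model is trained on versions of
    those inputs perturbed by PGD, against the pseudo-labels."""
    with torch.no_grad():
        pseudo = source_model(x).argmax(1)
    x_adv = x.clone().requires_grad_(True)
    for _ in range(iters):  # PGD inner maximization
        loss = F.cross_entropy(target_model(x_adv), pseudo)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = torch.max(torch.min(x_adv + step * grad.sign(), x + eps), x - eps)
        x_adv = x_adv.detach().requires_grad_(True)
    return F.cross_entropy(target_model(x_adv), pseudo)  # minimize over target_model
```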
Realizable Learning is All You Need
[article]
2021
arXiv
pre-print
... proofs of the equivalence tend to be disparate, and rely on strong model-specific assumptions like uniform convergence and sample compression. ...
With variants ranging from classical settings like PAC learning and regression to recent trends such as adversarially robust and private learning, it's surprising that we still lack a unified theory; traditional ...
Acknowledgements The authors would like to thank Shay Moran, Russell Impagliazzo, and Omar Montasser for enlightening discussions. ...
arXiv:2111.04746v1
fatcat:h3kx6pf6azeyfcd5yqn3cqtijq
Compound Domain Generalization via Meta-Knowledge Encoding
[article]
2022
arXiv
pre-print
Mainstream DG methods typically assume that the domain label of each source sample is known a priori, which can be difficult to satisfy in many real-world applications. ...
Firstly, we introduce Style-induced Domain-specific Normalization (SDNorm) to re-normalize the multi-modal underlying distributions, thereby dividing the mixture of source domains into latent clusters. ...
... adversarial learning [35, 40]. ...
arXiv:2203.13006v1
fatcat:tcoojublnbfz3naolobpc2s2mu
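SDNorm's clustering step can be pictured as grouping samples by the "style" statistics of their features and normalizing each latent cluster separately. A minimal sketch, assuming channel mean/std as the style code and centroids obtained elsewhere (e.g., by k-means):

```python
import torch

def style_code(feats):
    """Per-sample channel mean and std over spatial dims, the usual
    stand-in for feature 'style'."""
    return torch.cat([feats.mean(dim=(2, 3)), feats.std(dim=(2, 3))], dim=1)

def assign_latent_domains(feats, centroids):
    """Assign each sample to its nearest style centroid; each cluster
    would then keep its own normalization statistics."""
    return torch.cdist(style_code(feats), centroids).argmin(dim=1)
```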
A theoretical framework for deep transfer learning
2016
Information and Inference A Journal of the IMA
We generalize the notion of PAC learning to include transfer learning. ...
In our framework, the linkage between the source and the target tasks is a result of having the sample distribution of all classes drawn from the same distribution of distributions, and by restricting ...
Acknowledgments We would like to thank Tomaso Poggio, Yishay Mansour and Ronitt Rubinfeld for illuminating discussions during the preparation of this paper. ...
doi:10.1093/imaiai/iaw008
fatcat:dardamm4fndr5mu3qzz544deue
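The "distribution of distributions" linkage can be illustrated by two-level sampling, with every class distribution drawn from one shared prior (all parameters here are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_task(n_classes=5, n_per_class=20):
    """Each class's sample distribution is itself a draw from a common
    prior (a Gaussian with standard-normal mean), which is what ties
    source and target tasks together in this framework."""
    means = rng.normal(size=(n_classes, 2))
    X = np.concatenate([m + rng.normal(scale=0.3, size=(n_per_class, 2))
                        for m in means])
    y = np.repeat(np.arange(n_classes), n_per_class)
    return X, y
```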
Showing results 1 — 15 out of 1,029 results