
Normalized Wasserstein Distance for Mixture Distributions with Applications in Adversarial Learning and Domain Adaptation [article]

Yogesh Balaji, Rama Chellappa, Soheil Feizi
2019 arXiv   pre-print
We demonstrate the effectiveness of the proposed measure in GANs, domain adaptation and adversarial clustering on several benchmark datasets.  ...  This often leads to undesired results in distance-based learning methods for mixture distributions. In this paper, we resolve this issue by introducing the Normalized Wasserstein measure.  ...  Introduction Quantifying distances between probability distributions is a fundamental problem in machine learning and statistics with several applications in generative models, domain adaptation, clustering  ... 
arXiv:1902.00415v2 fatcat:4vy5aeme6vh2bhcrpn6p75prya
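
The core idea of the Normalized Wasserstein measure is to re-optimize mixture proportions before measuring distance, so that a mismatch in mode frequencies alone does not inflate the result. Below is a toy 1D sketch of that idea; the two fixed Gaussian modes, the grid search over proportions, and the use of SciPy's 1D Wasserstein distance are all illustrative assumptions, not the authors' implementation (which learns the mode generators jointly).

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)
# Two shared modes; the datasets below differ only in how often each occurs.
modes = [rng.normal(0.0, 1.0, 1000), rng.normal(10.0, 1.0, 1000)]

def mixture(weights, n=2000):
    counts = rng.multinomial(n, weights)
    return np.concatenate([rng.choice(m, size=c) for m, c in zip(modes, counts)])

P = mixture([0.9, 0.1])  # mode 0 dominates
Q = mixture([0.1, 0.9])  # mode 1 dominates

def nw_term(X):
    # Best Wasserstein fit of the shared modes to X over the mixture proportion.
    return min(wasserstein_distance(X, mixture([pi, 1.0 - pi]))
               for pi in np.linspace(0.0, 1.0, 21))

print(f"plain Wasserstein: {wasserstein_distance(P, Q):.2f}")  # large
print(f"normalized (toy):  {nw_term(P) + nw_term(Q):.2f}")     # small
```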

Learning Generative Models across Incomparable Spaces [article]

Charlotte Bunne, David Alvarez-Melis, Andreas Krause, Stefanie Jegelka
2019 arXiv   pre-print
While this framework subsumes current generative models in identically reproducing distributions, its inherent flexibility allows application to tasks in manifold learning, relational learning and cross-domain  ...  Generative Adversarial Networks have shown remarkable success in learning a distribution that faithfully recovers a reference distribution in its entirety.  ...  We thank Suvrit Sra for a question that initiated this research, and MIT Supercloud and the Lincoln Laboratory Supercomputing Center for providing computational resources.  ... 
arXiv:1905.05461v2 fatcat:55bzhkdvm5dincssprlcwecpyq
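
The building block behind learning across incomparable spaces is the Gromov-Wasserstein discrepancy, which compares distributions through their intra-space distance matrices rather than through a shared metric. A minimal sketch using the POT library follows; the toy point clouds and uniform weights are assumptions for illustration.

```python
import numpy as np
import ot  # Python Optimal Transport: pip install pot

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))  # samples living in R^2
Y = rng.normal(size=(100, 5))  # samples living in R^5, an incomparable space

# GW only needs pairwise distances *within* each space, so the two spaces
# never have to be compared directly.
C1, C2 = ot.dist(X, X), ot.dist(Y, Y)
p, q = ot.unif(len(X)), ot.unif(len(Y))

gw_cost = ot.gromov.gromov_wasserstein2(C1, C2, p, q, loss_fun='square_loss')
print(f"Gromov-Wasserstein discrepancy: {gw_cost:.4f}")
```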

Robust Optimal Transport with Applications in Generative Modeling and Domain Adaptation [article]

Yogesh Balaji, Rama Chellappa, Soheil Feizi
2020 arXiv   pre-print
Optimal Transport (OT) distances such as Wasserstein have been used in several areas including GANs and domain adaptation.  ...  We demonstrate the effectiveness of our formulation in two applications: GANs and domain adaptation.  ...  Two recent applications of OT in machine learning include generative modeling and domain adaptation.  ... 
arXiv:2010.05862v1 fatcat:elrc6qm3ufd63ceuu74lr6igdu

Sequential Model Adaptation Using Domain Agnostic Internal Distributions [article]

Mohammad Rostami, Aram Galstyan
2021 arXiv   pre-print
We develop an algorithm for sequential adaptation of a classifier that is trained for a source domain to generalize in an unannotated target domain.  ...  We align the distributions of the source and the target domains in a discriminative embedding space via an intermediate internal distribution.  ...  Quite differently, we rely on an internal distribution learned by a base classifier model for the source domain to align the source and target domain distributions indirectly in an embedding space to adapt  ... 
arXiv:2007.00197v4 fatcat:qlv4zgkoira2jglhcxvkwwpn74
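
A rough sketch of the intermediate internal distribution idea from this entry: fit a parametric model (here a GMM) to source-domain embeddings, then push target embeddings toward it. The synthetic features, the GMM settings, and the simple moment-matching loss are placeholders, not the paper's algorithm, which trains the encoder against an optimal-transport-style alignment objective.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

source_feats = np.random.randn(500, 16)        # stand-in for encoder outputs
target_feats = np.random.randn(500, 16) + 0.5  # shifted target domain

# "Internal distribution": a GMM fit on source-domain embeddings.
gmm = GaussianMixture(n_components=4, random_state=0).fit(source_feats)
internal_samples, _ = gmm.sample(500)

# Placeholder alignment objective: match first moments of target embeddings
# and internal-distribution samples; minimizing this w.r.t. the encoder would
# pull the target domain toward the source's internal distribution.
alignment_loss = np.linalg.norm(
    target_feats.mean(axis=0) - internal_samples.mean(axis=0))
print(f"alignment loss: {alignment_loss:.3f}")
```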

Variational Resampling Based Assessment of Deep Neural Networks under Distribution Shift [article]

Xudong Sun, Alexej Gossmann, Yu Wang, Bernd Bischl
2019 arXiv   pre-print
domain adaptation and domain generalization approaches.  ...  Our method of creating artificial domain splits of a single dataset can also be used to establish novel model selection criteria and assessment tools in machine learning, as well as benchmark methods for  ...  Domain Adaptation adapts the source domain distribution to the target domain distribution to improve the performance of a target learner in transfer learning.  ... 
arXiv:1906.02972v6 fatcat:w4n4mb2zubatjocl6lkq5simeu
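
In the spirit of the artificial domain splits described above, a held-out cluster of a single dataset can serve as a shifted test domain. The KMeans split below is only a stand-in; the paper's variational resampling mechanism is more involved.

```python
import numpy as np
from sklearn.cluster import KMeans

X = np.random.randn(1000, 10)  # placeholder features from a single dataset
labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(X)

held_out = 0  # treat one cluster as the artificially shifted "domain"
train_idx = np.flatnonzero(labels != held_out)
test_idx = np.flatnonzero(labels == held_out)
print(f"train: {train_idx.size} samples, shifted test domain: {test_idx.size}")
```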

Improving robustness against common corruptions by covariate shift adaptation [article]

Steffen Schneider, Evgenia Rusak, Luisa Eck, Oliver Bringmann, Wieland Brendel, Matthias Bethge
2020 arXiv   pre-print
We argue that results with adapted statistics should be included whenever reporting scores in corruption benchmarks and other out-of-distribution generalization settings.  ...  The key insight is that in many scenarios, multiple unlabeled examples of the corruptions are available and can be used for unsupervised online adaptation.  ...  Acknowledgments and Disclosure of Funding  ... 
arXiv:2006.16971v2 fatcat:xwtrxkhidnc2dmg75cppkfxgp4
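
The adaptation described above amounts to re-estimating BatchNorm statistics on unlabeled corrupted inputs instead of keeping the clean-data estimates. A minimal PyTorch sketch follows; the model and data loader are placeholders, and full re-estimation is only one of the variants the paper evaluates.

```python
import torch
import torch.nn as nn

def adapt_bn_statistics(model, unlabeled_loader, device="cpu"):
    """Re-estimate BatchNorm running statistics from unlabeled corrupted data."""
    for m in model.modules():
        if isinstance(m, (nn.BatchNorm1d, nn.BatchNorm2d)):
            m.reset_running_stats()
    model.train()  # train mode: BN updates running stats on each forward pass
    with torch.no_grad():  # no labels and no gradients are needed
        for x in unlabeled_loader:
            model(x.to(device))
    model.eval()
    return model
```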

Sliced Wasserstein Distance for Learning Gaussian Mixture Models [article]

Soheil Kolouri, Gustavo K. Rohde, Heiko Hoffmann
2017 arXiv   pre-print
Gaussian mixture models (GMM) are powerful parametric tools with many applications in machine learning and computer vision.  ...  Specifically, we propose minimizing the sliced-Wasserstein distance between the mixture model and the data distribution with respect to the GMM parameters.  ...  estimation in various applications concerning machine learning, computer vision, and signal/image analysis.  ... 
arXiv:1711.05376v2 fatcat:rnl54ztwqjeo3fkkkx6znpgpby
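
The sliced-Wasserstein distance referenced above reduces a d-dimensional comparison to an average of closed-form 1D Wasserstein distances along random projections. A short numpy sketch of a Monte-Carlo estimate follows; fitting GMM parameters by minimizing this quantity, as the paper proposes, would wrap it in an optimizer.

```python
import numpy as np

def sliced_wasserstein(X, Y, n_projections=100, seed=0):
    """Monte-Carlo sliced Wasserstein-1 between equal-size sample sets."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_projections):
        theta = rng.normal(size=X.shape[1])
        theta /= np.linalg.norm(theta)  # random unit projection direction
        # 1D W1 between equal-size empirical samples = mean |sorted difference|
        total += np.mean(np.abs(np.sort(X @ theta) - np.sort(Y @ theta)))
    return total / n_projections

X = np.random.randn(500, 3)
Y = np.random.randn(500, 3) + 1.0
print(f"SWD estimate: {sliced_wasserstein(X, Y):.3f}")
```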

A Review on Generative Adversarial Networks: Algorithms, Theory, and Applications [article]

Jie Gui, Zhenan Sun, Yonggang Wen, Dacheng Tao, Jieping Ye
2020 arXiv   pre-print
Furthermore, GANs have been combined with other machine learning algorithms for specific applications, such as semi-supervised learning, transfer learning, and reinforcement learning.  ...  In this paper, we attempt to provide a review of various GAN methods from the perspectives of algorithms, theory, and applications.  ...  The authors would also like to thank group members of the UMich Yelab and Foreseer research group for helpful discussions.  ... 
arXiv:2001.06937v1 fatcat:4iqb2vnhezgjnphfv3taej7vbu

Unsupervised Domain Adaptation for Retinal Vessel Segmentation with Adversarial Learning and Transfer Normalization [article]

Wei Feng, Lie Ju, Lin Wang, Kaimin Song, Xin Wang, Xin Zhao, Qingyi Tao, Zongyuan Ge
2021 arXiv   pre-print
In this work, we explore unsupervised domain adaptation in retinal vessel segmentation by using entropy-based adversarial learning and a transfer normalization layer to train a segmentation network, which  ...  It normalizes the features of each domain separately to compensate for the domain distribution gap.  ...  To address these challenges mentioned above, this paper proposes a new unsupervised domain adaptation framework with adversarial learning and transfer normalization for cross-domain retinal vessel segmentation  ... 
arXiv:2108.01821v1 fatcat:h4ju5huljjd6pfebi3aylfcnnq
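
The "normalizes the features of each domain separately" idea can be sketched as a normalization layer that keeps one set of BatchNorm statistics per domain and routes inputs accordingly. This is a generic domain-specific BN layer for illustration, not the paper's exact Transfer Normalization.

```python
import torch
import torch.nn as nn

class DomainSpecificBN2d(nn.Module):
    """One BatchNorm per domain, so each domain keeps its own statistics."""
    def __init__(self, num_features, num_domains=2):
        super().__init__()
        self.bns = nn.ModuleList(
            nn.BatchNorm2d(num_features) for _ in range(num_domains))

    def forward(self, x, domain: int):
        # Separate running mean/variance per domain compensates for the
        # distribution gap between source and target features.
        return self.bns[domain](x)

layer = DomainSpecificBN2d(16)
src = torch.randn(8, 16, 32, 32)
tgt = torch.randn(8, 16, 32, 32) * 2 + 1
out_src, out_tgt = layer(src, domain=0), layer(tgt, domain=1)
```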

Distributionally Robust Learning with Stable Adversarial Training [article]

Jiashuo Liu, Zheyan Shen, Peng Cui, Linjun Zhou, Kun Kuang, Bo Li
2021 arXiv   pre-print
Machine learning algorithms with empirical risk minimization are vulnerable under distributional shifts due to the greedy adoption of all the correlations found in training data.  ...  In this paper, we propose a novel Stable Adversarial Learning (SAL) algorithm that leverages heterogeneous data sources to construct a more practical uncertainty set and conduct differentiated robustness  ...  Different from domain adaptation, domain generalization methods propose to learn a domain-invariant classifier with multiple training domains. Muandet et al.  ... 
arXiv:2106.15791v1 fatcat:sjj3tth2kvbmrdjlj3jgnupotm
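
One simple way to picture "differentiated robustness" across heterogeneous sources is a worst-case loss over source-specific batches, i.e. plain group-DRO-style weighting. The sketch below is that simpler, related technique, named as such; SAL's uncertainty-set construction is more refined.

```python
import torch
import torch.nn.functional as F

def worst_source_loss(model, batches_by_source):
    """batches_by_source: list of (x, y) pairs, one per heterogeneous source."""
    losses = torch.stack(
        [F.cross_entropy(model(x), y) for x, y in batches_by_source])
    # Optimizing the worst-performing source guards against distributional
    # shifts toward any one of the training environments.
    return losses.max()
```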

A Survey of Unsupervised Deep Domain Adaptation [article]

Garrett Wilson, Diane J. Cook
2020 arXiv   pre-print
We follow this with a look at application areas and open research directions.  ...  Many single-source and typically homogeneous unsupervised deep domain adaptation approaches have thus been developed, combining the powerful, hierarchical representations from deep learning with domain  ...  However, none of these normalization techniques were developed with domain adaptation in mind. In the case of domain adaptation, the normalization statistics for each domain likely differ.  ... 
arXiv:1812.02849v3 fatcat:paefg5cywbe3tjsp6dffnwkvxy

Sliced Wasserstein Distance for Learning Gaussian Mixture Models

Soheil Kolouri, Gustavo K. Rohde, Heiko Hoffmann
2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition
Gaussian mixture models (GMM) are powerful parametric tools with many applications in machine learning and computer vision.  ...  Specifically, we propose minimizing the sliced-Wasserstein distance between the mixture model and the data distribution with respect to the GMM parameters.  ...  estimation in various applications concerning machine learning, computer vision, and signal/image analysis.  ... 
doi:10.1109/cvpr.2018.00361 dblp:conf/cvpr/KolouriRH18 fatcat:w7ftttndnzbtjk47gxbyvzfotu

A Generative Framework for Zero-Shot Learning with Adversarial Domain Adaptation [article]

Varun Khare, Divyat Mahajan, Homanga Bharadhwaj, Vinay Verma, Piyush Rai
2020 arXiv   pre-print
Training this model with adversarial domain adaptation further provides robustness against the distribution mismatch between the data from seen and unseen classes.  ...  Our framework addresses the problem of domain shift between the seen and unseen class distributions in zero-shot learning and minimizes the shift by developing a generative model trained via adversarial  ...  Acknowledgements: VKV acknowledges support from Visvesvaraya PhD Fellowship and PR acknowledges support from Visvesvaraya Young Faculty Fellowship.  ... 
arXiv:1906.03038v3 fatcat:qbwxeplhijfjhh6srrnflpzhaa

Modeling Personalization in Continuous Space for Response Generation via Augmented Wasserstein Autoencoders

Zhangming Chan, Juntao Li, Xiaopeng Yang, Xiuying Chen, Wenpeng Hu, Dongyan Zhao, Rui Yan
2019 Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)  
Variational autoencoders (VAEs) and Wasserstein autoencoders (WAEs) have achieved noticeable progress in open-domain response generation.  ...  In this work, we improve the WAE for response generation. In addition to the utterance-level information, we also model user-level information in a latent continuous space.  ...  Acknowledgments We would like to thank the reviewers for their constructive comments.  ... 
doi:10.18653/v1/d19-1201 dblp:conf/emnlp/ChanLYCHZY19 fatcat:g7u2hw62qfeulfxdb3sg3qofr4

Better Safe Than Sorry: Preventing Delusive Adversaries with Adversarial Training [article]

Lue Tao, Lei Feng, Jinfeng Yi, Sheng-Jun Huang, Songcan Chen
2021 arXiv   pre-print
Both theoretical and empirical results vote for adversarial training when confronted with delusive adversaries.  ...  in a natural setting.  ...  Acknowledgments and Disclosure of Funding This work was supported by the National Natural Science Foundation of China (Grant No. 62076124, 62076128) and the National Key R&D Program of China (2020AAA0107000  ... 
arXiv:2102.04716v4 fatcat:dzm4rfswabbstckbmgybp6sr6q
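
For reference, a generic single-step adversarial training update in PyTorch, the defence this entry argues for; the FGSM-style inner step, epsilon, and placeholder model/optimizer are assumptions, and multi-step PGD is the stronger variant.

```python
import torch
import torch.nn.functional as F

def adversarial_training_step(model, x, y, optimizer, eps=8 / 255):
    # Inner maximization: one gradient-sign step toward a worst-case input.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    grad = torch.autograd.grad(loss, x_adv)[0]
    x_adv = (x + eps * grad.sign()).clamp(0, 1).detach()

    # Outer minimization: train on the perturbed batch.
    optimizer.zero_grad()
    F.cross_entropy(model(x_adv), y).backward()
    optimizer.step()
```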
Showing results 1–15 of 708