11,744 Hits in 7.2 sec

Conditional entropy minimization principle for learning domain invariant representation features [article]

Thuan Nguyen, Boyang Lyu, Prakash Ishwar, Matthias Scheutz, Shuchin Aeron
2022 arXiv   pre-print
Invariance-principle-based methods such as Invariant Risk Minimization (IRM) have recently emerged as promising approaches for Domain Generalization (DG).  ...  To address this, we propose a framework based on the conditional entropy minimization (CEM) principle to filter out the spurious invariant features, leading to a new algorithm with better generalization  ...  Over the past decade, many methods have emerged for better representation learning; these can be categorized into two different learning principles: domain-invariant representation learning and feature disentanglement  ... 
arXiv:2201.10460v4 fatcat:lo6ubbccmbcofmh7ggimq323l4
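
The CEM objective itself is not reproduced in the snippet above. For reference, a minimal PyTorch sketch of the conditional-entropy term such methods build on, assuming a softmax classifier head over learned features (all names illustrative, not the paper's code):

    import torch
    import torch.nn.functional as F

    def conditional_entropy(logits: torch.Tensor) -> torch.Tensor:
        """Batch estimate of H(Y|Z) from classifier logits on features Z.

        A low value means the features leave little residual label
        uncertainty; CEM-style filtering prefers invariant features
        that keep this quantity small."""
        p = F.softmax(logits, dim=1)                    # p(y|z) per sample
        h = -(p * F.log_softmax(logits, dim=1)).sum(1)  # per-sample entropy
        return h.mean()                                 # average over the batch

    # usage (schematic): loss = task_loss + lam * conditional_entropy(classifier(encoder(x)))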

Improving Unsupervised Domain Adaptation with Variational Information Bottleneck [article]

Yuxuan Song, Lantao Yu, Zhangjie Cao, Zhiming Zhou, Jian Shen, Shuo Shao, Weinan Zhang, Yong Yu
2019 arXiv   pre-print
In this paper, from the perspective of information theory, we show that representation matching is actually an insufficient constraint on the feature space for obtaining a model with good generalization  ...  Domain adaptation aims to leverage the supervision signal of the source domain to obtain an accurate model for the target domain, where labels are not available.  ...  In this paper, inspired by the information bottleneck principle, we propose a simple yet effective regularization technique for domain adaptation methods by combining conditional entropy minimization and  ... 
arXiv:1911.09310v1 fatcat:tj4yxoebebab3jufajtm7w4wbe
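
The abstract mentions combining conditional entropy minimization with an information-bottleneck regularizer. A minimal sketch of the standard variational IB term, assuming a Gaussian encoder that outputs (mu, logvar); names are illustrative, not the paper's code:

    import torch

    def vib_kl(mu: torch.Tensor, logvar: torch.Tensor) -> torch.Tensor:
        """KL( N(mu, sigma^2) || N(0, I) ): a variational upper bound on
        I(Z; X). Penalizing it compresses the representation Z."""
        return 0.5 * (mu.pow(2) + logvar.exp() - logvar - 1.0).sum(1).mean()

    def reparameterize(mu: torch.Tensor, logvar: torch.Tensor) -> torch.Tensor:
        """Sample z = mu + eps * sigma with the reparameterization trick."""
        return mu + torch.randn_like(mu) * (0.5 * logvar).exp()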

Learning Invariant Representations and Risks for Semi-supervised Domain Adaptation [article]

Bo Li, Yezhen Wang, Shanghang Zhang, Dongsheng Li, Trevor Darrell, Kurt Keutzer, Han Zhao
2021 arXiv   pre-print
The bound suggests a principled way to obtain target generalization, i.e. by aligning both the marginal and conditional distributions across domains in feature space.  ...  Motivated by this, we then introduce the LIRR algorithm for jointly Learning Invariant Representations and Risks.  ...  Our observation naturally leads to a principled way of learning invariant representations (to minimize discrepancy between marginal feature distributions) and risks (to minimize discrepancy between conditional  ... 
arXiv:2010.04647v3 fatcat:rzmhy3ry6vaj7o7zz6sapvs44i
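
The snippet names the two quantities LIRR aligns but not their form. A schematic under illustrative assumptions (a simple biased RBF-kernel MMD as a stand-in distance between marginal feature distributions; the risk-alignment term is only indicated in comments):

    import torch

    def rbf_mmd2(zs: torch.Tensor, zt: torch.Tensor, sigma: float = 1.0) -> torch.Tensor:
        """Squared MMD with an RBF kernel (simple biased estimator):
        a stand-in distance between marginal feature distributions."""
        def k(a, b):
            return torch.exp(-torch.cdist(a, b).pow(2) / (2 * sigma ** 2))
        return k(zs, zs).mean() + k(zt, zt).mean() - 2 * k(zs, zt).mean()

    # schematic objective in the spirit of the abstract (not the paper's code):
    #   loss = risk_source                  # supervised risk on source
    #        + lam1 * rbf_mmd2(z_s, z_t)    # align marginal feature distributions
    #        + lam2 * risk_gap              # align risks across domains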

A Survey on Deep Domain Adaptation for LiDAR Perception [article]

Larissa T. Triess, Mariella Dreissig, Christoph B. Rist, J. Marius Zöllner
2021 arXiv   pre-print
This means the perception systems are exposed to drastic domain shifts, such as changes in weather conditions, time-dependent aspects, or geographic regions.  ...  A number of survey papers already exist for domain adaptation on camera images; however, a survey for LiDAR perception is absent.  ...  within the project "KI Delta Learning" (funding code 19A19013A).  ... 
arXiv:2106.02377v2 fatcat:jfuqpz4cx5bj7jda6brq666oam

Conditional Adversarial Networks for Multi-Domain Text Classification [article]

Yuan Wu, Diana Inkpen, Ahmed El-Roby
2021 arXiv   pre-print
The proposed CAN introduces a conditional domain discriminator to model the domain variance in both shared feature representations and class-aware information simultaneously, and adopts entropy conditioning  ...  shared features, for multi-domain text classification (MDTC).  ...  The entropy conditioning empowers the entropy minimization principle (Grandvalet and Bengio, 2005) and controls the certainty of the predictions, enabling CAN to generalize to unseen  ... 
arXiv:2102.10176v1 fatcat:cihqrvoicfhflkiu5rl6ezb7oe
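
A minimal sketch of the entropy-conditioning idea the abstract references (down-weighting uncertain examples in the adversarial loss); the weighting form 1 + e^{-H} follows the CDAN-style conditioning such methods cite, and all names are illustrative:

    import torch
    import torch.nn.functional as F

    def certainty_weights(logits: torch.Tensor) -> torch.Tensor:
        """w = 1 + exp(-H(p)): confident predictions (low entropy) get
        larger weight in the conditional adversarial loss."""
        p = F.softmax(logits, dim=1)
        h = -(p * torch.log(p.clamp_min(1e-8))).sum(1)  # per-sample entropy
        return 1.0 + torch.exp(-h)

    # usage (schematic): d_loss = (certainty_weights(cls_logits).detach() * bce_per_sample).mean()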

Gated Information Bottleneck for Generalization in Sequential Environments [article]

Francesco Alesiani, Shujian Yu, Xi Yu
2021 arXiv   pre-print
...a more practical scenario where invariant risk minimization (IRM) fails.  ...  Meanwhile, we also establish the connection between IB theory and invariant causal representation learning, and observe that GIB demonstrates appealing performance when different environments arrive sequentially  ...  GIB is easy to optimize and encourages the learning of invariant representations over different environments.  ... 
arXiv:2110.06057v1 fatcat:2d3hfyh4ang2bleiaz3s3ar2aa
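
For reference, the information-bottleneck objective that GIB gates has the standard form: minimize I(X; Z) - beta * I(Z; Y) over the encoder p(z|x), so that Z compresses X while staying predictive of Y. The gating mechanism itself is not described in the snippet and is not sketched here.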

Self-adaptive Re-weighted Adversarial Domain Adaptation [article]

Shanshan Wang, Lei Zhang
2020 arXiv   pre-print
In order to promote positive transfer and combat negative transfer, we reduce the weight of the adversarial loss for well-aligned features while increasing the adversarial force for those poorly aligned, as measured by the conditional entropy.  ...  To make the best use of the conditional distribution, the entropy minimization principle is adopted to enhance the discrimination of learned models for target data, following [Long et al., 2016].  ... 
arXiv:2006.00223v2 fatcat:nsreyyy32jakljejrerlatfyya
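
One plausible reading of the re-weighting described above, sketched in PyTorch (illustrative, not the paper's code): scale the per-sample adversarial loss by normalized prediction entropy, so that poorly aligned samples receive a larger adversarial force.

    import math

    import torch
    import torch.nn.functional as F

    def reweighted_adv_loss(adv_loss_per_sample: torch.Tensor,
                            logits: torch.Tensor) -> torch.Tensor:
        """Weight each sample's adversarial loss by its normalized
        conditional entropy: high entropy = poorly aligned = more force."""
        p = F.softmax(logits, dim=1)
        h = -(p * torch.log(p.clamp_min(1e-8))).sum(1)
        h = h / math.log(logits.size(1))  # normalize entropy to [0, 1]
        return (h.detach() * adv_loss_per_sample).mean()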

Invariant Information Bottleneck for Domain Generalization [article]

Bo Li, Yifei Shen, Yezhen Wang, Wenzhen Zhu, Colorado J. Reed, Jun Zhang, Dongsheng Li, Kurt Keutzer, Han Zhao
2022 arXiv   pre-print
Invariant risk minimization (IRM) has recently emerged as a promising alternative for domain generalization.  ...  IIB aims at minimizing invariant risks for nonlinear classifiers and simultaneously mitigating the impact of pseudo-invariant features and geometric skews.  ...  Invariant Risk Minimization The above approaches enforce the invariance of the learned representations.  ... 
arXiv:2106.06333v6 fatcat:aha2tsr7ezhqlawex7zxgi4abi
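
IIB's information-bottleneck extension is not reproduced in the snippet; for context, a sketch of the IRMv1 penalty it builds on (Arjovsky et al., 2019), which is standard rather than specific to this paper:

    import torch
    import torch.nn.functional as F

    def irmv1_penalty(logits: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        """Squared gradient of the environment risk w.r.t. a fixed scalar
        'dummy classifier' w = 1. A small value indicates the same
        classifier is simultaneously optimal in this environment."""
        w = torch.ones((), requires_grad=True)
        loss = F.cross_entropy(logits * w, y)
        (grad,) = torch.autograd.grad(loss, [w], create_graph=True)
        return grad.pow(2)

    # schematic: total = sum over environments e of risk_e + lam * irmv1_penalty(logits_e, y_e)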

Joint covariate-alignment and concept-alignment: a framework for domain generalization [article]

Thuan Nguyen, Boyang Lyu, Prakash Ishwar, Matthias Scheutz, Shuchin Aeron
2022 arXiv   pre-print
namely, Maximum Mean Discrepancy (MMD) and covariance alignment (CORAL), and use an Invariant Risk Minimization (IRM)-based approach for concept alignment.  ...  In particular, our framework proposes to jointly minimize both the covariate shift and the concept shift between the seen domains for better performance on the unseen domain.  ...  In [23], Nguyen et al. developed a conditional entropy minimization principle to remove spurious invariant features from the invariant features learned by the IRM algorithm [12].  ... 
arXiv:2208.00898v1 fatcat:momolkc5fjfwtenxg3vq24aos4
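
Of the two covariate-alignment penalties named above, an MMD sketch appears earlier in this list; CORAL can be sketched as follows (standard Deep-CORAL form, names illustrative):

    import torch

    def coral(zs: torch.Tensor, zt: torch.Tensor) -> torch.Tensor:
        """Squared Frobenius distance between source and target feature
        covariances, scaled by 1/(4 d^2) as in Deep CORAL."""
        def cov(z):
            z = z - z.mean(0, keepdim=True)
            return (z.t() @ z) / (z.size(0) - 1)
        d = zs.size(1)
        return (cov(zs) - cov(zt)).pow(2).sum() / (4 * d * d)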

Trajectory Similarity Learning with Auxiliary Supervision and Optimal Matching

Hanyuan Zhang, Xinyu Zhang, Qize Jiang, Baihua Zheng, Zhenbang Sun, Weiwei Sun, Changhu Wang
2020 Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence  
In this paper, we propose a novel trajectory representation learning framework, Traj2SimVec, that performs scalable and robust trajectory similarity computation.  ...  Learning-based methods can map trajectories into a uniform embedding space and calculate the similarity of two trajectories from their embeddings in constant time.  ...  To make the best use of the conditional distribution, the entropy minimization principle is adopted to enhance the discrimination of learned models for target data, following [Long et al., 2016].  ... 
doi:10.24963/ijcai.2020/440 dblp:conf/ijcai/WangZ20 fatcat:w3ulpemfvjeshe4idiidofr7vy
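
The constant-time claim follows from precomputing embeddings; a toy sketch (the paper's actual similarity function is not given in the snippet, so the kernel here is an illustrative choice):

    import torch

    def traj_similarity(e1: torch.Tensor, e2: torch.Tensor) -> torch.Tensor:
        """Constant-time similarity between two precomputed trajectory
        embeddings (illustrative kernel, not the paper's)."""
        return torch.exp(-torch.norm(e1 - e2, dim=-1))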

Action Recognition with Domain Invariant Features of Skeleton Image [article]

Han Chen, Yifan Jiang, Hanseok Ko
2021 arXiv   pre-print
We introduce a two-level domain adversarial learning scheme to align the features of skeleton images from different view angles or subjects, thus further improving generalization.  ...  Recent Convolutional Neural Network (CNN)-based methods have shown commendable performance in learning spatio-temporal representations for skeleton sequences, using skeleton images as input to a  ...  Our network learns robust features for action recognition through a two-level domain adversarial learning strategy and entropy minimization.  ... 
arXiv:2111.11250v1 fatcat:ssahsfrpyrd77nee6jb4bamdhe
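
Domain-adversarial alignment of this kind is typically implemented with a gradient reversal layer; a standard sketch (one discriminator per level, e.g. view angle and subject, would give the two-level scheme described above; this is not the paper's code):

    import torch

    class GradReverse(torch.autograd.Function):
        """Identity in the forward pass; negates (and scales) the
        gradient in the backward pass."""
        @staticmethod
        def forward(ctx, x, lam):
            ctx.lam = lam
            return x.view_as(x)

        @staticmethod
        def backward(ctx, grad_output):
            return -ctx.lam * grad_output, None

    def grad_reverse(x: torch.Tensor, lam: float = 1.0) -> torch.Tensor:
        return GradReverse.apply(x, lam)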

Invariant Information Bottleneck for Domain Generalization

Bo Li, Yifei Shen, Yezhen Wang, Wenzhen Zhu, Colorado Reed, Dongsheng Li, Kurt Keutzer, Han Zhao
2022 Proceedings of the AAAI Conference on Artificial Intelligence
Invariant risk minimization (IRM) has recently emerged as a promising alternative for domain generalization.  ...  IIB aims at minimizing invariant risks for nonlinear classifiers and simultaneously mitigating the impact of pseudo-invariant features and geometric skews.  ...  Invariant Risk Minimization The above approaches enforce the invariance of the learned representations.  ... 
doi:10.1609/aaai.v36i7.20703 fatcat:mbsmsamovvbwpbz5ubxkj72y4u

SSDAN: Multi-Source Semi-Supervised Domain Adaptation Network for Remote Sensing Scene Classification

Tariq Lasloum, Haikel Alhichri, Yakoub Bazi, Naif Alajlan
2021 Remote Sensing  
At the same time, the model is trained to learn domain-invariant features using another loss function based on the entropy computed over the unlabeled target samples.  ...  This entropy loss, called the minimax loss, needs to be maximized with respect to the classification module to learn features that are domain-invariant (hence removing the data shift), and at the same time  ...  In [10], a novel way of learning domain-invariant feature representations, based on Wasserstein Distance Guided Representation Learning (WDGRL), is proposed.  ... 
doi:10.3390/rs13193861 fatcat:v5p5ii73b5dwvnpd64nzov3tpu
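
A minimal sketch of the minimax entropy term described above, assuming a softmax classifier over unlabeled target samples (illustrative; the adversarial direction is usually realized with a gradient reversal layer, as sketched earlier in this list):

    import torch
    import torch.nn.functional as F

    def target_entropy(logits: torch.Tensor) -> torch.Tensor:
        """Mean prediction entropy on unlabeled target samples. In the
        minimax scheme it is maximized w.r.t. the classifier and
        minimized w.r.t. the feature extractor."""
        p = F.softmax(logits, dim=1)
        return -(p * torch.log(p.clamp_min(1e-8))).sum(1).mean()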

Domain Adaptation with Invariant Representation Learning: What Transformations to Learn?

Petar Stojanov, Zijian Li, Mingming Gong, Ruichu Cai, Jaime G. Carbonell, Kun Zhang
2021 Neural Information Processing Systems  
We provide reasoning why, when the supports of the source and target data are far from overlapping, any map of X that is fixed across domains may not be suitable for domain adaptation via invariant features.  ...  This has been shown to be insufficient for generating optimal representations for classification, and to find conditionally invariant representations, usually strong assumptions are needed.  ...  Acknowledgements We are very grateful to the anonymous reviewers for their help in improving the paper. This work  ... 
dblp:conf/nips/StojanovLGCCZ21 fatcat:nxaxf2yotnc4fjy4ruev3mzs4a

Conditional Adversarial Domain Adaptation [article]

Mingsheng Long, Zhangjie Cao, Jianmin Wang, Michael I. Jordan
2018 arXiv   pre-print
Adversarial learning has been embedded into deep networks to learn disentangled and transferable representations for domain adaptation.  ...  Conditional domain adversarial networks (CDANs) are designed with two novel conditioning strategies: multilinear conditioning that captures the cross-covariance between feature representations and classifier  ...  Acknowledgments We thank Yuchen Zhang at Tsinghua University for insightful discussions.  ... 
arXiv:1705.10667v4 fatcat:zoskgbovczajvdkwfhafijkrhi
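
The multilinear conditioning named above feeds the discriminator the flattened outer product of features and classifier predictions; a minimal sketch following the CDAN formulation:

    import torch

    def multilinear_map(f: torch.Tensor, g: torch.Tensor) -> torch.Tensor:
        """T(f, g) = flatten(g ⊗ f) for features f of shape (B, d) and
        predictions g of shape (B, C): the conditional discriminator's
        input, capturing the cross-covariance between the two."""
        return torch.bmm(g.unsqueeze(2), f.unsqueeze(1)).flatten(1)

    # usage (schematic): d_out = discriminator(multilinear_map(features, softmax_preds))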
Showing results 1 — 15 out of 11,744 results