59,722 Hits in 5.3 sec

Reducing Domain Gap by Reducing Style Bias [article]

Hyeonseob Nam, HyunJae Lee, Jongchan Park, Wonjun Yoon, Donggeun Yoo
2021 arXiv   pre-print
Inspired by this, we propose to reduce the intrinsic style bias of CNNs to close the gap between domains.  ...  Extensive experiments show that our method effectively reduces the style bias and makes the model more robust under domain shift.  ...  Our method is orthogonal to the majority of existing domain adaptation and generalization techniques that utilize  ...  [Figure 1: Our Style-Agnostic Network (SagNet) reduces style bias to reduce the domain gap.]
arXiv:1910.11645v4 fatcat:ugujqj3jxrffjicyz5oouv5gna

Invariant Content Synergistic Learning for Domain Generalization of Medical Image Segmentation [article]

Yuxin Kang, Hansheng Li, Xuan Zhao, Dongqing Hu, Feihong Liu, Lei Cui, Jun Feng, Lin Yang
2022 arXiv   pre-print
First, ICSL mixes the style of training instances to perturb the training distribution. That is to say, more diverse domains or styles would be made available for training DCNNs.  ...  In this paper, we propose a method, named Invariant Content Synergistic Learning (ICSL), to improve the generalization ability of DCNNs on unseen datasets by controlling the inductive bias.  ...  It is reasonable to assume a correlation between DCNNs' inductive bias and their capability to handle distribution gaps: reducing style bias may reduce domain discrepancy.  ...
arXiv:2205.02845v1 fatcat:l4djjwctnnhvld2zyyzzwa4xdm
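The style-mixing step described in the ICSL snippet above (perturbing per-channel style statistics of training instances, in the spirit of MixStyle-type augmentation) can be sketched in plain Python. This is an illustrative sketch only: `channel_stats` and `mix_style` are hypothetical names, features are reduced to lists of flat channels, and real implementations operate on framework tensors inside the network.

```python
def channel_stats(feat):
    """Per-channel mean and std of a feature map given as
    a list of channels, each a flat list of activations."""
    stats = []
    for ch in feat:
        mu = sum(ch) / len(ch)
        var = sum((x - mu) ** 2 for x in ch) / len(ch)
        stats.append((mu, var ** 0.5 + 1e-6))  # epsilon for stability
    return stats

def mix_style(feat_a, feat_b, alpha=0.5):
    """Normalize feat_a's channels, then re-style them with statistics
    interpolated between instance a and instance b (the perturbation)."""
    stats_a, stats_b = channel_stats(feat_a), channel_stats(feat_b)
    mixed = []
    for ch, (mu_a, sd_a), (mu_b, sd_b) in zip(feat_a, stats_a, stats_b):
        mu = alpha * mu_a + (1 - alpha) * mu_b
        sd = alpha * sd_a + (1 - alpha) * sd_b
        mixed.append([(x - mu_a) / sd_a * sd + mu for x in ch])
    return mixed
```

Applying `mix_style` to pairs of instances in a batch yields features whose "style" (channel statistics) lies between two source domains, which is the sense in which more diverse styles become available for training.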

ADeLA: Automatic Dense Labeling with Attention for Viewpoint Adaptation in Semantic Segmentation [article]

Yanchao Yang, Hanxiang Ren, He Wang, Bokui Shen, Qingnan Fan, Youyi Zheng, C. Karen Liu, Leonidas Guibas
2021 arXiv   pre-print
Despite the lack of supervision, the view transformation network can still generalize to semantic images thanks to the inductive bias introduced by the attention mechanism.  ...  We describe an unsupervised domain adaptation method for image content shift caused by viewpoint changes for a semantic segmentation task.  ...  strengthened by the proposed functional transportation strategy, in reducing the domain gaps.  ... 
arXiv:2107.14285v1 fatcat:2eqwcsmrazgt3myczbbtlglksm

SBSGAN: Suppression of Inter-Domain Background Shift for Person Re-Identification [article]

Yan Huang, Qiang Wu, JingSong Xu, Yi Zhong
2019 arXiv   pre-print
Cross-domain person re-identification (re-ID) is challenging due to the bias between training and testing domains.  ...  Unlike simply removing backgrounds using binary masks, SBSGAN allows the generator to decide whether pixels should be preserved or suppressed to reduce segmentation errors caused by noisy foreground masks  ...  Acknowledgment This research is supported by an Australian Government Research Training Program Scholarship.  ... 
arXiv:1908.09086v1 fatcat:mdr3s6fjdjb6petzzdozf3uu6u

SBSGAN: Suppression of Inter-Domain Background Shift for Person Re-Identification

Yan Huang, Qiang Wu, Jingsong Xu, Yi Zhong
2019 IEEE/CVF International Conference on Computer Vision (ICCV)
Cross-domain person re-identification (re-ID) is challenging due to the bias between training and testing domains.  ...  Unlike simply removing backgrounds using binary masks, SBSGAN allows the generator to decide whether pixels should be preserved or suppressed to reduce segmentation errors caused by noisy foreground masks  ...  Acknowledgment This research is supported by an Australian Government Research Training Program Scholarship.  ... 
doi:10.1109/iccv.2019.00962 dblp:conf/iccv/HuangWXZ19 fatcat:om5rg7gomndozavmfdtxmaqjxq

Style Variable and Irrelevant Learning for Generalizable Person Re-identification [article]

Haobo Chen, Chuyang Zhao, Kai Tu, Junru Chen, Yadong Li, Boxun Li
2022 arXiv   pre-print
Specifically, we design a Style Jitter Module (SJM) in SVIL. The SJM module can enrich the style diversity of the specific source domain and reduce the style differences of various source domains.  ...  In this paper, we first verify through an experiment that style factors are a vital part of domain bias.  ...  METHOD The purpose of our method is to narrow the domain gap caused by style variations in different domains.  ...
arXiv:2209.05235v1 fatcat:ji4wcdjgvvh7dapl44w7tdggjq

Imitating Targets from all sides: An Unsupervised Transfer Learning method for Person Re-identification [article]

Jiajie Tian, Zhu Teng, Rui Li, Yan Li, Baopeng Zhang, Jianping Fan
2021 arXiv   pre-print
learn a discriminative representation across domains; 3) exploiting the underlying commonality across different domains from the class-style space to improve the generalization ability of re-ID models  ...  In terms of this issue, given a labelled source training set and an unlabelled target training set, we propose an unsupervised transfer learning method characterized by 1) bridging inter-dataset bias and  ...  We consider these differences as the domain gap or inter-domain bias.  ... 
arXiv:1904.05020v2 fatcat:d7tmspdg4fbjngwdrhsaq5gy7a

Style-transfer GANs for bridging the domain gap in synthetic pose estimator training [article]

Pavel Rojtberg, Thomas Pöllabauer, Arjan Kuijper
2020 arXiv   pre-print
The obtained models are then used either during training or inference to bridge the domain gap.  ...  However, producing such data is a non-trivial task as current CNN architectures are sensitive to the domain gap between real and synthetic data.  ...  Likely because we are able to reduce the texture bias [5] of the model.  ...
arXiv:2004.13681v2 fatcat:4mfmp22l35fwtbhuhpudvm5bsu

Identity Preserving Generative Adversarial Network for Cross-Domain Person Re-identification

Jialun Liu, Wenhui Li, Hongbin Pei, Ying Wang, Feng Qu, You Qu, Yuhao Chen
2019 IEEE Access  
The previous methods directly reduce the bias by image-to-image style translation between the source and the target domain in an unsupervised manner.  ...  However, these methods only consider the rough bias between the source domain and the target domain but neglect the detailed bias between the source domain and the target camera domains (divided by camera  ...  Firstly, in order to reduce the gaps between two different domains, we translate the styles of images from the source domain to the target camera domains.  ... 
doi:10.1109/access.2019.2933910 fatcat:llqipk4og5fbrlbbsqpetzgiiq

From Paraphrasing to Semantic Parsing: Unsupervised Semantic Parsing via Synchronous Semantic Decoding [article]

Shan Wu, Bo Chen, Chunlei Xin, Xianpei Han, Le Sun, Weipeng Zhang, Jiansong Chen, Fan Yang, Xunliang Cai
2021 arXiv   pre-print
In this paper, we propose an unsupervised semantic parsing method - Synchronous Semantic Decoding (SSD), which can simultaneously resolve the semantic gap and the structure gap by jointly leveraging paraphrasing  ...  Semantic parsing is challenging due to the structure gap and the semantic gap between utterances and logical forms.  ...  Moreover, this work is supported by the National Key  ... 
arXiv:2106.06228v1 fatcat:qjz7uhi4ivctpnzcdazge5jsm4

Unsupervised Domain Adaptation for Object Detection via Cross-Domain Semi-Supervised Learning [article]

Fuxun Yu, Di Wang, Yinpeng Chen, Nikolaos Karianakis, Tong Shen, Pei Yu, Dimitrios Lymberopoulos, Sidi Lu, Weisong Shi, Xiang Chen
2021 arXiv   pre-print
In this work, we show that such adversarial-based methods can only reduce the domain style gap, but cannot address the domain content distribution gap that is shown to be important for object detectors  ...  To overcome this limitation, we propose the Cross-Domain Semi-Supervised Learning (CDSSL) framework by leveraging high-quality pseudo labels to learn better representations from the target domain directly  ...  To reduce non-identical data distribution gap, an intermediate domain is generated by transforming source domain image style to match with target domain.  ... 
arXiv:1911.07158v5 fatcat:avo3zydua5dalo7e6ggnik3wuy
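The "high-quality pseudo labels" that CDSSL-style semi-supervised frameworks lean on are typically obtained by filtering target-domain predictions on confidence. A minimal sketch of such a filter follows; `select_pseudo_labels` and the threshold value are assumptions for illustration, not the paper's exact selection criterion.

```python
def select_pseudo_labels(predictions, threshold=0.9):
    """Keep only target-domain predictions whose top-class probability
    clears the threshold; return (sample_index, label) pairs that can
    then be treated as labeled data in the next training round."""
    selected = []
    for i, probs in enumerate(predictions):
        conf = max(probs)
        if conf >= threshold:
            selected.append((i, probs.index(conf)))
    return selected
```

The threshold trades label quantity against label quality: lowering it admits more target samples but also more noisy pseudo labels.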

Permuted AdaIN: Reducing the Bias Towards Global Statistics in Image Classification [article]

Oren Nuriel, Sagie Benaim, Lior Wolf
2021 arXiv   pre-print
In the setting of domain adaptation and domain generalization, our method achieves state of the art results on the transfer learning task from GTAV to Cityscapes and on the PACS benchmark.  ...  By choosing the random permutation with probability p and the identity permutation otherwise, one can control the effect's strength.  ...  Zhang and Zhu [47] show that adversarial training reduces texture bias. Carlucci et al., [4] propose to reduce texture bias, by training the network to solve jigsaw puzzles.  ... 
arXiv:2010.05785v3 fatcat:3j5biarwdvc7fn34kyotfiokwi
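The permutation trick stated in the pAdaIN snippet above (apply a random permutation of batch statistics with probability p, the identity permutation otherwise) can be sketched as follows. This is an illustrative pure-Python rendering in which each batch element is reduced to one flat channel; it is not the authors' implementation, which operates on convolutional feature maps.

```python
import random

def adain_stats(x):
    """Mean and std of a flat activation list."""
    mu = sum(x) / len(x)
    sd = (sum((v - mu) ** 2 for v in x) / len(x)) ** 0.5 + 1e-6
    return mu, sd

def permuted_adain(batch, p=0.5, rng=random):
    """With probability p, re-style each instance with the statistics of a
    randomly permuted partner in the batch (AdaIN with swapped stats);
    otherwise return the batch unchanged (identity permutation)."""
    if rng.random() >= p:
        return batch
    perm = list(range(len(batch)))
    rng.shuffle(perm)
    out = []
    for i, x in enumerate(batch):
        mu_i, sd_i = adain_stats(x)
        mu_j, sd_j = adain_stats(batch[perm[i]])
        out.append([(v - mu_i) / sd_i * sd_j + mu_j for v in x])
    return out
```

Because only the per-instance statistics ("style") are swapped while the normalized content is kept, the network is discouraged from relying on global image statistics, which is the bias the paper targets.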

Identity Preserving Generative Adversarial Network for Cross-Domain Person Re-identification [article]

Jialun Liu
2018 arXiv   pre-print
Most existing person re-identification (re-ID) models often fail to generalize well from the source domain where the models are trained to a new target domain without labels, because of the bias between  ...  Experimental results on Market-1501 and DukeMTMC-reID show that the images generated by IPGAN are more suitable for cross-domain person re-identification.  ...  The above methods attempt to reduce the bias between source and target domain on image space and feature space, however they all ignore the divergence of image style caused by target camera domains.  ... 
arXiv:1811.11510v1 fatcat:evzd2p56jrbqhkyyrkyedareum

StyleAugment: Learning Texture De-biased Representations by Style Augmentation without Pre-defined Textures [article]

Sanghyuk Chun, Song Park
2021 arXiv   pre-print
A simple approach, augmenting training images using an artistic style transfer method (Stylized ImageNet), can reduce the texture bias.  ...  We propose StyleAugment, which augments styles drawn from the mini-batch itself.  ...  For example, to reduce the texture bias of ResNet [13] , Bahng et al.  ...
arXiv:2108.10549v1 fatcat:flhiocq6vrby7cgysbfqov43g4

Informative Dropout for Robust Representation Learning: A Shape-bias Perspective [article]

Baifeng Shi, Dinghuai Zhang, Qi Dai, Zhanxing Zhu, Yadong Mu, Jingdong Wang
2020 arXiv   pre-print
In this work, we attempt to improve various kinds of robustness universally by alleviating CNN's texture bias.  ...  With inspiration from the human visual system, we propose a light-weight model-agnostic method, namely Informative Dropout (InfoDrop), to improve interpretability and reduce texture bias.  ...  Yadong Mu is partly supported by National Key R&D Program of China (2018AAA0100702) and Beijing Natural Science Foundation (Z190001).  ...
arXiv:2008.04254v1 fatcat:laub5usrjnechhxvxh3f3bl3wa
Showing results 1 — 15 out of 59,722 results