71,156 Hits in 3.8 sec

Revisiting Batch Normalization For Practical Domain Adaptation [article]

Yanghao Li, Naiyan Wang, Jianping Shi, Jiaying Liu, Xiaodi Hou
2016 arXiv   pre-print
By modulating the statistics in all Batch Normalization layers across the network, our approach achieves a deep adaptation effect for domain adaptation tasks.  ...  In this paper, we propose a simple yet powerful remedy, called Adaptive Batch Normalization (AdaBN), to increase the generalization ability of a DNN.  ...  ADAPTIVE BATCH NORMALIZATION. Given the pre-trained DNN model and a target domain, our Adaptive Batch Normalization algorithm is as follows: Algorithm 1 Adaptive Batch Normalization (AdaBN) for neuron  ... 
arXiv:1603.04779v4 fatcat:7ip74ozq2ngszacj4mf4clpep4
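
The AdaBN procedure quoted above amounts to re-estimating the BN statistics on the target domain while freezing all learned weights. A minimal PyTorch sketch of that idea (the function name and the unlabeled target loader are illustrative, not from the paper):

```python
import torch
import torch.nn as nn

@torch.no_grad()
def adabn(model: nn.Module, target_loader) -> nn.Module:
    """Re-estimate all BatchNorm running statistics on unlabeled target data,
    keeping every learned weight fixed (a sketch of the AdaBN idea)."""
    for m in model.modules():
        if isinstance(m, nn.modules.batchnorm._BatchNorm):
            m.reset_running_stats()   # forget the source-domain statistics
            m.momentum = None         # use a cumulative moving average instead
    model.train()                     # BN updates running stats in train mode
    for x in target_loader:
        model(x)                      # forward passes re-estimate mean/var
    model.eval()
    return model
```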

MixNorm: Test-Time Adaptation Through Online Normalization Estimation [article]

Xuefeng Hu, Gokhan Uzunbas, Sirius Chen, Rui Wang, Ashish Shah, Ram Nevatia, Ser-Nam Lim
2021 arXiv   pre-print
Unsupervised Domain Adaptation and Zero-Shot Classification.  ...  However, in practice, these two assumptions may not hold, which is why we propose two new evaluation settings where batch sizes are arbitrary and multiple distributions are considered.  ...  behavior for large batch size, and MixNorm behavior for small batch size.  ... 
arXiv:2110.11478v1 fatcat:po2tg35wpfciroi434rv6gkrx4

Learning Cross-domain Semantic-Visual Relation for Transductive Zero-Shot Learning [article]

Jianyang Zhang, Fengmao Lv, Guowu Yang, Lei Feng, Yufeng Yu, Lixin Duan
2020 arXiv   pre-print
For this redrawn domain adaptation problem, we propose to use a domain-specific batch normalization component to reduce the domain discrepancy of semantic-visual pairs.  ...  For ZSL, the source and target domains have different tasks/label spaces. Hence, ZSL is usually considered a more difficult transfer setting than domain adaptation.  ...  The authors would like to thank the anonymous reviewers for the careful reading of this paper and the constructive comments they provided.  ... 
arXiv:2003.14105v1 fatcat:4j7rwiw5pbbifnpxqpn5l4bu5u
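
The "domain-specific batch normalization component" mentioned in this abstract is commonly realized by giving each domain its own BN layer while sharing all other weights. A hypothetical sketch (class and argument names are mine, not the paper's):

```python
import torch
import torch.nn as nn

class DomainSpecificBN(nn.Module):
    """One BatchNorm2d per domain; all other network weights are shared."""
    def __init__(self, num_channels: int, num_domains: int = 2):
        super().__init__()
        self.bns = nn.ModuleList(
            nn.BatchNorm2d(num_channels) for _ in range(num_domains))

    def forward(self, x: torch.Tensor, domain: int) -> torch.Tensor:
        # Normalize with the statistics of the domain the batch came from.
        return self.bns[domain](x)
```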

Test-time Batch Statistics Calibration for Covariate Shift [article]

Fuming You, Jingjing Li, Zhou Zhao
2021 arXiv   pre-print
Conventional approaches like domain adaptation require pre-collected target data for iterative training, which is impractical in real-world applications.  ...  To this end, we present a general formulation, α-BN, to calibrate the batch statistics by mixing the source and target statistics, both alleviating the domain shift and preserving the discriminative  ...  Normalization and Adaptation. Batch normalization (BN) is widely used in DNNs nowadays for stable training and fast convergence.  ... 
arXiv:2110.04065v1 fatcat:7sdyjroe6reshmfhsfl4cdh2fm
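
The α-BN formulation described here mixes source and target statistics with a single coefficient. A hypothetical test-time wrapper around a trained BatchNorm2d (the class name and default alpha are illustrative; the exact mixing rule is an assumption based on the abstract):

```python
import torch
import torch.nn as nn

class AlphaBN(nn.Module):
    """Normalize with alpha * source stats + (1 - alpha) * test-batch stats."""
    def __init__(self, bn: nn.BatchNorm2d, alpha: float = 0.9):
        super().__init__()
        self.bn, self.alpha = bn, alpha

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        mu_t = x.mean(dim=(0, 2, 3))                   # current batch mean
        var_t = x.var(dim=(0, 2, 3), unbiased=False)   # current batch variance
        mu = self.alpha * self.bn.running_mean + (1 - self.alpha) * mu_t
        var = self.alpha * self.bn.running_var + (1 - self.alpha) * var_t
        x_hat = (x - mu[None, :, None, None]) / torch.sqrt(
            var[None, :, None, None] + self.bn.eps)
        return (self.bn.weight[None, :, None, None] * x_hat
                + self.bn.bias[None, :, None, None])
```

A larger alpha keeps more of the source statistics (preserving discriminability); a smaller alpha leans on the test batch (correcting the shift).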

Unsupervised BatchNorm Adaptation (UBNA): A Domain Adaptation Method for Semantic Segmentation Without Using Source Domain Representations [article]

Marvin Klingner, Jan-Aike Termöhlen, Jacob Ritterbach, Tim Fingscheidt
2021 arXiv   pre-print
Specifically, we partially adapt the normalization layer statistics to the target domain using an exponentially decaying momentum factor, thereby mixing the statistics from both domains.  ...  By evaluating on standard UDA benchmarks for semantic segmentation, we show that this is superior to a model without adaptation and to baseline approaches using statistics from the target domain only.  ...  When adapting such a given model, one might only have access to target domain data, since the source domain data cannot be passed on, either for practical reasons or due to data privacy issues.  ... 
arXiv:2011.08502v2 fatcat:wtxlauuicrcmtntmyq6bazt3la
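
UBNA's "exponentially decaying momentum factor" can be sketched as follows; hyperparameter names and defaults are illustrative, not the paper's:

```python
import torch
import torch.nn as nn

@torch.no_grad()
def ubna_adapt(model: nn.Module, target_loader, momentum0: float = 0.1,
               decay: float = 0.95, num_batches: int = 50) -> nn.Module:
    """Partially adapt BN statistics to the target domain: batch k updates the
    running statistics with momentum momentum0 * decay**k, so the result is a
    mixture of source and target statistics (a sketch of the UBNA idea)."""
    bn_layers = [m for m in model.modules()
                 if isinstance(m, nn.modules.batchnorm._BatchNorm)]
    model.train()
    for k, x in enumerate(target_loader):
        if k >= num_batches:
            break
        for m in bn_layers:
            m.momentum = momentum0 * decay ** k   # decaying update weight
        model(x)
    model.eval()
    return model
```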

The Norm Must Go On: Dynamic Unsupervised Domain Adaptation by Normalization [article]

M. Jehanzeb Mirza, Jakub Micorek, Horst Possegger, Horst Bischof
2022 arXiv   pre-print
By continuously adapting the statistics of the batch normalization layers, we modify the feature representations of the model.  ...  Our approach is simple yet effective and can be applied to any architecture which uses batch normalization as one of its components.  ...  Acknowledgments. We gratefully acknowledge the financial support by the Austrian Federal Ministry for Digital and Economic Affairs, the National Foundation for Research, Technology and Development and the  ... 
arXiv:2112.00463v2 fatcat:du76jsbx5zb5vonjabbeaaslpa
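
For the dynamic setting described here, adaptation is interleaved with prediction rather than done once up front. A minimal online sketch (not the paper's exact procedure; the momentum value is an assumption):

```python
import torch
import torch.nn as nn

@torch.no_grad()
def predict_and_adapt(model: nn.Module, frame: torch.Tensor,
                      momentum: float = 0.01) -> torch.Tensor:
    """Continuously adapt BN statistics during deployment: each incoming frame
    first nudges the running statistics, then is classified with them."""
    for m in model.modules():
        if isinstance(m, nn.modules.batchnorm._BatchNorm):
            m.momentum = momentum
    model.train()
    model(frame)          # small update of running mean/var on this frame
    model.eval()
    return model(frame)   # predict with the freshly adapted statistics
```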

Adapting Off-the-Shelf Source Segmenter for Target Medical Image Segmentation [article]

Xiaofeng Liu, Fangxu Xing, Chao Yang, Georges El Fakhri, Jonghye Woo
2021 arXiv   pre-print
To alleviate this, in this work, we target source-free UDA for segmentation, and propose to adapt an "off-the-shelf" segmentation model pre-trained in the source domain to the target domain, with an adaptive batch-wise normalization statistics adaptation framework.  ...  Adaptive source-relaxed batch-wise statistics adaptation. Early attempts at BN for UDA simply added BN in the target domain, without interaction with the source domain [7].  ... 
arXiv:2106.12497v1 fatcat:veavv5vu6fag7jq3bkqdurj4qm

Assessing Machine Learning Approaches to Address IoT Sensor Drift [article]

Haining Zheng, Antonio Paiva
2021 arXiv   pre-print
We then discuss several issues identified with current approaches and outline directions for future research to tackle them.  ...  In this paper we study and test several approaches from the literature with regard to their ability to cope with and adapt to sensor drift under realistic conditions.  ...  For batch 1, this normalized each feature separately to the [0, 1] interval.  ... 
arXiv:2109.04356v1 fatcat:uy4dvdayybfmfdwaai2n6cu2xi

Revisiting Batch Normalization for Improving Corruption Robustness [article]

Philipp Benz, Chaoning Zhang, Adil Karjauv, In So Kweon
2021 arXiv   pre-print
In this work, we interpret corruption robustness as a domain shift and propose to rectify batch normalization (BN) statistics for improving model robustness.  ...  For example, on ImageNet-C, statistics adaptation improves the top-1 accuracy of ResNet-50 from 39.2% to 48.7%.  ...  Adaptive Batch Normalization (AdaBN) has been proposed in [36], showing that adapting the statistics with target domain images improves the performance on the target domain.  ... 
arXiv:2010.03630v4 fatcat:63q7gaqz55blfn6nw2lqnvljmm

Transferable Normalization: Towards Improving Transferability of Deep Neural Networks

Ximei Wang, Ying Jin, Mingsheng Long, Jianmin Wang, Michael I. Jordan
2019 Neural Information Processing Systems  
Empirical results show that TransNorm not only improves classification accuracies but also accelerates convergence for mainstream DNN-based domain adaptation methods.  ...  However, such transferability becomes weak when the target dataset is fully unlabeled, as in Unsupervised Domain Adaptation (UDA).  ...  Batch Normalization (BN). Motivated by the wide practice that network training converges faster if its inputs are whitened [14, 42], Batch Normalization (BN) [12] was designed to transform the features  ... 
dblp:conf/nips/WangJLWJ19 fatcat:eowc4waufnarhjg6koy6qa432i

ConDA: Continual Unsupervised Domain Adaptation [article]

Abu Md Niamul Taufique, Chowdhury Sadman Jahan, Andreas Savakis
2021 arXiv   pre-print
Domain Adaptation (DA) techniques are important for overcoming the domain shift between the source domain used for training and the target domain where testing takes place.  ...  However, current DA methods assume that the entire target domain is available during adaptation, which may not hold in practice.  ...  Domain adaptation research has been exploring such practical scenarios where adaptation is done without using source data.  ... 
arXiv:2103.11056v2 fatcat:x362gfmdljg7tdfoodr3j5ugvu

Test-time Batch Normalization [article]

Tao Yang, Shenglong Zhou, Yuwang Wang, Yan Lu, Nanning Zheng
2022 arXiv   pre-print
In this paper, aiming to alleviate distribution shift at test time, we revisit batch normalization (BN) in the training process and reveal two key insights benefiting test-time optimization:  ...  We verify the effectiveness of our method on two typical settings with distribution shift, i.e., domain generalization and robustness tasks.  ...  Some works find that only adapting the batch statistics is effective for domain adaptation or robustness. For example, Li et al. [31] use target domain batch statistics for domain adaptation.  ... 
arXiv:2205.10210v1 fatcat:f3dzrux3fbfe3j264jrfc72liy

Improving robustness against common corruptions by covariate shift adaptation [article]

Steffen Schneider, Evgenia Rusak, Luisa Eck, Oliver Bringmann, Wieland Brendel, Matthias Bethge
2020 arXiv   pre-print
Replacing the activation statistics estimated by batch normalization on the training set with the statistics of the corrupted images consistently improves the robustness across 25 different popular computer  ...  Even adapting to a single sample improves robustness for the ResNet-50 and AugMix models, and 32 samples are sufficient to improve the current state of the art for a ResNet-50 architecture.  ...  for reducing covariate shift induced by common corruptions. We propose to use a well-known tool from domain adaptation, adapting batch normalization statistics [5, 6], as a simple baseline to increase  ... 
arXiv:2006.16971v2 fatcat:xwtrxkhidnc2dmg75cppkfxgp4
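
The sample-size trade-off in this abstract (even one sample helps; 32 suffice) suggests shrinking noisy target estimates toward the source statistics. A sketch of such a blend, where the source acts as a prior worth n_prior pseudo-samples (the name n_prior is mine; the exact weighting in the paper may differ):

```python
import torch

def mix_stats(mu_s: torch.Tensor, var_s: torch.Tensor,
              mu_t: torch.Tensor, var_t: torch.Tensor,
              n_target: int, n_prior: int = 16):
    """Blend source and target BN statistics; with few target samples the
    source prior dominates, with many the target estimate takes over."""
    w = n_target / (n_target + n_prior)   # trust in the target estimate
    return w * mu_t + (1 - w) * mu_s, w * var_t + (1 - w) * var_s
```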

Learning to Optimize Domain Specific Normalization for Domain Generalization [article]

Seonguk Seo, Yumin Suh, Dongwan Kim, Geeho Kim, Jongwoo Han, Bohyung Han
2020 arXiv   pre-print
For each domain, the activations are normalized by a weighted average of multiple normalization statistics.  ...  Specifically, we employ batch and instance normalization in our implementation to identify the best combination of these two normalization methods in each domain.  ...  Multi-Source Domain Adaptation. Multi-source domain adaptation can be considered the middle ground between domain adaptation and generalization, where data from multiple source domains are used for training  ... 
arXiv:1907.04275v3 fatcat:4lrn5su73fa7jjl47o52jnggre
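
The "weighted average of multiple normalization statistics" described here can be illustrated by mixing batch and instance normalization with a learnable weight. A hypothetical sketch (class and parameter names are mine):

```python
import torch
import torch.nn as nn

class BNINMix(nn.Module):
    """Normalize by a learnable convex combination of BN and IN outputs,
    followed by a shared affine transform."""
    def __init__(self, num_channels: int):
        super().__init__()
        self.bn = nn.BatchNorm2d(num_channels, affine=False)
        self.inorm = nn.InstanceNorm2d(num_channels, affine=False)
        self.mix = nn.Parameter(torch.zeros(1))          # sigmoid -> weight
        self.weight = nn.Parameter(torch.ones(num_channels))
        self.bias = nn.Parameter(torch.zeros(num_channels))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        w = torch.sigmoid(self.mix)
        y = w * self.bn(x) + (1 - w) * self.inorm(x)     # weighted average
        return (self.weight[None, :, None, None] * y
                + self.bias[None, :, None, None])
```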

Continual BatchNorm Adaptation (CBNA) for Semantic Segmentation [article]

Marvin Klingner and Mouadh Ayache and Tim Fingscheidt
2022 arXiv   pre-print
Our method Continual BatchNorm Adaptation (CBNA) modifies the source domain statistics in the batch normalization layers, using target domain images in an unsupervised fashion, which yields consistent  ...  Usually, this problem is addressed by unsupervised domain adaptation (UDA) approaches trained either simultaneously on source and target domain datasets or even source-free, only on target data, in an offline  ...  Revisiting the Batch Normalization Layer, Notations. As our adaptation method relies on the usage of batch normalization (BN) layers, we briefly revisit the BN operation for the scope of a fully convolutional  ... 
arXiv:2203.01074v2 fatcat:wsobnfvxpvfxvd2bw77haliueq
Showing results 1 — 15 out of 71,156 results