34,694 Hits in 3.8 sec

Training on Test Data with Bayesian Adaptation for Covariate Shift [article]

Aurick Zhou, Sergey Levine
2021 arXiv   pre-print
We evaluate our method on a variety of distribution shifts for image classification, including image corruptions, natural distribution shifts, and domain adaptation settings, and show that our method improves  ...  In this paper, we derive a Bayesian model that provides for a well-defined relationship between unlabeled inputs under distributional shift and model parameters, and show how approximate inference in this  ...  Discussion We presented Bayesian Adaptation for Covariate Shift (BACS), a Bayesian approach for utilizing test-time adaptation to obtain both improved accuracy and well-calibrated uncertainty estimates  ... 
arXiv:2109.12746v1 fatcat:65bzewi3x5dy7byfm7s7d4lv4a

Representation Bayesian Risk Decompositions and Multi-Source Domain Adaptation [article]

Xi Wu, Yang Guo, Jiefeng Chen, Yingyu Liang, Somesh Jha, Prasad Chalasani
2020 arXiv   pre-print
For Single-Source Domain Adaptation, we give an exact decomposition (an equality) of the target risk, via a natural hybrid argument, as the sum of three factors: (1) source risk, (2) representation conditional label divergence, and (3) representation covariate shift.  ... 
arXiv:2004.10390v2 fatcat:u4hvbva6ejhirimfqjiv7g5pku

Enhancing the Performance of Maximum–Likelihood Gaussian EDAs Using Anticipated Mean Shift [chapter]

Peter A. N. Bosman, Jörn Grahl, Dirk Thierens
2008 Lecture Notes in Computer Science  
We then provide a simple, but effective technique called Anticipated Mean Shift (AMS) that removes this inefficiency.  ...  Adaptive Variance Scaling (AVS) and Standard-Deviation Ratio triggering (SDR)). Here we focus on a second source of inefficiency that is not removed by existing remedies.  ...  The combination of SDR, AVS and AMS adaptively changes both the covariance matrix and the mean-shift.  ... 
doi:10.1007/978-3-540-87700-4_14 fatcat:vhqepk4gkvc5rgddl4bmozpjje
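The abstract above describes Anticipated Mean Shift only at a high level. A minimal sketch of the common formulation, in which a fraction of the newly sampled offspring is moved along the previous generation's mean shift (the fraction, step multiplier, and function name here are illustrative assumptions, not the paper's tuned settings), might look like:

```python
import numpy as np

def anticipated_mean_shift(offspring, mean_prev, mean_curr, delta=2.0, frac=0.5):
    """Move a fraction of the offspring along the previous mean shift.

    offspring : (n, d) array of newly sampled solutions
    mean_prev, mean_curr : distribution means of previous / current generation
    delta : step-size multiplier for the anticipated shift (illustrative)
    frac : fraction of offspring to shift (illustrative)
    """
    shift = mean_curr - mean_prev          # estimated direction of improvement
    shifted = offspring.copy()
    n_shift = int(frac * len(offspring))
    shifted[:n_shift] += delta * shift     # anticipate continued movement
    return shifted
```

The shifted offspring counteract the bias of maximum-likelihood Gaussian EDAs toward standing still on slope-like regions of the fitness landscape.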

Dangers of Bayesian Model Averaging under Covariate Shift [article]

Pavel Izmailov, Patrick Nicholson, Sanae Lotfi, Andrew Gordon Wilson
2021 arXiv   pre-print
However, Bayesian neural networks (BNNs) with high-fidelity approximate inference via full-batch Hamiltonian Monte Carlo achieve poor generalization under covariate shift, even underperforming classical  ...  We explain this surprising result, showing how a Bayesian model average can in fact be problematic under covariate shift, particularly in cases where linear dependencies in the input features cause a lack  ...  Covariate shift adaptation by importance weighted cross validation.  ... 
arXiv:2106.11905v2 fatcat:w6ttxplzyfbprldlhmv6vih6km
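The snippet above cites importance weighting, the classical correction for covariate shift: training losses are reweighted by the density ratio p_test(x)/p_train(x). A minimal sketch (the function name and the assumption that log-densities are available as callables are mine; in practice the ratio itself must be estimated):

```python
import numpy as np

def importance_weighted_loss(losses, x_train, log_p_test, log_p_train):
    """Average training loss reweighted by the density ratio
    p_test(x) / p_train(x), the classical covariate-shift correction.

    log_p_test / log_p_train are callables returning log-densities at x;
    they are stand-ins here, since the ratio is usually estimated.
    """
    w = np.exp(log_p_test(x_train) - log_p_train(x_train))
    return float(np.mean(w * losses))
```

When train and test densities coincide, every weight is 1 and the criterion reduces to the ordinary mean loss.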

Achieving real-time object detection and tracking under extreme conditions

Fatih Porikli
2006 Journal of Real-Time Image Processing  
Fig. 8. Noise performance for frames 1 (left), 40 (middle), and 200 (right). Top: mean-shift tracker [16] using a color histogram. Bottom: covariance tracker [20] using 7 features.  ...  To adapt the models accurately, we developed a Bayesian update mechanism [5] that can also estimate the number of required layers.  ... 
doi:10.1007/s11554-006-0011-z fatcat:nzpazpsdd5d5zlqwpoowafyny4

On Robustness of Unsupervised Domain Adaptation for Speaker Recognition

Pierre-Michel Bousquet, Mickael Rouvier
2019 Interspeech 2019  
This study investigates unsupervised domain adaptation, when only a scarce and unlabeled "in-domain" development dataset is available.  ...  Any shift between training and test data, in terms of device, language, duration, noise, or other factors, tends to degrade the accuracy of speaker detection.  ...  The third system is the unsupervised Bayesian adaptation proposed in [4]. The authors apply the supervised Bayesian adaptation of PLDA [8] to unlabelled inD datasets.  ... 
doi:10.21437/interspeech.2019-1524 dblp:conf/interspeech/BousquetR19 fatcat:kwicqtl3djhahde7edsb3hjyfi

Bayesian Cointegrated Vector Autoregression models incorporating Alpha-stable noise for inter-day price movements via Approximate Bayesian Computation [article]

Gareth W. Peters, Balakrishnan B. Kannan, Ben Lasscock, Chris Mellen, Simon Godsill
2010 arXiv   pre-print
We compare the estimation accuracy of our model and estimation approach to standard frequentist and Bayesian procedures for CVAR models when non-Gaussian price series level shifts are present in the individual  ...  We focus analysis on regularly observed non-Gaussian level shifts that can have significant effect on estimation performance in statistical models failing to account for such level shifts, such as at the  ...  Richard Gerlach for comments and suggestions during this research.  ... 
arXiv:1008.0149v1 fatcat:5ftscnuwt5fctmqswwpezukrry

Inferring Visuomotor Priors for Sensorimotor Learning

Edward J. A. Turnham, Daniel A. Braun, Daniel M. Wolpert, Konrad P. Körding
2011 PLoS Computational Biology  
We developed a Bayesian observer model in order to infer the covariance structure of the subjects' prior, which was found to give high probability to parameter settings consistent with visuomotor rotations  ...  Sensorimotor learning has been shown to depend on both prior expectations and sensory evidence in a way that is consistent with Bayesian integration.  ...  : (A) the 'no-adaptation' model, which assumes the hand hits the centre of the target on all trials; (B) the 'shift' model, which is also a Bayesian observer but assumes the transformation is a translation  ... 
doi:10.1371/journal.pcbi.1001112 pmid:21483475 pmcid:PMC3068921 fatcat:pkbxvs64yje6rgpxtswlg5lp2a

Posterior Covariance Information Criterion [article]

Yukito Iba, Keisuke Yano
2021 arXiv   pre-print
We introduce an information criterion, PCIC, for predictive evaluation based on quasi-posterior distributions.  ...  PCIC is useful in a variety of predictive settings that are not well dealt with in WAIC, including weighted likelihood inference and quasi-Bayesian prediction  ...  Acknowledgement The authors would like to thank Yoshiyuki Ninomiya and Yusaku Ohkubo for fruitful discussions.  ... 
arXiv:2106.13694v3 fatcat:qrwla3yff5drxgn4cj2ori6hn4

Unlabelled Data Improves Bayesian Uncertainty Calibration under Covariate Shift [article]

Alex J. Chan, Ahmed M. Alaa, Zhaozhi Qian, Mihaela van der Schaar
2020 arXiv   pre-print
shift setup.  ...  We show that this approach significantly improves the accuracy of uncertainty quantification on covariate-shifted data sets, with minimal modification to the underlying model architecture.  ...  Acknowledgements We would like to thank the anonymous reviewers for their helpful comments and suggestions.  ... 
arXiv:2006.14988v1 fatcat:qjfjyvdjyjenvisbexbvtndlxa

Drift vs Shift: Decoupling Trends and Changepoint Analysis [article]

Haoxuan Wu, Sean Ryan, David S. Matteson
2022 arXiv   pre-print
Our locally adaptive, model-based approach to robust decoupling combines Bayesian trend filtering and machine-learning-based regularization.  ...  We introduce a new approach for decoupling trends (drift) and changepoints (shifts) in time series.  ...  As seen, we can turn a very wiggly fit of the Bayesian DLM into a clear separation of drifts vs shifts.  ... 
arXiv:2201.06606v2 fatcat:y6rmrhjgpvccnfqhaplr4qwmia

Adaptive Multiclass Classification for Brain Computer Interfaces

A. Llera, V. Gómez, H. J. Kappen
2014 Neural Computation  
of the adaptation rule introduced by Vidaurre et al. (2010) for the binary-class setting.  ...  We consider the problem of multi-class adaptive classification for brain computer interfaces and propose the use of multi-class pooled mean linear discriminant analysis (MPMLDA), a multi-class generalization  ...  However, also in this case class-dependent changes are dominant (both mean shifts and covariance changes).  ... 
doi:10.1162/neco_a_00592 pmid:24684452 fatcat:nxldoaqzxzcr3fzm3cxp7mv5cm

Benchmarking Parameter-Free AMaLGaM on Functions With and Without Noise

Peter A. N. Bosman, Jörn Grahl, Dirk Thierens
2013 Evolutionary Computation  
for short) for numerical optimization.  ...  We study the implications of factorizing the covariance matrix in the Gaussian distribution, to use only a few or no covariances.  ...  The equations for incrementally estimating the covariance matrix and the AMS are $\hat{\Sigma}(t) = (1-\eta_{\Sigma})\,\hat{\Sigma}(t-1) + \eta_{\Sigma}\,\frac{1}{|S|}\sum_{i=0}^{|S|-1}\bigl(S_i-\hat{\mu}(t)\bigr)\bigl(S_i-\hat{\mu}(t)\bigr)^{T}$ (6) and $\hat{\mu}^{\mathrm{Shift}}(t) = (1-\eta^{\mathrm{Shift}})\,\hat{\mu}^{\mathrm{Shift}}(t-1) + \eta^{\mathrm{Shift}}\,\ldots$  ... 
doi:10.1162/evco_a_00094 pmid:23030365 fatcat:c737mwyx4fhuvaqeyo35xlklbq
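The equation quoted in the snippet above updates the covariance matrix and the mean shift as exponential moving averages. A minimal NumPy sketch follows; the learning rates are illustrative, and since the mean-shift update is truncated in the snippet, the assumption that its second term is the current shift of the mean is mine:

```python
import numpy as np

def incremental_updates(S, mu_t, mu_prev, sigma_prev, mu_shift_prev,
                        eta_sigma=0.1, eta_shift=0.1):
    """One AMaLGaM-style incremental estimation step.

    S : (n, d) selected solutions, mu_t / mu_prev : current and previous
    sample means. Both estimates are exponential moving averages;
    learning rates here are illustrative, not the paper's tuned values.
    """
    centered = S - mu_t
    # (1/|S|) * sum_i (S_i - mu(t)) (S_i - mu(t))^T
    sample_cov = centered.T @ centered / len(S)
    sigma_t = (1 - eta_sigma) * sigma_prev + eta_sigma * sample_cov
    # assumed continuation of the truncated update: current mean shift
    mu_shift_t = (1 - eta_shift) * mu_shift_prev + eta_shift * (mu_t - mu_prev)
    return sigma_t, mu_shift_t
```

Incremental (memory-carrying) estimation smooths the generation-to-generation noise that a pure maximum-likelihood refit would pass straight into the search distribution.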

Compressive Estimation and Imaging Based on Autoregressive Models

Matteo Testa, Enrico Magli
2016 IEEE Transactions on Image Processing  
In more detail, we introduce a compressive least-squares estimator for AR(p) parameters and a specific AR(1) compressive Bayesian estimator.  ...  The first is compressive covariance estimation for Toeplitz-structured covariance matrices, where we tackle the problem with a novel parametric approach based on the estimated AR parameters.  ...  ADAPTIVE COMPRESSIVE IMAGING In this section we propose a novel algorithm for adaptive compressive imaging based on Bayesian AR(1) inference.  ... 
doi:10.1109/tip.2016.2601444 pmid:27552755 fatcat:rlrkpl7fy5gwfn5jc7bs7uke4a

Hierarchical models for assessing variability among functions

Sam Behseta, Robert E. Kass, Garrick L. Wallstrom
2005 Biometrika  
For example, the first eigenvalue of a sample covariance matrix computed from estimated functions may be biased upwards.  ...  We display a set of estimated neuronal Poisson-process intensity functions where this bias is substantial, and we discuss two methods for accounting for estimation variation.  ...  In §3.3 we take $S = \hat{S}_i$ to be the posterior covariance matrix obtained from Bayesian adaptive regression splines.  ... 
doi:10.1093/biomet/92.2.419 fatcat:vzriunznqbcaremd7kfdesyqiq
Showing results 1 — 15 out of 34,694 results