A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2021; you can also visit the original URL.
The file type is application/pdf.
Adaptive Risk Minimization: Learning to Adapt to Domain Shift
[article] · 2021 · arXiv pre-print
A fundamental assumption of most machine learning algorithms is that the training and test data are drawn from the same underlying distribution. However, this assumption is violated in almost all practical applications: machine learning systems are regularly tested under distribution shift, due to changing temporal correlations, atypical end users, or other factors. In this work, we consider the problem setting of domain generalization, where the training data are structured into domains and …
arXiv:2007.02931v4
fatcat:2gjsygbev5cmhdryhrwijvfeju
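The problem setting in the abstract, training data grouped into domains and a model that adapts to distribution shift, can be made concrete with a small sketch. The code below is an illustrative, hedged rendering of an adaptive-risk-minimization-style training step in PyTorch: the model summarizes an unlabeled batch into a context vector and predicts with that context, and the post-adaptation loss is minimized. The names (`ContextNet`, `PredictionNet`, `train_step`) and the specific context-concatenation design are assumptions for illustration, not the paper's exact implementation.

```python
# Illustrative sketch (assumption): an ARM-style training step in PyTorch.
# Training data are grouped by domain; the model adapts to each batch using
# only its unlabeled inputs (via a batch-level context vector) before
# predicting labels, and the post-adaptation loss is minimized.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ContextNet(nn.Module):
    """Summarizes an unlabeled batch into a single context vector (assumed design)."""
    def __init__(self, in_dim, ctx_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, ctx_dim), nn.ReLU())

    def forward(self, x):
        return self.net(x).mean(dim=0, keepdim=True)  # average over the batch

class PredictionNet(nn.Module):
    """Predicts labels from each input concatenated with the batch context."""
    def __init__(self, in_dim, ctx_dim, n_classes):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim + ctx_dim, 64), nn.ReLU(),
                                 nn.Linear(64, n_classes))

    def forward(self, x, ctx):
        ctx = ctx.expand(x.size(0), -1)
        return self.net(torch.cat([x, ctx], dim=1))

def train_step(context_net, pred_net, optimizer, domain_batch):
    """One training step on a batch drawn from a single training domain."""
    x, y = domain_batch
    ctx = context_net(x)                 # adapt: infer context from unlabeled inputs
    logits = pred_net(x, ctx)            # predict with the adapted model
    loss = F.cross_entropy(logits, y)    # post-adaptation risk
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    in_dim, ctx_dim, n_classes = 16, 8, 4
    context_net = ContextNet(in_dim, ctx_dim)
    pred_net = PredictionNet(in_dim, ctx_dim, n_classes)
    opt = torch.optim.Adam(list(context_net.parameters()) +
                           list(pred_net.parameters()), lr=1e-3)
    # Toy "domain": random features and labels, just to exercise the step.
    x = torch.randn(32, in_dim)
    y = torch.randint(0, n_classes, (32,))
    print(train_step(context_net, pred_net, opt, (x, y)))
```

The point of the sketch is that adaptation uses only the unlabeled inputs of a batch, so, in line with the paper's framing, the same procedure could be reused at test time on batches drawn from an unseen domain.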