A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2022; you can also visit the original URL.
The file type is application/pdf.
Pre-Train Your Loss: Easy Bayesian Transfer Learning with Informative Priors
[article] 2022 arXiv pre-print
Deep learning is increasingly moving towards a transfer learning paradigm whereby large foundation models are fine-tuned on downstream tasks, starting from an initialization learned on the source task. But an initialization contains relatively little information about the source task. Instead, we show that we can learn highly informative posteriors from the source task, through supervised or self-supervised approaches, which then serve as the basis for priors that modify the whole loss surface.
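The idea in the abstract can be illustrated with a toy sketch. This is not the paper's actual implementation; it assumes a diagonal Gaussian approximate posterior estimated from SGD iterate statistics on a synthetic source task, which is then reused as a prior term that reshapes the downstream loss surface (rather than only supplying an initialization):

```python
import numpy as np

# Hypothetical sketch: estimate a diagonal Gaussian "posterior" over the
# weights of a source task from the tail of an SGD trajectory, then use it
# as an informative prior in the downstream objective.

rng = np.random.default_rng(0)

def sgd_iterates(X, y, steps=200, lr=0.1):
    """Plain logistic-regression gradient descent; return the second half
    of the iterate trajectory."""
    w = np.zeros(X.shape[1])
    trace = []
    for t in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
        if t >= steps // 2:
            trace.append(w.copy())
    return np.array(trace)

# Source task: synthetic binary classification.
Xs = rng.normal(size=(500, 5))
w_true = np.array([2.0, -1.0, 0.5, 0.0, 1.5])
ys = (Xs @ w_true + 0.1 * rng.normal(size=500) > 0).astype(float)

trace = sgd_iterates(Xs, ys)
mu = trace.mean(axis=0)          # prior mean: average of late iterates
var = trace.var(axis=0) + 1e-3   # diagonal prior variance, floored

def downstream_loss(w, X, y):
    """Task negative log-likelihood plus the negative log Gaussian prior.
    The prior penalizes the whole weight vector toward the source
    posterior, modifying the entire loss surface."""
    p = 1.0 / (1.0 + np.exp(-X @ w))
    nll = -np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
    neg_log_prior = 0.5 * np.sum((w - mu) ** 2 / var)
    return nll + neg_log_prior

# Weights near the source posterior mean score better than a blank
# initialization under the prior-augmented objective.
w0 = np.zeros(5)
print(downstream_loss(mu, Xs, ys) < downstream_loss(w0, Xs, ys))
```

The contrast with standard fine-tuning is the `neg_log_prior` term: an ordinary warm start would use `mu` only as the initial point of optimization, whereas the prior keeps pulling every candidate solution toward the source posterior throughout downstream training.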
arXiv:2205.10279v1
fatcat:kh5t6rp3bbeslhia5avovcpo5a