A Survey on Self-supervised Pre-training for Sequential Transfer Learning in Neural Networks
article · 2020 · arXiv pre-print
Deep neural networks are typically trained under a supervised learning framework in which a model learns a single task from labeled data. Instead of relying solely on labeled data, practitioners can harness unlabeled or related data, which are often more accessible and ubiquitous, to improve model performance. Self-supervised pre-training for transfer learning is becoming an increasingly popular technique for improving state-of-the-art results using unlabeled data. It involves first pre-training a
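The two-stage workflow the abstract describes can be sketched in miniature. The following is an illustrative toy, not the paper's method: the pretext task (a denoising objective), the linear encoder, and the logistic-regression head are all assumptions chosen for brevity. Stage 1 learns an encoder from unlabeled data alone; stage 2 fine-tunes a small supervised head on top of the frozen encoder.

```python
import numpy as np

rng = np.random.default_rng(0)

# ---- Stage 1: self-supervised pre-training on unlabeled data ----
# Pretext task (an assumption for illustration): reconstruct the clean
# input from a noise-corrupted copy, i.e. a toy denoising objective.
X_unlabeled = rng.normal(size=(500, 4))

def pretrain(X, steps=200, lr=0.05):
    """Learn a linear encoder W by gradient descent on ||(X+noise)W - X||^2."""
    W = rng.normal(scale=0.1, size=(4, 4))
    for _ in range(steps):
        noisy = X + 0.1 * rng.normal(size=X.shape)
        err = noisy @ W - X
        W -= lr * (noisy.T @ err) / len(X)
    return W

W = pretrain(X_unlabeled)

# ---- Stage 2: supervised fine-tuning on a small labeled set ----
# Only 20 labeled points; the label is a linear threshold of the inputs.
X_labeled = rng.normal(size=(20, 4))
y = (X_labeled @ np.array([1.0, -1.0, 0.5, 0.0]) > 0).astype(float)

def finetune(W, X, y, steps=300, lr=0.5):
    """Train a logistic-regression head on the frozen pre-trained features."""
    h = X @ W                      # representation from the pre-trained encoder
    v = np.zeros(4)
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(h @ v)))
        v -= lr * (h.T @ (p - y)) / len(X)
    return v

v = finetune(W, X_labeled, y)
p = 1.0 / (1.0 + np.exp(-(X_labeled @ W @ v)))
acc = float(np.mean((p > 0.5) == y))
```

Because the encoder was shaped by the pretext task before any labels were seen, the head only needs a handful of labeled examples, which is the central appeal of sequential transfer learning that the survey examines.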
arXiv:2007.00800v1