Self-Supervised Representation Learning: Introduction, Advances and Challenges

Linus Ericsson, Henry Gouk, Chen Change Loy, Timothy M. Hospedales
2021, arXiv pre-print
Self-supervised representation learning methods aim to provide powerful deep feature learning without the requirement of large annotated datasets, thus alleviating the annotation bottleneck that is one of the main barriers to practical deployment of deep learning today. These methods have advanced rapidly in recent years, with their efficacy approaching and sometimes surpassing fully supervised pre-training alternatives across a variety of data modalities including image, video, sound, text and graphs. This article introduces this vibrant area including key concepts, the four main families of approach and associated state of the art, and how self-supervised methods are applied to diverse modalities of data. We further discuss practical considerations including workflows, representation transferability, and compute cost. Finally, we survey the major open challenges in the field that provide fertile ground for future work.
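As a concrete illustration of one family of self-supervised approach surveyed here, the contrastive family, the following is a minimal NumPy sketch of an NT-Xent (SimCLR-style) loss. The function name, array shapes, and temperature value are illustrative choices, not taken from the article: the idea is simply that two augmented views of the same sample are pulled together while all other samples in the batch are pushed apart, yielding supervision without labels.

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """Minimal NT-Xent (normalized temperature-scaled cross entropy) sketch.

    z1, z2: (N, D) embeddings of the same N samples under two augmentations.
    Returns the mean contrastive loss over all 2N views.
    """
    n = z1.shape[0]
    z = np.concatenate([z1, z2], axis=0)               # (2N, D)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # cosine similarity
    sim = z @ z.T / temperature                        # (2N, 2N) logits
    np.fill_diagonal(sim, -np.inf)                     # exclude self-pairs
    # the positive for view i is its counterpart in the other augmentation
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(0, n)])
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    loss = -(sim[np.arange(2 * n), pos] - logsumexp)
    return loss.mean()

# Usage sketch: embeddings of matched views should score a lower loss
# than embeddings of unrelated samples.
rng = np.random.default_rng(0)
z1 = rng.normal(size=(8, 16))
aligned = nt_xent_loss(z1, z1 + 0.01 * rng.normal(size=(8, 16)))
shuffled = nt_xent_loss(z1, rng.normal(size=(8, 16)))
```

In practice the embeddings come from an encoder network and the loss is minimized by gradient descent; this sketch only shows the objective itself.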
arXiv:2110.09327v1