A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2022; you can also visit the original URL.
The file type is application/pdf.
Self-supervised Spatiotemporal Representation Learning by Exploiting Video Continuity
[article] arXiv pre-print, 2022
Recent self-supervised video representation learning methods have achieved significant success by exploring essential properties of videos, e.g., playback speed and temporal order. This work exploits an essential yet under-explored property of videos, video continuity, to obtain supervision signals for self-supervised representation learning. Specifically, we formulate three novel continuity-related pretext tasks, i.e., continuity justification, discontinuity localization, and missing section
arXiv:2112.05883v3
fatcat:2ika4qbnyvcrlatdjz32sea7w4