SeCo: Exploring Sequence Supervision for Unsupervised Representation Learning

Ting Yao and Yiheng Zhang and Zhaofan Qiu and Yingwei Pan and Tao Mei
2021 arXiv pre-print
A steady momentum of innovations and breakthroughs has convincingly pushed the limits of unsupervised image representation learning. Compared to static 2D images, video has one more dimension (time). The inherent supervision existing in such a sequential structure offers fertile ground for building unsupervised learning models. In this paper, we compose a trilogy exploring the basic and generic supervision in the sequence from spatial, spatiotemporal and sequential perspectives. We materialize the supervisory signals by determining whether a pair of samples is from one frame or from one video, and whether a triplet of samples is in the correct temporal order. We uniquely regard these signals as the foundation of contrastive learning and derive a particular form named Sequence Contrastive Learning (SeCo). SeCo shows superior results under the linear protocol on action recognition (Kinetics), untrimmed activity recognition (ActivityNet) and object tracking (OTB-100). More remarkably, SeCo demonstrates considerable improvements over recent unsupervised pre-training techniques, and leads ImageNet pre-training in accuracy (by 2.96% on UCF101) on the action recognition task on UCF101 and HMDB51. Source code is available at .
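The abstract describes three sequence-level supervisory signals: an intra-frame pair (two augmented views of the same frame), an inter-frame pair (views of two different frames of one video), and a temporal-order check on a triplet of frames. A minimal sketch of how such pseudo-labels could be generated is below; all names are illustrative assumptions, not the paper's actual API or implementation.

```python
import random

def make_seco_signals(num_frames, seed=0):
    """Sketch of the three supervisory signals described in the SeCo abstract.

    Returns:
      intra_pair: ((frame, view), (frame, view), label) -- same frame, label 1
      inter_pair: ((frame, view), (frame, view), label) -- different frames, label 0
      order:      (triplet_of_frame_indices, label)      -- 1 iff temporally ordered
    """
    rng = random.Random(seed)

    # 1) Spatial signal: two augmented views of the SAME frame form a positive pair.
    i = rng.randrange(num_frames)
    intra_pair = ((i, "aug_a"), (i, "aug_b"), 1)

    # 2) Spatiotemporal signal: views of two DIFFERENT frames of the same video.
    j, k = rng.sample(range(num_frames), 2)
    inter_pair = ((j, "aug_a"), (k, "aug_a"), 0)

    # 3) Sequential signal: is a sampled triplet of frames in correct temporal order?
    triplet = rng.sample(range(num_frames), 3)
    order_label = int(triplet[0] < triplet[1] < triplet[2])

    return intra_pair, inter_pair, (tuple(triplet), order_label)
```

In the paper these labels would drive contrastive and order-verification losses on frame embeddings; the sketch only shows the label construction, which is the part the abstract makes explicit.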
arXiv:2008.00975v2