Spatial Cross-Attention Improves Self-Supervised Visual Representation Learning

Mehdi Seyfi, Amin Banitalebi-Dehkordi, Yong Zhang
2022, arXiv pre-print
Unsupervised representation learning methods such as SwAV have proved effective at learning the visual semantics of a target dataset. The core assumption behind these methods is that different views of the same image share the same semantics. In this paper, we further introduce an add-on module that injects knowledge of the spatial cross-correlations among samples. This in turn distills intra-class information, including feature-level locations and cross-similarities between same-class instances. The proposed add-on can be attached to existing methods such as SwAV, and later removed for inference without any modification of the learned weights. Through an extensive set of empirical evaluations, we verify that our method yields improved performance in detecting class activation maps, in top-1 classification accuracy, and in downstream tasks such as object detection, across different configuration settings.
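The abstract does not include an implementation, but the general mechanism it describes, cross-attention in which the queries come from the spatial feature map of one sample and the keys/values from another (e.g. same-class) sample, can be sketched in a few lines of NumPy. All names and shapes below are hypothetical illustrations, not the paper's actual module:

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def spatial_cross_attention(q_feats, kv_feats, w_q, w_k, w_v):
    """Toy single-head cross-attention between two flattened spatial maps.

    q_feats:  (Nq, d) spatial locations of one view/sample (queries)
    kv_feats: (Nk, d) spatial locations of another sample (keys/values)
    w_q, w_k, w_v: (d, d) learned projections (hypothetical)

    Returns (Nq, d): query locations enriched with cross-sample context.
    """
    q = q_feats @ w_q
    k = kv_feats @ w_k
    v = kv_feats @ w_v
    # Attention over the other sample's spatial locations.
    attn = softmax(q @ k.T / np.sqrt(q.shape[-1]), axis=-1)
    return attn @ v

# Toy usage: two 7x7 feature maps with 64 channels, flattened to (49, 64).
rng = np.random.default_rng(0)
d = 64
a = rng.normal(size=(49, d))
b = rng.normal(size=(49, d))
w_q, w_k, w_v = (rng.normal(size=(d, d)) * 0.05 for _ in range(3))
out = spatial_cross_attention(a, b, w_q, w_k, w_v)
```

Because a module like this only reshapes the training signal (the backbone still produces its own features), it is plausible to drop it at inference and keep the backbone weights untouched, which matches the removability property the abstract claims.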
arXiv:2206.05028v1