A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2021; you can also visit the original URL: https://arxiv.org/pdf/2104.14548v2.pdf. The file type is application/pdf.
With a Little Help from My Friends: Nearest-Neighbor Contrastive Learning of Visual Representations
[article]
2021-10-07 · arXiv · pre-print
Self-supervised learning algorithms based on instance discrimination train encoders to be invariant to pre-defined transformations of the same instance. While most methods treat different views of the same image as positives for a contrastive loss, we are interested in using positives from other instances in the dataset. Our method, Nearest-Neighbor Contrastive Learning of visual Representations (NNCLR), samples the nearest neighbors from the dataset in the latent space and treats them as positives. This provides more semantic variations than pre-defined transformations. We find that using the nearest neighbor as a positive in contrastive losses improves performance significantly on ImageNet classification, from 71.7% to 75.6%, outperforming previous state-of-the-art methods. On semi-supervised learning benchmarks we improve performance significantly when only 1% of ImageNet labels are available, from 53.8% to 56.5%. On transfer learning benchmarks our method outperforms state-of-the-art methods (including supervised learning with ImageNet) on 8 out of 12 downstream datasets. Furthermore, we demonstrate empirically that our method is less reliant on complex data augmentations: we see a relative reduction of only 2.1% in ImageNet Top-1 accuracy when training with only random crops.

arXiv:2104.14548v2 — https://arxiv.org/abs/2104.14548v2
fatcat:vlasnfthtrd2fbnzlanzqki4ae — https://fatcat.wiki/release/vlasnfthtrd2fbnzlanzqki4ae
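The abstract describes the core mechanism: look up each embedding's nearest neighbor in a support set of past embeddings and use that neighbor, rather than the other augmented view, as the positive in an InfoNCE-style loss. Below is a minimal PyTorch sketch of that idea under stated assumptions; the function name, queue size, and temperature are illustrative choices, not values or code from the paper.

```python
# Hedged sketch of a nearest-neighbor-positive contrastive loss
# (NNCLR-style). Encoder, support-set maintenance, and hyperparameters
# are assumptions for illustration, not the authors' implementation.
import torch
import torch.nn.functional as F

def nn_contrastive_loss(z1, z2, support_set, temperature=0.1):
    """InfoNCE-style loss where each view-1 embedding is replaced by its
    nearest neighbor from a support set, then contrasted against the
    batch of view-2 embeddings.

    z1, z2:      (batch, dim) embeddings of two augmented views
    support_set: (queue, dim) embeddings of previously seen examples
    """
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    support = F.normalize(support_set, dim=1)

    # Nearest neighbor of each z1 in the support set (cosine similarity).
    sim = z1 @ support.t()              # (batch, queue)
    nn_idx = sim.argmax(dim=1)
    nn = support[nn_idx]                # (batch, dim)

    # Logits: NN(z1) against every z2 in the batch; the matching index
    # along the diagonal is the positive, all other entries are negatives.
    logits = nn @ z2.t() / temperature  # (batch, batch)
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, labels)
```

In practice the support set would be maintained as a FIFO queue of embeddings refreshed each training step (the queue size here is an assumption); the key design point from the abstract is that the nearest neighbor supplies semantic variation that pre-defined augmentations of the same image cannot.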
Web Archive [PDF]: https://web.archive.org/web/20211010001440/https://arxiv.org/pdf/2104.14548v2.pdf
arxiv.org: https://arxiv.org/abs/2104.14548v2