Instance-level contrastive learning yields human brain-like representation without category-supervision
[article]
2020
bioRxiv
pre-print
Finally, this dataset reveals substantial representational structure in intermediate and late stages of the human visual system that is not accounted for by any model, whether self-supervised or category-supervised ...
This paper introduces a new self-supervised learning framework, instance-prototype contrastive learning (IPCL), and compares the internal representations learned by this model and other instance-level ...
Reflections of the environment in memory. Psychological science, 2(6):396-408, 1991. ...
doi:10.1101/2020.06.15.153247
fatcat:j54fe2rivnbwjp56h5w4qdnwou
Estimating Galactic Distances From Images Using Self-supervised Representation Learning
[article]
2021
arXiv
pre-print
We use a contrastive self-supervised learning framework to estimate distances to galaxies from their photometric images. ...
By fine-tuning our self-supervised representations using all available data labels in the Main Galaxy Sample of the Sloan Digital Sky Survey (SDSS), we outperform the state-of-the-art supervised ...
Self-supervised representations. After the contrastive learning phase, galaxies are passed through the encoder network to obtain their 128-dimensional contrastive loss vectors. ...
arXiv:2101.04293v1
fatcat:2kofn6dkibd6depuvislsx7574
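As an aside to the snippet above, the following is a minimal PyTorch sketch of fine-tuning a regression head on 128-dimensional self-supervised embeddings, as the entry describes. The names `encoder`, `DistanceHead`, and `finetune_step` are hypothetical; the snippet only states that galaxies are encoded into 128-dimensional vectors and then fine-tuned with labels to predict distances.

```python
# Minimal sketch, assuming a pretrained contrastive encoder is available.
# `encoder` is a hypothetical module mapping images to (B, 128) embeddings.
import torch
import torch.nn as nn

class DistanceHead(nn.Module):
    """Small regression head on top of 128-d self-supervised embeddings."""
    def __init__(self, embed_dim: int = 128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(embed_dim, 64),
            nn.ReLU(),
            nn.Linear(64, 1),   # predicted distance / redshift
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return self.mlp(z).squeeze(-1)

def finetune_step(encoder, head, images, targets, optimizer):
    """One fine-tuning step: encoder and head are trained jointly with an MSE loss."""
    z = encoder(images)                         # (B, 128) contrastive representations
    pred = head(z)                              # (B,) predicted distances
    loss = nn.functional.mse_loss(pred, targets)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```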
ISD: Self-Supervised Learning by Iterative Similarity Distillation
[article]
2021
arXiv
pre-print
Recently, contrastive learning has achieved great results in self-supervised learning, where the main idea is to push two augmentations of an image (positive pairs) closer compared to other random images ...
Hence, we introduce a self-supervised learning algorithm where we use a soft similarity for the negative images rather than a binary distinction between positive and negative pairs. ...
This relaxation lets self-supervised learning focus on what matters most in learning rich features rather than forcing an unnecessary constraint of no change at all, which is difficult to achieve. ...
arXiv:2012.09259v3
fatcat:botccr7vsnf7powqj7opff7umy
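To make the two ideas in the snippet concrete, here is a hedged PyTorch sketch: `info_nce` is the standard instance-level contrastive loss (push two augmentations of the same image together, away from other images), and `soft_similarity_distillation` is an approximation of replacing the hard positive/negative split with soft targets taken from a teacher's similarity distribution. It is an illustration of the general idea, not the authors' exact ISD formulation.

```python
# Hedged sketch; tensor shapes and the teacher/anchor setup are assumptions.
import torch
import torch.nn.functional as F

def info_nce(q: torch.Tensor, k: torch.Tensor, temperature: float = 0.1) -> torch.Tensor:
    """q, k: (B, D) L2-normalized embeddings of two augmentations of the same images."""
    logits = q @ k.t() / temperature                       # (B, B) pairwise similarities
    labels = torch.arange(q.size(0), device=q.device)      # positives lie on the diagonal
    return F.cross_entropy(logits, labels)

def soft_similarity_distillation(q_student, anchors_student, q_teacher, anchors_teacher,
                                 temperature: float = 0.1) -> torch.Tensor:
    """Match the student's similarity distribution over anchor images to the teacher's."""
    t = F.softmax(q_teacher @ anchors_teacher.t() / temperature, dim=-1)
    s = F.log_softmax(q_student @ anchors_student.t() / temperature, dim=-1)
    return F.kl_div(s, t, reduction="batchmean")
```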
Revisiting Self-Supervised Visual Representation Learning
[article]
2019
arXiv
pre-print
We challenge a number of common practices in self-supervised visual representation learning and observe that standard recipes for CNN design do not always translate to self-supervised representation learning ...
Unsupervised visual representation learning remains a largely unsolved problem in computer vision research. ...
Invertible units preserve all information learned in intermediate layers, and, thus, prevent deterioration of representation quality. ...
arXiv:1901.09005v1
fatcat:64vuo5jjwrexlpofatd463vkca
Revisiting Self-Supervised Visual Representation Learning
2019
2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
We challenge a number of common practices in self-supervised visual representation learning and observe that standard recipes for CNN design do not always translate to self-supervised representation learning ...
Unsupervised visual representation learning remains a largely unsolved problem in computer vision research. ...
Invertible units preserve all information learned in intermediate layers, and, thus, prevent deterioration of representation quality. ...
doi:10.1109/cvpr.2019.00202
dblp:conf/cvpr/KolesnikovZB19
fatcat:hutezdahpndirit2ygw75wb6em
Point-Level Region Contrast for Object Detection Pre-Training
[article]
2022
arXiv
pre-print
In this work we present point-level region contrast, a self-supervised pre-training approach for the task of object detection. ...
Incorporating this perspective in pre-training, our approach performs contrastive learning by directly sampling individual point pairs from different regions. ...
Un-/self-supervised learning, in particular contrastive learning [6, 20, 24], has recently arisen as a powerful tool to obtain visual representations that can potentially benefit from an ...
arXiv:2202.04639v2
fatcat:pp262pglibgd3ch6uqhgjvmcpa
Constrained Mean Shift Using Distant Yet Related Neighbors for Representation Learning
[article]
2021
arXiv
pre-print
We are interested in representation learning in self-supervised, supervised, or semi-supervised settings. ...
The prior work on applying the mean-shift idea to self-supervised learning, MSF, generalizes the BYOL idea by pulling a query image not only closer to its other augmentation, but also to the nearest ...
Improved baselines with momentum contrastive learning. ...
arXiv:2112.04607v1
fatcat:n7g5f2obnzf6xjn6jpyjpp4ezu
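The snippet above describes a mean-shift style objective: the query embedding is pulled toward its augmented view and toward that view's nearest neighbors in a memory bank. The sketch below is a hedged PyTorch illustration of that idea; the bank contents, neighbor count k, and distance formulation are assumptions, not the authors' exact MSF/CMSF loss.

```python
# Hedged sketch of a mean-shift style pull toward nearest neighbors of the target view.
import torch
import torch.nn.functional as F

def mean_shift_loss(query: torch.Tensor, target: torch.Tensor,
                    memory_bank: torch.Tensor, k: int = 5) -> torch.Tensor:
    """query, target: (B, D) normalized embeddings of two augmentations; memory_bank: (N, D)."""
    sims = target @ memory_bank.t()                        # (B, N) similarity of targets to the bank
    nn_idx = sims.topk(k, dim=-1).indices                  # (B, k) nearest neighbors of each target
    neighbors = memory_bank[nn_idx]                        # (B, k, D)
    # Pull the query toward its own augmentation and toward each of the target's neighbors.
    pos = torch.cat([target.unsqueeze(1), neighbors], dim=1)      # (B, k+1, D)
    return (2 - 2 * (query.unsqueeze(1) * pos).sum(-1)).mean()    # squared L2 for unit vectors
```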
Unsupervised Learning of Visual Features by Contrasting Cluster Assignments
[article]
2021
arXiv
pre-print
Unsupervised image representations have significantly reduced the gap with supervised pretraining, notably with the recent achievements of contrastive learning methods. ...
Our goal is to learn visual features in an online fashion without supervision. To that end, we propose an online clustering-based self-supervised method. ...
arXiv:2006.09882v5
fatcat:36ckh5q5e5dq7d36xs52dg6swu
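For context on the clustering-based objective the snippet mentions, here is a heavily simplified, hedged sketch of a swapped-prediction loss: features of one view are softly assigned to learnable prototypes, and the other view is trained to predict those assignments, and vice versa. The actual method computes codes with an equal-partition (Sinkhorn) step, which is omitted here.

```python
# Simplified sketch, assuming (B, D) normalized view embeddings and (K, D) normalized prototypes.
import torch
import torch.nn.functional as F

def swapped_prediction(z1, z2, prototypes, temperature: float = 0.1) -> torch.Tensor:
    """Each view predicts the prototype assignment ("code") of the other view."""
    p1 = z1 @ prototypes.t() / temperature          # (B, K) prototype scores for view 1
    p2 = z2 @ prototypes.t() / temperature
    with torch.no_grad():                           # targets: soft assignments, no gradient
        q1 = F.softmax(p1, dim=-1)
        q2 = F.softmax(p2, dim=-1)
    loss = -(q2 * F.log_softmax(p1, dim=-1)).sum(-1).mean() \
           - (q1 * F.log_softmax(p2, dim=-1)).sum(-1).mean()
    return loss
```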
Rethinking Self-supervised Correspondence Learning: A Video Frame-level Similarity Perspective
[article]
2021
arXiv
pre-print
To learn generalizable representations for correspondence at large scale, a variety of self-supervised pretext tasks have been proposed to explicitly perform object-level or patch-level similarity learning. ...
Our work is inspired by the recent success in image-level contrastive learning and similarity learning for visual recognition. ...
This work was supported, in part, by grants from DARPA LwLL, NSF 1730158 CI-New: Cognitive Hardware and Software Ecosystem Community Infrastructure (CHASE-CI), NSF ACI-1541349 CC*DNI Pacific Research Platform ...
arXiv:2103.17263v5
fatcat:vl6wmapxyra5hczdprm57lnp7y
A Multi-Stage Attentive Transfer Learning Framework for Improving COVID-19 Diagnosis
[article]
2021
arXiv
pre-print
Importantly, we propose a novel self-supervised learning method to learn multi-scale representations for lung CT images. ...
Our results also show that the proposed self-supervised learning method outperforms several baseline methods. ...
The transferability of ATTNs and convolution layers in self-supervised learning is then explored. ...
arXiv:2101.05410v1
fatcat:yalhwm25qncmhd2rminxxt74ci
DisCo: Remedy Self-supervised Learning on Lightweight Models with Distilled Contrastive Learning
[article]
2022
arXiv
pre-print
Current methods mainly rely on contrastive learning to train the network; in this work, we propose a simple yet effective Distilled Contrastive Learning (DisCo) method to ease the issue by a large margin ...
While self-supervised representation learning (SSL) has received widespread attention from the community, recent research argues that its performance suffers a cliff-like fall when the model size decreases ...
Conclusion In this paper, we propose Distilled Contrastive Learning (DisCo) to remedy self-supervised learning on lightweight models. ...
arXiv:2104.09124v4
fatcat:w6jsbna25vdnlhnv7xmgciac3q
Beyond category-supervision: instance-level contrastive learning models predict human visual system responses to objects
[article]
2021
bioRxiv
pre-print
Here we present a fully self-supervised model which instead learns to represent individual images, where views of the same image are embedded nearby in a low-dimensional feature space, distinctly from ...
We find category information implicitly emerges in the feature space, and critically that these models achieve parity with category-supervised models in predicting the hierarchical structure of brain responses ...
Inspired by this instance-level supervised system, we developed a learning framework that is fully self-supervised, called instance prototype contrastive learning (IPCL), in which the goal is to learn ...
doi:10.1101/2021.05.28.446118
fatcat:mt47l7eq7najjjijp4klz6evfy
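The snippet above describes instance-level learning in which views of the same image are embedded nearby in feature space. The following hedged PyTorch sketch illustrates one way an instance-prototype style objective could look, assuming the "prototype" of an image is the mean embedding of its augmented views; it is an illustration of the general idea, not the authors' exact IPCL loss.

```python
# Hedged sketch; the prototype definition and temperature are assumptions.
import torch
import torch.nn.functional as F

def instance_prototype_loss(views: torch.Tensor, temperature: float = 0.1) -> torch.Tensor:
    """views: (B, V, D) L2-normalized embeddings of V augmented views per image."""
    B, V, D = views.shape
    prototypes = F.normalize(views.mean(dim=1), dim=-1)    # (B, D) one prototype per image
    flat = views.reshape(B * V, D)                          # treat every view as a query
    logits = flat @ prototypes.t() / temperature            # (B*V, B) view-to-prototype similarity
    labels = torch.arange(B, device=views.device).repeat_interleave(V)
    return F.cross_entropy(logits, labels)                  # pull each view to its own prototype
```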
Temporal Context Matters: Enhancing Single Image Prediction with Disease Progression Representations
[article]
2022
arXiv
pre-print
Meanwhile, a Vision Transformer is pretrained in a self-supervised fashion to extract features from single-timepoint images. ...
In our method, a self-attention-based Temporal Convolutional Network (TCN) is used to learn a representation that is most reflective of the disease trajectory. ...
Self-supervised learning approaches [11, 25] have made significant advances in recent years, improving the ability to learn image representations even from smaller datasets. ...
arXiv:2203.01933v2
fatcat:6yoej5d6sbbytnn2yvfbjpfmeu
Contrastive Code Representation Learning
[article]
2021
arXiv
pre-print
We propose ContraCode: a contrastive pre-training task that learns code functionality, not form. ...
Recent work learns contextual representations of source code by reconstructing tokens from their context. ...
In contrast, self-supervised learning can leverage large open-source repositories such as GitHub with limited or no annotations. ...
arXiv:2007.04973v3
fatcat:bpqzjwtoebhh3in5q7qkk4sggq
What makes instance discrimination good for transfer learning?
[article]
2021
arXiv
pre-print
In this work, we investigate the following problems: What makes instance discrimination pretraining good for transfer learning? What knowledge is actually learned and transferred from these models? ...
Contrastive visual pretraining based on the instance discrimination pretext task has made significant progress. ...
Over the years, the research community has achieved significant progress on self-supervised learning (Doersch et al., 2015; Doersch & Zisserman, 2017; Zhang et al., 2016; Gidaris et al., 2018; and contrastive ...
arXiv:2006.06606v2
fatcat:gyleg63lbzfqpbkb2b3aryz63u
Showing results 1 — 15 out of 4,103 results