11 Hits in 8.1 sec

AdCo: Adversarial Contrast for Efficient Learning of Unsupervised Representations from Self-Trained Negative Adversaries [article]

Qianjiang Hu, Xiao Wang, Wei Hu, Guo-Jun Qi
2021 arXiv   pre-print
Alternatively, we propose to directly learn a set of negative adversaries playing against the self-trained representation.  ...  Contrastive learning relies on constructing a collection of negative examples that are sufficiently hard to discriminate against positive queries when their representations are self-trained.  ...  Appendix for "AdCo: Adversarial Contrast for Efficient Learning of Unsupervised Representations from Self-Trained Negative Adversaries": In this appendix, we further analyze the impact of several factors  ... 
arXiv:2011.08435v5 fatcat:ld3hqxixibd2zfx7ih3envre4u
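
The AdCo entry above describes negatives that are themselves trained to play against the encoder. A minimal sketch of that idea (not the authors' code) is below: a learnable bank of negative embeddings is updated to maximize the InfoNCE loss while the encoder minimizes it. The sign-flip update, shapes, and hyperparameters are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def adco_step(encoder, enc_opt, neg_bank, neg_opt, query_view, key_view, tau=0.12):
    """One step of contrastive training with self-trained negative adversaries."""
    q = F.normalize(encoder(query_view), dim=1)          # (B, D) query embeddings
    k = F.normalize(encoder(key_view), dim=1).detach()   # (B, D) positive keys
    n = F.normalize(neg_bank, dim=1)                     # (K, D) learnable negatives

    l_pos = (q * k).sum(dim=1, keepdim=True)             # (B, 1) positive logits
    l_neg = q @ n.t()                                     # (B, K) negative logits
    logits = torch.cat([l_pos, l_neg], dim=1) / tau
    labels = torch.zeros(q.size(0), dtype=torch.long, device=q.device)
    loss = F.cross_entropy(logits, labels)

    enc_opt.zero_grad()
    neg_opt.zero_grad()
    loss.backward()
    enc_opt.step()             # encoder: gradient descent on the contrastive loss
    neg_bank.grad.neg_()       # negatives: flip the gradient to ascend on the loss
    neg_opt.step()
    return loss.item()

# Illustrative setup (assumed sizes): neg_bank = torch.nn.Parameter(torch.randn(65536, 128))
# enc_opt = torch.optim.SGD(encoder.parameters(), lr=0.03, momentum=0.9)
# neg_opt = torch.optim.SGD([neg_bank], lr=3.0)
```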

CaCo: Both Positive and Negative Samples are Directly Learnable via Cooperative-adversarial Contrastive Learning [article]

Xiao Wang, Yuhang Huang, Dan Zeng, Guo-Jun Qi
2022 arXiv   pre-print
As a representative self-supervised method, contrastive learning has achieved great success in unsupervised training of representations.  ...  This yields cooperative positives and adversarial negatives with respect to the encoder, which are updated to continuously track the learned representation of the query anchors over mini-batches.  ...  AdCo [8] has carefully discussed adversarial negative example training and utilized it successfully for contrastive learning.  ... 
arXiv:2203.14370v1 fatcat:qv2sbblxxbgrvh3istgxidekq4
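
The CaCo snippet above makes both positives and negatives directly learnable. A hedged sketch of one way to realize that cooperative-adversarial split: a positive bank follows gradient descent on the contrastive loss (cooperative) while a negative bank follows gradient ascent (adversarial). The single shared loss, bank sizes, and update rule are assumptions of this sketch, not the paper's code.

```python
import torch
import torch.nn.functional as F

D, K = 128, 4096                                   # embedding dim, bank size (assumed)
pos_bank = torch.nn.Parameter(torch.randn(K, D))
neg_bank = torch.nn.Parameter(torch.randn(K, D))
bank_opt = torch.optim.SGD([pos_bank, neg_bank], lr=3.0)

def caco_bank_update(q, pos_idx, tau=0.1):
    """q: (B, D) normalized query anchors; pos_idx: (B,) index of each query's positive."""
    p = F.normalize(pos_bank, dim=1)
    n = F.normalize(neg_bank, dim=1)
    l_pos = (q * p[pos_idx]).sum(dim=1, keepdim=True)  # (B, 1)
    l_neg = q @ n.t()                                   # (B, K)
    logits = torch.cat([l_pos, l_neg], dim=1) / tau
    labels = torch.zeros(q.size(0), dtype=torch.long, device=q.device)
    loss = F.cross_entropy(logits, labels)

    bank_opt.zero_grad()
    loss.backward()
    neg_bank.grad.neg_()    # adversarial negatives: ascend on the contrastive loss
    bank_opt.step()         # cooperative positives: keep the descent direction
    return loss
```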

Can contrastive learning avoid shortcut solutions? [article]

Joshua Robinson, Li Sun, Ke Yu, Kayhan Batmanghelich, Stefanie Jegelka, Suvrit Sra
2021 arXiv   pre-print
The generalization of representations learned via contrastive learning depends crucially on what features of the data are extracted.  ...  In response, we propose implicit feature modification (IFM), a method for altering positive and negative samples in order to guide contrastive models towards capturing a wider variety of predictive features  ...  Training: We use the SimCLR framework with a ResNet-18 backbone and train for 1000 epochs. We use a base learning rate of 5 with a cosine annealing schedule and a batch size of 512. The LARS optimizer is used.  ... 
arXiv:2106.11230v3 fatcat:3zsvbtgq4veozcwzvxz2lqom6i
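
The IFM entry above alters positive and negative samples in feature space to make the contrastive objective harder. A hedged sketch of that idea follows: positives are pulled away from the anchor and negatives pushed towards it by a small epsilon step, and the loss is averaged over the clean and perturbed features. The epsilon budget, the perturbation direction along the anchor, and the 50/50 averaging are simplifying assumptions.

```python
import torch
import torch.nn.functional as F

def ifm_contrastive_loss(q, k_pos, k_neg, eps=0.1, tau=0.5):
    """q, k_pos: (B, D); k_neg: (B, K, D); all assumed L2-normalized."""
    def info_nce(pos, neg):
        l_pos = (q * pos).sum(dim=1, keepdim=True)          # (B, 1)
        l_neg = torch.einsum('bd,bkd->bk', q, neg)           # (B, K)
        logits = torch.cat([l_pos, l_neg], dim=1) / tau
        labels = torch.zeros(q.size(0), dtype=torch.long, device=q.device)
        return F.cross_entropy(logits, labels)

    clean = info_nce(k_pos, k_neg)
    # Harder samples: move positives away from the anchor, negatives towards it.
    hard_pos = k_pos - eps * q
    hard_neg = k_neg + eps * q.unsqueeze(1)
    perturbed = info_nce(hard_pos, hard_neg)
    return 0.5 * (clean + perturbed)
```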

Unsupervised Domain Generalization by Learning a Bridge Across Domains [article]

Sivan Harary, Eli Schwartz, Assaf Arbelle, Peter Staar, Shady Abu-Hussein, Elad Amrani, Roei Herzig, Amit Alfassy, Raja Giryes, Hilde Kuehne, Dina Katabi, Kate Saenko (+2 others)
2022 arXiv   pre-print
The BrAD and mappings to it are learned jointly (end-to-end) with a contrastive self-supervised representation model that semantically aligns each of the domains to its BrAD-projection, and hence implicitly  ...  Our approach is based on self-supervised learning of a Bridge Across Domains (BrAD) - an auxiliary bridge domain accompanied by a set of semantics-preserving visual (image-to-image) mappings to BrAD from  ...  The BrAD is used only during contrastive self-supervised training of our model for semantically aligning the representations (features) of each of the training domains to the ones for the shared BrAD.  ... 
arXiv:2112.02300v2 fatcat:ho3ud7rrsrddpiovqwagc75hpq
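
The BrAD entry above aligns each training domain to its projection into a shared bridge domain via a contrastive loss. A minimal, hypothetical sketch of that alignment step is below; the function names, the use of a separate bridge-domain encoder, and the in-batch negatives are assumptions of this sketch.

```python
import torch
import torch.nn.functional as F

def brad_alignment_loss(encoder, brad_encoder, mapping, images, tau=0.2):
    """mapping: image-to-image network projecting `images` into the bridge domain."""
    z_img = F.normalize(encoder(images), dim=1)                 # (B, D)
    z_brad = F.normalize(brad_encoder(mapping(images)), dim=1)  # (B, D)
    logits = z_img @ z_brad.t() / tau                           # (B, B) similarities
    labels = torch.arange(images.size(0), device=images.device)
    # Each image should match its own bridge-domain projection (the diagonal).
    return F.cross_entropy(logits, labels)
```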

Towards Unsupervised Domain Generalization [article]

Xingxuan Zhang, Linjun Zhou, Renzhe Xu, Peng Cui, Zheyan Shen, Haoxin Liu
2022 arXiv   pre-print
Specifically, we study a novel generalization problem called unsupervised domain generalization (UDG), which aims to learn generalizable models with unlabeled data and analyze the effects of pre-training  ...  Since unlabeled data are far more accessible, we seek to explore how unsupervised learning can help deep models generalize across domains.  ...  U1936219, 61521002, 61772304), Beijing Academy of Artificial Intelligence (BAAI), and a grant from the Institute for Guo Qiang, Tsinghua University.  ... 
arXiv:2107.06219v2 fatcat:odtceffjtrd7zbyyfyobvxyn4a

Dense Contrastive Visual-Linguistic Pretraining [article]

Lei Shi, Kai Shuang, Shijie Geng, Peng Gao, Zuohui Fu, Gerard de Melo, Yunpeng Chen, Sen Su
2021 arXiv   pre-print
Two data augmentation strategies (Mask Perturbation and Intra-/Inter-Adversarial Perturbation) are developed to improve the quality of negative samples used in contrastive learning.  ...  Overall, DCVLP allows cross-modality dense region contrastive learning in a self-supervised setting independent of any object annotations.  ...  RoI-Feature Dense Contrastive Loss for Visual Branch: Contrastive learning performs self-supervised representation learning by discriminating visually similar representation pairs from a group of negative  ... 
arXiv:2109.11778v1 fatcat:sown4wcp45c5dpizfyrcrnsks4
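
The DCVLP entry above contrasts RoI (region) features of an image against perturbed versions of those regions. A hedged sketch of such a dense, region-level contrastive loss follows; the simple additive-noise perturbation shown in the comment stands in for the paper's mask/adversarial perturbations and is an assumption, as are the shapes.

```python
import torch
import torch.nn.functional as F

def dense_roi_contrastive_loss(roi_feats, roi_feats_perturbed, tau=0.07):
    """roi_feats, roi_feats_perturbed: (N, D) features of the same regions
    under the original and the perturbed input, respectively."""
    a = F.normalize(roi_feats, dim=1)
    b = F.normalize(roi_feats_perturbed, dim=1)
    logits = a @ b.t() / tau                      # (N, N) region-to-region similarities
    labels = torch.arange(a.size(0), device=a.device)
    return F.cross_entropy(logits, labels)        # match each region to itself

# Stand-in perturbation (NOT the paper's scheme), for illustration only:
# roi_feats_perturbed = roi_feats + 0.05 * torch.randn_like(roi_feats)
```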

Self-Contrastive Learning with Hard Negative Sampling for Self-supervised Point Cloud Learning [article]

Bi'an Du, Xiang Gao, Wei Hu, Xin Li
2021 arXiv   pre-print
Such self-contrastive learning is well aligned with the emerging paradigm of self-supervised learning for point cloud analysis.  ...  point cloud as positive samples and otherwise negative ones to facilitate the task of contrastive learning.  ...  AdCo [33] presents an adversarial approach to demonstrate that negative examples can be directly trained end-to-end together with the backbone network so that the contrastive model can be learned  ... 
arXiv:2107.01886v1 fatcat:apwif67eijf4jljcoaqboomlve
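
The entry above combines self-contrastive learning over point-cloud patches with hard negative sampling. A hedged sketch of the hard-negative step: among candidate negatives (patches from other point clouds), only the ones most similar to the query are kept for the contrastive loss. The top-k selection rule and shapes are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def hard_negative_contrastive_loss(q, pos, neg_candidates, num_hard=64, tau=0.1):
    """q, pos: (B, D) query/positive patch embeddings; neg_candidates: (M, D),
    with M >= num_hard."""
    q = F.normalize(q, dim=1)
    pos = F.normalize(pos, dim=1)
    neg = F.normalize(neg_candidates, dim=1)

    sim_neg = q @ neg.t()                          # (B, M) similarities to candidates
    hard_sim, _ = sim_neg.topk(num_hard, dim=1)    # keep only the hardest negatives

    l_pos = (q * pos).sum(dim=1, keepdim=True)     # (B, 1)
    logits = torch.cat([l_pos, hard_sim], dim=1) / tau
    labels = torch.zeros(q.size(0), dtype=torch.long, device=q.device)
    return F.cross_entropy(logits, labels)
```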

Large-scale Unsupervised Semantic Segmentation [article]

Shanghua Gao and Zhong-Yu Li and Ming-Hsuan Yang and Ming-Ming Cheng and Junwei Han and Philip Torr
2022 arXiv   pre-print
There are two major challenges to allowing such an attractive learning modality for segmentation tasks: i) a large-scale benchmark for assessing algorithms is missing; ii) unsupervised category/shape representation  ...  Powered by the ImageNet dataset, unsupervised learning on large-scale data has made significant advances for classification tasks.  ...  Acknowledgement: Thanks for part of the pixel-level annotation from the Learning from Imperfect Data Challenge [145].  ... 
arXiv:2106.03149v2 fatcat:f2q34ku5znarhmcnnrvptstapy

Relational Self-Supervised Learning [article]

Mingkai Zheng, Shan You, Fei Wang, Chen Qian, Changshui Zhang, Xiaogang Wang, Chang Xu
2022 arXiv   pre-print
Self-supervised learning (SSL), including the mainstream contrastive learning, has achieved great success in learning visual representations without data annotations.  ...  In this paper, we introduce a novel SSL paradigm, termed the relational self-supervised learning (ReSSL) framework, that learns representations by modeling the relationship between different instances  ...  AdCo [42] shows that a set of negative features can replace the negative samples, and these features can be adversarially learned by maximizing the contrastive loss.  ... 
arXiv:2203.08717v1 fatcat:ocpgojeqfjfhdhkpo5amge2q6i
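
The ReSSL entry above learns by modeling relations between instances rather than by instance discrimination. A hedged sketch of a relation-matching loss in that spirit: the similarity distribution of one augmented view over a set of anchor embeddings is pushed towards the sharper distribution of the other view. The two temperatures and the fixed anchor bank are assumptions of this sketch.

```python
import torch
import torch.nn.functional as F

def relational_ssl_loss(z_student, z_teacher, anchors, tau_s=0.1, tau_t=0.04):
    """z_student, z_teacher: (B, D) embeddings of two views; anchors: (K, D)."""
    z_s = F.normalize(z_student, dim=1)
    z_t = F.normalize(z_teacher, dim=1).detach()     # teacher view supplies the target
    a = F.normalize(anchors, dim=1)

    log_p_s = F.log_softmax(z_s @ a.t() / tau_s, dim=1)  # student relation distribution
    p_t = F.softmax(z_t @ a.t() / tau_t, dim=1)          # sharper teacher target
    return -(p_t * log_p_s).sum(dim=1).mean()            # cross-entropy between relations
```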

On Feature Decorrelation in Self-Supervised Learning [article]

Tianyu Hua, Wenxiao Wang, Zihui Xue, Sucheng Ren, Yue Wang, Hang Zhao
2021 arXiv   pre-print
In self-supervised representation learning, a common idea behind most of the state-of-the-art approaches is to enforce the robustness of the representations to predefined augmentations.  ...  The gains from feature decorrelation are verified empirically to highlight the importance and the potential of this insight.  ...  Large-batch optimizers such as LARS [50] are commonly used in self-supervised contrastive pre-training for visual representation learning [8, 18, 7] .  ... 
arXiv:2105.00470v2 fatcat:oph5osy5e5cndchu7vgmkzkr7m
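
The entry above argues that feature decorrelation underlies much of the benefit of these SSL methods. A hedged sketch of a decorrelation penalty of that kind: standardize the batch of features and penalize the off-diagonal entries of their correlation matrix. Treating this as an additive regularizer with weight `lam` is an assumption of the sketch, not the paper's exact formulation.

```python
import torch

def decorrelation_penalty(z, lam=1.0, eps=1e-5):
    """z: (B, D) batch of features."""
    z = (z - z.mean(dim=0)) / (z.std(dim=0) + eps)   # standardize each dimension
    corr = (z.t() @ z) / z.size(0)                   # (D, D) empirical correlation matrix
    off_diag = corr - torch.diag(torch.diag(corr))   # zero out the diagonal
    return lam * (off_diag ** 2).sum()
```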

Can contrastive learning avoid shortcut solutions?

Joshua Robinson, Li Sun, Ke Yu, Kayhan Batmanghelich, Stefanie Jegelka, Suvrit Sra
2021
The generalization of representations learned via contrastive learning depends crucially on what features of the data are extracted.  ...  In response, we propose implicit feature modification (IFM), a method for altering positive and negative samples in order to guide contrastive models towards capturing a wider variety of predictive features  ...  Acknowledgments We warmly thank Katherine Hermann and Andrew Lampinen for sharing the Trifeature dataset.  ... 
pmid:35546903 pmcid:PMC9089441 fatcat:npkoro2b7bg5pkam2wrpxool2a