794 Hits in 6.6 sec

Graph Barlow Twins: A self-supervised representation learning framework for graphs [article]

Piotr Bielak, Tomasz Kajdanowicz, Nitesh V. Chawla
2021 arXiv   pre-print
To overcome such limitations, we propose a framework for self-supervised graph representation learning -- Graph Barlow Twins, which utilizes a cross-correlation-based loss function instead of negative  ...  Despite the great success of SSL methods in computer vision and natural language processing, most of them employ contrastive learning objectives that require negative samples, which are hard to define.  ...  However, the Barlow Twins method can be interpreted in another way, as shown by (26). The authors view it as an instance of the so-called negative-sample-free contrastive learning.  ... 
arXiv:2106.02466v1 fatcat:pjfjvdnv75c5jdnhemgzz26vgm
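The cross-correlation-based, negative-sample-free objective mentioned in this snippet can be sketched compactly. Below is a minimal PyTorch illustration of a Barlow-Twins-style loss over two augmented views; the function and parameter names (`barlow_twins_loss`, `lambda_offdiag`) are illustrative and not taken from the cited code.

```python
# Minimal sketch of a Barlow-Twins-style cross-correlation loss (illustrative only).
import torch


def barlow_twins_loss(z1: torch.Tensor, z2: torch.Tensor,
                      lambda_offdiag: float = 5e-3) -> torch.Tensor:
    """z1, z2: (batch, dim) embeddings of two augmented views of the same inputs."""
    n, d = z1.shape
    # Standardize each embedding dimension across the batch.
    z1 = (z1 - z1.mean(0)) / (z1.std(0) + 1e-6)
    z2 = (z2 - z2.mean(0)) / (z2.std(0) + 1e-6)
    # Empirical cross-correlation matrix between the two views (d x d).
    c = (z1.T @ z2) / n
    # Invariance term: push diagonal entries towards 1.
    on_diag = (torch.diagonal(c) - 1).pow(2).sum()
    # Redundancy-reduction term: push off-diagonal entries towards 0.
    off_diag = (c - torch.diag(torch.diagonal(c))).pow(2).sum()
    return on_diag + lambda_offdiag * off_diag
```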

Barlow Twins: Self-Supervised Learning via Redundancy Reduction [article]

Jure Zbontar, Li Jing, Ishan Misra, Yann LeCun, Stéphane Deny
2021 arXiv   pre-print
Barlow Twins outperforms previous methods on ImageNet for semi-supervised classification in the low-data regime, and is on par with current state of the art for ImageNet classification with a linear classifier  ...  Barlow Twins does not require large batches nor asymmetry between the network twins such as a predictor network, gradient stopping, or a moving average on the weight updates.  ...  Acknowledgements We thank Pascal Vincent, Yubei Chen and Samuel Ocko for helpful insights on the mathematical connection to the infoNCE loss, Robert Geirhos and Adrien Bardes for extra analyses not included  ... 
arXiv:2103.03230v3 fatcat:trtjkpzcbjewrehdpzqdwim7ru

An Empirical Study of Graph Contrastive Learning [article]

Yanqiao Zhu, Yichen Xu, Qiang Liu, Shu Wu
2021 arXiv   pre-print
Graph Contrastive Learning (GCL) establishes a new paradigm for learning graph representations without human annotations.  ...  In this work, we first identify several critical design considerations within a general GCL paradigm, including augmentation functions, contrasting modes, contrastive objectives, and negative mining techniques  ...  Besides contrastive objectives that rely on negative samples, we experiment with three negative-sample-free objectives: Bootstrapping Latent (BL) loss, Barlow Twins (BT) loss, and VICReg loss.  ... 
arXiv:2109.01116v2 fatcat:kjfrkg26tbfxhoiomx2mdgao5y
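Of the negative-sample-free objectives listed in the snippet, the VICReg loss combines an invariance term with per-branch variance and covariance penalties. A hedged PyTorch sketch follows; coefficient values and helper names are chosen for illustration rather than taken from the cited study.

```python
# Sketch of a VICReg-style objective: invariance + variance + covariance terms.
import torch
import torch.nn.functional as F


def off_diagonal(m: torch.Tensor) -> torch.Tensor:
    # Return the off-diagonal entries of a square matrix as a flat vector.
    d = m.shape[0]
    return m.flatten()[:-1].view(d - 1, d + 1)[:, 1:].flatten()


def vicreg_loss(z1: torch.Tensor, z2: torch.Tensor,
                lam: float = 25.0, mu: float = 25.0, nu: float = 1.0) -> torch.Tensor:
    n, d = z1.shape
    # Invariance: mean-squared error between the two views.
    inv = F.mse_loss(z1, z2)
    # Variance: hinge keeping each dimension's std above 1 in each branch.
    std1 = torch.sqrt(z1.var(dim=0) + 1e-4)
    std2 = torch.sqrt(z2.var(dim=0) + 1e-4)
    var = torch.mean(F.relu(1 - std1)) + torch.mean(F.relu(1 - std2))
    # Covariance: penalize off-diagonal covariance within each branch.
    z1c, z2c = z1 - z1.mean(dim=0), z2 - z2.mean(dim=0)
    cov1 = (z1c.T @ z1c) / (n - 1)
    cov2 = (z2c.T @ z2c) / (n - 1)
    cov = off_diagonal(cov1).pow(2).sum() / d + off_diagonal(cov2).pow(2).sum() / d
    return lam * inv + mu * var + nu * cov
```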

GraphVICRegHSIC: Towards improved self-supervised representation learning for graphs with a hyrbid loss function [article]

Sayan Nag
2021 arXiv   pre-print
In this paper, we have used a graph-based self-supervised learning strategy with different loss functions (Barlow Twins [Zbontar et al., 2021], HSIC [Tsai et al., 2021], VICReg [Bardes et al., 2021]) which have shown  ...  promising results when applied with CNNs previously.  ...  Using such a negative-sample-free contrastive approach, the authors claimed that the representations learnt will be superior.  ... 
arXiv:2105.12247v4 fatcat:szfeecgx5nc35cnwkdbx5q7in4

Exploring the Equivalence of Siamese Self-Supervised Learning via A Unified Gradient Framework [article]

Chenxin Tao, Honghui Wang, Xizhou Zhu, Jiahua Dong, Shiji Song, Gao Huang, Jifeng Dai
2021 arXiv   pre-print
Various works have been proposed to deal with self-supervised learning from different perspectives: (1) contrastive learning methods (e.g., MoCo, SimCLR) utilize both positive and negative samples to guide the  ...  methods (e.g., Barlow Twins, VICReg) instead aim to reduce the redundancy between feature dimensions.  ...  Based on the conclusion of [27], our work builds a connection between asymmetric network methods and contrastive learning methods.  ... 
arXiv:2112.05141v1 fatcat:gimkzucforasbfgjbcnc2yeyse
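The snippet contrasts methods that rely on positive and negative samples (e.g., MoCo, SimCLR) with redundancy-reduction methods. A minimal sketch of an InfoNCE-style loss with in-batch negatives, as commonly used by SimCLR-like methods (names are illustrative), is:

```python
# Sketch of an InfoNCE-style contrastive loss with in-batch negatives.
import torch
import torch.nn.functional as F


def info_nce(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.1) -> torch.Tensor:
    """z1, z2: (batch, dim) embeddings of two views; row i of z1 matches row i of z2."""
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    # Similarity of every anchor in view 1 to every candidate in view 2.
    logits = (z1 @ z2.T) / temperature          # (batch, batch)
    # The matching index is the positive; all other columns act as negatives.
    targets = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, targets)
```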

Learning From Long-Tailed Data With Noisy Labels [article]

Shyamgopal Karthik and Jérome Revaud and Boris Chidlovskii
2021 arXiv   pre-print
In this work, we present a simple two-stage approach based on recent advances in self-supervised learning to treat both challenges simultaneously.  ...  There have been some recent attempts to tackle, on one side, the problem of learning from noisy labels and, on the other side, learning from long-tailed data.  ...  Earlier methods like SimCLR [9], SimCLRv2 [10] and MoCo [18], for instance, use negative samples and contrastive losses based on artificially constructed positive and negative pairs.  ... 
arXiv:2108.11096v2 fatcat:nd22q3eaurgsjcx7jghxwftsau

Biologically Plausible Training Mechanisms for Self-Supervised Learning in Deep Networks [article]

Mufeng Tang, Yibo Yang, Yali Amit
2021 arXiv   pre-print
The second is simply layer-wise learning, where each layer is directly connected to a layer computing the loss error.  ...  Furthermore we show that learning can be performed with one of two more plausible alternatives to backpropagation.  ...  More recently, a few new methods in self-supervised learning have been proposed to eliminate the need for negatives (therefore no contrast), including BYOL [30], SimSiam [22] and Barlow Twins [31]  ... 
arXiv:2109.15089v3 fatcat:h6j5pw5dpnafzcvv5otzywjmii
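The layer-wise learning mentioned in the snippet can be realized by giving each block its own local loss and blocking gradients between blocks. The sketch below is one common way to set this up, not necessarily the authors' exact mechanism; the placeholder cosine criterion stands in for whatever local self-supervised loss is actually used.

```python
# Loose sketch of layer-wise training with local losses and detached inter-block inputs.
import torch
import torch.nn as nn

blocks = nn.ModuleList([nn.Sequential(nn.Linear(128, 128), nn.ReLU()) for _ in range(3)])
heads = nn.ModuleList([nn.Linear(128, 64) for _ in range(3)])   # local projection heads
opts = [torch.optim.SGD(list(b.parameters()) + list(h.parameters()), lr=1e-2)
        for b, h in zip(blocks, heads)]


def local_loss(p1: torch.Tensor, p2: torch.Tensor) -> torch.Tensor:
    # Placeholder criterion; any negative-sample-free objective could be plugged in here.
    return -nn.functional.cosine_similarity(p1, p2, dim=1).mean()


def train_step(x1: torch.Tensor, x2: torch.Tensor) -> None:
    h1, h2 = x1, x2
    for block, head, opt in zip(blocks, heads, opts):
        # Detach inputs so each layer's error signal never reaches earlier layers.
        h1, h2 = block(h1.detach()), block(h2.detach())
        loss = local_loss(head(h1), head(h2))
        opt.zero_grad()
        loss.backward()
        opt.step()
```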

Self-Supervised Learning with Kernel Dependence Maximization [article]

Yazhe Li and Roman Pogodin and Danica J. Sutherland and Arthur Gretton
2021 arXiv   pre-print
Our approach also gives us insight into BYOL, a negative-free SSL method, since SSL-HSIC similarly learns local neighborhoods of samples.  ...  Trained with or without a target network, SSL-HSIC matches the current state-of-the-art for standard linear evaluation on ImageNet, semi-supervised learning and transfer to other classification and vision  ...  Hénaff for valuable feedback on the manuscript and help with evaluating the object detection task. We thank Aaron Van den Oord and Oriol Vinyals for providing valuable feedback on the manuscript.  ... 
arXiv:2106.08320v2 fatcat:njm2dectlvgmxp7gnz5oohergq
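The dependence measure named in this title, HSIC, has a standard biased empirical estimator, HSIC(X, Y) ≈ tr(KHLH)/(n-1)^2 with a centering matrix H. The sketch below illustrates that estimator with RBF kernels; it is not the paper's full SSL-HSIC objective, and the kernel bandwidth is an arbitrary choice.

```python
# Sketch of the standard biased empirical HSIC estimator with RBF kernels.
import torch


def rbf_kernel(x: torch.Tensor, sigma: float = 1.0) -> torch.Tensor:
    sq_dists = torch.cdist(x, x).pow(2)
    return torch.exp(-sq_dists / (2 * sigma ** 2))


def hsic(x: torch.Tensor, y: torch.Tensor, sigma: float = 1.0) -> torch.Tensor:
    """x, y: (n, d) batches of paired samples; returns a scalar dependence estimate."""
    n = x.size(0)
    k = rbf_kernel(x, sigma)
    l = rbf_kernel(y, sigma)
    # Centering matrix H = I - (1/n) * ones.
    h = torch.eye(n, device=x.device) - torch.ones(n, n, device=x.device) / n
    return torch.trace(k @ h @ l @ h) / (n - 1) ** 2
```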

Triplet is All You Need with Random Mappings for Unsupervised Visual Representation Learning [article]

Wenbin Li, Xuesong Yang, Meihao Kong, Lei Wang, Jing Huo, Yang Gao, Jiebo Luo
2021 arXiv   pre-print
Based on this observation, we propose a simple plug-in RandOm MApping (ROMA) strategy by randomly mapping samples into other spaces and enforcing these randomly projected samples to satisfy the same correlation  ...  However, this type of method, such as SimCLR and MoCo, relies heavily on a large number of negative pairs and thus requires either large batches or memory banks.  ...  Recently, Barlow Twins [8] maximizes the similarity between two augmented (distorted) views of one image while reducing redundancy between their components, by relying on very high-dimensional representations  ... 
arXiv:2107.10419v2 fatcat:yx4u2jotbbhlpdsikn7yeewg2m
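A loose reading of the random-mapping idea in this snippet (my interpretation for illustration, not the authors' code) is to push both views' embeddings through the same fixed random projection and enforce agreement in the projected space:

```python
# Loose sketch: enforce agreement after a shared, untrained random projection.
import torch
import torch.nn.functional as F


def random_mapping_loss(z1: torch.Tensor, z2: torch.Tensor, out_dim: int = 64) -> torch.Tensor:
    d = z1.size(1)
    # A fresh random linear map; it is not trained, only used to re-express the samples.
    w = torch.randn(d, out_dim, device=z1.device) / d ** 0.5
    p1, p2 = z1 @ w, z2 @ w
    # Require the two views to agree in the randomly projected space as well.
    return -F.cosine_similarity(p1, p2, dim=1).mean()
```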

A hierarchical causal taxonomy of psychopathology across the life span

Benjamin B. Lahey, Robert F. Krueger, Paul J. Rathouz, Irwin D. Waldman, David H. Zald
2017 Psychological bulletin  
We propose a taxonomy of psychopathology based on patterns of shared causal influences identified in a review of multivariate behavior genetic studies that distinguish genetic and environmental influences  ...  We posit that these causal influences on psychopathology are moderated by sex and developmental processes.  ...  Shared Genetic Influences Over Time Although much remains to be learned, a longitudinal study of a large representative sample of twins found that adult ratings of externalizing problems at age 5 years  ... 
doi:10.1037/bul0000069 pmid:28004947 pmcid:PMC5269437 fatcat:yi42jgjkcrborh7qyhwg75hjli

From Canonical Correlation Analysis to Self-supervised Graph Neural Networks [article]

Hengrui Zhang, Qitian Wu, Junchi Yan, David Wipf, Philip S. Yu
2021 arXiv   pre-print
We introduce a conceptually simple yet effective model for self-supervised representation learning with graph data.  ...  Compared with other works, our approach requires no parameterized mutual information estimator, additional projector, asymmetric structures, or, most importantly, negative samples which can be  ...  BGRL [39] is a recent endeavor targeting a negative-sample-free approach for GNN learning through asymmetric architectures [12, 6].  ... 
arXiv:2106.12484v2 fatcat:nfkfvqykvrgkzj67ri5qs63hby
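A canonical-correlation-style, negative-sample-free objective of the kind this snippet describes typically pairs an invariance term between two graph views with a term pushing each view's feature correlation matrix towards the identity. A hedged sketch follows; the function name and the coefficient `lam` are illustrative, not taken from the cited code.

```python
# Sketch of a CCA-inspired invariance + decorrelation objective for two graph views.
import torch


def cca_style_loss(z1: torch.Tensor, z2: torch.Tensor, lam: float = 1e-3) -> torch.Tensor:
    """z1, z2: (num_nodes, dim) node embeddings from two augmented graph views."""
    n = z1.size(0)
    # Standardize and scale so that Z^T Z approximates a correlation matrix.
    z1 = (z1 - z1.mean(0)) / (z1.std(0) + 1e-6) / n ** 0.5
    z2 = (z2 - z2.mean(0)) / (z2.std(0) + 1e-6) / n ** 0.5
    eye = torch.eye(z1.size(1), device=z1.device)
    # Invariance: the two views of each node should coincide.
    invariance = (z1 - z2).pow(2).sum()
    # Decorrelation: each view's features should be mutually uncorrelated.
    decorrelation = (z1.T @ z1 - eye).pow(2).sum() + (z2.T @ z2 - eye).pow(2).sum()
    return invariance + lam * decorrelation
```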

Robust Contrastive Learning against Noisy Views [article]

Ching-Yao Chuang, R Devon Hjelm, Xin Wang, Vibhav Vineet, Neel Joshi, Antonio Torralba, Stefanie Jegelka, Yale Song
2022 arXiv   pre-print
Contrastive learning relies on an assumption that positive pairs contain related views, e.g., patches of an image or co-occurring multimodal signals of a video, that share certain underlying information  ...  We show that our approach provides consistent improvements over the state-of-the-art on image, video, and graph contrastive learning benchmarks that exhibit a variety of real-world noise patterns.  ...  We set the learning rate to 1e-3 to fine-tune the models on downstream classification tasks such as UCF101 and HMDB51 with the provided evaluation code.  ... 
arXiv:2201.04309v1 fatcat:oyrvnmwcfbdndolh4kyyp3rmj4

A modern learning theory perspective on the etiology of panic disorder

Mark E. Bouton, Susan Mineka, David H. Barlow
2001 Psychological review  
Anxiety, in contrast, is functionally organized to help the organism prepare for a possible upcoming insult. It is more "forward looking" in this sense (Barlow, 1988, 1991).  ...  Although the investigators noted that the relationships were relatively weak in this sample of well-adjusted military recruits, accounting for a rather small percentage of the variance, this study helps  ... 
doi:10.1037/0033-295x.108.1.4 pmid:11212632 fatcat:y6v4563do5elrkfgquv37jzvcy

Point Cloud Pre-training by Mixing and Disentangling [article]

Chao Sun, Zhedong Zheng, Xiaohan Wang, Mingliang Xu, Yi Yang
2021 arXiv   pre-print
We hope this self-supervised learning attempt on point clouds can pave the way for reducing the deeply-learned model dependence on large-scale labeled data and saving a lot of annotation costs in the future  ...  Point cloud pre-training is one potential solution for obtaining a scalable model for fast adaptation.  ...  Barlow Twins [52] does not use negative pairs or a momentum update.  ... 
arXiv:2109.00452v2 fatcat:zwjepvzakre2zb6pg4ennhsklm
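As a rough illustration of the "mixing" half of the pretext task named in this title (an assumption about the general recipe, not the authors' implementation), two point clouds can be mixed by sampling half the points from each; a model would then be trained to disentangle them back into their sources:

```python
# Loose illustration of mixing two point clouds for a mix-and-disentangle pretext task.
import torch


def mix_point_clouds(pc1: torch.Tensor, pc2: torch.Tensor) -> torch.Tensor:
    """pc1, pc2: (n, 3) point clouds; returns an (n, 3) mixed cloud."""
    n = pc1.size(0)
    # Keep roughly half the points of each cloud, chosen at random.
    idx1 = torch.randperm(n)[: n // 2]
    idx2 = torch.randperm(n)[: n - n // 2]
    return torch.cat([pc1[idx1], pc2[idx2]], dim=0)
```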
Showing results 1–15 out of 794 results