
A Theoretical Analysis of Contrastive Unsupervised Representation Learning [article]

Sanjeev Arora, Hrishikesh Khandeparkar, Mikhail Khodak, Orestis Plevrakis, Nikunj Saunshi
2019 arXiv   pre-print
The current paper uses the term contrastive learning for such algorithms and presents a theoretical framework for analyzing them by introducing latent classes and hypothesizing that semantically similar  ...  This framework allows us to show provable guarantees on the performance of the learned representations on the average classification task that comprises a subset of the same set of latent classes  ...  our theoretical analysis.  ... 
arXiv:1902.09229v1 fatcat:b3n6bvuuevfzxdkiz3jz2yv4ga
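To make the setup concrete: the framework studies losses over triplets (x, x+, x-), where x+ shares a latent class with x. Below is a minimal PyTorch sketch of one logistic variant of such a contrastive loss; the single-negative form and all names are our simplification, and the paper also analyzes hinge and multi-negative versions.

```python
import torch
import torch.nn.functional as F

def contrastive_logistic_loss(f_x, f_pos, f_neg):
    """Logistic contrastive loss over (anchor, positive, negative) triplets.

    f_x, f_pos, f_neg: (batch, dim) representations of x, x+, x-.
    Minimizing this encourages f(x)^T f(x+) > f(x)^T f(x-).
    """
    pos_sim = (f_x * f_pos).sum(dim=1)   # f(x)^T f(x+)
    neg_sim = (f_x * f_neg).sum(dim=1)   # f(x)^T f(x-)
    # log(1 + exp(neg - pos)) == softplus(neg - pos)
    return F.softplus(neg_sim - pos_sim).mean()
```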

Empirical Evaluation and Theoretical Analysis for Representation Learning: A Survey [article]

Kento Nozawa, Issei Sato
2022 arXiv   pre-print
To understand current representation learning, we review evaluation methods of representation learning algorithms and theoretical analyses.  ...  Recently, feature representations extracted by a representation learning algorithm, combined with a simple predictor, have exhibited state-of-the-art performance on several machine learning tasks.  ...  Appendix A, Experimental Settings: We trained a neural network with a fully connected hidden layer on the training dataset of MNIST.  ... 
arXiv:2204.08226v1 fatcat:bw6orocgjjb2niyttv6g6hivba
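One evaluation method commonly reviewed in this context is linear evaluation: freeze the trained encoder and fit only a linear classifier on its outputs. A minimal sketch, assuming a hypothetical `encode` function that returns frozen feature matrices:

```python
from sklearn.linear_model import LogisticRegression

def linear_probe_accuracy(encode, X_train, y_train, X_test, y_test):
    """Linear evaluation: keep the encoder frozen, train only a linear
    classifier on top of its features, and report test accuracy."""
    Z_train, Z_test = encode(X_train), encode(X_test)  # frozen representations
    clf = LogisticRegression(max_iter=1000).fit(Z_train, y_train)
    return clf.score(Z_test, y_test)
```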

Learning Robust Representation through Graph Adversarial Contrastive Learning [article]

Jiayan Guo, Shangyang Li, Yue Zhao, Yan Zhang
2022 arXiv   pre-print
To improve the robustness of graph representation learning, we propose a novel Graph Adversarial Contrastive Learning framework (GraphACL) by introducing adversarial augmentations into graph self-supervised  ...  Based on the Information Bottleneck Principle, we theoretically prove that our method could obtain a much tighter bound, thus improving the robustness of graph representation learning.  ...  Algorithm 1 (the procedure of one iteration in GraphACL). Require: input graph G = (A, X); Ensure: f1(·), f2  ...  Theoretical Analysis on Graph Adversarial Contrastive Learning: In this section, we first formulate  ... 
arXiv:2201.13025v1 fatcat:lhi5svladzbrpjn77ofvd6rstm
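A sketch of what one adversarially augmented contrastive iteration could look like, under assumed signatures (an `encoder(A, X)` returning node embeddings and a generic `contrastive_loss`); the actual GraphACL procedure is specified in the paper's Algorithm 1.

```python
import torch

def graphacl_style_iteration(encoder, A, X, contrastive_loss, opt, eps=0.01):
    """One illustrative adversarial-augmentation iteration: take a gradient
    step on the node features that INCREASES the contrastive loss, then
    update the encoder to minimize the loss between clean and adversarial
    views. Signatures and the FGSM-style step are sketch assumptions."""
    X_adv = X.clone().detach().requires_grad_(True)
    loss_adv = contrastive_loss(encoder(A, X), encoder(A, X_adv))
    grad, = torch.autograd.grad(loss_adv, X_adv)
    X_adv = (X + eps * grad.sign()).detach()  # adversarial feature view

    opt.zero_grad()
    loss = contrastive_loss(encoder(A, X), encoder(A, X_adv))
    loss.backward()
    opt.step()
    return loss.item()
```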

Information-based Disentangled Representation Learning for Unsupervised MR Harmonization [article]

Lianrui Zuo, Blake E. Dewey, Aaron Carass, Yihao Liu, Yufan He, Peter A. Calabresi, Jerry L. Prince
2021 arXiv   pre-print
In this work, we propose an unsupervised MR harmonization framework, CALAMITI (Contrast Anatomy Learning and Analysis for MR Intensity Translation and Integration), based on information bottleneck theory  ...  CALAMITI learns a disentangled latent space using a unified structure for multi-site harmonization without the need for traveling subjects.  ...  CALAMITI is an improved, theoretically grounded, unsupervised harmonization approach based on an information bottleneck (IB) [22] that learns a global, disentangled latent space of anatomical and contrast  ... 
arXiv:2103.13283v1 fatcat:25l2fcymyba2pb4t3euokyvwdu

Foundations of Unsupervised Learning (Dagstuhl Seminar 16382)

Maria-Florina Balcan, Shai Ben-David, Ruth Urner, Ulrike Von Luxburg, Marc Herbstritt
2017 Dagstuhl Reports  
This report documents the program and the outcomes of Dagstuhl Seminar 16382 "Foundations of Unsupervised Learning". Unsupervised learning techniques are frequently used in the practice of data analysis.  ...  The goal of the seminar was to initiate broader and more systematic research on the foundations of unsupervised learning, with the ultimate aim of providing more support to practitioners.  ...  However, in contrast to the well-developed theory of supervised learning, systematic analysis of unsupervised learning tasks is currently scarce and our understanding of the subject is rather meager.  ... 
doi:10.4230/dagrep.6.9.94 dblp:journals/dagstuhl-reports/BalcanBUL16 fatcat:gliqlrxzyrbzffssk5t3udw54q

Contrastive Learning for Representation Degeneration Problem in Sequential Recommendation [article]

Ruihong Qiu, Zi Huang, Hongzhi Yin, Zijian Wang
2021 arXiv   pre-print
Specifically, in light of the uniformity property of contrastive learning, a contrastive regularization is designed for DuoRec to reshape the distribution of sequence representations.  ...  In this paper, both empirical and theoretical investigations of this representation degeneration problem are first provided, based on which a novel recommender model DuoRec is proposed to improve the item  ...  The empirical observation and the theoretical analysis are provided.  ... 
arXiv:2110.05730v1 fatcat:6vwxwzylgbhsbc7op7sooskzcm
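The uniformity property referenced here is commonly measured with the Wang and Isola (2020) metric; a regularizer in this spirit spreads L2-normalized representations over the hypersphere, counteracting degeneration into a narrow cone. A sketch of that metric (DuoRec's exact regularizer may differ):

```python
import torch
import torch.nn.functional as F

def uniformity(z, t=2.0):
    """Uniformity metric of Wang & Isola (2020): the log of the average
    Gaussian potential over pairs of L2-normalized representations.
    Lower values mean the representations spread more evenly."""
    z = F.normalize(z, dim=1)
    sq_dists = torch.pdist(z, p=2).pow(2)  # pairwise squared distances
    return sq_dists.mul(-t).exp().mean().log()
```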

Towards Explanation for Unsupervised Graph-Level Representation Learning [article]

Qinghua Zheng, Jihong Wang, Minnan Luo, Yaoliang Yu, Jundong Li, Lina Yao, Xiaojun Chang
2022 arXiv   pre-print
Experimental results on both synthetic and real-world datasets demonstrate the superiority of our developed explainer and the validity of our theoretical analysis.  ...  In this paper, we advance the Information Bottleneck principle (IB) to tackle the proposed explanation problem for unsupervised graph representations, which leads to a novel principle, Unsupervised Subgraph  ...  Theoretical analysis: In this section, we theoretically analyze the connection between representations and explanatory subgraphs on the label space.  ... 
arXiv:2205.09934v1 fatcat:7eos7e37uvdu5jgpkyhbqlz54u
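For reference, the classical IB objective being advanced here: learn a code Z of input X that stays maximally informative about a target Y while compressing X, with β trading off the two terms. The paper develops a subgraph-level analogue of this objective for unsupervised representations.

```latex
% Classical Information Bottleneck objective (Tishby et al., 1999)
\max_{p(z \mid x)} \; I(Z; Y) \;-\; \beta \, I(Z; X)
```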

Collaborative Graph Contrastive Learning: Data Augmentation Composition May Not be Necessary for Graph Representation Learning [article]

Yuxiang Ren, Jiawei Zhang
2021 arXiv   pre-print
Unsupervised graph representation learning is a non-trivial topic for graph data.  ...  The success of contrastive learning and self-supervised learning in the unsupervised representation learning of structured data inspires similar attempts on graphs.  ...  We provide a detailed theoretical analysis of this insight in Section 3.4.  ...  Batch-wise contrastive loss.  ... 
arXiv:2111.03262v1 fatcat:mkjcp3ya2vhvtlcazive7k32qi
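A batch-wise contrastive loss of the kind named in the snippet is typically an InfoNCE/NT-Xent objective: the two views of item i form the positive pair, and the other items in the batch serve as negatives. A generic sketch, not necessarily CGCL's exact formulation:

```python
import torch
import torch.nn.functional as F

def batchwise_contrastive_loss(z1, z2, tau=0.5):
    """InfoNCE/NT-Xent-style loss over a batch of paired views.

    z1, z2: (batch, dim) representations of two views of the same items;
    row i of z1 and row i of z2 are the positive pair."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau  # (batch, batch) cosine similarities
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, labels)
```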

Joint Contrastive Learning for Unsupervised Domain Adaptation [article]

Changhwa Park, Jonghyun Lee, Jaeyoon Yoo, Minhoe Hur, Sungroh Yoon
2020 arXiv   pre-print
With the theoretical analysis, we suggest a joint optimization framework that combines the source and target domains.  ...  With a solid theoretical framework, JCL employs contrastive loss to maximize the mutual information between a feature and its label, which is equivalent to maximizing the Jensen-Shannon divergence between  ...  Preliminary: We first introduce the setting of unsupervised domain adaptation and previous theoretical analysis [2].  ... 
arXiv:2006.10297v1 fatcat:j47ej36ttrgu5cknrct4yf3doq
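The Jensen-Shannon connection mentioned above is usually operationalized with a JSD-based mutual-information lower bound over critic scores, as in Deep InfoMax-style objectives. A minimal sketch of that estimator (the critic itself and JCL's exact pairing scheme are left abstract):

```python
import torch.nn.functional as F

def jsd_mi_lower_bound(t_joint, t_marginal):
    """JSD-based MI lower bound: t_joint are critic scores on samples from
    the joint distribution (matched feature/label pairs), t_marginal on
    samples from the product of marginals (mismatched pairs). Maximizing
    this maximizes a Jensen-Shannon divergence between the two."""
    return (-F.softplus(-t_joint)).mean() - F.softplus(t_marginal).mean()
```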

Self-Supervised Graph Representation Learning via Topology Transformations [article]

Xiang Gao, Wei Hu, Guo-Jun Qi
2021 arXiv   pre-print
We present the Topology Transformation Equivariant Representation learning, a general paradigm of self-supervised learning for node representations of graph data to enable the wide applicability of Graph  ...  Then, we self-train a representation encoder to learn node representations by reconstructing the topology transformations from the feature representations of the original and transformed graphs.  ...  In contrast, GraphTER focuses on learning equivariant representations of nodes under node-  ... 
arXiv:2105.11689v2 fatcat:g5knitch3vhm7j56la73wx6kaa
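A sketch of the self-training loop described above, under assumed interfaces: a `sample_transform` that perturbs the topology and returns per-edge transformation labels, an `encoder` over (A, X), and a `decoder` that predicts the transformation from the representation pair.

```python
import torch
import torch.nn.functional as F

def graphter_style_step(encoder, decoder, A, X, sample_transform, opt):
    """One illustrative step: sample a topology transformation t (e.g.,
    edge flips), encode the original and transformed graphs, and train a
    decoder to reconstruct t from the pair of node representations."""
    A_t, t_target = sample_transform(A)    # perturbed adjacency + labels
    h, h_t = encoder(A, X), encoder(A_t, X)
    t_pred = decoder(h, h_t)               # estimate the transformation
    loss = F.cross_entropy(t_pred, t_target)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```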

Category Contrast for Unsupervised Domain Adaptation in Visual Tasks [article]

Jiaxing Huang, Dayan Guan, Aoran Xiao, Shijian Lu, Ling Shao
2022 arXiv   pre-print
Instance contrast for unsupervised representation learning has achieved great success in recent years.  ...  In this work, we explore the idea of instance contrastive learning in unsupervised domain adaptation (UDA) and propose a novel Category Contrast technique (CaCo) that introduces semantic priors on top  ... 
arXiv:2106.02885v3 fatcat:6nthlfqbujhgfdlhye4ufmmyne

Robust Unsupervised Graph Representation Learning via Mutual Information Maximization [article]

Jihong Wang, Minnan Luo, Jundong Li, Ziqi Liu, Jun Zhou, Qinghua Zheng
2022 arXiv   pre-print
Therefore, this paper focuses on robust unsupervised graph representation learning.  ...  Moreover, we theoretically establish a connection between our proposed GRR measure and the robustness of downstream classifiers, which reveals that GRR can provide a lower bound to the adversarial risk  ...  Unsupervised Graph Representation Learning: Unsupervised graph representation learning aims to learn low-dimensional representations for nodes in a graph.  ... 
arXiv:2201.08557v1 fatcat:qphgm7jms5bmbamd27yo7obcre

Geometric Graph Representation Learning via Maximizing Rate Reduction [article]

Xiaotian Han, Zhimeng Jiang, Ninghao Liu, Qingquan Song, Jundong Li, Xia Hu
2022 arXiv   pre-print
Existing graph representation learning methods (e.g., based on random walk and contrastive learning) are limited to maximizing the local similarity of connected nodes.  ...  To this end, we propose Geometric Graph Representation Learning (G2R) to learn node representations in an unsupervised manner via maximizing rate reduction.  ... 
arXiv:2202.06241v1 fatcat:hyd6lcbbwjb2tcwicuopsm4qr4
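Rate-reduction objectives build on a coding-rate term: maximizing the rate of the full representation set while minimizing the within-group rates expands representations globally rather than only around connected nodes. A sketch of the coding-rate term (the grouping G2R uses is defined in the paper):

```python
import torch

def coding_rate(Z, eps=0.5):
    """Coding rate R(Z) = 1/2 * logdet(I + d/(n*eps^2) * Z^T Z) for an
    (n, d) matrix of representations, as used in rate-reduction (MCR^2)
    objectives. Rate reduction is R(all) minus the average rate of the
    per-group submatrices."""
    n, d = Z.shape
    I = torch.eye(d, device=Z.device)
    return 0.5 * torch.logdet(I + (d / (n * eps**2)) * Z.t() @ Z)
```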

Deep Graph Contrastive Representation Learning [article]

Yanqiao Zhu, Yichen Xu, Feng Yu, Qiang Liu, Shu Wu, Liang Wang
2020 arXiv   pre-print
Inspired by the recent success of contrastive methods, in this paper we propose a novel framework for unsupervised graph representation learning by leveraging a contrastive objective at the node level.  ...  Specifically, we generate two graph views by corruption and learn node representations by maximizing the agreement of node representations in these two views.  ... 
arXiv:2006.04131v2 fatcat:zdnorcmxmfcvxoukjbndaj5tce
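A sketch of the corruption step described above: call it twice to obtain two views, then maximize agreement of per-node representations across views with a contrastive objective (e.g., the batch-wise loss sketched earlier). The drop probabilities here are illustrative, not the paper's settings.

```python
import torch

def corrupt_graph(A, X, p_edge=0.2, p_feat=0.2):
    """Produce one corrupted graph view in the spirit of GRACE's
    corruption: randomly drop edges and mask whole feature dimensions.

    A: (n, n) dense adjacency; X: (n, d) node features."""
    edge_mask = (torch.rand_like(A) > p_edge).float()
    A_view = A * edge_mask * edge_mask.t()  # keep the view symmetric
    feat_mask = (torch.rand(X.size(1), device=X.device) > p_feat).float()
    return A_view, X * feat_mask
```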

Learning Weakly-Supervised Contrastive Representations [article]

Yao-Hung Hubert Tsai, Tianqin Li, Weixin Liu, Peiyuan Liao, Ruslan Salakhutdinov, Louis-Philippe Morency
2022 arXiv   pre-print
Third, we show that our approach also works well with clusters constructed without supervision (i.e., using no auxiliary information), resulting in a strong unsupervised representation learning approach.  ...  With this intuition, we present a two-stage weakly-supervised contrastive learning approach. The first stage is to cluster data according to its auxiliary information.  ... 
arXiv:2202.06670v2 fatcat:bvfndct2f5awbh37mtd2ylnlba
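A sketch of the first stage described above, clustering by auxiliary information (here with k-means; the cluster count is an illustrative assumption). The second stage would then treat same-cluster samples as positives in a contrastive loss:

```python
from sklearn.cluster import KMeans

def cluster_by_auxiliary(aux_features, n_clusters=100):
    """Stage 1: cluster samples by their auxiliary information. The
    returned cluster assignments define the positive pairs used by the
    stage-2 contrastive objective."""
    return KMeans(n_clusters=n_clusters, n_init=10).fit_predict(aux_features)
```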
Showing results 1–15 out of 24,845 results