
Contrastive Code Representation Learning [article]

Paras Jain, Ajay Jain, Tianjun Zhang, Pieter Abbeel, Joseph E. Gonzalez, Ion Stoica
2021 arXiv   pre-print
Recent work learns contextual representations of source code by reconstructing tokens from their context.  ...  We propose ContraCode: a contrastive pre-training task that learns code functionality, not form.  ...  The augmentations discussed in Section 3.1 enable adapting recent contrastive learning objectives for images to code representation learning.  ... 
arXiv:2007.04973v3 fatcat:bpqzjwtoebhh3in5q7qkk4sggq
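
The snippet sketches the core recipe: generate semantically equivalent variants of a program and train an encoder so that variants of the same program agree. As a minimal illustration (not ContraCode's actual implementation), an NT-Xent/SimCLR-style loss over two batches of augmented-view embeddings might look like the sketch below; the encoder producing z1 and z2 is assumed.

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1: torch.Tensor, z2: torch.Tensor,
                 temperature: float = 0.07) -> torch.Tensor:
    """NT-Xent loss over two augmented views (a sketch, not the paper's code).

    z1, z2: (batch, dim) embeddings of two semantic-preserving
    augmentations of the same code snippets (e.g. variable renaming).
    """
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    z = torch.cat([z1, z2], dim=0)            # (2B, dim)
    sim = z @ z.t() / temperature             # scaled cosine similarities
    sim.fill_diagonal_(float("-inf"))         # a view cannot match itself
    batch = z1.size(0)
    # The positive for row i is the other view of the same snippet.
    targets = torch.cat([torch.arange(batch) + batch, torch.arange(batch)])
    return F.cross_entropy(sim, targets)
```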

Representation Learning with Contrastive Predictive Coding [article]

Aaron van den Oord, Yazhe Li, Oriol Vinyals
2019 arXiv   pre-print
In this work, we propose a universal unsupervised learning approach to extract useful representations from high-dimensional data, which we call Contrastive Predictive Coding.  ...  While most prior work has focused on evaluating representations for a particular modality, we demonstrate that our approach is able to learn useful representations achieving strong performance on four distinct domains.  ...  Contrastive Predictive Coding, the proposed representation learning approach.  ... 
arXiv:1807.03748v2 fatcat:l7bjdp4x7bdg3pprz42rzi3zqq
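
CPC's objective, InfoNCE, scores a predicted future latent against the true one and against in-batch negatives. A minimal sketch under assumed shapes: c_t comes from an autoregressive context network, z_future from the encoder, and W_k is a hypothetical step-specific projection.

```python
import torch
import torch.nn.functional as F

def info_nce(c_t: torch.Tensor, z_future: torch.Tensor,
             W_k: torch.nn.Linear) -> torch.Tensor:
    """InfoNCE for one prediction step k (a sketch of CPC's objective).

    c_t:      (batch, c_dim) autoregressive context vectors.
    z_future: (batch, z_dim) latent encodings k steps ahead.
    W_k:      step-specific linear map; W_k(c_t) predicts z_{t+k}.
    Other items in the batch serve as negative samples.
    """
    pred = W_k(c_t)                       # (batch, z_dim) predictions
    logits = pred @ z_future.t()          # (batch, batch) bilinear scores
    labels = torch.arange(c_t.size(0))    # the positive is the diagonal
    return F.cross_entropy(logits, labels)
```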

Self-Supervised Learning of Remote Sensing Scene Representations Using Contrastive Multiview Coding [article]

Vladan Stojnić
2021 arXiv   pre-print
In recent years, self-supervised learning has emerged as a promising candidate for unsupervised representation learning.  ...  In this work, we conduct an extensive analysis of the applicability of self-supervised learning in remote sensing image classification.  ...  Self-supervised pre-training: As the first step in our experiments, we use Contrastive Multiview Coding (CMC) [36] for self-supervised pre-training.  ... 
arXiv:2104.07070v2 fatcat:irqhsya535galjojxuy3dn57ci

HELoC: Hierarchical Contrastive Learning of Source Code Representation [article]

Xiao Wang, Qiong Wu, Hongyu Zhang, Chen Lyu, Xue Jiang, Zhuoran Zheng, Lei Lyu, Songlin Hu
2022 arXiv   pre-print
In this paper, we propose HELoC, a hierarchical contrastive learning model for source code representation.  ...  Abstract syntax trees (ASTs) play a crucial role in source code representation.  ...  In this paper, we present HELoC (short for Hierarchical ContrastivE Learning of Code), a hierarchical contrastive learning model for source code representation.  ... 
arXiv:2203.14285v1 fatcat:ulclwu5e3ffxtivec76hblhosu

Contrastive Predictive Coding Supported Factorized Variational Autoencoder for Unsupervised Learning of Disentangled Speech Representations [article]

Janek Ebbers, Michael Kuhlmann, Tobias Cord-Landwehr, Reinhold Haeb-Umbach
2021 arXiv   pre-print
To foster disentanglement, we propose adversarial contrastive predictive coding. This new disentanglement method needs neither parallel data nor any supervision.  ...  We further demonstrate an increased robustness of the content representation against a train-test mismatch, compared to spectral features, when used for phone recognition.  ...  To learn disentangled representations of style and content, we propose a fully convolutional FVAE, which is illustrated in Fig.  ... 
arXiv:2005.12963v2 fatcat:qznumov53bau7ptmntlh3g74le

Unsupervised Speech Segmentation and Variable Rate Representation Learning using Segmental Contrastive Predictive Coding [article]

Saurabhchand Bhati, Jesús Villalba, Piotr Żelasko, Laureano Moro-Velazquez, Najim Dehak
2021 arXiv   pre-print
A convolutional neural network learns frame-level representations from the raw waveform via noise-contrastive estimation (NCE).  ...  Recent attempts employ self-supervised learning, such as contrastive predictive coding (CPC), where the next frame is predicted given past context.  ... 
arXiv:2110.02345v2 fatcat:ecmpikwm5bggjj72alu57thhea

Automatic coding of students' writing via Contrastive Representation Learning in the Wasserstein space [article]

Ruijie Jiang, Julia Gouvea, David Hammer, Eric Miller, Shuchin Aeron
2020 arXiv   pre-print
Using this set of labeled data, we show that a popular natural language processing pipeline, namely vector representations of words, a.k.a. word embeddings, followed by a Long Short-Term Memory (LSTM) network  ...  Given this set-up, our approach is to learn useful representations via contrastive learning using the triplet loss [21, 22].  ... 
arXiv:2011.13384v2 fatcat:ybywlsghfrcpzhlgkxj7aajfaq
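
The triplet loss the snippet cites pulls an anchor toward a positive (here, a text fragment with the same qualitative code) and away from a negative. A minimal Euclidean sketch; the paper itself works with distances in the Wasserstein space, which this example does not implement.

```python
import torch
import torch.nn.functional as F

def triplet_loss(anchor: torch.Tensor, positive: torch.Tensor,
                 negative: torch.Tensor, margin: float = 1.0) -> torch.Tensor:
    """Triplet margin loss over (batch, dim) embeddings.

    anchor/positive share a label (qualitative code); negative does not.
    Euclidean distance stands in for the paper's Wasserstein distance.
    """
    d_pos = F.pairwise_distance(anchor, positive)   # (batch,)
    d_neg = F.pairwise_distance(anchor, negative)   # (batch,)
    return F.relu(d_pos - d_neg + margin).mean()
```

PyTorch also ships this as torch.nn.TripletMarginLoss; the explicit version above just makes the margin structure visible.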

Active Contrastive Learning of Audio-Visual Video Representations [article]

Shuang Ma, Zhaoyang Zeng, Daniel McDuff, Yale Song
2021 arXiv   pre-print
Contrastive learning has been shown to produce generalizable representations of audio and visual data by maximizing the lower bound on the mutual information (MI) between different views of an instance.  ...  We present cross-modal active contrastive coding (CM-ACC) to learn audio-visual representations from unlabeled videos. Fig. 1 highlights the main idea of our approach.  ... 
arXiv:2009.09805v2 fatcat:oybaqjvq6fefbbshaclckrqc6a

SynCoBERT: Syntax-Guided Multi-Modal Contrastive Pre-Training for Code Representation [article]

Xin Wang, Yasheng Wang, Fei Mi, Pingyi Zhou, Yao Wan, Xiao Liu, Li Li, Hao Wu, Jin Liu, Xin Jiang
2021 arXiv   pre-print
Code representation learning, which aims to encode the semantics of source code into distributed vectors, plays an important role in recent deep-learning-based models for code intelligence.  ...  To further explore the properties of programming languages, this paper proposes SynCoBERT, a syntax-guided multi-modal contrastive pre-training approach for better code representations.  ...  We propose a multi-modal contrastive learning (MCL) objective to obtain more comprehensive representations by learning from three modalities (code, comment, and AST) through contrastive learning.  ... 
arXiv:2108.04556v3 fatcat:vhp2fpfkpnh7lpfomyn5yqwyba
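
The MCL objective treats code, comment, and AST as three views of one function, so any two modalities of the same function form a positive pair while other functions in the batch act as negatives. A toy illustration of the pairing (the AST serialization shown is hypothetical, not SynCoBERT's format):

```python
from itertools import combinations

# Hypothetical example: three "views" (modalities) of one function.
sample = {
    "code":    "def add(a, b): return a + b",
    "comment": "Add two numbers.",
    "ast":     "(FunctionDef add (args a b) (Return (BinOp a Add b)))",
}

# Every 2-combination of modalities yields a positive pair:
# (code, comment), (code, ast), (comment, ast).
positive_pairs = list(combinations(sample.values(), 2))
```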

Self-Supervised Learning for Code Retrieval and Summarization through Semantic-Preserving Program Transformations [article]

Nghi D. Q. Bui, Yijun Yu, Lingxiao Jiang
2021 arXiv   pre-print
To address these limitations, we propose Corder, a self-supervised contrastive learning framework that trains code representation models on unlabeled data.  ...  The key innovation is that we train the source code model by asking it to recognize similar and dissimilar code snippets through a contrastive learning paradigm.  ...  In this case, it should map the two transformed code snippets into two corresponding code vectors. A contrastive loss function is defined for the contrastive learning task.  ... 
arXiv:2009.02731v5 fatcat:sdyhezkr4rhmbpmgoyssrtiu5i
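
Corder-style positive pairs come from semantic-preserving program transformations. A toy sketch of one such transformation, local variable renaming via Python's ast module (Python 3.9+ for ast.unparse); real pipelines combine several transformation families such as dead-code insertion and statement permutation.

```python
import ast

class RenameVariables(ast.NodeTransformer):
    """Toy semantic-preserving transformation: rename local variables.

    Illustrates the kind of augmentation used to build positive pairs;
    the output program computes the same function as the input.
    """
    def __init__(self, mapping: dict):
        self.mapping = mapping

    def visit_Name(self, node: ast.Name) -> ast.Name:
        node.id = self.mapping.get(node.id, node.id)
        return node

src = "def area(w, h):\n    result = w * h\n    return result"
tree = RenameVariables({"result": "tmp0"}).visit(ast.parse(src))
augmented = ast.unparse(tree)  # same semantics, different surface form
```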

Contrastive Code Representation Learning

Paras Jain, Ajay Jain, Tianjun Zhang, Pieter Abbeel, Joseph Gonzalez, Ion Stoica
2021 Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing   unpublished
A simple framework for contrastive learning of visual representations.  ...  Contrastive Code Representation Learning. Paras Jain∗ and Ajay Jain∗ and Tianjun Zhang and Pieter Abbeel and Joseph E. Gonzalez and Ion Stoica.  ... 
doi:10.18653/v1/2021.emnlp-main.482 fatcat:ojvnpzix5fc4rffg6tqxg3epza

Knowledge Representation Learning with Contrastive Completion Coding

Bo Ouyang, Wenbing Huang, Runfa Chen, Zhixing Tan, Yang Liu, Maosong Sun, Jihong Zhu
2021 Findings of the Association for Computational Linguistics: EMNLP 2021   unpublished
Knowledge representation learning (KRL) has been used in plenty of knowledge-driven tasks.  ...  In this paper, we propose Contrastive Completion Coding (C³), a novel KRL framework that is composed of two functional components.  ...  Contrastive losses measure the distance, or similarity, between representations in the latent space, which is one of the key differences between contrastive learning methods and other  ... 
doi:10.18653/v1/2021.findings-emnlp.263 fatcat:nmb4pp7vqvcbtpn225yjxvmm5a

KnowAugNet: Multi-Source Medical Knowledge Augmented Medication Prediction Network with Multi-Level Graph Contrastive Learning [article]

Yang An, Bo Jin, Xiaopeng Wei
2022 arXiv   pre-print
via a multi-level graph contrastive learning framework.  ...  Specifically, KnowAugNet first leverages graph contrastive learning, using a graph attention network as the encoder, to capture the implicit relations between homogeneous medical codes from the medical  ...  incorporate the unsupervised graph contrastive learning method based on DGI [30] to learn medical code representations by maximizing the mutual information of the graph representation for the first  ... 
arXiv:2204.11736v2 fatcat:vvncqo2tejgwrbpo2cxpstqdt4

CodeRetriever: Unimodal and Bimodal Contrastive Learning [article]

Xiaonan Li, Yeyun Gong, Yelong Shen, Xipeng Qiu, Hang Zhang, Bolun Yao, Weizhen Qi, Daxin Jiang, Weizhu Chen, Nan Duan
2022 arXiv   pre-print
In this paper, we propose the CodeRetriever model, which combines unimodal and bimodal contrastive learning to train function-level code semantic representations, specifically for the code search task.  ...  For bimodal contrastive learning, we leverage the documentation and in-line comments of code to build text-code pairs.  ...  It shows that unimodal contrastive learning helps to learn a unified representation space for code in different programming languages.  ... 
arXiv:2201.10866v1 fatcat:zywntl7z3zhuleci7ic4ymegru
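
Bimodal contrastive learning of the kind described here aligns a code encoder and a text encoder using in-batch negatives. A minimal symmetric sketch over assumed (code, docstring) embedding batches; not CodeRetriever's exact formulation.

```python
import torch
import torch.nn.functional as F

def bimodal_contrastive(code_emb: torch.Tensor, text_emb: torch.Tensor,
                        temperature: float = 0.05) -> torch.Tensor:
    """In-batch bimodal contrastive loss over (code, docstring) pairs.

    Row i of code_emb and row i of text_emb come from the same function;
    all other rows in the batch are treated as negatives.
    """
    code_emb = F.normalize(code_emb, dim=1)
    text_emb = F.normalize(text_emb, dim=1)
    logits = code_emb @ text_emb.t() / temperature   # (batch, batch)
    labels = torch.arange(code_emb.size(0))          # positives on diagonal
    # Symmetric: retrieve text from code and code from text.
    return (F.cross_entropy(logits, labels) +
            F.cross_entropy(logits.t(), labels)) / 2
```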

Few-Shot Electronic Health Record Coding through Graph Contrastive Learning [article]

Shanshan Wang, Pengjie Ren, Zhumin Chen, Zhaochun Ren, Huasheng Liang, Qiang Yan, Evangelos Kanoulas, Maarten de Rijke
2021 arXiv   pre-print
We seek to improve the performance for both frequent and rare ICD codes by using a contrastive graph-based EHR coding framework, CoGraph, which re-casts EHR coding as a few-shot learning task.  ...  To mitigate this risk, CoGraph devises two graph contrastive learning schemes, GSCL and GECL, that exploit the HEWE graph structures so as to encode transferable features.  ...  Graph contrastive learning extends the paradigm to representation learning on graphs. Peng et al.  ... 
arXiv:2106.15467v1 fatcat:3tnzxc6qgzduli3fcmepxwi3ti
Showing results 1–15 of 398,499