A Survey of Pretraining on Graphs: Taxonomy, Methods, and Applications

by Jun Xia, Yanqiao Zhu, Yuanqi Du, Stan Z. Li

Published by arXiv.

2022  

Abstract

Pretrained Language Models (PLMs) such as BERT have revolutionized the landscape of Natural Language Processing (NLP). Inspired by their proliferation, tremendous efforts have been devoted to Pretrained Graph Models (PGMs). Owing to their powerful model architectures, PGMs can capture abundant knowledge from massive labeled and unlabeled graph data. The knowledge implicitly encoded in model parameters benefits various downstream tasks and helps alleviate several fundamental issues of learning on graphs. In this paper, we provide the first comprehensive survey of PGMs. We first present the limitations of graph representation learning, which motivate graph pretraining. Then, we systematically categorize existing PGMs according to a taxonomy built from four different perspectives. Next, we present the applications of PGMs to social recommendation and drug discovery. Finally, we outline several promising research directions that can serve as guidelines for future research.

Archived Files and Locations

application/pdf   196.1 kB
file_rmsjxt2nmvc27a3zishvxgctky
arxiv.org (repository)
web.archive.org (webarchive)
Type: article
Stage: published
Date: 2022-02-01
Version: 1
Catalog Record
Revision: 056075fe-ccce-41e8-bd52-373606932a35