
GAIN: Graph Attention Interaction Network for Inductive Semi-Supervised Learning over Large-scale Graphs [article]

Yunpeng Weng and Xu Chen and Liang Chen and Wei Liu
2020 arXiv   pre-print
In this paper, we propose a novel graph neural network architecture, Graph Attention & Interaction Network (GAIN), for inductive learning on graphs.  ...  Moreover, existing supervised or semi-supervised GNN models are trained with a loss defined only on node labels, which neglects graph-structure information.  ...  To address this challenging problem, we propose a novel graph neural network model, Graph Attention & Interaction Network (GAIN).  ... 
arXiv:2011.01393v1 fatcat:ahcjc4bm5nhulhg3k5vpg5r3jm

Large-Scale Representation Learning on Graphs via Bootstrapping [article]

Shantanu Thakoor, Corentin Tallec, Mohammad Gheshlaghi Azar, Mehdi Azabou, Eva L. Dyer, Rémi Munos, Petar Veličković, Michal Valko
2021 arXiv   pre-print
Furthermore, we show that BGRL can be scaled up to extremely large graphs with hundreds of millions of nodes in the semi-supervised regime - achieving state-of-the-art performance and improving over supervised  ...  Self-supervised learning provides a promising path towards eliminating the need for costly label information in representation learning on graphs.  ...  We show that leveraging the scalability of BGRL allows making full use of the vast amounts of unlabeled data present in large graphs via semi-supervised learning.  ... 
arXiv:2102.06514v2 fatcat:fqahfsugobgo5m22gbeoxazy44
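
To make the bootstrapping objective concrete: an online encoder (plus a predictor) is trained to match a target encoder's embedding of another view, and the target's weights are an exponential moving average of the online weights, so no negative samples are needed. Below is a minimal sketch under illustrative assumptions (toy one-layer encoder, random graph, crude augmentation); it is not the authors' implementation.

```python
# Minimal BGRL-style bootstrapped objective; encoder, sizes, and
# augmentations are illustrative assumptions, not the paper's code.
import copy
import torch
import torch.nn.functional as F

class Encoder(torch.nn.Module):
    """Toy one-layer GCN-style encoder: H = ReLU(A_hat @ X @ W)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = torch.nn.Linear(in_dim, out_dim)
    def forward(self, a_hat, x):
        return F.relu(a_hat @ self.lin(x))

def bgrl_loss(online, target, predictor, a1, x1, a2, x2):
    """Predict the target network's embedding of one view from the
    online network's embedding of the other view (no negatives)."""
    p = predictor(online(a1, x1))           # online branch + predictor
    with torch.no_grad():
        z = target(a2, x2)                  # target branch, no gradients
    return -F.cosine_similarity(p, z, dim=-1).mean()

@torch.no_grad()
def ema_update(target, online, tau=0.99):
    """Target weights track an exponential moving average of the online ones."""
    for pt, po in zip(target.parameters(), online.parameters()):
        pt.mul_(tau).add_((1 - tau) * po)

# Tiny usage example on a random 4-node graph with two "augmented" views.
n, d, h = 4, 8, 16
online = Encoder(d, h)
target = copy.deepcopy(online)
predictor = torch.nn.Linear(h, h)
a = torch.eye(n) + torch.rand(n, n).round()  # crude stand-in adjacency
x = torch.randn(n, d)
loss = bgrl_loss(online, target, predictor, a, x, a, x * 0.9)
loss.backward()
ema_update(target, online)
```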

Recurrent Attention Walk for Semi-supervised Classification

Uchenna Akujuobi, Qiannan Zhang, Han Yufei, Xiangliang Zhang
2020 Proceedings of the 13th International Conference on Web Search and Data Mining  
In this paper, we study graph-based semi-supervised learning for classifying nodes in attributed networks, where the nodes and edges possess content information.  ...  We define the graph walk as a partially observable Markov decision process (POMDP). The proposed method is flexible for working in both transductive and inductive settings.  ...  [2] studied the use of deep generative models for graph-based semi-supervised learning. Hamilton et al.  ... 
doi:10.1145/3336191.3371853 dblp:conf/wsdm/AkujuobiZY020 fatcat:hhbdw3k3zzd73cjw4t3akw5ora
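
The POMDP formulation can be read as: an agent walks the graph, keeps a recurrent hidden state as its belief, attends over the current node's neighbors to choose the next step, and classifies from the final state. The sketch below shows only this forward pass under illustrative assumptions (GRU state, bilinear scoring, fixed walk length); the paper trains the walk with reinforcement learning, which is omitted here.

```python
# Attention-guided recurrent walk sketch; all module choices are
# illustrative assumptions, and RL training is omitted.
import torch
import torch.nn.functional as F

class RecurrentWalker(torch.nn.Module):
    def __init__(self, feat_dim, hid_dim, n_classes):
        super().__init__()
        self.cell = torch.nn.GRUCell(feat_dim, hid_dim)        # belief update
        self.score = torch.nn.Bilinear(hid_dim, feat_dim, 1)   # neighbor attention
        self.classify = torch.nn.Linear(hid_dim, n_classes)

    def forward(self, x, adj_list, start, steps=3):
        h = torch.zeros(1, self.cell.hidden_size)
        node = start
        for _ in range(steps):
            h = self.cell(x[node].unsqueeze(0), h)   # absorb current node
            nbrs = adj_list[node]
            # attention over neighbors decides where to walk next
            logits = self.score(h.expand(len(nbrs), -1), x[nbrs]).squeeze(-1)
            node = nbrs[torch.multinomial(F.softmax(logits, dim=0), 1)].item()
        return self.classify(h)                       # label after the walk

# Usage on a toy 4-node cycle graph.
x = torch.randn(4, 5)
adj_list = {0: torch.tensor([1, 3]), 1: torch.tensor([0, 2]),
            2: torch.tensor([1, 3]), 3: torch.tensor([2, 0])}
walker = RecurrentWalker(feat_dim=5, hid_dim=8, n_classes=3)
print(walker(x, adj_list, start=0))
```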

HopGAT: Hop-aware Supervision Graph Attention Networks for Sparsely Labeled Graphs [article]

Chaojie Ji, Ruxin Wang, Rongxiang Zhu, Yunpeng Cai, Hongyan Wu
2020 arXiv   pre-print
In particular, for the protein-protein interaction network, in a 40% labeled graph, the performance loss is only 3.9%, from 98.5% to 94.6%, compared to the fully labeled graph.  ...  This study first proposes a hop-aware attention supervision mechanism for the node classification task.  ...  In the graph domain, semi-supervised learning is widely used to address incomplete labels in a graph, as in [26, 27] . These studies mainly focus on graph representation.  ... 
arXiv:2004.04333v1 fatcat:mcujini44jbhzfo3wrw7ivl2sq
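
The hop-aware supervision idea can be illustrated as an auxiliary loss that pushes attention coefficients toward targets that decay with hop distance, added to the usual classification loss. The geometric decay schedule and weighting below are illustrative assumptions, not the paper's exact mechanism.

```python
# Hop-aware attention supervision sketch; decay schedule and lambda
# are illustrative assumptions.
import torch
import torch.nn.functional as F

def attention_supervision_loss(att, hop, decay=0.5):
    """att[i, j]: learned attention from node i to j (rows sum to 1).
    hop[i, j]: shortest-path hop distance. Targets decay geometrically
    with hops, then are renormalized row-wise."""
    target = decay ** hop.float()
    target = target / target.sum(dim=1, keepdim=True)
    return F.mse_loss(att, target)

# Toy example: 3 nodes with precomputed hop distances.
hop = torch.tensor([[0, 1, 2],
                    [1, 0, 1],
                    [2, 1, 0]])
scores = torch.randn(3, 3)
att = F.softmax(scores, dim=1)     # stand-in for GAT attention weights
cls_loss = torch.tensor(0.7)       # stand-in for node classification loss
lam = 0.1                          # weighting between the two objectives
total = cls_loss + lam * attention_supervision_loss(att, hop)
print(total)
```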

Graph Barlow Twins: A self-supervised representation learning framework for graphs [article]

Piotr Bielak, Tomasz Kajdanowicz, Nitesh V. Chawla
2022 arXiv   pre-print
Moreover, it does not rely on non-symmetric neural network architectures - in contrast to BGRL, the state-of-the-art self-supervised graph representation learning method.  ...  To overcome such limitations, we propose a framework for self-supervised graph representation learning - Graph Barlow Twins, which utilizes a cross-correlation-based loss function instead of negative samples  ...  setting), (3) for multiple graphs in the inductive setting using the PPI (Protein-Protein Interaction) dataset, and finally (4) for the large-scale graph dataset ogb-products in the inductive setting.  ... 
arXiv:2106.02466v2 fatcat:cqlplc5ilvbznnf6qy2o4tzcqu
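
The cross-correlation (Barlow Twins) loss the framework builds on is easy to state: standardize the two views' embeddings, form their D x D cross-correlation matrix, pull its diagonal toward 1 and its off-diagonal toward 0. A minimal sketch, with the trade-off weight as an illustrative assumption:

```python
# Barlow Twins objective on two embedding views; no negative samples.
import torch

def barlow_twins_loss(z1, z2, lam=5e-3):
    """z1, z2: (N, D) embeddings of two augmented views of the same graph."""
    n, d = z1.shape
    # standardize each embedding dimension across nodes
    z1 = (z1 - z1.mean(0)) / (z1.std(0) + 1e-8)
    z2 = (z2 - z2.mean(0)) / (z2.std(0) + 1e-8)
    c = (z1.T @ z2) / n                               # (D, D) cross-correlation
    on_diag = (torch.diagonal(c) - 1).pow(2).sum()    # pull C_ii toward 1
    off_diag = (c - torch.diag(torch.diagonal(c))).pow(2).sum()  # decorrelate
    return on_diag + lam * off_diag

# Usage with random stand-ins for two encoded views.
z1, z2 = torch.randn(100, 32), torch.randn(100, 32)
print(barlow_twins_loss(z1, z2))
```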

Iterative Graph Self-Distillation [article]

Hanlin Zhang, Shuai Lin, Weiyang Liu, Pan Zhou, Jian Tang, Xiaodan Liang, Eric P. Xing
2021 arXiv   pre-print
Empirically, we achieve significant and consistent performance gains on various graph datasets in both unsupervised and semi-supervised settings, which validates the superiority of IGSD.  ...  As a natural extension, we also apply IGSD to semi-supervised scenarios by jointly regularizing the network with both supervised and unsupervised contrastive loss.  ...  Semi-supervised Learning Modern semi-supervised learning can be categorized into two kinds: multi-task learning and consistency training between two separate networks.  ... 
arXiv:2010.12609v2 fatcat:h5csmfxatbg4jcliukikimsgnm
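
The joint regularization mentioned above can be sketched as a supervised cross-entropy on the labeled graphs plus a student-teacher consistency term on all graphs. The cosine-based consistency and the weighting here are illustrative assumptions rather than the paper's exact contrastive formulation.

```python
# Joint supervised + unsupervised regularization sketch for
# semi-supervised graph-level learning; details are assumptions.
import torch
import torch.nn.functional as F

def joint_semi_supervised_loss(student_z, teacher_z, logits, labels, mask,
                               lam=1.0):
    """student_z/teacher_z: (N, D) graph embeddings from the two networks.
    logits: (N, C) class scores; mask: True where a label exists."""
    sup = F.cross_entropy(logits[mask], labels[mask])   # labeled graphs only
    consistency = 1 - F.cosine_similarity(student_z, teacher_z, dim=-1).mean()
    return sup + lam * consistency                      # all graphs

# Toy usage: 6 graphs, 2 of them labeled.
z_s, z_t = torch.randn(6, 16), torch.randn(6, 16)
logits = torch.randn(6, 3)
labels = torch.tensor([0, 2, 0, 0, 0, 0])
mask = torch.tensor([True, True, False, False, False, False])
print(joint_semi_supervised_loss(z_s, z_t, logits, labels, mask))
```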

Graph Neural Networks: Methods, Applications, and Opportunities [article]

Lilapati Waikhom, Ripon Patgiri
2021 arXiv   pre-print
This article provides a comprehensive survey of graph neural networks (GNNs) in each learning setting: supervised, unsupervised, semi-supervised, and self-supervised learning.  ...  Various other domains conform to non-Euclidean space, for which graphs are an ideal representation.  ...  Over the years, many methods have been employed for semi-supervised learning.  ... 
arXiv:2108.10733v2 fatcat:j3rfmkiwenebvmfyboasjmx4nu

GrAMME: Semi-Supervised Learning using Multi-layered Graph Attention Models [article]

Uday Shankar Shanthamallu, Jayaraman J. Thiagarajan, Huan Song and Andreas Spanias
2019 arXiv   pre-print
In this paper, we consider the problem of semi-supervised learning with multi-layered graphs. Though deep network embeddings, e.g.  ...  DeepWalk, are widely adopted for community discovery, we argue that feature learning with random node attributes, using graph neural networks, can be more effective.  ... 
arXiv:1810.01405v2 fatcat:gydvjhal4jfbfi24bvnbsrpr2u
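
The argument that attention over randomly initialized node attributes can be effective on multi-layered graphs can be sketched as follows: give every node a random (learnable) feature vector, run a GAT-style attention pass per graph layer, and fuse the per-layer outputs. The scoring function and mean fusion are illustrative assumptions.

```python
# Multi-layered (multiplex) graph attention over random node attributes;
# the per-layer attention and fusion are illustrative assumptions.
import torch
import torch.nn.functional as F

class LayerAttention(torch.nn.Module):
    """One GAT-style attention pass restricted to a single graph layer."""
    def __init__(self, dim):
        super().__init__()
        self.a = torch.nn.Linear(2 * dim, 1)
    def forward(self, x, adj):
        n = x.size(0)
        pairs = torch.cat([x.repeat_interleave(n, 0), x.repeat(n, 1)], dim=1)
        e = self.a(pairs).view(n, n)
        e = e.masked_fill(adj == 0, float('-inf'))   # attend only along edges
        return F.softmax(e, dim=1) @ x

n, d, n_layers = 5, 8, 2
x = torch.nn.Parameter(torch.randn(n, d))   # random, learnable attributes
adjs = [torch.eye(n) + torch.randint(0, 2, (n, n)).float()
        for _ in range(n_layers)]            # one adjacency per graph layer
atts = torch.nn.ModuleList(LayerAttention(d) for _ in range(n_layers))
fused = torch.stack([att(x, a) for att, a in zip(atts, adjs)]).mean(0)
print(fused.shape)   # (5, 8): one fused embedding per node across layers
```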

GPNet: Simplifying Graph Neural Networks via Multi-channel Geometric Polynomials [article]

Xun Liu, Alex Hay-Man Ng, Fangyuan Lei, Yikuan Zhang, Zhengmin Li
2022 arXiv   pre-print
Graph Neural Networks (GNNs) are a promising deep learning approach for addressing many real-world problems on graph-structured data.  ...  types (i.e. homophily and heterophily) and scales (i.e. small, medium, and large) of networks, and combine them into a graph neural network, GPNet, a simple and efficient one-layer model.  ... 
arXiv:2209.15454v1 fatcat:ucu4jqw25zbtzivaxhjp4je6b4
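
The multi-channel geometric-polynomial idea can be illustrated as a one-layer model whose channels are different polynomials of the normalized adjacency - low-pass powers for homophilous structure and a high-pass (I - A) channel for heterophilous structure - combined by a single linear layer. The particular basis below is an illustrative assumption, not GPNet's exact filter set.

```python
# One-layer multi-channel polynomial-filter model sketch;
# the polynomial basis is an illustrative assumption.
import torch

def polynomial_channels(a_hat, x, k=2):
    """Low-pass powers A_hat^p x (homophily) plus a high-pass
    (I - A_hat) x channel (heterophily)."""
    chans, h = [x], x
    for _ in range(k):
        h = a_hat @ h
        chans.append(h)                    # A_hat x, A_hat^2 x, ...
    chans.append(x - a_hat @ x)            # (I - A_hat) x
    return torch.cat(chans, dim=1)

n, d, n_classes = 6, 4, 3
a = torch.eye(n) + torch.rand(n, n).round()
deg = a.sum(1)
a_hat = a / torch.sqrt(deg.view(-1, 1) * deg.view(1, -1))  # sym. normalization
x = torch.randn(n, d)
feats = polynomial_channels(a_hat, x)      # (n, 4 * d) stacked channels
model = torch.nn.Linear(feats.size(1), n_classes)          # the one layer
print(model(feats).shape)
```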

Machine Learning on Graphs: A Model and Comprehensive Taxonomy [article]

Ines Chami, Sami Abu-El-Haija, Bryan Perozzi, Christopher Ré, Kevin Murphy
2022 arXiv   pre-print
The second, graph regularized neural networks, leverages graphs to augment neural network losses with a regularization objective for semi-supervised learning.  ...  Specifically, we propose a Graph Encoder Decoder Model (GRAPHEDM), which generalizes popular algorithms for semi-supervised learning on graphs (e.g.  ... 
arXiv:2005.03675v3 fatcat:6eoicgprdvfbze732nsmpaumqe
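
The "graph regularized neural networks" family referred to here augments a supervised loss with a smoothness penalty tr(Z^T L Z) = sum_ij A_ij ||z_i - z_j||^2 over the graph Laplacian L = D - A, so connected nodes get similar outputs. A minimal sketch, with the weighting as an illustrative assumption:

```python
# Graph-regularized semi-supervised objective sketch.
import torch
import torch.nn.functional as F

def graph_regularized_loss(logits, labels, mask, adj, lam=0.1):
    sup = F.cross_entropy(logits[mask], labels[mask])   # labeled nodes
    deg = torch.diag(adj.sum(1))
    lap = deg - adj                                     # Laplacian L = D - A
    reg = torch.trace(logits.T @ lap @ logits)          # tr(Z^T L Z)
    return sup + lam * reg

# Toy usage: 4 nodes on a path graph, 2 labeled.
adj = torch.tensor([[0., 1, 0, 0],
                    [1, 0, 1, 0],
                    [0, 1, 0, 1],
                    [0, 0, 1, 0]])
logits = torch.randn(4, 3, requires_grad=True)
labels = torch.tensor([0, 1, 0, 0])
mask = torch.tensor([True, True, False, False])
graph_regularized_loss(logits, labels, mask, adj).backward()
```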

Be More with Less: Hypergraph Attention Networks for Inductive Text Classification [article]

Kaize Ding, Jianling Wang, Jundong Li, Dingcheng Li, Huan Liu
2020 arXiv   pre-print
Recently, graph neural networks (GNNs) have received increasing attention in the research community and demonstrated promising results on this canonical task.  ...  To address those issues, in this paper, we propose a principled model -- hypergraph attention networks (HyperGAT), which can obtain more expressive power with less computational consumption for text representation  ...  as a semi-supervised node classification problem.  ... 
arXiv:2011.00387v1 fatcat:jvvct7zx4vb6xbwnwljn3trvlq
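
Hypergraph attention can be sketched in two stages: attend over a hyperedge's member nodes to build hyperedge representations, then attend over each node's incident hyperedges to rebuild node representations. The scoring functions and the toy incidence matrix below are illustrative assumptions, not HyperGAT's exact architecture.

```python
# Two-stage hypergraph attention sketch; scoring is an assumption.
import torch
import torch.nn.functional as F

def hypergraph_attention(x, inc, w_edge, w_node):
    """x: (N, D) node features; inc: (N, E) incidence matrix,
    1 if node i belongs to hyperedge e."""
    # stage 1: attention over member nodes -> hyperedge representations
    s1 = (x @ w_edge).masked_fill(inc == 0, float('-inf'))   # (N, E) scores
    edges = F.softmax(s1, dim=0).T @ x                        # (E, D)
    # stage 2: attention over incident hyperedges -> new node representations
    s2 = (edges @ w_node).T.masked_fill(inc == 0, float('-inf'))
    return F.softmax(s2, dim=1) @ edges                       # (N, D)

n, e, d = 5, 3, 8
x = torch.randn(n, d)
inc = (torch.rand(n, e) > 0.4).float()
inc[0, :] = 1.0   # node 0 joins every hyperedge: no empty hyperedge
inc[:, 0] = 1.0   # every node joins hyperedge 0: no isolated node
out = hypergraph_attention(x, inc, torch.randn(d, e), torch.randn(d, n))
print(out.shape)  # (5, 8)
```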

Joint User Association and Power Allocation in Heterogeneous Ultra Dense Network via Semi-Supervised Representation Learning [article]

Xiangyu Zhang, Zhengming Zhang, Luxi Yang
2021 arXiv   pre-print
We model the HUDNs as a heterogeneous graph and train a Graph Neural Network (GNN) to approach this representation function by using semi-supervised learning, in which the loss function is composed of  ...  We separate the learning process into two parts, the generalization-representation learning (GRL) part and the specialization-representation learning (SRL) part, which train the GNN for learning representation  ...  However, it is hard to train on large graphs. In contrast, the inductive learning framework shows an advantage on large graphs by learning more general embeddings for all nodes.  ... 
arXiv:2103.15367v1 fatcat:5orhmmgl6zgyllqqqcgm4rbdru

Outcome Correlation in Graph Neural Network Regression [article]

Junteng Jia, Austin Benson
2020 arXiv   pre-print
To allow us to scale to large networks, we design linear-time algorithms for low-variance, unbiased model parameter estimates based on stochastic trace estimation.  ...  Graph neural networks aggregate features in vertex neighborhoods to learn vector representations of all vertices, using supervision from some labeled vertices during training.  ...  These problems fall under the umbrella of semi-supervised learning for graph-structured data.  ... 
arXiv:2002.08274v1 fatcat:ot2wpy6v2fb6dfu3hhmeqdkeca
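
Stochastic trace estimation, the device behind the linear-time parameter estimates, replaces an exact trace with an average of quadratic forms over random probe vectors: E[z^T A z] = tr(A) for Rademacher z. A minimal Hutchinson-estimator sketch (the probe count is an illustrative choice):

```python
# Hutchinson stochastic trace estimation: tr(A) from matvecs only.
import numpy as np

def hutchinson_trace(matvec, n, n_probes=100, rng=None):
    """Unbiased estimate of tr(A) using only v -> A v products.
    E[z^T A z] = tr(A) when z has i.i.d. Rademacher (+/-1) entries."""
    rng = rng or np.random.default_rng(0)
    total = 0.0
    for _ in range(n_probes):
        z = rng.choice([-1.0, 1.0], size=n)
        total += z @ matvec(z)
    return total / n_probes

# Usage: estimate the trace of a random symmetric matrix via matvecs only.
a = np.random.default_rng(1).normal(size=(200, 200))
a = (a + a.T) / 2
print(hutchinson_trace(lambda v: a @ v, n=200), "vs exact", np.trace(a))
```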

Drug Similarity Integration Through Attentive Multi-view Graph Auto-Encoders

Tengfei Ma, Cao Xiao, Jiayu Zhou, Fei Wang
2018 Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence  
Our model has a flexible design for both semi-supervised and unsupervised settings. Experimental results demonstrated significant improvements in predictive accuracy.  ...  In particular, we model the integration using multi-view graph auto-encoders, and add an attentive mechanism to determine the weights for each view with respect to the corresponding tasks and features for better  ...  A Semi-supervised Extension Given Partial Labeled Data The graph auto-encoder (GAE) structure could be further extended to a semi-supervised setting when we have labels for some of the nodes in the graph  ... 
doi:10.24963/ijcai.2018/483 dblp:conf/ijcai/MaXZW18 arXiv:1804.10850v1 fatcat:hed5o76zn5dgjgvqllbhjmvl3e
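
The attentive multi-view integration and its semi-supervised extension can be sketched as: softmax attention weights combine per-view embeddings into one representation, trained with a reconstruction loss per view plus a label loss on the labeled nodes. The encoders, attention parameterization, and weighting below are illustrative assumptions.

```python
# Attentive multi-view fusion + semi-supervised GAE-style loss sketch.
import torch
import torch.nn.functional as F

def fuse_views(zs, att_logits):
    """zs: list of (N, D) per-view embeddings; att_logits: (V,) learnable."""
    alpha = F.softmax(att_logits, dim=0)              # one weight per view
    return sum(a * z for a, z in zip(alpha, zs))

def semi_supervised_gae_loss(z, adjs, logits, labels, mask, lam=1.0):
    # reconstruct every view's adjacency from the fused embedding
    recon = sum(F.binary_cross_entropy_with_logits(z @ z.T, a) for a in adjs)
    sup = F.cross_entropy(logits[mask], labels[mask]) # labeled nodes only
    return recon + lam * sup

# Toy usage: 4 nodes, 2 similarity views, 2 labeled nodes.
zs = [torch.randn(4, 8, requires_grad=True) for _ in range(2)]
att_logits = torch.zeros(2, requires_grad=True)
adjs = [torch.randint(0, 2, (4, 4)).float() for _ in range(2)]
z = fuse_views(zs, att_logits)
logits = torch.randn(4, 3)                            # stand-in class scores
labels = torch.tensor([1, 0, 0, 0])
mask = torch.tensor([True, True, False, False])
print(semi_supervised_gae_loss(z, adjs, logits, labels, mask))
```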
