1,014 hits

Self-supervised Graph Learning for Recommendation [article]

Jiancan Wu, Xiang Wang, Fuli Feng, Xiangnan He, Liang Chen, Jianxun Lian, Xing Xie
2020 arXiv   pre-print
In this work, we explore self-supervised learning on the user-item graph, so as to improve the accuracy and robustness of GCNs for recommendation. ... We devise four operators to generate the views – embedding masking, embedding dropout, node dropout, and edge dropout – that augment node representations from the two perspectives of ID embeddings and graph structure. ... observed interactions for representation learning; (2) the augmentation operators, especially edge dropout, help to mitigate degree biases by changing the graph structure and reducing the influence ...
arXiv:2010.10783v1 fatcat:vaujjxnjdndcvcca4icqjdprye
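
The four augmentation operators are only named in the snippet above; as a rough illustration, here is a minimal sketch of two of them (edge dropout and node dropout) on an interaction graph stored as a PyTorch edge index. The function names and tensor layout are assumptions for the example, not the paper's code.

```python
import torch

def edge_dropout(edge_index: torch.Tensor, p: float = 0.1) -> torch.Tensor:
    """Randomly drop a fraction p of interaction edges (one augmented view)."""
    keep = torch.rand(edge_index.size(1)) >= p
    return edge_index[:, keep]

def node_dropout(edge_index: torch.Tensor, num_nodes: int, p: float = 0.1) -> torch.Tensor:
    """Drop nodes together with every edge incident to them (another view)."""
    dropped = torch.rand(num_nodes) < p
    keep = ~dropped[edge_index[0]] & ~dropped[edge_index[1]]
    return edge_index[:, keep]

# Two stochastic views of the same user-item graph for contrastive learning.
edge_index = torch.randint(0, 100, (2, 500))
view1 = edge_dropout(edge_index, p=0.1)
view2 = node_dropout(edge_index, num_nodes=100, p=0.1)
```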

Fairness-Aware Predictive Graph Learning in Social Networks

Lei Wang, Shuo Yu, Falih Gozi Febrinanto, Fayez Alqahtani, Tarek E. El-Tobely
2022 Mathematics  
To address this problem, we first formally define two biases (i.e., Preference and Favoritism) that widely exist in previous representation learning models. ... However, the paradigm of current graph learning methods generally neglects differences in link strength, leading to discriminatory predictions and uneven performance across tasks. ...
doi:10.3390/math10152696 fatcat:m5ganc3hv5a2nprxidaokbk2iy

Relphormer: Relational Graph Transformer for Knowledge Graph Representation [article]

Zhen Bi, Siyuan Cheng, Ningyu Zhang, Xiaozhuan Liang, Feiyu Xiong, Huajun Chen
2022 arXiv   pre-print
Moreover, we propose masked knowledge modeling as a new paradigm for knowledge graph representation learning that unifies different link prediction tasks. ... To this end, we propose a new variant of the Transformer for knowledge graph representation, dubbed Relphormer. ... In contrast to [20], where attention operations are performed only between nodes connected by literal edges in the original graph, the structure-enhanced Transformer offers flexibility in leveraging the local ...
arXiv:2205.10852v2 fatcat:7iw263mzxzehrhx4ogjadzc5mi
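
Masked knowledge modeling is described only abstractly in the snippet above; read by analogy with masked language modeling, one element of a (head, relation, tail) triple is hidden and the model is trained to recover it, so head and tail prediction become one task. The sketch below is a hypothetical illustration of that idea, not Relphormer's actual implementation.

```python
import random

def mask_triple(triple, mask_token="[MASK]"):
    """Hide the head or the tail of a (head, relation, tail) triple; the
    hidden element becomes the prediction target, unifying both directions
    of link prediction under one masked-prediction objective."""
    head, relation, tail = triple
    if random.random() < 0.5:
        return (mask_token, relation, tail), head   # predict the head
    return (head, relation, mask_token), tail       # predict the tail

masked_input, target = mask_triple(("Paris", "capital_of", "France"))
# e.g. masked_input == ("Paris", "capital_of", "[MASK]"), target == "France"
```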

Graph Neural Network for Hamiltonian-Based Material Property Prediction [article]

Hexin Bai, Peng Chu, Jeng-Yuan Tsai, Nathan Wilson, Xiaofeng Qian, Qimin Yan, Haibin Ling
2020 arXiv   pre-print
With this motivation, we present and compare several different graph convolution networks that are able to predict the band gap for inorganic materials. ... Effective learning of material Hamiltonians through machine learning methodologies therefore offers a transformative approach to accelerate the discovery and design of quantum materials. ... In [3], Battaglia et al. applied relational inductive biases in deep learning and presented a highly general framework for GNNs. ...
arXiv:2005.13352v1 fatcat:mrrccgytpfgqvctqxc7guuro5i

Towards Job-Transition-Tag Graph for a Better Job Title Representation Learning [article]

Jun Zhu, Céline Hudelot
2022 arXiv   pre-print
Works on learning job title representations are mainly based on the Job-Transition Graph, built from the working histories of talents. ... Along this line, we reformulate job title representation learning as the task of learning node embeddings on the Job-Transition-Tag Graph. Experiments on two datasets show the effectiveness of our approach. ...
arXiv:2206.02782v1 fatcat:qqxg2em3ybgt5berdpbyxqvuny
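
The Job-Transition-Tag Graph is only named in this snippet; a plausible construction, assuming each working history is a sequence of (job title, tags) pairs, is sketched below with networkx. The graph layout and helper name are guesses for illustration, not the paper's definition.

```python
import networkx as nx

def build_job_transition_tag_graph(histories):
    """histories: career paths, each a list of (job_title, tags) pairs.
    Job nodes are linked by observed transitions; tag nodes are linked to
    every job they describe, connecting semantically related titles even
    when no talent ever moved between them."""
    g = nx.Graph()
    for path in histories:
        for (job, _), (next_job, _) in zip(path, path[1:]):
            g.add_edge(job, next_job, kind="transition")
        for job, tags in path:
            for tag in tags:
                g.add_edge(job, f"tag:{tag}", kind="tag")
    return g

g = build_job_transition_tag_graph(
    [[("data analyst", ["sql"]), ("data scientist", ["sql", "ml"])]]
)
```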

Group-Wise Semantic Mining for Weakly Supervised Semantic Segmentation [article]

Xueyi Li, Tianfei Zhou, Jianwu Li, Yi Zhou, Zhaoxiang Zhang
2020 arXiv   pre-print
Moreover, to prevent the model from paying excessive attention to common semantics only, we further propose a graph dropout layer, encouraging the model to learn more accurate and complete object ... In particular, we devise a graph neural network (GNN) for group-wise semantic mining, wherein input images are represented as graph nodes, and the underlying relations between a pair of images are characterized ... The graph dropout layer selectively suppresses the most salient objects, forcing the network to attend to their counterparts. ...
arXiv:2012.05007v1 fatcat:iolr2aitk5g5pmfuieirj5vxc4
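
The snippet states the mechanism of the graph dropout layer (suppress the most salient objects) without detail; a minimal sketch of that idea, zeroing the features of the top-k most activated graph nodes during training, follows. The saliency measure and names are assumptions.

```python
import torch

def graph_dropout(node_feats: torch.Tensor, k: int, training: bool = True) -> torch.Tensor:
    """Suppress the k most salient nodes so the model cannot rely only on
    the most common, easily discovered semantics."""
    if not training or k == 0:
        return node_feats
    saliency = node_feats.norm(dim=1)        # per-node activation strength
    top_k = saliency.topk(k).indices         # indices of most salient nodes
    mask = torch.ones(node_feats.size(0), 1)
    mask[top_k] = 0.0                        # zero out the salient nodes
    return node_feats * mask

out = graph_dropout(torch.randn(8, 16), k=2)
```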

Contrast-Enhanced Semi-supervised Text Classification with Few Labels

Austin Cheng-Yun Tsai, Sheng-Ya Lin, Li-Chen Fu
2022 Proceedings of the AAAI Conference on Artificial Intelligence
We propose a certainty-driven sample selection method and a contrast-enhanced similarity graph to utilize data more efficiently in self-training, alleviating the annotation-starvation problem. ... A salient feature of this formulation is the explicit suppression of the severe error-propagation problem in conventional semi-supervised learning. ...
doi:10.1609/aaai.v36i10.21391 fatcat:5hu2gfdeczfuhk5zsrgnahzg3a
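
"Certainty-driven sample selection" is not spelled out in the snippet; one common reading is selecting pseudo-labelled samples whose predictive confidence clears a threshold before adding them to the self-training set. The sketch below illustrates that reading and is not necessarily the paper's exact criterion.

```python
import torch
import torch.nn.functional as F

def select_certain_samples(logits: torch.Tensor, threshold: float = 0.9):
    """Keep only unlabeled samples whose top predicted class probability
    exceeds the threshold; return their indices and pseudo-labels, limiting
    the error propagation that noisy pseudo-labels would cause."""
    probs = F.softmax(logits, dim=1)
    confidence, pseudo_labels = probs.max(dim=1)
    certain = confidence >= threshold
    return certain.nonzero(as_tuple=True)[0], pseudo_labels[certain]

idx, labels = select_certain_samples(torch.randn(32, 4), threshold=0.9)
```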

Beyond Real-world Benchmark Datasets: An Empirical Study of Node Classification with GNNs [article]

Seiji Maekawa, Koki Noda, Yuya Sasaki, Makoto Onizuka
2022 arXiv   pre-print
... 2) edge connection proportions between classes (homophilic vs. heterophilic), 3) attribute values (biased vs. random), and 4) graph sizes (small vs. large). ... Motivated by this, we conduct extensive experiments with a synthetic graph generator that can generate graphs with controlled characteristics for fine-grained analysis. ... As for H2GCN, the authors implemented it in TensorFlow, so we reimplemented it in PyTorch for a fair comparison. ...
arXiv:2206.09144v1 fatcat:xleacv3n4ndelpicll4jbk7jvq
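
As a rough illustration of what such a controlled generator does, the sketch below samples an edge list with a tunable homophily ratio, one of the characteristics the snippet lists. It is a toy stand-in, not the authors' generator.

```python
import random

def synthetic_edges(labels, num_edges, homophily=0.8):
    """Sample edges so that roughly a `homophily` fraction connect
    same-class nodes, sweeping from homophilic to heterophilic graphs."""
    by_class = {}
    for node, c in enumerate(labels):
        by_class.setdefault(c, []).append(node)
    classes = list(by_class)
    edges = []
    while len(edges) < num_edges:
        c = random.choice(classes)
        u = random.choice(by_class[c])
        if random.random() < homophily:
            v = random.choice(by_class[c])      # same-class endpoint
        else:
            other = random.choice([k for k in classes if k != c])
            v = random.choice(by_class[other])  # cross-class endpoint
        if u != v:                              # skip self-loops
            edges.append((u, v))
    return edges

edges = synthetic_edges([0, 0, 1, 1, 2, 2], num_edges=20, homophily=0.8)
```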

Learning Robust Node Representations on Graphs [article]

Xu Chen and Ya Zhang and Ivor Tsang and Yuangang Pan
2020 arXiv   pre-print
Graph neural networks (GNNs), as a popular methodology for node representation learning on graphs, currently focus mainly on preserving the smoothness and identifiability of node representations. ... In this paper, we introduce the stability of node representations in addition to smoothness and identifiability, and develop a novel method called Contrastive Graph Neural Networks (CGNN) that learns ... While smoothness and identifiability have proved essential in node representation learning on graphs, another property necessary for robust node representations is stability, especially for noisy graphs ...
arXiv:2008.11416v2 fatcat:r35ugtwl6ff4hllk47fntpuwkq
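
Stability here means that small perturbations of the input should not move the node representations much. A simple diagnostic for that property, assuming a generic encoder taking (features, edge_index), is sketched below; it only measures the gap and is not the CGNN method itself.

```python
import torch

def stability_gap(encoder, x, edge_index, noise_scale=0.01):
    """Average distance between node representations computed on clean
    features and on features with small Gaussian noise added: a proxy
    for the stability of the learned representations."""
    with torch.no_grad():
        z_clean = encoder(x, edge_index)
        z_noisy = encoder(x + noise_scale * torch.randn_like(x), edge_index)
    return (z_clean - z_noisy).norm(dim=1).mean().item()

toy_encoder = lambda x, edge_index: torch.tanh(x)   # stand-in for a GNN
gap = stability_gap(toy_encoder, torch.randn(10, 8), edge_index=None)
```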

Learning from the Past: Continual Meta-Learning with Bayesian Graph Neural Networks

Yadan Luo, Zi Huang, Zheng Zhang, Ziwei Wang, Mahsa Baktashmotlagh, Yang Yang
2020 Proceedings of the AAAI Conference on Artificial Intelligence
In this paper, we propose a novel Continual Meta-Learning approach with Bayesian Graph Neural Networks (CML-BGNN) that mathematically formulates meta-learning as continual learning of a sequence of tasks ... , which are seamlessly integrated into the end-to-end edge learning. ... Furthermore, we re-implemented the most powerful graph-based baseline, EGNN, with a mini-batch size of 80 for fairness, and present a detailed comparison in box plots. ...
doi:10.1609/aaai.v34i04.5942 fatcat:y2kdu4n6rrf2hnff3xhwmbf7cq

Tackling Over-Smoothing for General Graph Convolutional Networks [article]

Wenbing Huang, Yu Rong, Tingyang Xu, Fuchun Sun, Junzhou Huang
2022 arXiv   pre-print
Building on this theorem, we propose DropEdge to alleviate over-smoothing by randomly removing a certain number of edges at each training epoch. ... The main cause of this lies in over-smoothing. ... Learning on graphs is crucial, not only for analyzing the graph data themselves, but also for general data forms, as graphs deliver strong inductive biases that enable relational reasoning and combinatorial ...
arXiv:2008.09864v5 fatcat:2clntpr6pjcqripnq3cio3thxu
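
DropEdge is stated concretely enough in the snippet to sketch: resample a thinned graph at every epoch. A minimal version on a dense adjacency matrix (the representation is an assumption; practical implementations operate on sparse graphs) might look like:

```python
import torch

def drop_edge(adj: torch.Tensor, p: float = 0.2) -> torch.Tensor:
    """Randomly remove a fraction p of edges from a symmetric adjacency
    matrix. Resampled each epoch, so every epoch trains on a different,
    sparser graph, slowing the feature mixing behind over-smoothing."""
    mask = (torch.rand_like(adj) >= p).float()
    mask = torch.triu(mask, diagonal=1)   # decide each undirected edge once
    mask = mask + mask.t()
    return adj * mask

adj = (torch.rand(6, 6) > 0.5).float()
adj = torch.triu(adj, 1) + torch.triu(adj, 1).t()  # symmetric, no self-loops
for epoch in range(3):
    adj_epoch = drop_edge(adj, p=0.2)     # a fresh subgraph every epoch
```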

Learning from the Past: Continual Meta-Learning via Bayesian Graph Modeling [article]

Yadan Luo, Zi Huang, Zheng Zhang, Ziwei Wang, Mahsa Baktashmotlagh, Yang Yang
2019 arXiv   pre-print
In this paper, we propose a novel Continual Meta-Learning approach with Bayesian Graph Neural Networks (CML-BGNN) that mathematically formulates meta-learning as continual learning of a sequence of tasks ... , which are seamlessly integrated into the end-to-end edge learning. ... Furthermore, we re-implemented the most powerful graph-based baseline, EGNN, with a mini-batch size of 80 for fairness, and present a detailed comparison in box plots. ...
arXiv:1911.04695v1 fatcat:mtzdht6xondezhuamhq4kqnutq

Revisiting the role of heterophily in graph representation learning: An edge classification perspective [article]

Jincheng Huang, Ping Li, Rui Huang, Chen Na, Acong Zhang
2022 arXiv   pre-print
Graph representation learning aims at integrating node contents with graph structure to learn node/graph representations. ... From this perspective, we study here the role of heterophily in graph representation learning before/after the relationships between connected nodes are disclosed. ... Accordingly, by combining high-frequency features with low-frequency ones, graph representation learning can be enhanced. ...
arXiv:2205.11322v2 fatcat:4xhmoirn5fe43e5mxodaywau6y
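
The snippet's closing claim, combining high- and low-frequency features, has a standard concrete form: a low-pass component that averages over neighbours and a high-pass component that keeps the deviation from them. The sketch below shows that generic form; the paper's exact filters may differ.

```python
import torch

def low_high_pass(x: torch.Tensor, adj_norm: torch.Tensor, alpha: float = 0.5):
    """Blend low-frequency (neighbourhood-smoothed) and high-frequency
    (deviation-from-neighbourhood) components of node features, given a
    row-normalized adjacency matrix adj_norm."""
    low = adj_norm @ x        # low-pass: smooth over neighbours
    high = x - adj_norm @ x   # high-pass: contrast with neighbours
    return alpha * low + (1 - alpha) * high

x = torch.randn(5, 8)
adj_norm = torch.full((5, 5), 1 / 5)   # toy row-normalized adjacency
out = low_high_pass(x, adj_norm)
```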

Local Structural Aware Heterogeneous Information Network Embedding Based on Relational Self-Attention Graph Neural Network

Meng Cao, Jinliang Yuan, Ming Xu, Hualei Yu, Chongjun Wang
2021 IEEE Access  
In addition, we employ a biased random walk based sampling method to extract the local structural information and preserve the implicit semantics in HINs. ... For graph neural network related baselines, we set the number of layers L = 2, the initial learning rate lr = 0.01, the weight decay to 5 × 10⁻⁴, and the dropout rate to 0.5. ...
doi:10.1109/access.2021.3090055 fatcat:6ynffvfhgvgcndcpd3374jrwyu
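
A biased random walk on a heterogeneous information network typically reweights the choice of the next hop, for example by node type. The sketch below is one such illustration; the neighbor format and the type-weight bias are assumptions, not the paper's exact sampling scheme.

```python
import random

def biased_walk(neighbors, start, length, type_weight):
    """One biased walk over a HIN. neighbors maps node -> list of
    (node, node_type) pairs; type_weight biases the next hop toward
    preferred node types so the walk captures local heterogeneous
    structure rather than drifting uniformly."""
    walk = [start]
    for _ in range(length - 1):
        nbrs = neighbors.get(walk[-1], [])
        if not nbrs:
            break
        weights = [type_weight.get(t, 1.0) for _, t in nbrs]
        nxt, _ = random.choices(nbrs, weights=weights, k=1)[0]
        walk.append(nxt)
    return walk

neighbors = {"a1": [("p1", "paper")], "p1": [("a1", "author"), ("v1", "venue")]}
walk = biased_walk(neighbors, "a1", length=4, type_weight={"paper": 2.0})
```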

Multi-Hop Knowledge Graph Reasoning with Reward Shaping

Xi Victoria Lin, Richard Socher, Caiming Xiong
2018 Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing  
Multi-hop reasoning is an effective approach to query answering (QA) over incomplete knowledge graphs (KGs). ... The problem can be formulated in a reinforcement learning (RL) setup, where a policy-based agent sequentially extends its inference path until it reaches a target. ...
doi:10.18653/v1/d18-1362 dblp:conf/emnlp/LinSX18 fatcat:pqubglr3mzdqxaqzckqwcd3eo4
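
The RL formulation in the snippet is concrete enough for a toy rollout: at each step the policy picks an outgoing (relation, entity) edge, extending the path until it stops or runs out of hops. The sketch below is a greatly simplified illustration; reward shaping, per the paper's title, would then score the terminal entity with a soft signal rather than a binary hit.

```python
import random

def rollout(kg, start, policy, max_hops=3):
    """Roll out one reasoning path: kg maps entity -> list of
    (relation, next_entity) actions; the policy chooses an action at each
    step, and the final entity of the path is the predicted answer."""
    path, entity = [], start
    for _ in range(max_hops):
        actions = kg.get(entity, [])
        if not actions:
            break
        relation, entity = policy(entity, actions)   # policy picks an edge
        path.append((relation, entity))
    return path

kg = {"Q1": [("capital_of", "Q2")], "Q2": [("located_in", "Q3")]}
random_policy = lambda entity, actions: random.choice(actions)
path = rollout(kg, "Q1", random_policy)   # answer = path[-1][1] if path else None
```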
Showing results 1 — 15 out of 1,014 results