
Graph Neural Networks in Recommender Systems: A Survey

Shiwen Wu, Fei Sun, Wentao Zhang, Xu Xie, Bin Cui
2022 ACM Computing Surveys
Recently, graph neural network (GNN) techniques have been widely utilized in recommender systems since most of the information in recommender systems essentially has graph structure and GNN has superiority  ...  in graph representation learning.  ...  two-level attention network for propagation.  ... 
doi:10.1145/3535101 fatcat:hgv2tbx3k5hzbnkupwsysqwjmy

Graph Neural Networks in Recommender Systems: A Survey [article]

Shiwen Wu, Fei Sun, Wentao Zhang, Xu Xie, Bin Cui
2022 arXiv   pre-print
Recently, graph neural network (GNN) techniques have been widely utilized in recommender systems since most of the information in recommender systems essentially has graph structure and GNN has superiority  ...  in graph representation learning.  ...  two-level attention network for propagation.  ... 
arXiv:2011.02260v4 fatcat:hvk22yyid5bzjnzmzchyti25ja

Feature-level Attentive ICF for Recommendation [article]

Zhiyong Cheng, Fan Liu, Shenghan Mei, Yangyang Guo, Lei Zhu, Liqiang Nie
2021 arXiv   pre-print
To demonstrate the effectiveness of our method, we design a light attention neural network to integrate both item-level and feature-level attention for neural ICF models.  ...  In this work, we propose a general feature-level attention method for ICF models.  ...  Different from this method, DisenHAN [63] learns the disentangled representations by aggregating aspect features from different meta relations in a heterogeneous information network (HIN).  ... 
arXiv:2102.10745v2 fatcat:wz3luznoqnfw5newwr3bco32za

DisenKGAT: Knowledge Graph Embedding with Disentangled Graph Attention Network [article]

Junkang Wu, Wentao Shi, Xuezhi Cao, Jiawei Chen, Wenqiang Lei, Fuzheng Zhang, Wei Wu, Xiangnan He
2021 pre-print
In this work, we propose a novel Disentangled Knowledge Graph Attention Network (DisenKGAT) for KGC, which leverages both micro-disentanglement and macro-disentanglement to exploit representations behind  ...  For macro-disentanglement, we leverage mutual information as a regularization to enhance independence.  ...  The above models develop the disentanglement idea in homogeneous networks, while DisenHAN [39] focuses on heterogeneous networks.  ... 
doi:10.1145/3459637.3482424 arXiv:2108.09628v1 fatcat:k2cekvcwqrfttpngeazvprbhsu