File type: application/pdf
Adversarial Graph Perturbations for Recommendations at Scale
2022
Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval
Graph Neural Networks (GNNs) are a class of powerful architectures that are effective for graph-based collaborative filtering. Nevertheless, GNNs are known to be vulnerable to adversarial perturbations. Adversarial training is a simple yet effective way to improve the robustness of neural models; for example, many prior studies inject adversarial perturbations into either the node features or the hidden layers of GNNs. However, perturbing graph structures has been far less studied in [...]
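The feature-perturbation style of adversarial training mentioned in the abstract can be illustrated with a minimal sketch. This is not the paper's method; it assumes a one-layer linear "GNN" with a squared-error loss so the gradient is available in closed form, and takes an FGSM-style sign step on the node features. All names, shapes, and the epsilon value are illustrative assumptions.

```python
import numpy as np

def normalize_adj(adj):
    """Row-normalize an adjacency matrix after adding self-loops."""
    a = adj + np.eye(adj.shape[0])
    return a / a.sum(axis=1, keepdims=True)

def adversarial_features(adj, x, w, y, eps=0.1):
    """Perturb node features x in the gradient-ascent direction of the loss.

    Model:  y_hat = A_norm @ x @ w
    Loss:   0.5 * ||y_hat - y||^2
    Its gradient w.r.t. x is  A_norm.T @ (y_hat - y) @ w.T,
    and we take a sign step of size eps (FGSM-style).
    """
    a = normalize_adj(adj)
    residual = a @ x @ w - y          # prediction error per node
    grad_x = a.T @ residual @ w.T     # closed-form gradient w.r.t. features
    return x + eps * np.sign(grad_x)

# Toy two-node graph with hypothetical features, weights, and targets.
adj = np.array([[0., 1.], [1., 0.]])
x = np.array([[1.0, 0.0], [0.0, 1.0]])
w = np.array([[1.0], [1.0]])
y = np.array([[0.0], [1.0]])

x_adv = adversarial_features(adj, x, w, y, eps=0.05)
```

During adversarial training, the model would then be optimized on `x_adv` instead of (or alongside) `x`, so that the learned weights become robust to small worst-case feature shifts. Perturbing the graph structure itself, the gap the abstract points to, would instead modify `adj`, which is discrete and therefore harder to attack with gradient steps.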
doi:10.1145/3477495.3531763
fatcat:hhkyne2ok5br7n5ajal5h76fbi