A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2022; you can also visit the original URL.
The file type is application/pdf.
Adversarial attack detection on graph node classification using autoencoders for hidden layers in GCN
Detection of adversarial attacks on node classification using autoencoding of GCN intermediate-layer outputs
JSAI Technical Report, SIG-KBS
The vulnerability of Deep Neural Networks (DNNs) to adversarial attacks has become an issue in recent years. Attackers can degrade the accuracy of a DNN model by introducing carefully computed perturbations into the input data. Graph Convolutional Networks (GCNs), a derivative of DNNs, have been found to share this vulnerability. In this study, we propose a method for detecting adversarial attacks on GCNs. More concretely, we build an attack-detection model from the latent information of each vertex in the GCN's hidden layers.
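The abstract and title suggest the detection pipeline: pass node features through the GCN, take each vertex's hidden-layer embedding, and score it with an autoencoder so that poorly reconstructed vertices are flagged as possibly attacked. The sketch below illustrates that idea only; the dense-adjacency GCN, the NodeAutoencoder class, the layer sizes, and the quantile threshold are assumptions for illustration, not the paper's implementation.

```python
# Illustrative sketch (assumed details, not the authors' code): an autoencoder
# over GCN hidden-layer node embeddings, flagging nodes with high reconstruction error.
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    """One graph convolution: H' = ReLU(A_hat @ H @ W), with A_hat the normalized adjacency."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, a_hat, h):
        return torch.relu(self.lin(a_hat @ h))

class GCN(nn.Module):
    """Two-stage GCN classifier that also exposes its hidden-layer output."""
    def __init__(self, in_dim, hid_dim, n_classes):
        super().__init__()
        self.hidden = GCNLayer(in_dim, hid_dim)
        self.out = nn.Linear(hid_dim, n_classes)

    def forward(self, a_hat, x):
        h = self.hidden(a_hat, x)          # per-node latent representation
        return self.out(h), h

class NodeAutoencoder(nn.Module):
    """Autoencoder over per-node hidden embeddings (hypothetical detector component)."""
    def __init__(self, dim, bottleneck=16):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(dim, bottleneck), nn.ReLU())
        self.dec = nn.Linear(bottleneck, dim)

    def forward(self, h):
        return self.dec(self.enc(h))

def detect_attacked_nodes(gcn, ae, a_hat, x, quantile=0.95):
    """Flag nodes whose hidden embedding reconstructs poorly.

    The quantile threshold is an assumed rule; in practice it would be
    calibrated on embeddings from clean (unperturbed) graphs.
    """
    with torch.no_grad():
        _, h = gcn(a_hat, x)
        err = ((ae(h) - h) ** 2).mean(dim=1)   # per-node reconstruction error
    threshold = torch.quantile(err, quantile)
    return err > threshold
```

Training the autoencoder on clean-graph embeddings only, then thresholding reconstruction error at inference time, is one plausible reading of "attack detection from latent information of each vertex"; the actual detection model in the report may differ.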
doi:10.11517/jsaikbs.123.0_13
fatcat:mz24el4lpvemtojamfwj44g3da