Adversarial attack detection on graph node classification using autoencoders for hidden layers in GCN
(Japanese title: Detection of adversarial attacks on node classification using autoencoding of GCN intermediate-layer outputs)

Kenta SHIMADA, Tomonobu OZAKI
JSAI Technical Report, SIG-KBS  
The vulnerability of Deep Neural Networks (DNNs) to adversarial attacks has become an issue in recent years. Attackers can degrade the accuracy of DNN models by introducing carefully computed perturbations into the input data. Graph Convolutional Networks (GCNs), a derivative of DNNs, have also been found to have similar vulnerabilities. In this study, we propose a method for adversarial attack detection in GCNs. More concretely, we build a model for attack detection from the latent information of each vertex obtained by applying autoencoders to the hidden layers. The effectiveness of the proposed method was evaluated using real-world graph datasets.
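The abstract does not give implementation details, so the following is only a minimal sketch of the general idea it describes: train an autoencoder on the per-node hidden-layer outputs of a GCN and use per-node reconstruction error as an attack-detection score. All module names, layer sizes, the toy data, and the percentile threshold are illustrative assumptions, not the authors' configuration.

```python
# Sketch (not the authors' code): autoencoder over GCN hidden-layer outputs,
# with per-node reconstruction error used as an attack-detection score.
import torch
import torch.nn as nn
import torch.nn.functional as F


def normalize_adj(adj: torch.Tensor) -> torch.Tensor:
    """Symmetric normalization D^{-1/2} (A + I) D^{-1/2} used by GCN."""
    a_hat = adj + torch.eye(adj.size(0))
    d_inv_sqrt = torch.diag(a_hat.sum(dim=1).pow(-0.5))
    return d_inv_sqrt @ a_hat @ d_inv_sqrt


class GCN(nn.Module):
    """Two-layer GCN; forward() also returns the hidden-layer output."""
    def __init__(self, in_dim, hid_dim, n_classes):
        super().__init__()
        self.w1 = nn.Linear(in_dim, hid_dim, bias=False)
        self.w2 = nn.Linear(hid_dim, n_classes, bias=False)

    def forward(self, x, a_norm):
        h = F.relu(a_norm @ self.w1(x))   # hidden representation per node
        return a_norm @ self.w2(h), h     # class logits, hidden outputs


class HiddenAE(nn.Module):
    """Autoencoder over per-node hidden representations."""
    def __init__(self, hid_dim, code_dim):
        super().__init__()
        self.enc = nn.Linear(hid_dim, code_dim)
        self.dec = nn.Linear(code_dim, hid_dim)

    def forward(self, h):
        return self.dec(torch.relu(self.enc(h)))


def detection_scores(gcn, ae, x, a_norm):
    """Per-node reconstruction error; large values suggest perturbed nodes."""
    with torch.no_grad():
        _, h = gcn(x, a_norm)
        return ((ae(h) - h) ** 2).mean(dim=1)


# Toy usage on a random graph: fit the AE to the hidden outputs of a
# (here untrained, for brevity) GCN, then flag nodes whose reconstruction
# error exceeds an illustrative 95th-percentile threshold.
n, d, hid, k = 50, 16, 8, 3
x = torch.randn(n, d)
adj = (torch.rand(n, n) < 0.1).float()
adj = ((adj + adj.t()) > 0).float()
a_norm = normalize_adj(adj)

gcn, ae = GCN(d, hid, k), HiddenAE(hid, 4)
opt = torch.optim.Adam(ae.parameters(), lr=1e-2)
for _ in range(200):
    _, h = gcn(x, a_norm)
    loss = F.mse_loss(ae(h.detach()), h.detach())
    opt.zero_grad()
    loss.backward()
    opt.step()

scores = detection_scores(gcn, ae, x, a_norm)
flagged = scores > scores.quantile(0.95)
print("flagged nodes:", flagged.nonzero().flatten().tolist())
```

In practice the GCN would first be trained for node classification on presumably clean data, and the autoencoder fitted to its hidden outputs, so that adversarially perturbed vertices stand out through unusually large reconstruction errors.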
doi:10.11517/jsaikbs.123.0_13