A copy of this work was available on the public web and has been preserved in the Wayback Machine; the capture dates from 2021.
The file type is application/pdf.
Adversarial Attack on Graph Neural Networks as An Influence Maximization Problem
[article], 2021, arXiv pre-print
Our formal analysis draws a connection between this type of attack and an influence maximization problem on the graph. ...
Graph neural networks (GNNs) have attracted increasing interest. ...
We formulate the problem of adversarial attack on GNNs as an optimization problem to maximize the misclassification rate. ...
arXiv:2106.10785v1
fatcat:jtivdltkkbchtoo42da34zwnze
Adversarial Diffusion Attacks on Graph-based Traffic Prediction Models
[article], 2021, arXiv pre-print
Our code is available at https://github.com/LYZ98/Adversarial-Diffusion-Attacks-on-Graph-based-Traffic-Prediction-Models ...
With the availability of massive traffic data, neural network-based deep learning methods, especially graph convolutional networks (GCNs), have demonstrated outstanding performance in mining spatio-temporal ...
Adversarial Attacks On Graphs Like other neural networks, GCN is vulnerable to adversarial attacks. ...
arXiv:2104.09369v1
fatcat:sqsdhzogavcqbjidnd2xbg7qcy
Adversarial Attacks on Neural Networks for Graph Data
2019, Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence
In this extended abstract we summarize the key findings and contributions of our work, in which we introduce the first study of adversarial attacks on attributed graphs, specifically focusing on models ...
In addition to attacks at test time, we tackle the more challenging class of poisoning/causative attacks, which focus on the training phase of a machine learning model. ...
Conclusion. We present the first work on adversarial attacks on graph neural networks. Our attacks target the nodes' features and the graph structure. ...
doi:10.24963/ijcai.2019/872
dblp:conf/ijcai/ZugnerAG19
fatcat:2hnen4ucjzgmbchp6bgxd7lqzu
Bluff: Interactively Deciphering Adversarial Attacks on Deep Neural Networks
[article], 2020, arXiv pre-print
We present Bluff, an interactive system for visualizing, characterizing, and deciphering adversarial attacks on vision-based neural networks. ...
Deep neural networks (DNNs) are now commonly used in many domains. ...
Existing work on interpreting adversarial attacks on deep neural networks often focuses on visualizing the activation patterns for a single adversarial input [6, 29]. ...
arXiv:2009.02608v2
fatcat:uxefisw4sza6vepf3l7sbimlwu
Adversarial Attacks and Defenses on Graphs: A Review, A Tool and Empirical Studies
[article], 2020, arXiv pre-print
As the extensions of DNNs to graphs, Graph Neural Networks (GNNs) have been demonstrated to inherit this vulnerability. ...
The repository enables us to conduct empirical studies to deepen our understanding of attacks and defenses on graphs. ...
In addition to graph neural networks, an adversary may attack other important graph algorithms, such as network embeddings, including LINE [59] and DeepWalk [51], and graph-based semi-supervised learning ...
arXiv:2003.00653v3
fatcat:q26p26cvezfelgjtksmi3fxrtm
Jointly Attacking Graph Neural Network and its Explanations
[article], 2021, arXiv pre-print
This finding motivates us to further initiate a new problem investigation: Whether a graph neural network and its explanations can be jointly attacked by modifying graphs with malicious desires? ...
Graph Neural Networks (GNNs) have boosted the performance for many graph-related tasks. ...
Motivated by the fact that GNNEXPLAINER can act as an inspection tool for graph adversarial perturbations, we further investigate a new problem: Whether a graph neural network and its explanations can ...
arXiv:2108.03388v1
fatcat:ahqvnvozjjac3pnobhao7qjm6y
Adversarial Attack on Large Scale Graph
[article], 2021, arXiv pre-print
We evaluate our attack method on four real-world graph networks by attacking several commonly used GNNs. ...
In addition, we present a practical metric named Degree Assortativity Change (DAC) to measure the impacts of adversarial attacks on graph data. ...
So far, much of the current work on attacking graph neural networks has concentrated on the node classification task [12], [13], [14], [15]. ...
arXiv:2009.03488v2
fatcat:coubuhvby5frjcqlg35ursb6e4
Robust Unsupervised Graph Representation Learning via Mutual Information Maximization
[article], 2022, arXiv pre-print
To tackle these problems, we further propose an effective mutual information estimator with subgraph-level summary and an efficient adversarial training strategy with only feature perturbations. ...
There are two main challenges in estimating GRR: 1) mutual information estimation on adversarially attacked graphs; 2) the high complexity of adversarial attacks that perturb node features and graph structure ...
Thus, considering only adversarial attacks on features can be a practical way to perform adversarial training on graph neural networks. ...
arXiv:2201.08557v1
fatcat:qphgm7jms5bmbamd27yo7obcre
Reinforcement Learning For Data Poisoning on Graph Neural Networks
[article], 2021, arXiv pre-print
We will study the novel problem of Data Poisoning (training time) attack on Neural Networks for Graph Classification using Reinforcement Learning Agents. ...
In the last two years, interest has surged in adversarial attacks on graphs, yet the graph classification setting remains nearly untouched. ...
... in the areas of Generative Adversarial Networks (GANs) and Adversarial Training [3], [4] has influenced a sub-field of Deep Learning focused on exactly the above, where the synthesis of viable attacks ...
arXiv:2102.06800v1
fatcat:xme7cul6wfdkpgre6efsu2q4hu
GNNGuard: Defending Graph Neural Networks against Adversarial Attacks
[article], 2020, arXiv pre-print
However, recent findings indicate that small, unnoticeable perturbations of graph structure can catastrophically reduce performance of even the strongest and most popular Graph Neural Networks (GNNs). ...
on heterophily graphs. ...
Background on graph neural networks. ...
arXiv:2006.08149v3
fatcat:xkwcnoezwffgvp642hfcixjqwa
Adversarial Attacks and Defenses in Images, Graphs and Text: A Review
[article], 2019, arXiv pre-print
As a result, we have witnessed increasing interest in studying attack and defense mechanisms for DNN models on different data types, such as images, graphs, and text. ...
Deep neural networks (DNN) have achieved unprecedented success in numerous machine learning tasks in various domains. ...
In contrast, the work (Wong & Kolter, 2017) transforms problem (8) into a linear programming problem and solves it by training an alternative neural network. ...
arXiv:1909.08072v2
fatcat:i3han24f3fdgpop45t4pmxcdtm
TDGIA: Effective Injection Attacks on Graph Neural Networks
[article], 2021, arXiv pre-print
Graph Neural Networks (GNNs) have achieved promising performance in various real-world applications. However, recent studies have shown that GNNs are vulnerable to adversarial attacks. ...
We present an analysis on the topological vulnerability of GNNs under GIA setting, based on which we propose the Topological Defective Graph Injection Attack (TDGIA) for effective injection attacks. ...
We thank Yang Yang for helpful discussions, Shuwen Liu and Yicheng Zhao for the dataset preparation work on KDD CUP 2020, and all participants of KDD CUP 2020 ML2 Track. ...
arXiv:2106.06663v1
fatcat:hajxxhjodrbibhegp7iubi7oba
Improving Adversarial Robustness for Free with Snapshot Ensemble
[article], 2021, arXiv pre-print
Adversarial training, as one of the few certified defenses against adversarial attacks, can be quite complicated and time-consuming, while the results might not be robust enough. ...
networks and the memory to store the results. ...
An adversarial attack is, therefore, an optimization procedure to solve the constrained maximization problem (1). ...
arXiv:2110.03124v1
fatcat:7gl4loxdprgwrofufvmko3qlsy
Correlation Analysis between the Robustness of Sparse Neural Networks and their Random Hidden Structural Priors
[article], 2021, arXiv pre-print
To test this hypothesis, we designed an empirical study with neural network models obtained through random graphs used as sparse structural priors for the networks. ...
Our hypothesis is that graph-theoretic properties, as a prior of neural network structures, are related to their robustness. ...
The resulting sparse neural network can be denoted as f, an image classifier on input x with f(x) = z_L(x), where L is the last or maximal layer of the graph. ...
arXiv:2107.06158v1
fatcat:jg5annrhkraqnarxpjj6tjscgy
Adversarial Attack and Defense: A Survey
2022, Electronics
This type of attack is called an adversarial attack, which greatly limits the deployment of deep neural networks in tasks with extremely high security requirements. ...
However, the latest research shows that deep neural networks are vulnerable to attacks from adversarial examples and output wrong results. ...
Then, to quantify the influence of pixel-value changes on the target classifier, JSMA proposed constructing an adversarial saliency map based on the Jacobian matrix, as shown below ...
doi:10.3390/electronics11081283
fatcat:qnsew6gionetzj2paexche375m
Showing results 1 — 15 out of 2,085 results