829 Hits in 4.1 sec

Transferring Robustness for Graph Neural Network Against Poisoning Attacks [article]

Xianfeng Tang, Yandong Li, Yiwei Sun, Huaxiu Yao, Prasenjit Mitra, Suhang Wang
2019 arXiv   pre-print
It is very challenging to design robust graph neural networks against poisoning attacks, and several efforts have been made.  ...  Graph neural networks (GNNs) are widely used in many applications. However, their robustness against adversarial attacks has been called into question.  ...  In general, graph neural networks refer to all deep learning methods for graph data [36].  ... 
arXiv:1908.07558v1 fatcat:bfxgnkerp5a2lnxdggrss4jfgy

Transferable Graph Backdoor Attack [article]

Shuiqiao Yang, Bao Gia Doan, Paul Montague, Olivier De Vel, Tamas Abraham, Seyit Camtepe, Damith C. Ranasinghe, Salil S. Kanhere
2022 arXiv   pre-print
Graph Neural Networks (GNNs) have achieved tremendous success in many graph mining tasks, benefiting from the message passing strategy that fuses the local structure and node features for better graph  ...  The core attack principle is to poison the training dataset with perturbation-based triggers that can lead to an effective and transferable backdoor attack.  ...  Recently, Graph Neural Networks (GNNs) have achieved great success in graph-structured data processing by learning effective graph representations via message passing strategies, which recursively aggregate  ...  (a minimal message-passing sketch follows this entry)
arXiv:2207.00425v2 fatcat:i55ezhpivfha3bh3sijz5dbjxm
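
The recursive neighbor aggregation mentioned in the snippet above can be illustrated with a minimal sketch of one mean-aggregation message-passing layer, assuming a dense adjacency matrix; the function and variable names are illustrative, not taken from any paper listed here.

    import numpy as np

    def message_passing_layer(A, H, W):
        """One round of mean-aggregation message passing.

        A: (n, n) binary adjacency matrix; H: (n, d_in) node features
        from the previous layer; W: (d_in, d_out) learnable weights.
        """
        A_hat = A + np.eye(A.shape[0])          # include each node's own message
        deg = A_hat.sum(axis=1, keepdims=True)  # degrees for mean aggregation
        H_agg = (A_hat @ H) / deg               # average self + neighbor features
        return np.maximum(H_agg @ W, 0.0)       # linear transform + ReLU

    # Toy usage: a 4-node path graph; stacking such layers aggregates recursively.
    A = np.array([[0,1,0,0],[1,0,1,0],[0,1,0,1],[0,0,1,0]], dtype=float)
    H_next = message_passing_layer(A, np.random.randn(4, 3), np.random.randn(3, 2))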

BinarizedAttack: Structural Poisoning Attacks to Graph-based Anomaly Detection [article]

Yulin Zhu, Yuni Lai, Kaifa Zhao, Xiapu Luo, Mingquan Yuan, Jian Ren, Kai Zhou
2022 arXiv   pre-print
Furthermore, we investigate the attack transferability of BinarizedAttack by employing it to attack other representation-learning-based GAD systems.  ...  In the transfer attack setting, BinarizedAttack is also shown to be effective and, in particular, can significantly change the node embeddings learned by the GAD systems.  ...  More recently, there are methods based on graph representation learning [9], [10], whose key component is to learn node embeddings via various techniques such as graph neural networks.  ...  (a greedy edge-flip sketch follows this entry)
arXiv:2106.09989v5 fatcat:gye6nrb46rce7gupsz5ds6yw34
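
BinarizedAttack treats structural poisoning as optimization over the binary entries of the adjacency matrix. A brute-force greedy stand-in for that idea is sketched below, assuming a caller-supplied attacker objective; the paper's actual method is gradient-guided, and loss_fn and all other names here are hypothetical.

    import numpy as np

    def greedy_edge_flip(A, loss_fn, budget):
        """Greedy structural poisoning: flip one binary adjacency entry at a time.

        loss_fn(A) is the attacker's objective (higher is better for the
        attacker); exhaustive search stands in for gradient-guided
        optimization over the binary edge variables.
        """
        A = A.copy()
        n = A.shape[0]
        for _ in range(budget):
            base = loss_fn(A)
            best_gain, best_edge = 0.0, None
            for i in range(n):
                for j in range(i + 1, n):
                    A[i, j] = A[j, i] = 1 - A[i, j]    # tentatively flip (i, j)
                    gain = loss_fn(A) - base
                    A[i, j] = A[j, i] = 1 - A[i, j]    # undo the flip
                    if gain > best_gain:
                        best_gain, best_edge = gain, (i, j)
            if best_edge is None:
                break                                  # no flip helps; stop early
            i, j = best_edge
            A[i, j] = A[j, i] = 1 - A[i, j]            # commit the best flip
        return A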

Backdoor Attacks to Graph Neural Networks [article]

Zaixi Zhang and Jinyuan Jia and Binghui Wang and Neil Zhenqiang Gong
2021 arXiv   pre-print
In this work, we propose the first backdoor attack on graph neural networks (GNNs). Specifically, we propose a subgraph-based backdoor attack on GNNs for graph classification.  ...  Moreover, we generalize a randomized-smoothing-based certified defense to defend against our backdoor attacks.  ...  [34] proposed to inject a backdoor into a neural network via fine-tuning, which does not require poisoning the training dataset. Yao et al.  ...  (a trigger-injection sketch follows this entry)
arXiv:2006.11165v4 fatcat:grtdn4jjqbh7zknr7ys63u7n4a
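
One simple way to realize a subgraph trigger of the kind described above is to attach a small clique to a training graph and relabel the poisoned graph to the attacker's target class. This is an illustrative construction under those assumptions, not necessarily the paper's exact trigger design.

    import numpy as np

    def inject_trigger(A, trigger_size=3, rng=None):
        """Attach a small clique as a backdoor trigger to a graph.

        Appends trigger_size fully connected trigger nodes and wires one of
        them to a randomly chosen existing node; at training time the
        attacker would also flip the poisoned graph's label to the target
        class.
        """
        rng = rng or np.random.default_rng()
        n = A.shape[0]
        m = n + trigger_size
        A_poisoned = np.zeros((m, m), dtype=float)
        A_poisoned[:n, :n] = A
        A_poisoned[n:, n:] = 1 - np.eye(trigger_size)  # clique among trigger nodes
        anchor = rng.integers(n)                       # attach trigger to the graph
        A_poisoned[anchor, n] = A_poisoned[n, anchor] = 1
        return A_poisoned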

Adversarial Attacks and Defenses on Graphs: A Review, A Tool and Empirical Studies [article]

Wei Jin, Yaxin Li, Han Xu, Yiqi Wang, Shuiwang Ji, Charu Aggarwal, Jiliang Tang
2020 arXiv   pre-print
As extensions of DNNs to graphs, Graph Neural Networks (GNNs) have been shown to inherit this vulnerability.  ...  Deep neural networks (DNNs) have achieved strong performance on a variety of tasks.  ...  Since poisoning attacks insert perturbations into the training graph, purification methods aim to clean the poisoned graph and learn robust graph neural network models on top of it (one such heuristic is sketched below).  ... 
arXiv:2003.00653v3 fatcat:q26p26cvezfelgjtksmi3fxrtm
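
A common purification heuristic, in the spirit the review describes, prunes edges between nodes with dissimilar features, since poisoning edges tend to connect dissimilar nodes. The sketch below assumes binary node features and uses Jaccard similarity; the threshold tau and all names are illustrative.

    import numpy as np

    def purify_graph(A, X, tau=0.0):
        """Drop edges whose endpoints have low Jaccard feature similarity.

        A: (n, n) binary adjacency matrix; X: (n, d) binary node features.
        Edges with similarity <= tau are treated as suspicious and removed
        before training the GNN on the purified graph.
        """
        A_clean = A.copy()
        rows, cols = np.nonzero(np.triu(A, k=1))       # each undirected edge once
        for i, j in zip(rows, cols):
            inter = np.minimum(X[i], X[j]).sum()
            union = np.maximum(X[i], X[j]).sum()
            sim = inter / union if union > 0 else 0.0
            if sim <= tau:
                A_clean[i, j] = A_clean[j, i] = 0      # prune suspicious edge
        return A_clean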

GNNGuard: Defending Graph Neural Networks against Adversarial Attacks [article]

Xiang Zhang, Marinka Zitnik
2020 arXiv   pre-print
The revised edges allow for robust propagation of neural messages in the underlying GNN.  ...  However, recent findings indicate that small, unnoticeable perturbations of graph structure can catastrophically reduce the performance of even the strongest and most popular Graph Neural Networks (GNNs).  ...  [20] improve the robustness of GNNs against poisoning attacks through transfer learning, but with the limitation of requiring several unperturbed graphs from a similar domain during training.  ...  (a similarity-based edge-reweighting sketch follows this entry)
arXiv:2006.08149v3 fatcat:xkwcnoezwffgvp642hfcixjqwa
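
GNNGuard's revision of edges can be approximated by weighting each existing edge with the nonnegative cosine similarity of its endpoints' hidden features, so that dissimilar, likely adversarial edges carry little message mass. A rough sketch under that assumption; the paper's actual pruning and memory mechanism differs.

    import numpy as np

    def reweight_edges(A, H, eps=1e-8):
        """Down-weight edges between dissimilar nodes before message passing.

        A: (n, n) binary adjacency; H: (n, d) hidden node features. Returns
        row-normalized edge weights in which low-similarity edges are
        suppressed.
        """
        Hn = H / (np.linalg.norm(H, axis=1, keepdims=True) + eps)
        S = np.clip(Hn @ Hn.T, 0.0, None)    # nonnegative cosine similarities
        W = A * S                            # keep only existing edges
        return W / (W.sum(axis=1, keepdims=True) + eps)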

Reinforcement Learning For Data Poisoning on Graph Neural Networks [article]

Jacob Dineen, A S M Ahsan-Ul Haque, Matthew Bielskas
2021 arXiv   pre-print
We study the novel problem of data poisoning (training-time) attacks on neural networks for graph classification using reinforcement learning agents.  ...  Adversarial machine learning has emerged as a substantial subfield of computer science due to a lack of robustness in the models we train, along with crowdsourcing practices that enable attackers to tamper  ...  Finely tuned and extensively trained neural networks may be less prone to poisoning attacks.  ... 
arXiv:2102.06800v1 fatcat:xme7cul6wfdkpgre6efsu2q4hu

Adversarial Attacks and Defenses in Images, Graphs and Text: A Review [article]

Han Xu, Yao Ma, Haochen Liu, Debayan Deb, Hui Liu, Jiliang Tang, Anil K. Jain
2019 arXiv   pre-print
Deep neural networks (DNNs) have achieved unprecedented success in numerous machine learning tasks across various domains.  ...  As a result, we have witnessed increasing interest in studying attack and defense mechanisms for DNN models on different data types, such as images, graphs, and text.  ...  Graph structure poisoning via meta-learning: previous graph attack works focus only on attacking a single victim node (the underlying bilevel objective is formalized below).  ... 
arXiv:1909.08072v2 fatcat:i3han24f3fdgpop45t4pmxcdtm
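
Structure poisoning via meta-learning is usually formalized as a bilevel problem: the attacker perturbs the graph, the victim retrains on it, and meta-gradients are taken through the (unrolled) training. A schematic statement, with notation of my own choosing:

    \min_{\hat{A} \in \Phi(A)} \; \mathcal{L}_{\mathrm{atk}}\left( f_{\theta^{*}}(\hat{A}, X) \right)
    \quad \text{s.t.} \quad
    \theta^{*} = \arg\min_{\theta} \; \mathcal{L}_{\mathrm{train}}\left( f_{\theta}(\hat{A}, X) \right),

where \Phi(A) is the set of graphs within the attacker's perturbation budget of the clean A; the meta-gradient \nabla_{\hat{A}} \mathcal{L}_{\mathrm{atk}} is obtained by differentiating through the inner training steps.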

Graph Structure Learning for Robust Graph Neural Networks [article]

Wei Jin, Yao Ma, Xiaorui Liu, Xianfeng Tang, Suhang Wang, Jiliang Tang
2020 arXiv   pre-print
Graph Neural Networks (GNNs) are powerful tools for representation learning on graphs.  ...  In particular, we propose a general framework, Pro-GNN, which can jointly learn a structural graph and a robust graph neural network model from the perturbed graph, guided by these properties.  ...  Objective function of Pro-GNN: intuitively, we can follow the preprocessing strategy [12, 32] to defend against adversarial attacks, first learning a graph from the poisoned graph via Eq.  ...  (a schematic form of the objective follows this entry)
arXiv:2005.10203v3 fatcat:6imhudo6rvhtxiwfamxzgqpdni
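
Schematically, Pro-GNN's joint objective combines closeness to the observed (poisoned) graph with sparsity, low-rank, and feature-smoothness priors, plus the downstream GNN loss; the rendering below is my paraphrase, with coefficients and exact constraints as in the paper:

    \min_{S,\, \theta} \; \| A - S \|_F^2
    \;+\; \alpha \| S \|_1
    \;+\; \beta \| S \|_{*}
    \;+\; \lambda \, \mathrm{tr}\!\left( X^{\top} \hat{L}_S X \right)
    \;+\; \gamma \, \mathcal{L}_{\mathrm{GNN}}(\theta, S, X, Y),

where S is the learned clean structure, \| S \|_{*} is the nuclear norm promoting low rank, and \hat{L}_S is the normalized Laplacian of S enforcing feature smoothness.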

Not All Low-Pass Filters are Robust in Graph Convolutional Networks

Heng Chang, Yu Rong, Tingyang Xu, Yatao Bian, Shiji Zhou, Xin Wang, Junzhou Huang, Wenwu Zhu
2021 Neural Information Processing Systems  
Graph Convolutional Networks (GCNs) are promising deep learning approaches for learning representations of graph-structured data.  ...  To this end, GCN-LFR can enhance the robustness of various kinds of GCN-based models against structural poisoning attacks in a plug-and-play manner.  ...  Adversarial attacks on graph neural networks have recently drawn unprecedented attention from researchers [47, 24, 49] (the low-pass filter at issue is sketched below).  ... 
dblp:conf/nips/ChangRXBZWHZ21 fatcat:viabflxikvg4bnl4rdbb5kt6pa
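
The low-pass filter in question is the GCN propagation operator itself. Below is a minimal sketch of that filter, assuming a dense adjacency matrix; the paper's contribution, GCN-LFR, concerns when such filters fail to be robust, which this sketch does not capture.

    import numpy as np

    def low_pass_filter(A, X):
        """Apply the GCN-style low-pass graph filter to node features X.

        Builds the renormalized adjacency D^{-1/2} (A + I) D^{-1/2}, whose
        eigenvalues lie in (-1, 1]; multiplying X by it attenuates the
        high-frequency (large Laplacian eigenvalue) components of the
        graph signal.
        """
        A_hat = A + np.eye(A.shape[0])
        d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
        A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
        return A_norm @ X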

Attacking Graph-based Classification via Manipulating the Graph Structure [article]

Binghui Wang, Neil Zhenqiang Gong
2019 arXiv   pre-print
We evaluate our attacks and compare them with a recent attack designed for graph neural networks.  ...  the existing attack for evading collective classification methods and some graph neural network methods.  ...  Graph neural networks: these methods aim to generalize neural networks to graph data. In particular, they learn feature vectors for nodes using neural networks and use them to classify nodes.  ... 
arXiv:1903.00553v2 fatcat:fkkvxas3andhpknrqrx4bhpthm

Backdoor Attacks and Countermeasures on Deep Learning: A Comprehensive Review [article]

Yansong Gao, Bao Gia Doan, Zhi Zhang, Siqi Ma, Jiliang Zhang, Anmin Fu, Surya Nepal, Hyoungshick Kim
2020 arXiv   pre-print
According to the attacker's capability and the affected stage of the machine learning pipeline, the attack surfaces are recognized to be wide and are formalized into six categorizations: code poisoning, outsourcing  ...  This work provides the community with a timely, comprehensive review of backdoor attacks and countermeasures on deep learning.  ...  The recent wave of backdoor attacks [6]-[8] against deep learning models (deep neural networks, DNNs) fits such insidious adversarial purposes exactly.  ... 
arXiv:2007.10760v3 fatcat:6i4za6345jedbe5ek57l2tgld4

Robust Graph Neural Networks using Weighted Graph Laplacian [article]

Bharat Runwal, Vivek, Sandeep Kumar
2022 arXiv   pre-print
Graph neural networks (GNNs) achieve remarkable performance in a variety of application domains. However, GNNs are vulnerable to noise and adversarial attacks on their input data.  ...  Making GNNs robust against noise and adversarial attacks is an important problem. Existing defense methods for GNNs are computationally demanding and do not scale.  ...  Referenced works: Adversarial attacks on graph neural networks via meta learning (2019); Daniel Zügner, Amir Akbarnejad, and Stephan Günnemann, Adversarial attacks on neural networks for graph data.  ...  (a weighted-Laplacian smoothness sketch follows this entry)
arXiv:2208.01853v1 fatcat:5lrszwt3qvfk7icv4lopwgnznq
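
The weighted graph Laplacian in the title is typically used through the Dirichlet energy, which measures how much features vary across weighted edges; defenses of this flavor learn edge weights that keep the energy low, down-weighting adversarial edges. A minimal sketch, with names of my own choosing:

    import numpy as np

    def laplacian_smoothness(W, H):
        """Dirichlet energy tr(H^T L H) under a weighted graph Laplacian.

        W: (n, n) nonnegative symmetric edge weights; H: (n, d) node
        features. L = D - W with D = diag(W @ 1). The value equals
        0.5 * sum_ij W_ij * ||h_i - h_j||^2, so small energy means features
        vary little across heavily weighted edges.
        """
        L = np.diag(W.sum(axis=1)) - W       # weighted graph Laplacian
        return float(np.trace(H.T @ L @ H))  # smoothness regularizer value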

Expressive 1-Lipschitz Neural Networks for Robust Multiple Graph Learning against Adversarial Attacks

Xin Zhao, Zeru Zhang, Zijie Zhang, Lingfei Wu, Jiayin Jin, Yang Zhou, Ruoming Jin, Dejing Dou, Da Yan
2021 International Conference on Machine Learning  
The combination of the norm-constrained f and W_l leads to a 1-Lipschitz neural network for expressive and robust multiple-graph learning.  ...  This paper proposes an attack-agnostic, graph-adaptive 1-Lipschitz neural network, ERNN, for improving the robustness of deep multiple-graph learning while achieving remarkable expressive power.  ...  We validate that the proposed robust learning strategies are transferable to other popular graph learning tasks in Appendix A.2.  ...  (a spectral-normalization sketch follows this entry)
dblp:conf/icml/ZhaoZZWJZJD021 fatcat:7thqnpakwvcm5emlwhxry2on4i
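
A standard way to obtain a 1-Lipschitz layer is spectral normalization: rescale each weight matrix so its largest singular value is at most one, then compose with 1-Lipschitz activations such as ReLU. This generic construction is sketched below; it is not ERNN's graph-adaptive parameterization.

    import numpy as np

    def spectral_normalize(W, n_iter=50):
        """Rescale a weight matrix so its spectral norm is at most 1.

        Estimates the top singular value by power iteration and divides it
        out; a chain of such layers with 1-Lipschitz activations is itself
        1-Lipschitz.
        """
        u = np.random.randn(W.shape[0])
        for _ in range(n_iter):              # power iteration on W W^T
            v = W.T @ u
            v /= np.linalg.norm(v) + 1e-12
            u = W @ v
            u /= np.linalg.norm(u) + 1e-12
        sigma = u @ W @ v                    # top singular value estimate
        return W / max(sigma, 1.0)           # only shrink, never expand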

Dataset Security for Machine Learning: Data Poisoning, Backdoor Attacks, and Defenses [article]

Micah Goldblum, Dimitris Tsipras, Chulin Xie, Xinyun Chen, Avi Schwarzschild, Dawn Song, Aleksander Madry, Bo Li, Tom Goldstein
2021 arXiv   pre-print
In addition to describing various poisoning and backdoor threat models and the relationships among them, we develop a unified taxonomy.  ...  The goal of this work is to systematically categorize and discuss a wide range of dataset vulnerabilities and exploits, approaches for defending against these threats, and an array of open problems in  ...  While feature collision attacks are most effective when deployed against transfer learning (the objective is formalized below), bilevel methods are highly effective against both transfer learning and end-to-end training.  ... 
arXiv:2012.10544v4 fatcat:2tpz6l2dpbgrjcyf5yxxv3pvii
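
A feature collision attack (e.g., Poison Frogs) crafts a poison x_p that stays close to a benign base example x_b in input space while colliding with a target x_t in the feature space f(.) of the pretrained network; because f is frozen under transfer learning, the fine-tuned classifier then misclassifies the target. Schematically, with my own notation:

    x_p \;=\; \arg\min_{x} \; \left\| f(x) - f(x_t) \right\|_2^2 \;+\; \beta \left\| x - x_b \right\|_2^2 .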
Showing results 1 — 15 out of 829 results