Graph Robustness Benchmark: Benchmarking the Adversarial Robustness of Graph Machine Learning [article] (arXiv pre-print, 2021)
Adversarial attacks on graphs have posed a major threat to the robustness of graph machine learning (GML) models. Naturally, there is an ever-escalating arms race between attackers and defenders. ...
To bridge this gap, we present the Graph Robustness Benchmark (GRB) with the goal of providing a scalable, unified, modular, and reproducible evaluation for the adversarial robustness of GML models. ...
Learning
Problem Definition: In graph machine learning, adversarial robustness refers to the ability of GML models to maintain their performance under potential adversarial attacks. ...
arXiv:2111.04314v1
fatcat:2vvox2265fgdxl4rpcbzncirum
Graph Structure Learning for Robust Graph Neural Networks [article] (arXiv pre-print, 2020)
Therefore, developing robust algorithms to defend adversarial attacks is of great significance. A natural idea to defend adversarial attacks is to clean the perturbed graph. ...
In particular, we propose a general framework Pro-GNN, which can jointly learn a structural graph and a robust graph neural network model from the perturbed graph guided by these properties. ...
ACKNOWLEDGEMENTS: This research is supported by the National Science Foundation (NSF) under grant numbers IIS1907704, IIS1928278, IIS1714741, IIS1715940, IIS1845081, IIS1909702 and CNS1815636. ...
arXiv:2005.10203v3
fatcat:6imhudo6rvhtxiwfamxzgqpdni
Robust Certification for Laplace Learning on Geometric Graphs [article] (arXiv pre-print, 2021)
Understanding and certifying the adversarial robustness of machine learning (ML) algorithms has attracted large amounts of attention from different research communities due to its crucial importance in ...
Graph Laplacian (GL)-based semi-supervised learning is one of the most used approaches for classifying nodes in a graph. ...
Acknowledgments: This material is based on research sponsored by the NSF grant DMS-1924935 and DMS-1952339, and the DOE grant DE-SC0021142. ...
arXiv:2104.10837v1
fatcat:kuttvg2li5bgll5uve4xspk3dy
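The GL-based semi-supervised learning this entry refers to can be illustrated with the classic harmonic-function formulation of Laplacian label propagation. This is a generic sketch, not the paper's certified variant; the toy path graph and its two labeled endpoints are assumptions for illustration:

```python
import numpy as np

# Toy graph: 5 nodes on a path 0-1-2-3-4; nodes 0 and 4 carry labels 0 and 1.
A = np.zeros((5, 5))
for i, j in [(0, 1), (1, 2), (2, 3), (3, 4)]:
    A[i, j] = A[j, i] = 1.0

D = np.diag(A.sum(axis=1))
L = D - A                       # unnormalized graph Laplacian

labeled = [0, 4]
unlabeled = [1, 2, 3]
y = np.array([0.0, 1.0])        # labels of nodes 0 and 4

# Harmonic solution: minimize f^T L f subject to f agreeing with the labels,
# which reduces to solving  L_uu f_u = -L_ul y  on the unlabeled block.
L_uu = L[np.ix_(unlabeled, unlabeled)]
L_ul = L[np.ix_(unlabeled, labeled)]
f_u = np.linalg.solve(L_uu, -L_ul @ y)
# f_u == [0.25, 0.5, 0.75]: the labels interpolate linearly along the path.
```

Thresholding `f_u` at 0.5 then yields the predicted class for each unlabeled node.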
Unveiling the potential of Graph Neural Networks for robust Intrusion Detection [article] (arXiv pre-print, 2021)
This unprecedented level of robustness is mainly induced by the capability of our GNN model to learn flow patterns of attacks structured as graphs. ...
Recent works propose the use of Machine Learning (ML) techniques for building such systems (e.g., decision trees, neural networks). ...
This paper reflects only the author's view; the European Commission is not responsible for any use that may be made of the information it contains. ...
arXiv:2107.14756v1
fatcat:ealxk4aozreolpasbtsikzahtq
A Regularized Attention Mechanism for Graph Attention Networks [article] (arXiv pre-print, 2020)
Using benchmark datasets, we demonstrate performance improvements on semi-supervised learning, using the proposed robust variant of GAT. ...
Graph attention networks (GAT), a recent addition to the broad class of feature learning models in graphs, utilizes the attention mechanism to efficiently learn continuous vector representations for semi-supervised ...
Consequently, machine learning formalisms for graph-structured data [5, 6] have become prominent, and are regularly being adopted for information extraction and analysis. ...
arXiv:1811.00181v2
fatcat:2li2pqrwpfcmjnjeioe7jun5c4
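The attention mechanism this entry builds on can be sketched as a single GAT-style attention head over a toy graph. This is a generic illustration with random weights, not the paper's regularized variant; graph, features, and parameters below are all made-up assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy graph with self-loops (GAT attends over each node's neighborhood plus itself).
A = np.array([[1, 1, 0, 0],
              [1, 1, 1, 0],
              [0, 1, 1, 1],
              [0, 0, 1, 1]], dtype=float)
X = rng.normal(size=(4, 3))        # node features
W = rng.normal(size=(3, 2))        # shared linear transform
a = rng.normal(size=(4,))          # attention vector over [W h_i || W h_j]

def leaky_relu(z, slope=0.2):
    return np.where(z > 0, z, slope * z)

H = X @ W
# e_ij = LeakyReLU(a^T [W h_i || W h_j]), computed for all pairs, then masked to edges.
E = leaky_relu((H @ a[:2])[:, None] + (H @ a[2:])[None, :])
E = np.where(A > 0, E, -np.inf)

# Row-wise softmax gives the attention coefficients alpha_ij (rows sum to 1,
# and alpha_ij = 0 wherever there is no edge).
alpha = np.exp(E - E.max(axis=1, keepdims=True))
alpha /= alpha.sum(axis=1, keepdims=True)

out = alpha @ H                    # attention-weighted aggregation of transformed features
```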
Robust Unsupervised Graph Representation Learning via Mutual Information Maximization [article] (arXiv pre-print, 2022)
Therefore, this paper focuses on robust unsupervised graph representation learning. ...
In particular, to quantify the robustness of GNNs without label information, we propose a robustness measure, named graph representation robustness (GRR), to evaluate the mutual information between adversarially ...
• Extensive experimental results over five benchmarks demonstrate that our method is capable of learning more robust node representations against adversarial attacks. ...
arXiv:2201.08557v1
fatcat:qphgm7jms5bmbamd27yo7obcre
Shift-Robust GNNs: Overcoming the Limitations of Localized Graph Training Data [article] (arXiv pre-print, 2021)
There has been a recent surge of interest in designing Graph Neural Networks (GNNs) for semi-supervised learning tasks. ...
We illustrate the effectiveness of SR-GNN in a variety of experiments with biased training datasets on common GNN benchmark datasets for semi-supervised learning, where we see that SR-GNN outperforms other ...
Acknowledgments and Disclosure of Funding ...
arXiv:2108.01099v2
fatcat:63smpspbd5bejfyzmw5pyyleki
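SR-GNN's exact regularizers are not reproduced in this snippet; as a hedged sketch of the general idea, one common way to penalize the gap between biased (localized) training representations and an unbiased sample is a central moment discrepancy, which compares means and higher central moments:

```python
import numpy as np

def cmd(x, y, K=3):
    """Central moment discrepancy between two samples (rows = examples):
    distance between means plus distances between the first K central moments."""
    d = np.linalg.norm(x.mean(axis=0) - y.mean(axis=0))
    cx, cy = x - x.mean(axis=0), y - y.mean(axis=0)
    for k in range(2, K + 1):
        d += np.linalg.norm((cx ** k).mean(axis=0) - (cy ** k).mean(axis=0))
    return d

rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, size=(1000, 8))   # biased training representations
iid   = rng.normal(0.0, 1.0, size=(1000, 8))   # unbiased sample, same distribution
shift = rng.normal(2.0, 1.0, size=(1000, 8))   # representations under covariate shift

# The discrepancy stays near zero for matched distributions and grows under shift,
# so adding it to the training loss pushes the encoder toward shift-invariance.
print(cmd(train, iid), cmd(train, shift))
```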
AutoBayes: Automated Bayesian Graph Exploration for Nuisance-Robust Inference [article] (arXiv pre-print, 2020)
We benchmark the framework on several public datasets, where we have access to subject and class labels during training, and provide analysis of its capability for subject-transfer learning with/without nuisance-invariant machine learning pipelines. ...
INTRODUCTION: The great advancement of deep learning techniques based on deep neural networks (DNN) has enabled more practical design of human-machine interfaces (HMI) through the analysis of the user's ...
arXiv:2007.01255v2
fatcat:su2j7qcsfndsnbxp4nb2dnv2c4
Shift-Robust Node Classification via Graph Adversarial Clustering [article] (arXiv pre-print, 2022)
Then a shift-robust classifier is optimized on the training graph and on adversarial samples from the target graph, which are generated by the cluster GNN. ...
We introduce an unsupervised cluster GNN on the target graph to group similar nodes by graph homophily. An adversarial loss with label information on the source graph is applied on top of the clustering objective. ...
Domain adaptation transfers machine learning models trained on the source domain to a related target domain. ...
arXiv:2203.15802v1
fatcat:2niqagpbxzapjao27al5z6fja4
On the Relationship between Heterophily and Robustness of Graph Neural Networks [article] (arXiv pre-print, 2021)
Empirical studies on the robustness of graph neural networks (GNNs) have suggested a relation between the vulnerabilities of GNNs to adversarial attacks and the increased presence of heterophily in perturbed ...
In this work, we formalize the relation between heterophily and robustness, bridging two topics previously investigated by separate lines of research. ...
We gratefully acknowledge the support of NVIDIA Corporation with ...
arXiv:2106.07767v2
fatcat:y2j3ox4ibbar3bl6coiesry32a
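The heterophily that perturbations introduce is commonly quantified via the edge homophily ratio, the fraction of edges joining same-label endpoints. A minimal sketch (the toy labels and edge list are illustrative assumptions, not data from the paper):

```python
def edge_homophily(edges, labels):
    """Fraction of edges whose endpoints share a label; 1 - h is the heterophily."""
    same = sum(1 for u, v in edges if labels[u] == labels[v])
    return same / len(edges)

labels = [0, 0, 1, 1]
edges = [(0, 1), (1, 2), (2, 3)]
print(edge_homophily(edges, labels))  # 2 of 3 edges connect same-label nodes -> 2/3
```

An adversarial edge insertion that links differently labeled nodes lowers this ratio, which is exactly the increase in heterophily the snippet describes.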
Towards an Efficient and General Framework of Robust Training for Graph Neural Networks [article] (arXiv pre-print, 2020)
... of the discrete graph domain. ...
... large-scale problems which were out of the reach of traditional robust training methods. ...
Recently, a few attempts have been carried out in the robust machine learning community to devise robust GNN training methods, e.g., adversarial training. ...
arXiv:2002.10947v1
fatcat:wlc3uamklvcwfcxfzq7quah4e4
Robust Graph Neural Networks via Ensemble Learning (Mathematics, 2022)
Therefore, in this paper, we propose a novel framework of graph ensemble learning based on knowledge passing (called GEL) to address the above issues. ...
Graph neural networks (GNNs) have demonstrated a remarkable ability in the task of semi-supervised node classification. ...
The authors would like to thank Lei Wang and Liuwei Fu for their help with the experiments. ...
doi:10.3390/math10081300
fatcat:fq6gwvyy6reivh4e45drfow6ga
Unsupervised Adversarially-Robust Representation Learning on Graphs [article] (arXiv pre-print, 2021)
Yet, the adversarial robustness of such pre-trained graph learning models remains largely unexplored. ...
We then formulate an optimization problem to learn the graph representation by carefully balancing the trade-off between the expressive power and the robustness (i.e., GRV) of the graph encoder. ...
When tackling the adversarial vulnerability problem, we focus on the robustness of graph representation learning. ...
arXiv:2012.02486v2
fatcat:yl6xyowln5dzvdupz6zwugygf4
Robustness of Graph Neural Networks at Scale [article] (arXiv pre-print, 2021)
Graph Neural Networks (GNNs) are increasingly important given their popularity and the diversity of applications. ...
Yet, existing studies of their vulnerability to adversarial attacks rely on relatively small graphs. We address this gap and study how to attack and defend GNNs at scale. ...
Acknowledgments and Disclosure of Funding: This research was supported by the Helmholtz Association under the joint research school "Munich School for Data Science - MUDS". ...
arXiv:2110.14038v3
fatcat:umiz3dcl4bcndkszki64saxu4m
GARNET: Reduced-Rank Topology Learning for Robust and Scalable Graph Neural Networks [article] (arXiv pre-print, 2022)
In this paper, we propose GARNET, a scalable spectral method to boost the adversarial robustness of GNN models. ...
Although there are several defense methods to improve GNN robustness by eliminating adversarial components, they may also impair the underlying clean graph structure that contributes to GNN training. ...
We show the hyperparameters of the reduced-rank approximation kernel and the adaptive filter learning kernel on different datasets under Nettack (1 perturbation per node), Metattack (10% perturbation ratio ...
arXiv:2201.12741v2
fatcat:eldaiuofvrf2pc75ssw7bfnw7m
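GARNET's full pipeline is not shown in this snippet; as a hedged illustration of the general reduced-rank idea, one can denoise a perturbed adjacency matrix by keeping only its leading spectral components, which preserve community structure while damping high-frequency adversarial edges. The toy two-community graph and noise model are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Clean two-community graph (two 5-cliques) plus random adversarial edges.
B = np.kron(np.eye(2), np.ones((5, 5))) - np.eye(10)
noise = np.triu((rng.random((10, 10)) < 0.2).astype(float), 1)
A_pert = np.clip(B + noise + noise.T, 0.0, 1.0)
np.fill_diagonal(A_pert, 0.0)

# Reduced-rank reconstruction: keep only the k spectral components with the
# largest-magnitude eigenvalues of the (symmetric) perturbed adjacency.
k = 2
vals, vecs = np.linalg.eigh(A_pert)
top = np.argsort(np.abs(vals))[::-1][:k]
A_low = (vecs[:, top] * vals[top]) @ vecs[:, top].T
```

A GNN would then be trained on (a sparsified version of) `A_low` instead of the perturbed graph.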
Showing results 1 — 15 out of 6,031 results