A copy of this work was preserved in the Wayback Machine; the capture dates from 2021. The file type is application/pdf.
Inference Attacks Against Graph Neural Networks
[article]
2021
arXiv
pre-print
In this paper, we systematically investigate the information leakage of the graph embedding by mounting three inference attacks. ...
We further propose an effective defense mechanism based on graph embedding perturbation to mitigate the inference attacks without noticeable performance degradation for graph classification tasks. ...
Graph Neural Networks. Many important real-world datasets are in the form of graphs, e.g., social networks [41], financial networks [30], and chemical networks [27]. ...
arXiv:2110.02631v1
fatcat:s6j7wgiganh5vkwohxxasalsdu
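The defense summarized above perturbs graph embeddings before release. As a minimal illustration (not the paper's exact mechanism), one can add zero-mean Laplace noise to each embedding dimension; the function and embedding values below are hypothetical:

```python
import math
import random

def perturb_embedding(embedding, scale=0.1, seed=0):
    """Add zero-mean Laplace noise to each dimension of a graph
    embedding before releasing it; a larger scale hides more
    information at the cost of downstream accuracy."""
    rng = random.Random(seed)

    def laplace_sample():
        # Inverse-CDF sampling: U uniform in (-0.5, 0.5),
        # X = -scale * sign(U) * ln(1 - 2|U|).
        u = rng.random() - 0.5
        sign = 1.0 if u >= 0 else -1.0
        return -scale * sign * math.log(1.0 - 2.0 * abs(u))

    return [z + laplace_sample() for z in embedding]

# A made-up 4-dimensional graph embedding.
z = [0.5, -1.2, 0.3, 0.8]
z_noisy = perturb_embedding(z, scale=0.1)
```

The `scale` parameter is the knob the abstract alludes to: small enough noise preserves graph-classification accuracy while degrading what an inference attack can recover.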
Node-Level Membership Inference Attacks Against Graph Neural Networks
[article]
2021
arXiv
pre-print
To fully utilize the information contained in graph data, a new family of machine learning (ML) models, namely graph neural networks (GNNs), has been introduced. ...
In this paper, we fill the gap by performing the first comprehensive analysis of node-level membership inference attacks against GNNs. ...
[33] propose a model stealing attack based on reinforcement learning, which relaxes assumptions on the dataset and model architecture.
Adversarial Attacks Against Graph Neural Networks. ...
arXiv:2102.05429v1
fatcat:45kowsfzrbdovilnm2qpywtyhi
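A common baseline for the node-level membership inference attacks analyzed in this line of work exploits model confidence: GNNs are usually more confident on nodes they were trained on. A hedged sketch (the function names and threshold are illustrative, not from the paper):

```python
import math

def softmax(logits):
    """Convert raw logits to a probability vector."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def looks_like_member(posteriors, threshold=0.9):
    """Confidence-threshold attack: flag a node as a training member
    when the model's top posterior probability exceeds the threshold."""
    return max(posteriors) >= threshold

member_posteriors = softmax([4.0, 0.1, 0.2])     # confident prediction
nonmember_posteriors = softmax([1.0, 0.8, 0.9])  # near-uniform prediction
```

Here `looks_like_member(member_posteriors)` is `True` while the near-uniform posterior is flagged as a non-member; real attacks refine this with shadow models, but the confidence gap is the underlying signal.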
Attacking Graph-based Classification via Manipulating the Graph Structure
[article]
2019
arXiv
pre-print
We evaluate our attacks and compare them with a recent attack designed for graph neural networks. ...
Evading a graph-based classification method enables an attacker to evade detection in security analytics and can be used as a privacy defense against inference attacks. ...
Moreover, our results on two real-world graphs show that our attacks can evade Sybil detection in social networks and defend against graph-based attribute inference attacks. ...
arXiv:1903.00553v2
fatcat:fkkvxas3andhpknrqrx4bhpthm
Guest Editorial Introduction to the Special Section on Scalability and Privacy in Social Networks
2020
IEEE Transactions on Network Science and Engineering
In "Image and Attribute Based Convolutional Neural Network Inference Attacks in Social Networks," Mei et al. propose a new framework to launch an inference attack against social networks, which is based ...
They also show the detailed configuration of Fully Connected Neural Networks (FCNNs) for inference attacks. ...
doi:10.1109/tnse.2019.2959674
fatcat:pm5hzj4pczddhmjlucpxhk75ey
10 Security and Privacy Problems in Self-Supervised Learning
[article]
2021
arXiv
pre-print
graph neural networks [29]. ...
[29] showed that an attacker can infer whether there exists an edge between two nodes in a graph via black-box access to a graph neural network trained on the graph. ...
arXiv:2110.15444v2
fatcat:mroo7j7dhvgf5cymtugmmulsxa
Survey of Attacks and Defenses on Edge-Deployed Neural Networks
[article]
2019
arXiv
pre-print
In this work, we cover the landscape of attacks on, and defenses of, neural networks deployed in edge devices, and provide a taxonomy of attacks and defenses targeting edge DNNs. ...
Furthermore, neural networks are vulnerable to adversarial attacks, which may cause misclassifications and violate the integrity of the output. ...
Defending against membership inference attacks has been explored in several works. ...
arXiv:1911.11932v1
fatcat:zihiqvq2tbd3zpuyvwqrrf5itq
Uncertainty-Matching Graph Neural Networks to Defend Against Poisoning Attacks
[article]
2020
arXiv
pre-print
Graph Neural Networks (GNNs), a generalization of neural networks to graph-structured data, are often implemented using message passing between the entities of a graph. ...
Interestingly, this uncoupling makes UM-GNN immune to evasion attacks by design, and achieves significantly improved robustness against poisoning attacks. ...
Problem Setup In this paper, we are interested in building graph neural networks that are robust to adversarial attacks on the graph structure. ...
arXiv:2009.14455v1
fatcat:iifp2dxh55fldidvejsti5sckm
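The message passing that this entry (and GNNs generally) relies on can be sketched in a few lines. This is a generic mean-aggregation round, not UM-GNN's specific update; the graph and feature values are made up:

```python
def message_pass(features, neighbors):
    """One round of mean-aggregation message passing: each node's new
    feature vector is the average of its own feature vector and the
    feature vectors sent by its neighbors."""
    updated = {}
    for node, feat in features.items():
        msgs = [features[n] for n in neighbors.get(node, [])] + [feat]
        dim = len(feat)
        updated[node] = [sum(m[i] for m in msgs) / len(msgs)
                         for i in range(dim)]
    return updated

# Triangle graph a-b-c with 2-dimensional node features.
feats = {"a": [1.0, 0.0], "b": [0.0, 1.0], "c": [1.0, 1.0]}
adj = {"a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b"]}
smoothed = message_pass(feats, adj)
```

Because message passing couples a node's output to its neighbors' features, perturbing the graph structure perturbs predictions too, which is exactly the attack surface the poisoning literature targets.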
Quantifying (Hyper) Parameter Leakage in Machine Learning
[article]
2020
arXiv
pre-print
Specifically, we use Bayesian Networks to capture uncertainty in estimating the target model under various extraction attacks based on the subjective notion of probability. ...
This provides a practical tool to infer actionable details about extracting black-box models and helps identify the best attack combination which maximises the knowledge extracted (or information leaked) ...
execution of Neural Networks. ...
arXiv:1910.14409v2
fatcat:kz2e5l37pbcj3gv7au65kndtjy
Table of Contents
2022
IEEE Transactions on Industrial Informatics
... Liu 467. Practical Membership Inference Attack Against Collaborative Inference in Industrial IoT ...
... Li 437. Multilevel Attention Based U-Shape Graph Neural Network for Point Clouds Learning ...
doi:10.1109/tii.2021.3113150
fatcat:h3dbl4itlrcophunkw4jstg44y
Quantifying Privacy Leakage in Graph Embedding
[article]
2021
arXiv
pre-print
For the first time, we quantify the privacy leakage in graph embeddings through three inference attacks targeting Graph Neural Networks. ...
We propose a membership inference attack to infer whether a graph node corresponding to an individual user's data was a member of the model's training data or not. ...
Traditional Deep Neural Networks fail to capture the nuances of structured data, but a specific class of algorithms, namely Graph Neural Networks (GNNs), has shown state-of-the-art performance on such ...
arXiv:2010.00906v2
fatcat:hqtdvzxncnbmpdx5sznqafecnu
Quantifying Privacy Leakage in Graph Embedding
2020
MobiQuitous 2020 - 17th EAI International Conference on Mobile and Ubiquitous Systems: Computing, Networking and Services
For the first time, we quantify the privacy leakage in graph embeddings through three inference attacks targeting Graph Neural Networks. ...
We propose a membership inference attack to infer whether a graph node corresponding to an individual user's data was a member of the model's training data or not. ...
Traditional Deep Neural Networks fail to capture the nuances of structured data, but a specific class of algorithms, namely Graph Neural Networks (GNNs), has shown state-of-the-art performance on such ...
doi:10.1145/3448891.3448939
fatcat:gvgughaumrhkzo2pkoyfymklrm
Stealing Links from Graph Neural Networks
[article]
2020
arXiv
pre-print
Recently, neural networks have been extended to graph data; the resulting models are known as graph neural networks (GNNs). ...
Specifically, given a black-box access to a GNN model, our attacks can infer whether there exists a link between any pair of nodes in the graph used to train the model. ...
Graph Neural Networks. Many important real-world datasets come in the form of graphs or networks, e.g., social networks, knowledge graphs, and chemical networks. ...
arXiv:2005.02131v2
fatcat:7nmav57hbrf53kwphte4qiooa4
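The black-box link-stealing idea described above rests on an observable signal: message passing tends to make connected nodes' predictions similar. A minimal sketch of that intuition (illustrative function names and posteriors, not the paper's actual attack pipeline):

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def predict_link(post_u, post_v, threshold=0.95):
    """Guess that an edge exists between two nodes when their
    black-box posteriors are highly similar."""
    return cosine(post_u, post_v) >= threshold

# Two nodes with near-identical posteriors vs. a dissimilar pair.
same_class = predict_link([0.9, 0.05, 0.05], [0.85, 0.10, 0.05])
diff_class = predict_link([0.9, 0.05, 0.05], [0.10, 0.10, 0.80])
```

The actual attacks use richer distance features and shadow training, but the similarity threshold above is the core signal that black-box query access leaks about training-graph edges.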
Topology Attack and Defense for Graph Neural Networks: An Optimization Perspective
[article]
2019
arXiv
pre-print
Graph neural networks (GNNs) which apply the deep neural networks to graph data have achieved significant performance for the task of semi-supervised node classification. ...
Our method yields higher robustness against both gradient-based and greedy attack methods without sacrificing classification accuracy on the original graph. ...
Some recent attention has been paid to the robustness of graph neural networks. ...
arXiv:1906.04214v3
fatcat:rkawhcjq3ngnfa3hmh53uyixvi
LPGNet: Link Private Graph Networks for Node Classification
[article]
2022
arXiv
pre-print
Graph convolutional networks (GCNs) are one such widely studied neural network architecture that performs well on this task. ...
In this paper, we present a new neural network architecture called LPGNet for training on graphs with privacy-sensitive edges. ...
Attacks on graph neural networks. Attacks that steal (infer) the edges of a graph neural network's training graph are a recent phenomenon. Duddu et al. ...
arXiv:2205.03105v1
fatcat:np4psn4nofbczpobr2cttqw5iu
Adapting Membership Inference Attacks to GNN for Graph Classification: Approaches and Implications
[article]
2021
arXiv
pre-print
Graph Neural Networks (GNNs) are widely adopted to analyse non-Euclidean data, such as chemical networks, brain networks, and social networks, modelling complex relationships and interdependency between ...
How to infer the membership of an entire graph record is yet to be explored. In this paper, we take the first step in MIA against GNNs for graph-level classification. ...
Among them, a family of algorithms, called Graph Neural Networks (GNNs), has achieved state-of-the-art performance by generalising neural networks for graphs [3], [4]. ...
arXiv:2110.08760v1
fatcat:urjcdkek6bd2lgxkswpfofsprq
Showing results 1 — 15 out of 8,506 results