A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2021; you can also visit the original URL. The file type is application/pdf.
Trustworthy AI: A Computational Perspective [article] (2021, arXiv pre-print)
In this work, we focus on six of the most crucial dimensions in achieving trustworthy AI: (i) Safety & Robustness, (ii) Non-discrimination & Fairness, (iii) Explainability, (iv) Privacy, (v) Accountability ...
For each dimension, we review the recent related technologies according to a taxonomy and summarize their applications in real-world systems. ...
Graph Neural Networks (GNNs) have been developed for graph-structured data and are applied in many real-world systems, such as social networks and the natural sciences. ...
arXiv:2107.06641v3
fatcat:ymqaxvzsoncqrcosj5mxcvgsuy
Link Prediction using Graph Neural Networks for Master Data Management [article] (2020, arXiv pre-print)
Predicting links between people using Graph Neural Networks requires more careful ethical and privacy considerations than the domains where GNNs have typically been applied so far. ...
We introduce novel methods for anonymizing data, model training, explainability and verification for Link Prediction in Master Data Management, and discuss our results. ...
Explainability: explainability methods in Graph Neural Networks tend to follow similar methods in text and images, namely identifying the features that are most significant for the predictions. ...
arXiv:2003.04732v2
fatcat:qfak6f4265gerl7yvj36nbl444
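The snippet above describes feature-attribution explainers: scoring input features by how much they influence a prediction. A minimal, model-agnostic sketch of that idea is perturbation-based importance; the function and variable names here are illustrative, not from the paper, and the toy linear model stands in for any black-box predictor.

```python
import numpy as np

def perturbation_importance(predict, x, eps=1e-4):
    """Score each input feature by how much a small perturbation of it
    changes the model's output -- a simple stand-in for the
    feature-attribution explainers described above."""
    base = predict(x)
    scores = np.zeros_like(x)
    for i in range(x.size):
        xp = x.copy()
        xp[i] += eps
        scores[i] = abs(predict(xp) - base) / eps
    return scores

# Toy model: the prediction depends strongly on feature 0, weakly on the rest.
w = np.array([5.0, 0.1, 0.1])
predict = lambda x: float(w @ x)
imp = perturbation_importance(predict, np.ones(3))
print(imp.argmax())  # 0: feature 0 is the most significant
```

GNN explainers apply the same principle to node features and edges, e.g. by masking edges and measuring the change in the prediction.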
Quantifying Privacy Leakage in Graph Embedding [article] (2021, arXiv pre-print)
For the first time, we quantify the privacy leakage in graph embeddings through three inference attacks targeting Graph Neural Networks. ...
Graph embeddings have been proposed to map graph data to low dimensional space for downstream processing (e.g., node classification or link prediction). ...
The blackbox setting considers the specific case of a downstream node classification task for convolution-kernel-based graph embeddings with a neural network. ...
arXiv:2010.00906v2
fatcat:hqtdvzxncnbmpdx5sznqafecnu
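To make the notion of a graph embedding concrete: the snippet above says embeddings map graph data to a low-dimensional space for downstream tasks. A classical (non-neural) example, not the attack target in this paper, is a spectral embedding built from the normalized adjacency matrix:

```python
import numpy as np

def spectral_embedding(adj, dim=2):
    """Map each node of a graph to a low-dimensional vector using the
    top eigenvectors of the symmetrically normalized adjacency matrix,
    for downstream tasks such as node classification or link prediction."""
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(deg, 1e-12)))
    norm_adj = d_inv_sqrt @ adj @ d_inv_sqrt
    # Eigenvectors for the largest eigenvalues carry the community structure.
    eigvals, eigvecs = np.linalg.eigh(norm_adj)
    return eigvecs[:, -dim:]  # shape: (num_nodes, dim)

# Two triangles joined by a single bridge edge: nodes 0-2 vs nodes 3-5.
adj = np.zeros((6, 6))
for u, v in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    adj[u, v] = adj[v, u] = 1.0

emb = spectral_embedding(adj, dim=2)
print(emb.shape)  # (6, 2)
```

The privacy concern the paper quantifies is that such low-dimensional vectors still encode enough structure for an adversary to infer graph properties, node membership, or links.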
Quantifying Privacy Leakage in Graph Embedding (2020, MobiQuitous 2020: 17th EAI International Conference on Mobile and Ubiquitous Systems: Computing, Networking and Services)
For the first time, we quantify the privacy leakage in graph embeddings through three inference attacks targeting Graph Neural Networks. ...
Graph embeddings have been proposed to map graph data to low dimensional space for downstream processing (e.g., node classification or link prediction). ...
Deep Learning and more precisely Convolutional Neural Networks have shown tremendous performance over non-graph data such as images by capturing the spatial relation between pixels of an image and extracting ...
doi:10.1145/3448891.3448939
fatcat:gvgughaumrhkzo2pkoyfymklrm
Personalized Advertising Computational Techniques: A Systematic Literature Review, Findings, and a Design Framework (2021, Information)
Finally, a design framework for personalized advertisement systems has been designed based on these findings. ...
are highlighted and pinpointed to help and inspire researchers in future work. ...
... and probability-based (Li and Lien, 2009) [82]; neural network (artificial neural network, ANN) and social graph-based (Qiu et al., 2009) [168]; weight-based and distance functions (sentiment analysis, tf-idf ...
doi:10.3390/info12110480
fatcat:53wmlsdlp5bmhffnv2fjst5vtq
TransMIA: Membership Inference Attacks Using Transfer Shadow Training [article] (2021, arXiv pre-print)
In this paper, we propose TransMIA (Transfer learning-based Membership Inference Attacks), which use transfer learning to perform membership inference attacks on the source model when the adversary is ...
However, no prior work has pointed out that transfer learning can strengthen privacy attacks on machine learning models. ...
Acknowledgment This work was supported by JSPS KAKENHI Grant Number JP19H04113, and by ERATO HASUO Metamathematics for Systems Design Project (No. JPMJER1603), JST. ...
arXiv:2011.14661v3
fatcat:kgfsrc7k6jhy5hm35jcracvdp4
Trustworthy Graph Neural Networks: Aspects, Methods and Trends [article] (2022, arXiv pre-print)
Graph neural networks (GNNs) have emerged as a series of competent graph learning methods for diverse real-world scenarios, ranging from daily applications like recommendation systems and question answering ...
In this survey, we introduce basic concepts and comprehensively summarise existing efforts for trustworthy GNNs from six aspects, including robustness, explainability, privacy, fairness, accountability ...
In this survey, the former kind of GNNs are called interpretable graph neural networks, and the latter kind of methods for explainability of GNNs are called explainers for graph neural networks. ...
arXiv:2205.07424v1
fatcat:f3iul7soqvgzbgaeqw7nhypbju
2020 Index IEEE Transactions on Knowledge and Data Engineering Vol. 32 (2021, IEEE Transactions on Knowledge and Data Engineering)
., +, TKDE July 2020, 1378-1392
Directed graphs
DAG: A General Model for Privacy-Preserving Data Mining. ..., +, TKDE Jan. 2020, 188-202
Convolutional neural nets
Flow Prediction in Spatio-Temporal Networks Based on Multitask Deep Learning. ...
doi:10.1109/tkde.2020.3038549
fatcat:75f5fmdrpjcwrasjylewyivtmu
On the Privacy Risks of Model Explanations [article] (2021, arXiv pre-print)
We extensively evaluate membership inference attacks based on feature-based model explanations, over a variety of datasets. ...
We investigate the privacy risks of feature-based model explanations using membership inference attacks: quantifying how much model predictions plus their explanations leak information about the presence ...
This is based on the method of Shokri et al. [35] , who formulate membership inference as a learning problem for the attacker and train a neural network to predict membership. ...
arXiv:1907.00164v6
fatcat:jn7ju6gzpvevffguye2sctr2iu
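The snippet above references the Shokri et al. recipe: the attacker trains shadow models on data whose membership it controls, then learns a classifier that predicts membership from model outputs. A deliberately simplified sketch of the same signal follows, with two assumptions: the shadow-model confidences are synthetic (members tend to score higher than non-members), and a confidence threshold replaces the neural-network attacker of the actual method.

```python
import numpy as np

rng = np.random.default_rng(0)

def shadow_confidences(n, member):
    # Toy stand-in for a shadow model: training-set members tend to
    # receive higher prediction confidence than non-members, which is
    # exactly the signal membership inference attacks exploit.
    loc = 0.9 if member else 0.6
    return np.clip(rng.normal(loc, 0.1, size=n), 0.0, 1.0)

# Attack training set built from shadow data with known membership labels.
conf = np.concatenate([shadow_confidences(500, True),
                       shadow_confidences(500, False)])
labels = np.concatenate([np.ones(500), np.zeros(500)])

# Simplest possible attacker: choose the confidence threshold that best
# separates members from non-members on the shadow data.
thresholds = np.linspace(0.0, 1.0, 101)
accs = [((conf >= t) == labels).mean() for t in thresholds]
best_t = thresholds[int(np.argmax(accs))]
print(f"best threshold {best_t:.2f}, shadow accuracy {max(accs):.2f}")
```

The paper's contribution is to show that appending feature-based explanations to the model's outputs gives this kind of attacker an even richer input.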
A Survey of Trustworthy Graph Learning: Reliability, Explainability, and Privacy Protection [article] (2022, arXiv pre-print)
to robustness, explainability, and privacy. ...
In this survey, we provide a comprehensive review of recent leading approaches in the TwGL field from three dimensions, namely, reliability, explainability, and privacy protection. ...
[Figure keywords: Trustworthy Graph Learning: Accuracy, Reliability, Explainability, Privacy Protection] Recent years have seen deep graph learning (DGL) based on graph neural networks (GNNs) making remarkable progress in ...
arXiv:2205.10014v2
fatcat:aobv34rwg5ehpka4fsuar2gm7i
LPGNet: Link Private Graph Networks for Node Classification [article] (2022, arXiv pre-print)
In this paper, we present a new neural network architecture called LPGNet for training on graphs with privacy-sensitive edges. ...
Deep neural networks are increasingly being used for node classification on graphs, wherein nodes with similar features have to be given the same label. ...
INTRODUCTION Graph neural networks (GNN) learn node representations from complex graphs similar to how convolutional neural networks do from grid-like images. ...
arXiv:2205.03105v1
fatcat:np4psn4nofbczpobr2cttqw5iu
Secure Image Inference using Pairwise Activation Functions (2021, IEEE Access)
For the past few years, polynomial approximation has been used to derive polynomials that approximate activation functions for image prediction or inference employing homomorphic encryption ...
Index terms: exploratory analysis, homomorphic encryption scheme, homomorphic image inference, pairwise functions, polynomial approximation, privacy-preserving machine learning. ...
the neural network for image classification. ...
doi:10.1109/access.2021.3106888
fatcat:5jqu6yjkl5hb3l2igp5e6a3nim
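The motivation in the snippet above is that homomorphic encryption schemes evaluate only additions and multiplications, so non-polynomial activations must be replaced by polynomials. One simple way to derive such an approximation, shown here as an illustration and not as this paper's pairwise-function method, is a least-squares fit over the expected input range:

```python
import numpy as np

# Fit a degree-3 polynomial to the sigmoid over [-4, 4]; under HE the
# polynomial can be evaluated on ciphertexts, while the sigmoid cannot.
x = np.linspace(-4.0, 4.0, 200)
sigmoid = 1.0 / (1.0 + np.exp(-x))
coeffs = np.polyfit(x, sigmoid, deg=3)   # least-squares cubic fit
approx = np.polyval(coeffs, x)
max_err = np.abs(approx - sigmoid).max()
print(f"max approximation error on [-4, 4]: {max_err:.3f}")
```

The fit is only valid on the chosen interval; inputs outside it can diverge badly, which is why the input range matters in HE inference pipelines.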
Adversary for Social Good: Protecting Familial Privacy through Joint Adversarial Attacks (2020, Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence and the Twenty-Eighth Innovative Applications of Artificial Intelligence Conference)
For example, implicit social relations such as family information may be exposed simply by the network structure and hosted face images through off-the-shelf graph neural networks (GNNs), which will be empirically ...
Second, to protect family privacy on social networks, we propose a novel adversarial attack algorithm that produces both adversarial features and graph under a given budget. ...
what additional information will be inferred from the social networks. ...
doi:10.1609/aaai.v34i07.6791
fatcat:lspxwtunjfhpjjov2vao63rbgu
Membership Inference Attack on Graph Neural Networks [article] (2021, arXiv pre-print)
Graph Neural Networks (GNNs), which generalize traditional deep neural networks on graph data, have achieved state-of-the-art performance on several graph analytical tasks. ...
We introduce two realistic settings for performing a membership inference (MI) attack on GNNs. ...
We will publish our code at the time of publication. ...
2 Background and Related Works: Graph Neural Networks, popularized by graph convolutional networks (GCNs) and their variants, ...
arXiv:2101.06570v3
fatcat:czknpvcdsvdwtkdlcrbk37dvdm
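The GCNs named in the snippet above propagate features with the well-known rule H' = σ(D̂^(-1/2) (A + I) D̂^(-1/2) H W) from Kipf and Welling. A minimal numpy sketch of one such layer, with toy data rather than anything from this paper:

```python
import numpy as np

def gcn_layer(adj, features, weight):
    """One graph convolution layer: H' = ReLU(A_norm @ H @ W), where
    A_norm is the self-loop-augmented, symmetrically normalized
    adjacency matrix (the Kipf & Welling GCN propagation rule)."""
    a_hat = adj + np.eye(adj.shape[0])            # add self-loops
    deg = a_hat.sum(axis=1)
    d_inv_sqrt = np.diag(deg ** -0.5)
    a_norm = d_inv_sqrt @ a_hat @ d_inv_sqrt      # symmetric normalization
    return np.maximum(a_norm @ features @ weight, 0.0)  # ReLU

# 4-node path graph, 3-dim input features, 2-dim output features.
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
X = np.random.default_rng(1).normal(size=(4, 3))
W = np.random.default_rng(2).normal(size=(3, 2))
H = gcn_layer(adj, X, W)
print(H.shape)  # (4, 2)
```

Because each node's output mixes its neighbors' features, a trained GCN memorizes neighborhood information, which is the surface the paper's membership inference attacks probe.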
Cross-domain fault localization: A case for a graph digest approach (2008, 2008 IEEE Internet Network Management Workshop (INM))
We present an inference-graph-digest based formulation of the problem. ...
The formulation not only explicitly models the inference accuracy and privacy requirements for discussing and reasoning over cross-domain problems, but also facilitates the re-use of existing fault localization ...
ACKNOWLEDGMENTS We thank the anonymous reviewers and our shepherd Keisuke Ishibashi for their constructive comments. Srikanth Kandula provided valuable input for our background research. ...
doi:10.1109/inetmw.2008.4660328
fatcat:4xjuzwoqfrfv7l7hiit5z2vhiy
Showing results 1 — 15 out of 4,239 results