Membership Inference Attack Susceptibility of Clinical Language Models [article]

Abhyuday Jagannatha, Bhanu Pratap Singh Rawat, Hong Yu
2021 arXiv   pre-print
We show that membership inference attacks on CLMs lead to non-trivial privacy leakages of up to 7%.  ...  We design and employ membership inference attacks to estimate the empirical privacy leaks for model architectures like BERT and GPT2.  ...  Samples: For membership inference experiments in CLMs, the sample S consists of the input sentence and the constructed language modeling target.  ...
arXiv:2104.08305v1 fatcat:a3n3hgwi6fd2jovuapcdha3gey
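
A minimal sketch of the general idea behind such attacks on a causal language model: score each candidate sample by the model's loss on it and flag unusually well-predicted samples as training members. The model name, threshold, and calibration below are illustrative assumptions, not the paper's exact setup.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")            # stand-in for a clinical GPT-2
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def lm_loss(text: str) -> float:
    # Per-token negative log-likelihood of the sample under the target model.
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(ids, labels=ids)                         # causal LM: labels = inputs
    return out.loss.item()

def is_training_member(text: str, threshold: float = 3.5) -> bool:
    # Flag as a member when the loss is unusually low; the threshold would be
    # calibrated on samples with known membership status.
    return lm_loss(text) < threshold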

Quantifying Privacy Risks of Masked Language Models Using Membership Inference Attacks [article]

Fatemehsadat Mireshghallah, Kartik Goyal, Archit Uniyal, Taylor Berg-Kirkpatrick, Reza Shokri
2022 arXiv   pre-print
We show that masked language models are extremely susceptible to likelihood ratio membership inference attacks: Our empirical results, on models trained on medical notes, show that our attack improves  ...  Prior attempts at measuring leakage of MLMs via membership inference attacks have been inconclusive, implying the potential robustness of MLMs to privacy attacks.  ...  We evaluate our proposed attack on a suite of masked clinical language models, following (Lehman et al., 2021).  ...
arXiv:2203.03929v1 fatcat:4l2lyxt3bjes7ckh5c5rsevm6a
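
The attack described here compares a sample's likelihood under the target masked LM with its likelihood under a reference model. A rough sketch using pseudo log-likelihood (mask each token in turn) follows; the shared tokenizer, model names, and threshold calibration are assumptions for illustration rather than the authors' implementation.

import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

def pseudo_log_likelihood(model, tokenizer, text: str) -> float:
    # Sum of log-probabilities of each token when that token alone is masked.
    ids = tokenizer(text, return_tensors="pt").input_ids[0]
    total = 0.0
    for pos in range(1, len(ids) - 1):                       # skip [CLS]/[SEP]
        masked = ids.clone()
        masked[pos] = tokenizer.mask_token_id
        with torch.no_grad():
            logits = model(masked.unsqueeze(0)).logits[0, pos]
        total += torch.log_softmax(logits, dim=-1)[ids[pos]].item()
    return total

def likelihood_ratio_score(text, target_model, reference_model, tokenizer) -> float:
    # Higher score = the sample fits the target model better than the
    # general-domain reference, which is evidence of training-set membership.
    return (pseudo_log_likelihood(target_model, tokenizer, text)
            - pseudo_log_likelihood(reference_model, tokenizer, text))

# Membership is then decided by thresholding the score, e.g. with a reference
# model such as bert-base-uncased and a target model fine-tuned on clinical notes.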

Are Clinical BERT Models Privacy Preserving? The Difficulty of Extracting Patient-Condition Associations

Thomas Vakili, Hercules Dalianis
2021 AAAI Fall Symposia  
This article explores whether BERT models trained on clinical data are susceptible to training data extraction attacks.  ...  Language models may be trained on data that contain personal information, such as clinical data. Such sensitive data must not leak for privacy reasons.  ...  Membership Inference Attacks / Unmasking Pseudonymized Training Data: If a language model M has been trained on a dataset D, then there is a risk that the model has memorized certain sensitive details.  ...
dblp:conf/aaaifs/VakiliD21 fatcat:zxdosrh6i5gkbn4ysrolorph24
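
One simple way to probe for the patient-condition associations discussed above is to ask the model to fill in a condition next to a (pseudonymized) patient name and check whether the true condition ranks suspiciously high. A sketch with the HuggingFace fill-mask pipeline; the model name and prompt are illustrative assumptions, not the authors' protocol.

from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")      # stand-in for a clinical BERT
prompt = "The patient John Doe was diagnosed with [MASK]."   # hypothetical probe
for candidate in fill(prompt, top_k=10):
    print(f"{candidate['token_str']:>15}  {candidate['score']:.4f}")

# If a specific patient's real condition appears with unusually high probability,
# that would suggest the model has memorized the patient-condition association.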

Making machine learning trustworthy

Birhanu Eshete
2021 Science  
[Figure labels: input x, output y, adversary; model stealing, model evasion, membership inference]  ...  Research has made progress from detecting poisoning and adversarial inputs to limiting what an adversary may learn by just interacting with a model, so as to limit the extent of model stealing or membership inference  ...
doi:10.1126/science.abi5052 fatcat:qjnee5ile5ftbbdgwvkh65dima

Membership Inference Attacks on Machine Learning: A Survey [article]

Hongsheng Hu and Zoran Salcic and Lichao Sun and Gillian Dobbie and Philip S. Yu and Xuyun Zhang
2022 arXiv   pre-print
For example, by identifying that a clinical record has been used to train a model associated with a certain disease, an attacker can infer that the owner of the clinical record has the disease  ...  However, recent studies have shown that ML models are vulnerable to membership inference attacks (MIAs), which aim to infer whether a data record was used to train a target model or not.  ...  Unlike model extraction attacks targeting the ML model, the attacker of an attribute inference attack, property inference attack or membership inference attack focuses on inferring private information  ...
arXiv:2103.07853v4 fatcat:fwdoonsgnfbhncxkgmra2xk32y
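
The classical black-box attack covered by this survey (in the style of Shokri et al.) trains shadow models on data with known membership and uses their confidence vectors to train a binary member/non-member classifier. A compact sketch, with the shadow models, data splits, and target-model API assumed to exist:

import numpy as np
from sklearn.linear_model import LogisticRegression

def build_attack_set(shadow_models, shadow_splits):
    # shadow_splits: list of (X_in, X_out) arrays used / not used to train each shadow.
    feats, labels = [], []
    for shadow, (X_in, X_out) in zip(shadow_models, shadow_splits):
        feats.append(shadow.predict_proba(X_in));  labels.append(np.ones(len(X_in)))
        feats.append(shadow.predict_proba(X_out)); labels.append(np.zeros(len(X_out)))
    return np.vstack(feats), np.concatenate(labels)

def train_attack_model(shadow_models, shadow_splits):
    X, y = build_attack_set(shadow_models, shadow_splits)
    return LogisticRegression(max_iter=1000).fit(X, y)

# At attack time: attack_model.predict(target_model.predict_proba(x)) guesses
# whether record x was part of the target model's training set.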

Towards Demystifying Membership Inference Attacks [article]

Stacey Truex, Ling Liu, Mehmet Emre Gursoy, Lei Yu, Wenqi Wei
2019 arXiv   pre-print
First, we provide a generalized formulation of the development of a black-box membership inference attack model.  ...  Membership inference attacks seek to infer membership of individual training instances of a model to which an adversary has black-box access through a machine learning-as-a-service API.  ...
arXiv:1807.09173v2 fatcat:5vmtqv5glndphomymzpm2k2rpu

When Machine Unlearning Jeopardizes Privacy [article]

Min Chen and Zhikun Zhang and Tianhao Wang and Michael Backes and Mathias Humbert and Yang Zhang
2020 arXiv   pre-print
We propose a novel membership inference attack which leverages the different outputs of an ML model's two versions to infer whether the deleted sample is part of the training set.  ...  More importantly, we show that our attack in multiple cases outperforms the classical membership inference attack on the original ML model, which indicates that machine unlearning can have counterproductive  ...  Previous studies [54, 63] have shown that overfitted models are more susceptible to classical membership inference attacks, while well-generalized models are almost immune to them.  ... 
arXiv:2005.02205v1 fatcat:nejb2afuqfe4pcrv463zgfjbna
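
The attack sketched in this entry queries both the original model and its unlearned version on a candidate record and lets a classifier decide, from the pair of posteriors, whether that record was deleted from the training set. The feature construction and classifier below are simplified assumptions.

import numpy as np
from sklearn.linear_model import LogisticRegression

def pair_features(p_original: np.ndarray, p_unlearned: np.ndarray) -> np.ndarray:
    # Concatenate the two posterior vectors and their element-wise difference.
    return np.concatenate([p_original, p_unlearned, p_original - p_unlearned])

def train_unlearning_attack(feature_rows, deleted_member_labels):
    # feature_rows: pair_features(...) for records whose deletion status is known
    # (e.g. obtained from shadow original/unlearned model pairs).
    return LogisticRegression(max_iter=1000).fit(np.vstack(feature_rows), deleted_member_labels)

# Intuition: a record that was deleted and then unlearned shifts the model's
# posterior between the two versions in a characteristic way.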

Survey: Leakage and Privacy at Inference Time [article]

Marija Jegorova, Chaitanya Kaul, Charlie Mayor, Alison Q. O'Neil, Alexander Weir, Roderick Murray-Smith, Sotirios A. Tsaftaris
2021 arXiv   pre-print
We focus on inference-time leakage, as the most likely scenario for publicly available models. We first discuss what leakage is in the context of different data, tasks, and model architectures.  ...  We provide a comprehensive survey of contemporary advances on several fronts, covering involuntary data leakage which is natural to ML models, potential malevolent leakage which is caused by privacy attacks  ...  Membership Inference Attacks: ML models currently do not fall under GDPR protection.  ...
arXiv:2107.01614v1 fatcat:76a724yzkjfvjisrokssl6assa

LOGAN: Membership Inference Attacks Against Generative Models [article]

Jamie Hayes, Luca Melis, George Danezis, Emiliano De Cristofaro
2018 arXiv   pre-print
In this paper, we present the first membership inference attacks against generative models: given a data point, the adversary determines whether or not it was used to train the model.  ...  We present attacks based on both white-box and black-box access to the target model, against several state-of-the-art generative models, over datasets of complex representations of faces (LFW), objects  ...  Membership Inference Attacks Against Generative Models: In this section, we present our membership inference attacks against generative models.  ...
arXiv:1705.07663v4 fatcat:amddvcw7i5gf5jeh6wzisgavd4
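
In the white-box setting, a GAN's own discriminator can serve as the membership score: samples seen during training tend to receive higher "real" confidence. A minimal sketch of that ranking step; the discriminator interface and the known number of members are assumptions.

import torch

def rank_by_discriminator(discriminator: torch.nn.Module, candidates: torch.Tensor):
    # Return candidate indices sorted from most to least likely training member.
    with torch.no_grad():
        scores = discriminator(candidates).squeeze(-1)       # higher = more "real"
    return torch.argsort(scores, descending=True)

# If the attacker knows n records are members, the top-n ranked candidates are
# predicted as members; in the black-box case a shadow GAN is trained on the
# target's generated samples and its discriminator is used instead.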

LOGAN: Membership Inference Attacks Against Generative Models

Jamie Hayes, Luca Melis, George Danezis, Emiliano De Cristofaro
2019 Proceedings on Privacy Enhancing Technologies  
In this paper, we present the first membership inference attacks against generative models: given a data point, the adversary determines whether or not it was used to train the model.  ...  We present attacks based on both white-box and black-box access to the target model, against several state-of-the-art generative models, over datasets of complex representations of faces (LFW), objects  ...  We present the first study of membership inference attacks on generative models.  ...
doi:10.2478/popets-2019-0008 dblp:journals/popets/HayesMDC19 fatcat:2x3xnelx3zf5tgkkufvgvubzzq

Smallest of Maximum to find α-predicate for Determining Cattle Health Conditions

2020 International Journal of Advanced Trends in Computer Science and Engineering  
The α-predicate is then reprocessed by defuzzification of the Tsukamoto model and produces handling suggestions and information on the cattle's condition.  ...  This study applies the fuzzy algorithm to an animal inspection expert system at the Cimanggu Animal Clinic.  ...  Inference System: Fuzzy inference systems are also known as fuzzy rule-based systems, fuzzy associative memory, fuzzy models, or fuzzy controllers [19].  ...
doi:10.30534/ijatcse/2020/192952020 fatcat:pi3bz3eptzblti4olfhacpnavy
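
The inference pipeline described above can be illustrated with a toy Tsukamoto example: each rule's firing strength (the α-predicate) is the minimum of its antecedent memberships, each monotone consequent maps α back to a crisp value, and the output is the weighted average. The membership functions and rules below are invented for illustration, not taken from the paper.

def mu_temp_high(t):                     # degree to which temperature (°C) is "high"
    return max(0.0, min(1.0, (t - 38.5) / 1.5))

def mu_appetite_low(a):                  # degree to which appetite (0-10 scale) is "low"
    return max(0.0, min(1.0, (5.0 - a) / 5.0))

def tsukamoto(rules):
    # rules: list of (alpha, z) pairs; weighted-average defuzzification.
    num = sum(alpha * z for alpha, z in rules)
    den = sum(alpha for alpha, _ in rules)
    return num / den if den else 0.0

temp, appetite = 39.6, 2.0
alpha1 = min(mu_temp_high(temp), mu_appetite_low(appetite))  # rule 1: temp high AND appetite low
alpha2 = 1.0 - mu_temp_high(temp)                            # rule 2: temp normal
z1 = 40 + 60 * alpha1    # "severity high": solve mu(z) = (z - 40) / 60 for z
z2 = 40 * (1 - alpha2)   # "severity low":  solve mu(z) = (40 - z) / 40 for z
severity = tsukamoto([(alpha1, z1), (alpha2, z2)])           # crisp 0-100 severity score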

Towards Automatic Generation of Shareable Synthetic Clinical Notes Using Neural Language Models [article]

Oren Melamud, Chaitanya Shivade
2019 arXiv   pre-print
Experiments using neural language models yield notes whose utility is close to that of the real ones in some clinical NLP tasks, yet leave ample room for future improvements.  ...  De-identification methods attempt to address these concerns but were shown to be susceptible to adversarial attacks.  ...  One example of such well-crafted attacks is the membership inference attack proposed by Shokri et al. (2017).  ...
arXiv:1905.07002v2 fatcat:m4txx3e6uzalhpvtjzjepojthi
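
The generation step behind this entry can be sketched by sampling continuations from a language model fine-tuned on real notes (the fine-tuning itself is omitted, and the model name, prompt, and decoding parameters are illustrative assumptions):

from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")            # stand-in for a note-tuned LM
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def synthesize_note(prompt: str = "DISCHARGE SUMMARY:", max_new_tokens: int = 200) -> str:
    # Nucleus sampling keeps the synthetic notes varied; the resulting corpus
    # would still be screened for utility and for memorized fragments.
    ids = tokenizer(prompt, return_tensors="pt").input_ids
    out = model.generate(ids, do_sample=True, top_p=0.95,
                         max_new_tokens=max_new_tokens,
                         pad_token_id=tokenizer.eos_token_id)
    return tokenizer.decode(out[0], skip_special_tokens=True)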

Active Data Pattern Extraction Attacks on Generative Language Models [article]

Bargav Jayaraman, Esha Ghosh, Huseyin Inan, Melissa Chase, Sambuddha Roy, Wei Dai
2022 arXiv   pre-print
With the wide availability of large pre-trained language model checkpoints, such as GPT-2 and BERT, the recent trend has been to fine-tune them on a downstream task to achieve state-of-the-art performance  ...  We further analyse the privacy impact of specific components, e.g. the decoding strategy, pertaining to this application through our attack settings.  ...
arXiv:2207.10802v1 fatcat:eq4wfmrpbjeshnjxqedpzujpom

Measuring Utility and Privacy of Synthetic Genomic Data [article]

Bristena Oprisanu and Georgi Ganev and Emiliano De Cristofaro
2021 arXiv   pre-print
We then measure privacy through the lens of membership inference attacks, i.e., inferring whether a record was part of the training data.  ...  Moreover, while some combinations of datasets and models produce synthetic data with distributions close to the real data, there often are target data points that are vulnerable to membership inference  ...
arXiv:2102.03314v2 fatcat:rho3sjwbjfeezd6e4avusn6byu
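
A common, very simple membership baseline in this setting flags a target record as a training member when some synthetic record lies unusually close to it. The sketch below is a generic nearest-record baseline, not the paper's exact test; the Hamming distance and threshold are illustrative choices for 0/1/2-encoded genomic data.

import numpy as np

def min_distance_to_synthetic(record: np.ndarray, synthetic: np.ndarray) -> float:
    # Hamming distance from the target record to the closest synthetic record.
    return float(np.min(np.count_nonzero(synthetic != record, axis=1)))

def infer_membership(record: np.ndarray, synthetic: np.ndarray, threshold: int) -> bool:
    # "Member" when a synthetic record nearly reproduces the target record.
    return min_distance_to_synthetic(record, synthetic) <= threshold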

Attribute Inference Attack of Speech Emotion Recognition in Federated Learning Settings [article]

Tiantian Feng and Hanieh Hashemi and Rajat Hebbar and Murali Annavaram and Shrikanth S. Narayanan
2022 arXiv   pre-print
However, recent works have demonstrated that FL approaches are still vulnerable to various privacy attacks like reconstruction attacks and membership inference attacks.  ...  To assess the information leakage of SER systems trained using FL, we propose an attribute inference attack framework that infers sensitive attribute information of the clients from shared gradients or  ...  of privacy attacks, including membership inference attacks [12] and reconstruction attacks [13], [14].  ...
arXiv:2112.13416v2 fatcat:nrw7l7r3p5ff3ac2n6akimqqoi
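
The gradient-based attribute inference idea above can be sketched as follows: updates shared by shadow clients with known sensitive attributes (e.g., gender) train a classifier that is then applied to a victim client's shared update. The flattened-gradient features and logistic-regression attack model are simplified assumptions; the paper's attack model differs.

import numpy as np
from sklearn.linear_model import LogisticRegression

def update_to_features(update_tensors) -> np.ndarray:
    # Flatten a client's shared gradient/weight update into one feature vector.
    return np.concatenate([np.asarray(t).ravel() for t in update_tensors])

def train_attribute_attack(shadow_updates, shadow_attributes):
    X = np.vstack([update_to_features(u) for u in shadow_updates])
    return LogisticRegression(max_iter=1000).fit(X, shadow_attributes)

# attack = train_attribute_attack(shadow_updates, shadow_genders)
# victim_attribute = attack.predict([update_to_features(victim_update)])[0]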
Showing results 1–15 of 2,878.