2,391 Hits in 4.3 sec

Membership Leakage in Label-Only Exposures [article]

Zheng Li, Yang Zhang
2021 arXiv   pre-print
In this paper, we propose decision-based membership inference attacks and demonstrate that label-only exposures are also vulnerable to membership leakage.  ...  However, these attacks can be easily mitigated if the model only exposes the predicted label, i.e., the final model decision.  ...  In general, our contributions can be summarized as follows: • We perform a systematic investigation on membership leakage in label-only exposures of ML models, and introduce decision-based membership  ... 
arXiv:2007.15528v3 fatcat:c3vlkyngrzbdtgsojboqgo6vye

Truth Serum: Poisoning Machine Learning Models to Reveal Their Secrets [article]

Florian Tramèr and Reza Shokri and Ayrton San Joaquin and Hoang Le and Matthew Jagielski and Sanghyun Hong and Nicholas Carlini
2022 arXiv   pre-print
Our attacks are effective across membership inference, attribute inference, and data extraction.  ...  Our results cast doubts on the relevance of cryptographic privacy guarantees in multiparty computation protocols for machine learning, if parties can arbitrarily select their share of training data.  ...  MEMBERSHIP INFERENCE ATTACKS Membership inference (MI) captures one of the most generic notions of privacy leakage in machine learning.  ... 
arXiv:2204.00032v1 fatcat:jw4py6obtnem3jxc5c2tals6py

An Analysis Of Protected Health Information Leakage In Deep-Learning Based De-Identification Algorithms [article]

Salman Seyedi, Li Xiong, Shamim Nemati, Gari D. Clifford
2021 arXiv   pre-print
Modern deep learning algorithms have the highest potential for such leakage due to the complexity of the models.  ...  Furthermore, we used different attacks, including the membership inference attack method, to attack the model.  ...  Even the membership inference attack did not improve the case for leakage. Figure 2: In this chart the ellipse represents data and the rectangle indicates the neural network model.  ... 
arXiv:2101.12099v2 fatcat:zfh2lrb3mzethffrhtvfnmww4u

A Defense Framework for Privacy Risks in Remote Machine Learning Service

Yang Bai, Yu Li, Mingchuang Xie, Mingyu Fan, Jiang Ming
2021 Security and Communication Networks  
Under this assumption, the privacy leakage risks from the curious server are neglected.  ...  In recent years, machine learning approaches have been widely adopted for many applications, including classification.  ...  parameters to achieve a balance between utility and privacy and also to evaluate the privacy leakage of training data at risk of exposure.  ... 
doi:10.1155/2021/9924684 fatcat:fqanrrvdcrf3feqomhdwkezxwy

TableGAN-MCA: Evaluating Membership Collisions of GAN-Synthesized Tabular Data Releasing [article]

Aoting Hu, Renjie Xie, Zhigang Lu, Aiqun Hu, Minhui Xue
2021 arXiv   pre-print
Different from those works focusing on discovering membership of a given data point, in this paper, we propose a novel Membership Collision Attack against GANs (TableGAN-MCA), which allows an adversary  ...  given only synthetic entries randomly sampled from a black-box generator to recover partial GAN training data.  ...  Minhui Xue was, in part, supported by the Australian Research Council (ARC) Discovery Project (DP210102670). Aiqun Hu and Minhui Xue are the corresponding authors of this paper.  ... 
arXiv:2107.13190v1 fatcat:5t3ucl76qbgdre66xey6jiewme

Survey: Leakage and Privacy at Inference Time [article]

Marija Jegorova, Chaitanya Kaul, Charlie Mayor, Alison Q. O'Neil, Alexander Weir, Roderick Murray-Smith, Sotirios A. Tsaftaris
2021 arXiv   pre-print
We focus on inference-time leakage, as the most likely scenario for publicly available models. We first discuss what leakage is in the context of different data, tasks, and model architectures.  ...  We provide a comprehensive survey of contemporary advances on several fronts, covering involuntary data leakage which is natural to ML models, potential malevolent leakage which is caused by privacy attacks  ...  Given only black-box access (output labels) to a model, the adversary can then infer the frequency of the sensitive feature in the dataset [92] .  ... 
arXiv:2107.01614v1 fatcat:76a724yzkjfvjisrokssl6assa

Towards Securing Machine Learning Models Against Membership Inference Attacks

Sana Ben Hamida, Hichem Mrabet, Sana Belguith, Adeeb Alhomoud, Abderrazak Jemai
2022 Computers Materials & Continua  
There is a need to secure ML models, especially in the training phase, to preserve the privacy of the training datasets and to minimise information leakage.  ...  In this paper, we introduce a countermeasure against membership inference attacks (MIA) on Convolutional Neural Networks (CNN) based on dropout and L2 regularization.  ...  [30], to assess privacy exposure relating to persons. In addition, Chen et al. [30] have evaluated DP uses and their efficiency as a solution to MIA in genomic data.  ... 
doi:10.32604/cmc.2022.019709 fatcat:td6yvyri2jbybkqgizfx76e3ta

"Why do so?" – A Practical Perspective on Machine Learning Security [article]

Kathrin Grosse, Lukas Bieringer, Tarek Richard Besold, Battista Biggio, Katharina Krombholz
2022 arXiv   pre-print
On the organizational level, while we find no predictors for threat exposure in our sample, the number of implemented defenses depends on exposure to threats or the expected likelihood of becoming a target.  ...  We also provide a detailed analysis of practitioners' replies on the relevance of individual machine learning attacks, unveiling complex concerns like unreliable decision making, business information leakage  ...  This work was supported by the Province of Upper Austria within the COMET program managed by FFG in the COMET S3AI module.  ... 
arXiv:2207.05164v1 fatcat:tm63kyi3s5b5vo4ranvfol7r4m

Privacy in Deep Learning: A Survey [article]

Fatemehsadat Mireshghallah, Mohammadkazem Taram, Praneeth Vepakomma, Abhishek Singh, Ramesh Raskar, Hadi Esmaeilzadeh
2020 arXiv   pre-print
The ever-growing advances of deep learning in many areas including vision, recommendation systems, natural language processing, etc., have led to the adoption of Deep Neural Networks (DNNs) in production  ...  In this survey, we review the privacy concerns brought by deep learning, and the mitigating techniques introduced to tackle these issues.  ...  Indirect (Inferred) Information Exposure As shown in figure 1, we categorize indirect attacks into 5 main groups of membership inference, model inversion, hyperparameter inference, parameter inference,  ... 
arXiv:2004.12254v5 fatcat:4w63htwzafhxxel2oq3z3pwwya

Deletion Inference, Reconstruction, and Compliance in Machine (Un)Learning [article]

Ji Gao, Sanjam Garg, Mohammad Mahmoody, Prashant Nalini Vasudevan
2022 arXiv   pre-print
In fact, the very act of deletion might make the deleted record more vulnerable to privacy attacks.  ...  However, privacy attacks could potentially become more devastating in this new setting, since an attacker could now access both the original model before deletion and the new model after the deletion.  ...  Such exposure, particularly in certain (e.g., medical/political) contexts could be a major concern.  ... 
arXiv:2202.03460v1 fatcat:rv7i5zkvkfbmtea57or45tzxsa

Mitigating Membership Inference Attacks by Self-Distillation Through a Novel Ensemble Architecture [article]

Xinyu Tang, Saeed Mahloujifar, Liwei Song, Virat Shejwalkar, Milad Nasr, Amir Houmansadr, Prateek Mittal
2021 arXiv   pre-print
Membership inference attacks are a key measure to evaluate privacy leakage in machine learning (ML) models.  ...  We use an adaptive inference strategy at test time: our ensemble architecture aggregates the outputs of only those models that did not contain the input sample in their training data.  ...  Membership leakage in label-only exposures. In Proceedings of the 2021 ACM SIGSAC Conference on Computer and Communications Security, 2021.  ... 
arXiv:2110.08324v1 fatcat:ptafarthlbejtebfffmbb6fghy

Revisiting Membership Inference Under Realistic Assumptions [article]

Bargav Jayaraman, Lingxiao Wang, Katherine Knipmeyer, Quanquan Gu, David Evans
2021 arXiv   pre-print
We study membership inference in settings where some of the assumptions typically used in previous research are relaxed.  ...  First, we consider skewed priors, to cover cases such as when only a small fraction of the candidate pool targeted by the adversary are actually members and develop a PPV-based metric suitable for this  ...  Choo et al. [2020] recently proposed a label-only membership inference attack which is similar to Merlin in the sense that they also use the model's behavior on neighboring points as part of a membership  ... 
arXiv:2005.10881v5 fatcat:nlt2xp7bured3beiv3jfka22ve

Deletion inference, reconstruction, and compliance in machine (un)learning

Ji Gao, Sanjam Garg, Mohammad Mahmoody, Prashant Nalini Vasudevan
2022 Proceedings on Privacy Enhancing Technologies  
In fact, the very act of deletion might make the deleted record more vulnerable to privacy attacks.  ...  However, privacy attacks could potentially become more devastating in this new setting, since an attacker could now access both the original model before deletion and the new model after the deletion.  ...  Sanjam Garg was supported in part by DARPA under Agreement No. HR00112020026, AFOSR Award FA9550-19-1-0200, NSF CNS Award 1936826, and research grants by the Sloan Foundation, and Visa Inc.  ... 
doi:10.56553/popets-2022-0079 fatcat:bkwrfigrqvdbvl6yf6c7n4nbva

Machine Learning Based Cyber Attacks Targeting on Controlled Information: A Survey [article]

Yuantian Miao, Chao Chen, Lei Pan, Qing-Long Han, Jun Zhang, Yang Xiang
2021 arXiv   pre-print
Stealing attacks against controlled information, along with the increasing number of information leakage incidents, have become an emerging cyber security threat in recent years.  ...  This survey presents the recent advances in this new type of attack and corresponding countermeasures.  ...  During the reconnaissance, adversaries can only obtain labels predicted by the target model with given inputs.  ... 
arXiv:2102.07969v1 fatcat:h4br22tpjre2lisc4zbzpy2iee

Research on Safety Technology of Chemicals Transportation and Storage

Lisha Sun
2017 Chemical Engineering Transactions  
In the present study, addressing the problem that there is currently no systematic study of China's hazardous chemical transportation safety and management situation, based on abundant statistics  ...  Addressing the problem that the risks of dangerous chemicals transport are difficult to quantify, combined with traffic accident data, the study examined dangerous chemicals risk assessment indexes  ...  Moreover, the minimum hazmat accident probability path and the least population exposure path are searched by adopting an impedance-adjusting node-labelling shortest path algorithm and a link-labelling shortest  ... 
doi:10.3303/cet1759196 doaj:f80802f5ee11421f90236f58a37f1ccb fatcat:meuzelspg5hk7k6w3eiarzi4o4