
Invert and Defend: Model-based Approximate Inversion of Generative Adversarial Networks for Secure Inference [article]

Wei-An Lin and Yogesh Balaji and Pouya Samangouei and Rama Chellappa
2019 arXiv   pre-print
Under mild assumptions, we theoretically show that using InvGAN, we can approximately invert the generation from any latent code of a trained GAN model.  ...  In this paper, we propose InvGAN - a novel framework for solving the inference problem in GANs, which involves training an encoder network capable of inverting a pre-trained generator network without access  ...  In summary, the main contributions of this work are as follows: (1) a model-based and data-free approach for approximately inverting pre-trained generator networks.  ... 
arXiv:1911.10291v1 fatcat:luysb425gzeqnfppuinzogkbeu
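
The InvGAN entry above describes a model-based, data-free way to invert a pre-trained generator by training an encoder network. A minimal sketch of that idea in PyTorch follows; the encoder architecture, image size, function names, and hyper-parameters are illustrative assumptions, not details from the paper.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

def train_inverter(G, latent_dim=128, img_shape=(3, 64, 64),
                   steps=10_000, batch=64, lr=1e-4, device="cpu"):
    """Learn an encoder E with E(G(z)) ~= z, using only the frozen generator G."""
    G = G.to(device).eval()
    for p in G.parameters():
        p.requires_grad_(False)                  # the pre-trained generator stays fixed

    # Illustrative encoder; any image-to-latent network would do.
    E = nn.Sequential(
        nn.Flatten(),
        nn.Linear(math.prod(img_shape), 512), nn.ReLU(),
        nn.Linear(512, latent_dim),
    ).to(device)
    opt = torch.optim.Adam(E.parameters(), lr=lr)

    for _ in range(steps):
        z = torch.randn(batch, latent_dim, device=device)   # sample latent codes
        with torch.no_grad():
            x = G(z)                                         # self-generated "training data"
        loss = F.mse_loss(E(x), z)                           # teach E to recover z
        opt.zero_grad()
        loss.backward()
        opt.step()
    return E
```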

Defending Neural Backdoors via Generative Distribution Modeling [article]

Ximing Qiao, Yukun Yang, Hai Li
2019 arXiv   pre-print
In this work, we propose the max-entropy staircase approximator (MESA), an algorithm for high-dimensional sampling-free generative modeling, and use it to recover the trigger distribution.  ...  Neural backdoor attacks are emerging as a severe security threat to deep learning, while the capability of existing defense methods is limited, especially for complex backdoor triggers.  ...  Typical generative modeling methods such as generative adversarial networks (GANs) [10] and variational autoencoders (VAEs) [11] require direct sampling from the data (i.e., triggers) distribution,  ... 
arXiv:1910.04749v2 fatcat:s2gq56l4hvddxbjni6mdth5f4i

One Parameter Defense – Defending against Data Inference Attacks via Differential Privacy [article]

Dayong Ye and Sheng Shen and Tianqing Zhu and Bo Liu and Wanlei Zhou
2022 arXiv   pre-print
Machine learning models are vulnerable to data inference attacks, such as membership inference and model inversion attacks.  ...  The experimental results show the method to be an effective and timely defense against both membership inference and model inversion attacks with no reduction in accuracy.  ...  We also greatly appreciate the PhD candidate Shuai Zhou for his experimental support.  ... 
arXiv:2203.06580v1 fatcat:5iob4gctlrfvjcks2r7ntexyvu
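
The entry above defends against membership inference and model inversion by perturbing what the model releases, using differential privacy. The sketch below shows a generic, hypothetical variant of that idea: Laplace noise is added to the confidence vector before release while the predicted label is taken from the unperturbed scores, so top-1 accuracy is preserved. The paper's actual one-parameter mechanism and its privacy accounting are not reproduced here.

```python
import torch

def release_confidences(model, x, epsilon=1.0):
    """Return the (unchanged) predicted label plus a noisy confidence vector."""
    with torch.no_grad():
        probs = torch.softmax(model(x), dim=-1)
    label = probs.argmax(dim=-1)                               # prediction stays exact
    noise = torch.distributions.Laplace(0.0, 1.0 / epsilon).sample(probs.shape)
    noisy = torch.clamp(probs + noise, min=1e-6)
    noisy = noisy / noisy.sum(dim=-1, keepdim=True)            # re-normalize to a distribution
    return label, noisy
```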

The Devil is in the GAN: Defending Deep Generative Models Against Backdoor Attacks [article]

Ambrish Rawat, Killian Levacher, Mathieu Sinn
2021 arXiv   pre-print
We show its effectiveness for a variety of DGM architectures (Generative Adversarial Networks (GANs), Variational Autoencoders (VAEs)) and data domains (images, audio).  ...  We also investigate the effectiveness of different defensive approaches (based on static/dynamic model and output inspections) and prescribe a practical defense strategy that paves the way for safe usage  ...  Acknowledgements This project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 951911. References  ... 
arXiv:2108.01644v1 fatcat:6thc7wxadfhu3nw3uaara375d4

Adversarially Robust Classification by Conditional Generative Model Inversion [article]

Mitra Alirezaei, Tolga Tasdizen
2022 arXiv   pre-print
While the method is related to Defense-GAN, the use of a conditional generative model and inversion in our model instead of the feed-forward classifier is a critical difference.  ...  These methods are successful in defending against gradient-based attacks; however, they are easily circumvented by attacks which either do not use the gradient or which approximate and use the  ...  We invert a conditional generator to classify images. First, a conditional generative adversarial network (cGAN) [20] is trained to model the distribution of unperturbed images.  ... 
arXiv:2201.04733v1 fatcat:pgx2b6wqbzfiznggkm636xbz6a
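
The snippet above explains classification by inverting a conditional generator: a cGAN models the distribution of clean images, and a test image is assigned to the class whose generator best reconstructs it. A hedged sketch of that inversion loop follows; the latent dimension, optimizer, and step counts are assumptions, not the paper's settings.

```python
import torch

def classify_by_inversion(G, x, num_classes, latent_dim=100, steps=200, lr=0.05):
    """For each candidate label y, search for a latent code z such that G(z, y)
    reconstructs x; return the label with the lowest reconstruction error.
    G is a pre-trained conditional generator (e.g., a cGAN)."""
    best_label, best_err = 0, float("inf")
    for y in range(num_classes):
        z = torch.zeros(1, latent_dim, requires_grad=True)
        opt = torch.optim.Adam([z], lr=lr)
        label = torch.tensor([y])
        err = None
        for _ in range(steps):
            opt.zero_grad()
            err = torch.mean((G(z, label) - x) ** 2)   # reconstruction error for this class
            err.backward()
            opt.step()
        if err.item() < best_err:
            best_err, best_label = err.item(), y
    return best_label
```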

Deep Neural Networks are Surprisingly Reversible: A Baseline for Zero-Shot Inversion [article]

Xin Dong, Hongxu Yin, Jose M. Alvarez, Jan Kautz, Pavlo Molchanov
2021 arXiv   pre-print
Understanding the behavior and vulnerability of pre-trained deep neural networks (DNNs) can help to improve them.  ...  The crux of our method is to invert the DNN in a divide-and-conquer manner while re-syncing the inverted layers via cycle-consistency guidance with the help of synthesized data.  ...  Inverting ResNet-18. We start with ResNet-18 inversion. For this experiment we base the network on the implementation and pre-trained model of ResNet-18 from the PyTorch model zoo [97].  ... 
arXiv:2107.06304v1 fatcat:ohamubvcjffxdlbe7sbioxpu2y

On the Security and Privacy in Federated Learning [article]

Gorka Abad, Stjepan Picek, Víctor Julio Ramírez-Durán, Aitor Urbieta
2022 arXiv   pre-print
This work assesses the confidentiality, integrity, and availability (CIA) of FL by reviewing the state-of-the-art (SoTA) and creating a threat model that covers the attack surface, adversarial actors, capabilities, and goals.  ...  Recent privacy awareness initiatives such as the EU General Data Protection Regulation have subjected Machine Learning (ML) to privacy and security assessments.  ...  Other approaches [161, 162] leveraged Generative Adversarial Networks (GANs) [42] for generating poisoned data.  ... 
arXiv:2112.05423v2 fatcat:qcovp2cz2rfgbcvx6mtx5xighe

Adversarial Machine Learning for Cybersecurity and Computer Vision: Current Developments and Challenges [article]

Bowei Xi
2021 arXiv   pre-print
For example, deep neural networks fail to correctly classify adversarial images, which are generated by adding imperceptible perturbations to clean images. We first discuss three main categories of attacks  ...  We provide a comprehensive overview of adversarial machine learning focusing on two application domains, i.e., cybersecurity and computer vision.  ...  (Weng et al., 2018) proposed a robustness metric for DNNs based on  ...  It is an ongoing effort to secure machine learning models.  ... 
arXiv:2107.02894v1 fatcat:ir7vzxh3wfaddcmgezqtyxu7iy

Confidential Inference via Ternary Model Partitioning [article]

Zhongshu Gu, Heqing Huang, Jialong Zhang, Dong Su, Hani Jamjoom, Ankita Lamba, Dimitrios Pendarakis, Ian Molloy
2020 arXiv   pre-print
...  an enclave-based model serving system for online confidential inference in the cloud.  ...  We have conducted comprehensive security and performance evaluation on three representative ImageNet-level deep learning models with different network depths and architectural complexity.  ...  The pairs can be further utilized to approximate a surrogate inverse model ϕ⁻¹.  ... 
arXiv:1807.00969v3 fatcat:y5fxdsexh5dwdklaqg62gj5hxy

Security and Privacy Issues in Deep Learning [article]

Ho Bae, Jaehee Jang, Dahuin Jung, Hyemi Jang, Heonseok Ha, Hyungyu Lee, Sungroh Yoon
2021 arXiv   pre-print
To promote secure and private artificial intelligence (SPAI), we review studies on the model security and data privacy of DNNs.  ...  Security attacks can be divided based on when they occur: if an attack occurs during training, it is known as a poisoning attack, and if it occurs during inference (after training) it is termed an evasion  ...  They modeled that difference using generative adversarial networks.  ... 
arXiv:1807.11655v4 fatcat:k7mizsqgrfhltktu6pf5htlmy4

Using Non-invertible Data Transformations to Build Adversarial-Robust Neural Networks [article]

Qinglong Wang, Wenbo Guo, Alexander G. Ororbia II, Xinyu Xing, Lin Lin, C. Lee Giles, Xue Liu, Peng Liu, Gang Xiong
2016 arXiv   pre-print
More importantly, we present a unifying framework for protecting deep neural models using a non-invertible data transformation--developing two adversary-resilient architectures utilizing both linear and  ...  However, despite their superior performance in many applications, these models have been recently shown to be susceptible to a particular type of attack possible through the generation of particular synthetic  ...  Following the inversion procedure of Section V, this adversary can further reconstruct adversarial samples based on these lower dimensional adversarial mappings.  ... 
arXiv:1610.01934v5 fatcat:w6iydvy4n5fotoe6eordckwynu
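
The entry above builds adversary-resilient models by placing a non-invertible data transformation in front of the classifier. A toy sketch of one such front end, a fixed wide-to-narrow random projection, is given below; the dimensions are illustrative, and this is not necessarily the transformation the paper develops.

```python
import torch
import torch.nn as nn

class LossyProjection(nn.Module):
    """Fixed wide-to-narrow linear map: many inputs collapse to the same code,
    so the transformation has no exact inverse."""
    def __init__(self, in_dim=784, out_dim=64):
        super().__init__()
        # Non-trainable projection matrix, registered as a buffer.
        self.register_buffer("P", torch.randn(in_dim, out_dim) / out_dim ** 0.5)

    def forward(self, x):
        return x.view(x.size(0), -1) @ self.P

# A downstream classifier would be trained on LossyProjection()(x) rather than on raw inputs.
```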

Improving Robustness to Model Inversion Attacks via Mutual Information Regularization [article]

Tianhao Wang, Yuheng Zhang, Ruoxi Jia
2020 arXiv   pre-print
Our defense principle is model-agnostic and we present tractable approximations to the regularizer for linear regression, decision trees, and neural networks, which have been successfully attacked by prior  ...  Our experiments demonstrate that MID leads to state-of-the-art performance for a variety of MI attacks, target models and datasets.  ...  Conclusion We propose a defense against MI attacks based on regularizing the mutual information between the model input and prediction and further present tractable approximations to the regularizer for  ... 
arXiv:2009.05241v2 fatcat:4jsnu7g6xfgflfmvga77iysyny
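
The MID entry above regularizes the mutual information between the model input and its prediction. One common tractable surrogate for neural networks is a variational information-bottleneck style penalty; the sketch below uses that surrogate, which may differ from the paper's exact approximation, and its layer sizes and weighting are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VIBClassifier(nn.Module):
    """Classifier with a stochastic bottleneck z; the KL term upper-bounds the
    mutual information between the input and z and acts as the regularizer."""
    def __init__(self, in_dim=784, bottleneck=64, num_classes=10):
        super().__init__()
        self.enc = nn.Linear(in_dim, 2 * bottleneck)   # outputs mean and log-variance
        self.head = nn.Linear(bottleneck, num_classes)

    def forward(self, x):
        mu, logvar = self.enc(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterization
        return self.head(z), mu, logvar

def mid_style_loss(logits, y, mu, logvar, beta=1e-3):
    ce = F.cross_entropy(logits, y)
    # KL( N(mu, sigma^2) || N(0, I) ), averaged over the batch.
    kl = 0.5 * torch.mean(torch.sum(mu ** 2 + logvar.exp() - logvar - 1.0, dim=-1))
    return ce + beta * kl                               # beta trades utility against leakage
```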

MimicGAN: Corruption-Mimicking for Blind Image Recovery & Adversarial Defense [article]

Rushil Anirudh, Jayaraman J. Thiagarajan, Bhavya Kailkhura, Timo Bremer
2018 arXiv   pre-print
We present MimicGAN, an unsupervised technique to solve general inverse problems based on image priors in the form of generative adversarial networks (GANs).  ...  We also demonstrate that MimicGAN improves upon recent GAN-based defenses against adversarial attacks and represents one of the strongest test-time defenses available today.  ...  In particular, we consider the important problems of blind image recovery and defending against adversarial attacks.  ... 
arXiv:1811.08484v1 fatcat:7yxefwgjh5dj7pybdfzmj2unji
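
MimicGAN solves inverse problems with a GAN prior while "mimicking" the unknown corruption. A hedged sketch of such an alternating scheme follows, with a toy convolutional surrogate standing in for the corruption model; the architecture and hyper-parameters are assumptions rather than the paper's configuration.

```python
import torch
import torch.nn as nn

def mimic_recover(G, y_obs, latent_dim=100, outer=50, inner=10, lr=0.05):
    """Alternately fit a small corruption surrogate f and a latent code z so
    that f(G(z)) matches the corrupted observation y_obs; G(z) then serves as
    the estimate of the clean image."""
    z = torch.randn(1, latent_dim, requires_grad=True)
    f = nn.Conv2d(3, 3, kernel_size=3, padding=1)             # toy corruption surrogate
    opt_z = torch.optim.Adam([z], lr=lr)
    opt_f = torch.optim.Adam(f.parameters(), lr=lr)

    for _ in range(outer):
        for _ in range(inner):                                 # fit the corruption surrogate
            opt_f.zero_grad()
            loss = torch.mean((f(G(z).detach()) - y_obs) ** 2)
            loss.backward()
            opt_f.step()
        for _ in range(inner):                                 # refine the latent code
            opt_z.zero_grad()
            loss = torch.mean((f(G(z)) - y_obs) ** 2)
            loss.backward()
            opt_z.step()
    with torch.no_grad():
        return G(z)                                            # recovered clean image
```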

Reaching Data Confidentiality and Model Accountability on the CalTrain [article]

Zhongshu Gu, Hani Jamjoom, Dong Su, Heqing Huang, Jialong Zhang, Tengfei Ma, Dimitrios Pendarakis, Ian Molloy
2018 arXiv   pre-print
To support building accountable learning models, we securely maintain the links between training instances and their corresponding contributors.  ...  In this paper, we introduce CALTRAIN, a Trusted Execution Environment (TEE) based centralized multi-party collaborative learning system that simultaneously achieves data confidentiality and model accountability  ...  We have observed some recent research efforts (such as Model Inversion Attack [29], Membership Inference Attack [30], and Generative Adversarial Network (GAN) Attack [31]) to infer or approximate  ... 
arXiv:1812.03230v1 fatcat:boywhcunwfcybj6ze2dapfhbgq

Reducing Risk of Model Inversion Using Privacy-Guided Training [article]

Abigail Goldsteen, Gilad Ezov, Ariel Farkash
2020 arXiv   pre-print
We present a solution for countering model inversion attacks in tree-based models, by reducing the influence of sensitive features in these models.  ...  Our evaluation confirms that training models in this manner reduces the risk of inference for those features, as demonstrated through several black-box and white-box attacks.  ...  More recently, a generative adversarial network (GAN) based black-box model inversion attack has been shown to be effective against deep-learning models such as convolutional neural networks (Avodji et  ... 
arXiv:2006.15877v1 fatcat:zxykftgcjrgo3lfwmlsoszkici
Showing results 1 — 15 out of 667 results