
Defend Deep Neural Networks Against Adversarial Examples via Fixed and Dynamic Quantized Activation Functions [article]

Adnan Siraj Rakin, Jinfeng Yi, Boqing Gong, Deliang Fan
2019 arXiv   pre-print
Recent studies have shown that deep neural networks (DNNs) are vulnerable to adversarial attacks.  ...  To the best of our knowledge, this is the first work that uses quantization of activation functions to defend against adversarial examples.  ...  We use Zeroth Order Stochastic Coordinate Descent with Coordinate-wise ADAM.  ... 
arXiv:1807.06714v2 fatcat:4kv3wl5rmvg3dccff6ly2pnvz4
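The quantization defense described above is easy to illustrate: a staircase activation snaps continuous values to a small set of levels, removing the smooth response that small adversarial perturbations exploit. Below is a minimal sketch of a fixed (static) quantized ReLU in PyTorch; the function name and the level/range parameters are illustrative assumptions, not the paper's exact fixed or dynamic schemes.

```python
import torch

def quantized_relu(x: torch.Tensor, n_levels: int = 4, x_max: float = 2.0) -> torch.Tensor:
    """Staircase ReLU: clamp to [0, x_max], then snap to n_levels evenly
    spaced steps. Between steps the activation is locally constant, which
    blunts small gradient-guided input perturbations."""
    step = x_max / (n_levels - 1)
    x = torch.clamp(x, 0.0, x_max)
    return torch.round(x / step) * step
```

Note that such a hard staircase has zero gradient almost everywhere, so training typically pairs it with a straight-through estimator for backpropagation.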

Towards Evaluating the Robustness of Deep Diagnostic Models by Adversarial Attack

Mengting Xu, Tao Zhang, Zhongnian Li, Mingxia Liu, Daoqiang Zhang
2021 Medical Image Analysis  
Deep learning models (with neural networks) have been widely used in challenging tasks such as computer-aided disease diagnosis based on medical images.  ...  labels for both original/clean images and those adversarial ones.  ...  We then review recent adversarial attack and defense methods on natural and medical images.  ... 
doi:10.1016/j.media.2021.101977 pmid:33550005 fatcat:dyyp4d24hvduto4gknjufups7e

Privacy and Security Issues in Deep Learning: A Survey

Ximeng Liu, Lehui Xie, Yaopeng Wang, Jian Zou, Jinbo Xiong, Zuobin Ying, Athanasios V. Vasilakos
2020 IEEE Access  
Deep Learning (DL) algorithms based on artificial neural networks have achieved remarkable success and are being extensively applied in a variety of application domains, ranging from image classification  ...  INDEX TERMS Deep learning, DL privacy, DL security, model extraction attack, model inversion attack, adversarial attack, poisoning attack, adversarial defense, privacy-preserving.  ...  [177] proposed a defense mechanism, deep image restoration networks, to defend against a wide range of recently proposed adversarial attacks.  ... 
doi:10.1109/access.2020.3045078 fatcat:kbpqgmbg4raerc6txivacpgcia

Improved and Interpretable Defense to Transferred Adversarial Examples by Jacobian Norm with Selective Input Gradient Regularization [article]

Deyin Liu, Lin Wu, Lingqiao Liu, Haifeng Zhao, Farid Boussaid, Mohammed Bennamoun
2022 arXiv   pre-print
As such, we achieve both the improved defense and high interpretability of DNNs. Finally, we evaluate our method across different architectures against powerful adversarial attacks.  ...  Experiments demonstrate that the proposed J-SIGR confers improved robustness against transferred adversarial attacks, and we also show that the predictions from the neural network are easy to interpret  ...  Attacks: We consider two widely adopted gradient-based attacks and one Jacobian-based attack. a) Fast Gradient Sign Method (FGSM) [2]: This method can generate adversarial examples by perturbing the inputs  ... 
arXiv:2207.13036v3 fatcat:yb4hbi22nzfmbnb2vvvfs7co5a
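The input-gradient regularization this entry builds on can be written as a one-line penalty: add the squared norm of the loss gradient with respect to the input to the task loss, so small input perturbations change the loss less. The sketch below is the generic penalty in PyTorch; it omits the paper's Jacobian-norm and selective-regularization components, and all names are illustrative.

```python
import torch

def loss_with_input_grad_penalty(model, loss_fn, x, y, lam: float = 0.1):
    """Generic input-gradient regularization: task loss plus lam times the
    squared L2 norm of d(loss)/d(input). create_graph=True keeps the
    penalty differentiable so it can be trained through."""
    x = x.clone().detach().requires_grad_(True)
    task_loss = loss_fn(model(x), y)
    (input_grad,) = torch.autograd.grad(task_loss, x, create_graph=True)
    return task_loss + lam * input_grad.pow(2).sum()
```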

Adversarial Examples in Modern Machine Learning: A Review [article]

Rey Reza Wiyatno, Anqi Xu, Ousmane Dia, Archy de Berker
2019 arXiv   pre-print
We explore a variety of adversarial attack methods that apply to image-space content, real world adversarial attacks, adversarial defenses, and the transferability property of adversarial examples.  ...  We also discuss strengths and weaknesses of various methods of adversarial attack and defense.  ...  [50] proposed a defense mechanism based on stochastic policy called Stochastic Activation Pruning (SAP) which randomly prunes or drops some activations (i.e., set to 0) during test time.  ... 
arXiv:1911.05268v2 fatcat:majzak4sqbhcpeahghh6sm3dwq
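The Stochastic Activation Pruning idea quoted above can be approximated in a few lines: keep each activation with probability tied to its magnitude, zero the rest, and rescale the survivors so the layer's expected output is preserved. This is a simplified Bernoulli approximation in PyTorch, not the authors' exact multinomial sampling scheme.

```python
import torch

def sap_layer(a: torch.Tensor, keep_frac: float = 0.5) -> torch.Tensor:
    """Simplified SAP at test time: sampling probabilities proportional to
    |activation|, a Bernoulli survival mask, then inverse-probability
    rescaling so the expected output matches the unpruned layer."""
    p = a.abs() / a.abs().sum().clamp(min=1e-12)       # magnitude-weighted distribution
    n_draws = max(1, int(keep_frac * a.numel()))
    survive = 1.0 - (1.0 - p) ** n_draws               # prob. a unit survives n_draws samples
    mask = torch.bernoulli(survive.clamp(max=1.0))
    return a * mask / survive.clamp(min=1e-12)
```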

Towards Efficient and Effective Adversarial Training

Gaurang Sriramanan, Sravanti Addepalli, Arya Baburaj, Venkatesh Babu R.
2021 Neural Information Processing Systems  
The vulnerability of Deep Neural Networks to adversarial attacks has spurred immense interest towards improving their robustness.  ...  While the recent single-step defenses show promising direction, their robustness is not on par with multi-step training methods.  ...  to solutions obtained using traditional Stochastic Gradient Descent (SGD).  ... 
dblp:conf/nips/SriramananABR21 fatcat:hotsjfe4fneezpe3tozhak2dai

Towards Robust Neural Networks via Random Self-ensemble [article]

Xuanqing Liu, Minhao Cheng, Huan Zhang, Cho-Jui Hsieh
2018 arXiv   pre-print
We show that our algorithm is equivalent to ensembling an infinite number of noisy models f_ε without any additional memory overhead, and the proposed training procedure based on noisy stochastic gradient  ...  For instance, on CIFAR-10 with a VGG network (which has 92% accuracy without any attack), under the strong C&W attack within a certain distortion tolerance, the accuracy of the unprotected model drops to less  ...  Conclusion: In this paper, we propose a new defense algorithm called Random Self-Ensemble (RSE) to improve the robustness of deep neural networks against adversarial attacks.  ... 
arXiv:1712.00673v2 fatcat:4moeg4v47jbrtkjjz3wobg7cma
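The Random Self-Ensemble recipe in this entry has two moving parts: inject noise into the network at both training and test time, and average the softmax outputs of several noisy forward passes when predicting. A minimal PyTorch sketch; the layer sizes and noise scale are illustrative assumptions.

```python
import torch
import torch.nn as nn

class NoisyLinear(nn.Module):
    """Linear layer with additive Gaussian input noise, active at both
    training and inference (the core ingredient of RSE-style defenses)."""
    def __init__(self, d_in: int, d_out: int, sigma: float = 0.1):
        super().__init__()
        self.fc = nn.Linear(d_in, d_out)
        self.sigma = sigma

    def forward(self, x):
        return self.fc(x + self.sigma * torch.randn_like(x))

def rse_predict(model: nn.Module, x: torch.Tensor, n_samples: int = 10) -> torch.Tensor:
    """Average class probabilities over several noisy forward passes,
    approximating the infinite ensemble of noisy models."""
    probs = torch.stack([model(x).softmax(dim=-1) for _ in range(n_samples)])
    return probs.mean(dim=0)
```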

Protecting JPEG Images Against Adversarial Attacks [article]

Aaditya Prakash, Nick Moran, Solomon Garber, Antonella DiLillo, James Storer
2018 arXiv   pre-print
These adversarial attacks make imperceptible modifications to an image that fool DNN classifiers. We present an adaptive JPEG encoder which defends against many of these attacks.  ...  As deep neural networks (DNNs) have been integrated into critical systems, several methods to attack these systems have been developed.  ...  The parameters of the model, θ, are learned by stochastic gradient descent (SGD).  ... 
arXiv:1803.00940v1 fatcat:l7webp2od5albgsqkitrovxv4i
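The intuition behind a JPEG-based defense is that lossy re-encoding quantizes away low-amplitude adversarial noise before the image reaches the classifier. The sketch below does a plain fixed-quality JPEG round trip with Pillow; the paper's adaptive encoder chooses quantization per image, which this deliberately does not reproduce.

```python
import io
from PIL import Image

def jpeg_roundtrip(img: Image.Image, quality: int = 75) -> Image.Image:
    """Re-encode the (possibly adversarial) input through JPEG; the lossy
    DCT quantization tends to wash out small pixel-level perturbations."""
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return Image.open(buf).convert("RGB")
```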

Careful What You Wish For: on the Extraction of Adversarially Trained Models [article]

Kacem Khaled, Gabriela Nicolescu, Felipe Gohring de Magalhães
2022 arXiv   pre-print
Recent attacks on Machine Learning (ML) models such as evasion attacks with adversarial examples and model stealing through extraction attacks pose several security and privacy threats.  ...  In this paper, we propose a framework to assess extraction attacks on adversarially trained models with vision datasets.  ...  on the negative loss function: $x_{\mathrm{adv}} = x + \varepsilon \, \mathrm{sign}(\nabla_x L(\theta, x, y))$ (1). The PGD attack contains $T$ gradient descent steps: $x^{t+1} = \Pi_{x+S}\big(x^t + \varepsilon \, \mathrm{sign}(\nabla_x L(\theta, x, y))\big)$ (2). Equation (2) summarizes this technique  ... 
arXiv:2207.10561v1 fatcat:oo4afkd6knb6vnmvjmphdhxgbe
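Equations (1) and (2) above translate directly into code. The PyTorch sketch below implements one-step FGSM and the T-step projected variant; following the snippet, ε serves as both the step size and the projection radius, though implementations usually take smaller inner steps α < ε.

```python
import torch

def fgsm(model, loss_fn, x, y, eps: float):
    """Eq. (1): x_adv = x + eps * sign(grad_x L(theta, x, y))."""
    x = x.clone().detach().requires_grad_(True)
    loss_fn(model(x), y).backward()
    return (x + eps * x.grad.sign()).detach()

def pgd(model, loss_fn, x, y, eps: float, steps: int):
    """Eq. (2): repeat the signed-gradient step, projecting back onto the
    eps-ball around the clean input (the set x + S) after each step."""
    x0 = x.clone().detach()
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss_fn(model(x_adv), y).backward()
        with torch.no_grad():
            x_adv = x_adv + eps * x_adv.grad.sign()
            x_adv = x0 + (x_adv - x0).clamp(-eps, eps)   # projection onto x + S
    return x_adv.detach()
```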

Face Recognition System Against Adversarial Attack Using Convolutional Neural Network

Ansam Kadhim, Salah Al-Darraji
2021 Iraqi Journal for Electrical And Electronic Engineering  
This paper proposes a method to protect the face recognition system against these attacks by distorting images through different attacks, then training the recognition deep network model, specifically  ...  Various attacks that exploit this weakness affect face recognition systems, such as the Fast Gradient Sign Method (FGSM), DeepFool, and Projected Gradient Descent (PGD).  ...  Hence, PGD is related to the gradient-sign method and is considered a strong attack. The defense against adversarial attacks can be strengthened by using defensive distillation.  ... 
doi:10.37917/ijeee.18.1.1 fatcat:i25uqmyr2jbrfomhzfu5tta4t4
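Defensive distillation, mentioned at the end of this snippet, trains a student network on the teacher's softened high-temperature output distribution instead of hard labels, which smooths the student's loss surface with respect to its inputs. A minimal PyTorch sketch of the distillation loss; the temperature value is illustrative.

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, T: float = 20.0):
    """KL divergence between the student's and teacher's temperature-softened
    distributions; the T^2 factor keeps gradient magnitudes comparable
    across temperatures."""
    soft_targets = F.softmax(teacher_logits / T, dim=-1)
    log_probs = F.log_softmax(student_logits / T, dim=-1)
    return F.kl_div(log_probs, soft_targets, reduction="batchmean") * T * T
```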

Security and Privacy Issues in Deep Learning [article]

Ho Bae, Jaehee Jang, Dahuin Jung, Hyemi Jang, Heonseok Ha, Hyungyu Lee, Sungroh Yoon
2021 arXiv   pre-print
Security attacks can be divided based on when they occur: if an attack occurs during training, it is known as a poisoning attack, and if it occurs during inference (after training) it is termed an evasion  ...  Defenses proposed against such attacks include techniques to recognize and remove malicious data, train a model to be insensitive to such data, and mask the model's structure and parameters to render attacks  ...  With this, a defense method against adversarial examples (AEs) was first demonstrated on two-layer networks.  ... 
arXiv:1807.11655v4 fatcat:k7mizsqgrfhltktu6pf5htlmy4

Adversarial Defense of Image Classification Using a Variational Auto-Encoder [article]

Yi Luo, Henry Pfister
2018 arXiv   pre-print
This paper uses a variational auto-encoder (VAE) to defend against adversarial attacks for image classification tasks.  ...  Deep neural networks are known to be vulnerable to adversarial attacks. This exposes them to potential exploits in security-sensitive applications and highlights their lack of robustness.  ...  In this paper, we mainly focus on the following attack methods: Fast Gradient Sign Method (FGSM) [9] : FGSM is a gradient-based single-step attack method.  ... 
arXiv:1812.02891v1 fatcat:mczl7x6bkfayjiuxf3ttdcre3a
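Defenses in this family pass the input through the VAE's encode/decode bottleneck before classification, so perturbations lying off the learned data manifold are largely projected away. A schematic sketch; the encode/decode interface is an assumption for illustration, not the paper's actual code.

```python
import torch

def vae_purify_then_classify(vae, classifier, x: torch.Tensor) -> torch.Tensor:
    """Reconstruct the (possibly adversarial) input with the VAE, then
    classify the reconstruction instead of the raw input."""
    with torch.no_grad():
        z_mean, z_logvar = vae.encode(x)    # assumed VAE API (illustrative)
        x_hat = vae.decode(z_mean)          # use the posterior mean, no sampling
    return classifier(x_hat)
```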

On the role of deep learning model complexity in adversarial robustness for medical images

David Rodriguez, Tapsya Nayak, Yidong Chen, Ram Krishnan, Yufei Huang
2022 BMC Medical Informatics and Decision Making  
Background Deep learning (DL) models are highly vulnerable to adversarial attacks for medical image classification.  ...  On the other hand, we also show that once those models undergo adversarial training, the adversarial trained medical image DL models exhibit a greater degree of robustness than the standard trained models  ...  About this supplement This article has been published as part of BMC Medical Informatics and Decision Making Volume 22 Supplement 2, 2022: Selected articles from the International Conference on Intelligent  ... 
doi:10.1186/s12911-022-01891-w pmid:35725429 pmcid:PMC9208111 fatcat:6jctdwquvvepbe6abge7rgemae

A Survey on Adversarial Attacks for Malware Analysis [article]

Kshitiz Aryal, Maanak Gupta, Mahmoud Abdelsalam
2022 arXiv   pre-print
This work provides a taxonomy of adversarial evasion attacks on the basis of attack domain and adversarial generation techniques.  ...  This survey aims at providing an encyclopedic introduction to adversarial attacks that are carried out against malware detection systems.  ...  This intense activity has not been limited to the attack side; considering the threat posed to the entire machine learning family, researchers have been equally active on the defensive side as well.  ... 
arXiv:2111.08223v2 fatcat:fiw3pgunsvb2vo7uv72mp6b65a

A Defense Framework for Privacy Risks in Remote Machine Learning Service

Yang Bai, Yu Li, Mingchuang Xie, Mingyu Fan, Jiang Ming
2021 Security and Communication Networks  
In this work, we propose a generic privacy-preserving framework based on the adversarial method to defend against both the curious server and the malicious MLaaS user.  ...  exposed attacks.  ...  Adversarial Algorithms: Generative Adversarial Networks (GANs)-Based Method. The GAN-based method generates adversarial examples with generative adversarial networks (GANs) [41].  ... 
doi:10.1155/2021/9924684 fatcat:fqanrrvdcrf3feqomhdwkezxwy
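The GAN-based generation mentioned in this snippet is typified by AdvGAN-style setups: a generator maps a clean input to a bounded perturbation, and training combines a GAN realism loss with a misclassification loss on the perturbed input. The generator sketch below is an assumed minimal architecture, not the survey's or any specific paper's network.

```python
import torch
import torch.nn as nn

class PerturbationGenerator(nn.Module):
    """Maps an image to a bounded additive perturbation; eps caps the
    per-pixel change and tanh keeps the raw output in [-1, 1]."""
    def __init__(self, channels: int = 3, eps: float = 0.03):
        super().__init__()
        self.eps = eps
        self.net = nn.Sequential(
            nn.Conv2d(channels, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, channels, kernel_size=3, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        return (x + self.eps * self.net(x)).clamp(0.0, 1.0)
```

In training, this generator would be optimized jointly against a discriminator (realism) and the target classifier (misclassification), neither of which is shown here.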
Showing results 1 — 15 out of 335 results