14,465 Hits in 6.4 sec

Towards Deep Learning Models Resistant to Adversarial Attacks [article]

Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, Adrian Vladu
2019 arXiv   pre-print
We believe that robustness against such well-defined classes of adversaries is an important stepping stone towards fully resistant deep learning models.  ...  In fact, some of the latest findings suggest that the existence of adversarial attacks may be an inherent weakness of deep learning models.  ...  Acknowledgments We thank Wojciech Matusik for kindly providing us with computing resources to perform this work.  ... 
arXiv:1706.06083v4 fatcat:k3mb7lkgk5afrd5hjtoexqqabm
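
For context, the "well-defined class of adversaries" this paper studies is given by the saddle-point (min-max) formulation of adversarial robustness, which in the paper's notation reads

    \min_\theta \; \mathbb{E}_{(x,y)\sim\mathcal{D}} \Big[ \max_{\delta \in \mathcal{S}} L(\theta,\, x+\delta,\, y) \Big]

where S is the set of allowed perturbations, typically an l_inf-ball of radius epsilon around the input; the inner maximization is approximated with projected gradient descent (PGD).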

Investigating Resistance of Deep Learning-based IDS against Adversaries using min-max Optimization [article]

Rana Abou Khamis, Omair Shafiq, Ashraf Matrawy
2019 arXiv   pre-print
We study and measure the effectiveness of the adversarial attack methods as well as the resistance of the adversarially trained models against such attacks.  ...  With the growth of adversarial attacks against machine learning models, several concerns have emerged about potential vulnerabilities in designing deep neural network-based intrusion detection systems  ...  Toward this direction, a majority of IDSs enhance their capabilities by using neural networks (NNs) and deep learning.  ... 
arXiv:1910.14107v1 fatcat:evhwk4ismvazfkj565ocatciuy
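
A minimal sketch of the PGD-style inner maximization that this kind of min-max training relies on, assuming a differentiable PyTorch classifier `model` over feature tensors `x` with labels `y` (the function name and hyperparameters are illustrative, not taken from the paper):

    import torch
    import torch.nn.functional as F

    def pgd_perturb(model, x, y, eps=0.1, alpha=0.01, steps=10):
        # Approximate the inner max of the min-max objective with projected
        # gradient ascent inside an l-infinity ball of radius eps.
        x_adv = x.clone().detach()
        for _ in range(steps):
            x_adv.requires_grad_(True)
            loss = F.cross_entropy(model(x_adv), y)
            grad = torch.autograd.grad(loss, x_adv)[0]
            # step up the loss, then project back onto the eps-ball around x
            x_adv = x_adv.detach() + alpha * grad.sign()
            x_adv = x + (x_adv - x).clamp(-eps, eps)
        return x_adv.detach()

Adversarial training then simply minimizes the usual loss on `pgd_perturb(model, x, y)` instead of on `x`.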

Adversarial Attack and Defense in Deep Ranking [article]

Mo Zhou, Le Wang, Zhenxing Niu, Qilin Zhang, Nanning Zheng, Gang Hua
2021 arXiv   pre-print
Conversely, an anti-collapse triplet defense is proposed to improve the ranking model robustness against all proposed attacks, where the model learns to prevent the positive and negative samples from being  ...  Deep Neural Network classifiers are vulnerable to adversarial attack, where an imperceptible perturbation could result in misclassification.  ...  In summary, deep ranking models are vulnerable to adversarial ranking attacks.  ... 
arXiv:2106.03614v1 fatcat:xcgpsdyjujcaxi7ce26ze4mcoa
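
For background, the generic triplet loss that ranking models of this kind are trained with is sketched below; the paper's anti-collapse variant additionally stops the positive and negative embeddings from collapsing onto each other, which the plain loss does not do (this is the standard loss, not the paper's defense):

    import torch.nn.functional as F

    def triplet_loss(anchor, positive, negative, margin=0.2):
        # Zero loss once the negative is at least `margin` farther
        # from the anchor than the positive is.
        d_pos = F.pairwise_distance(anchor, positive)
        d_neg = F.pairwise_distance(anchor, negative)
        return F.relu(d_pos - d_neg + margin).mean()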

Evaluating Deep Learning for Image Classification in Adversarial Environment

Ye PENG, Wentao ZHAO, Wei CAI, Jinshu SU, Biao HAN, Qiang LIU
2020 IEICE Transactions on Information and Systems
After that, we conduct extensive experiments on the performance of deep learning for image classification under different adversarial environments to validate the scalability of EDLIC.  ...  and (b) how to evaluate the performance of deep learning models in the adversarial environment, thus to raise security advice such that the selected application system based on deep learning is resistant  ... 
doi:10.1587/transinf.2019edp7188 fatcat:4ftpnd4uznd7jo6mtdnp74vvhu
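
A hedged sketch of the kind of quantitative evaluation such a framework performs: sweep the perturbation budget and record accuracy under a one-step gradient-sign attack (`model`, `images`, and `labels` are assumed to exist; none of these names come from the EDLIC paper):

    import torch
    import torch.nn.functional as F

    def accuracy_under_attack(model, x, y, eps):
        x = x.clone().detach().requires_grad_(True)
        grad = torch.autograd.grad(F.cross_entropy(model(x), y), x)[0]
        x_adv = (x + eps * grad.sign()).detach()
        return (model(x_adv).argmax(dim=1) == y).float().mean().item()

    for eps in (0.0, 0.01, 0.03, 0.1):  # illustrative budgets
        print(eps, accuracy_under_attack(model, images, labels, eps))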

ResNets Ensemble via the Feynman-Kac Formalism to Improve Natural and Robust Accuracies [article]

Bao Wang and Binjie Yuan and Zuoqiang Shi and Stanley J. Osher
2019 arXiv   pre-print
Empirical adversarial risk minimization (EARM) is a widely used mathematical framework to robustly train deep neural nets (DNNs) that are resistant to adversarial attacks.  ...  Based on this unified viewpoint, we propose a simple yet effective ResNets ensemble algorithm to boost the accuracy of the robustly trained model on both clean and adversarial images.  ... 
arXiv:1811.10745v2 fatcat:kqsz67i665fmpepj35awfqt4de
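
The ensemble step itself is easy to sketch: average the per-model class probabilities before taking the argmax (a minimal illustration; the Feynman-Kac derivation that justifies this averaging is not reproduced here):

    import torch

    def ensemble_predict(models, x):
        # Average softmax outputs across the ensemble members,
        # then predict the most probable class.
        probs = torch.stack([m(x).softmax(dim=1) for m in models])
        return probs.mean(dim=0).argmax(dim=1)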

Generalizability vs. Robustness: Adversarial Examples for Medical Imaging [article]

Magdalini Paschali, Sailesh Conjeti, Fernando Navarro, Nassir Navab
2018 arXiv   pre-print
To this end, we utilize adversarial examples, images that fool machine learning models while looking imperceptibly different from original data, as a measure to evaluate the robustness of a variety of  ...  In this paper, for the first time, we propose an evaluation method for deep learning models that assesses the performance of a model not only in an unseen test scenario, but also in extreme cases of noise  ...  resistance of classification models to adversarial examples.  ... 
arXiv:1804.00504v1 fatcat:5nw44sx25nbopk2cdlb7u52she
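
One common way to turn adversarial examples into a robustness measure of this sort is to report, alongside adversarial accuracy, how large the successful perturbations had to be; a generic sketch (not the paper's exact protocol):

    import torch

    def mean_perturbation_norm(model, x_clean, x_adv, y):
        # Average L2 size of perturbations that actually changed the
        # prediction; returns NaN if no example was fooled.
        fooled = model(x_adv).argmax(dim=1) != y
        delta = (x_adv - x_clean).flatten(1).norm(dim=1)
        return delta[fooled].mean().item()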

Mitigating Deep Learning Vulnerabilities from Adversarial Examples Attack in the Cybersecurity Domain [article]

Chris Einar San Agustin
2019 arXiv   pre-print
Deep learning models are known to solve classification and regression problems by employing a number of epochs and training samples on a large dataset with optimal accuracy.  ...  For instance, (1) an adversarial attack on a self-driving car running a deep reinforcement learning system yields a direct misclassification of humans, causing untoward accidents. (2) A self-driving vehicle  ...  It is architected to attack deep learning networks through the very way the networks learn: gradients.  ... 
arXiv:1905.03517v1 fatcat:iajrfeqym5da3cc2qges3cwer4
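
The gradient-based attack alluded to in the last snippet is the fast gradient sign method (FGSM), which perturbs an input in the direction that most quickly increases the training loss:

    x_{\mathrm{adv}} = x + \varepsilon \cdot \mathrm{sign}\big(\nabla_x J(\theta, x, y)\big)

A single sign-of-gradient step is enough to flip the predictions of many undefended networks, which is why the snippet describes it as attacking networks "by the way the networks learn".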

PointBA: Towards Backdoor Attacks in 3D Point Cloud [article]

Xinke Li, Zhirui Chen, Yue Zhao, Zekun Tong, Yabang Zhao, Andrew Lim, Joey Tianyi Zhou
2021 arXiv   pre-print
Although most of them consider adversarial attacks, we identify that backdoor attacks are in fact a more serious threat to 3D deep learning systems, yet remain unexplored.  ...  Our proposed backdoor attack on 3D point clouds is expected to serve as a baseline for improving the robustness of 3D deep models.  ...  Hongpeng Li for his assistance in data processing and model development.  ... 
arXiv:2103.16074v3 fatcat:ms3225sj3rfw7eoytxc64nqq6y
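
To make the threat model concrete, the poisoning step of a generic point-cloud backdoor can be sketched as below: implant a small fixed cluster of points as the trigger and relabel the poisoned samples to the attacker's target class (an illustration of the attack family, not PointBA's specific trigger design):

    import numpy as np

    def poison_point_cloud(points, target_label, center=(0.9, 0.9, 0.9)):
        # points: (N, 3) array; append a tiny ball of points as the trigger
        # and return the attacker-chosen label instead of the true one.
        trigger = np.asarray(center) + 0.02 * np.random.randn(16, 3)
        return np.concatenate([points, trigger], axis=0), target_label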

Recent Advances in Understanding Adversarial Robustness of Deep Neural Networks [article]

Tao Bai, Jinqi Luo, Jun Zhao
2020 arXiv   pre-print
It is increasingly important to obtain models with high robustness that are resistant to adversarial examples.  ...  Adversarial examples are inevitable on the road to pervasive application of deep neural networks (DNNs).  ...  introduced the Accuracy-Robustness Pareto Frontier (ARPF) for deep vision models to evaluate robustness towards various adversarial attacks.  ... 
arXiv:2011.01539v1 fatcat:e3o47epftbc2rebpdx5yotzriy

Clean-Label Backdoor Attacks on Video Recognition Models [article]

Shihao Zhao, Xingjun Ma, Xiang Zheng, James Bailey, Jingjing Chen, Yu-Gang Jiang
2020 arXiv   pre-print
We propose the use of a universal adversarial trigger as the backdoor trigger to attack video recognition models, a situation where backdoor attacks are likely to be challenged by the above 4 strict conditions  ...  Deep neural networks (DNNs) are vulnerable to backdoor attacks, which can hide backdoor triggers in DNNs by poisoning training data.  ...  [9] first investigated backdoor attacks in the deep learning pipeline, and proposed the BadNets attack.  ... 
arXiv:2003.03030v2 fatcat:5srznyhnavei3brvgkmrjl7ipm
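
The clean-label setting can be sketched as follows: the trigger is stamped onto every frame of a few selected training videos while their labels are left untouched, which is what makes the poisoning hard to spot (an illustrative fixed patch; the paper's trigger is a universal adversarial pattern):

    import torch

    def stamp_trigger(video, patch):
        # video: (T, C, H, W); place the patch in the bottom-right corner
        # of every frame and keep the original label (clean-label poisoning).
        ph, pw = patch.shape[-2:]
        video = video.clone()
        video[..., -ph:, -pw:] = patch
        return video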

Adversarial Test on Learnable Image Encryption [article]

MaungMaung AprilPyone, Warit Sirichotedumrong, Hitoshi Kiya
2019 arXiv   pre-print
Data for deep learning should be protected for privacy preservation. Researchers have come up with the notion of learnable image encryption to satisfy this requirement.  ...  However, existing privacy-preserving approaches have never considered the threat of adversarial attacks.  ...  Since then, deep learning has received a significant amount of attention with respect to adversarial robustness [12].  ... 
arXiv:1907.13342v1 fatcat:evo34yzdwfcajn6omhepuewlzu
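
Learnable image encryption schemes of this kind are typically block-based perceptual scramblers. A minimal sketch of one member of that family, keyed block shuffling, offered as an illustration rather than the exact transform tested in the paper:

    import numpy as np

    def block_shuffle(img, key, block=4):
        # img: (H, W, C) with H and W divisible by `block`; permute the
        # blocks with a secret integer key so that training remains
        # possible while the image content is visually obscured.
        h, w, c = img.shape
        g = img.reshape(h // block, block, w // block, block, c)
        g = g.transpose(0, 2, 1, 3, 4).reshape(-1, block, block, c)
        g = g[np.random.default_rng(key).permutation(len(g))]
        g = g.reshape(h // block, w // block, block, block, c)
        return g.transpose(0, 2, 1, 3, 4).reshape(h, w, c)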

Classifiers Based on Deep Sparse Coding Architectures are Robust to Deep Learning Transferable Examples [article]

Jacob M. Springer, Charles S. Strauss, Austin M. Thresher, Edward Kim, Garrett T. Kenyon
2018 arXiv   pre-print
fool those same deep learning models.  ...  These attacks are exploitable in nearly all of the existing deep learning classification frameworks.  ...  This model, deep sparse coding (DSC) [6], takes a novel, biologically inspired approach to machine learning, and was immune to these transferable adversarial examples generated to attack other deep learning  ... 
arXiv:1811.07211v2 fatcat:qj6cxw2sybg2zdbyuzlyew5fou
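
The transferability experiment behind this claim is easy to sketch: craft adversarial examples against a surrogate network and measure how often they also fool a second, independently trained model (generic PyTorch code with illustrative names):

    import torch
    import torch.nn.functional as F

    def transfer_rate(surrogate, target, x, y, eps=0.03):
        x = x.clone().detach().requires_grad_(True)
        grad = torch.autograd.grad(F.cross_entropy(surrogate(x), y), x)[0]
        x_adv = (x + eps * grad.sign()).detach()
        # fraction of examples crafted on `surrogate` that fool `target`
        return (target(x_adv).argmax(dim=1) != y).float().mean().item()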

Security Analysis and Enhancement of Model Compressed Deep Learning Systems under Adversarial Attacks [article]

Qi Liu, Tao Liu, Zihao Liu, Yanzhi Wang, Yier Jin, Wujie Wen
2018 arXiv   pre-print
In this work, we for the first time investigate the multi-factor adversarial attack problem in practical model-optimized deep learning systems by jointly considering the DNN model-reshaping (e.g.  ...  A defense technique named "gradient inhibition" is further developed to hinder the generation of adversarial examples and thus effectively mitigate adversarial attacks towards both software- and hardware-oriented  ...  Adversarial Attack Design To exert effective adversarial attacks on practical deep learning systems, our first step is to extend the single-factor adversarial example generation algorithm to the multi-factor  ... 
arXiv:1802.05193v2 fatcat:dp3n32efgzfcpeyllv5rt3ot5q
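
A hedged sketch of one "factor" in this multi-factor setting: naively quantize a model's weights (a common compression step) and then check whether adversarial examples crafted on the full-precision model still transfer to the compressed copy (illustrative only; the paper studies a broader set of reshaping and compression techniques):

    import copy
    import torch

    def quantize_weights(model, bits=8):
        # Naive symmetric per-tensor weight quantization, for illustration.
        q = copy.deepcopy(model)
        levels = 2 ** (bits - 1) - 1
        for p in q.parameters():
            scale = p.detach().abs().max() / levels + 1e-12
            p.data = (p.data / scale).round().clamp(-levels, levels) * scale
        return q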

Exploring the role of Input and Output Layers of a Deep Neural Network in Adversarial Defense [article]

Jay N. Paranjape, Rahul Kumar Dubey, Vijendran V Gopalan
2020 arXiv   pre-print
Deep neural networks are learning models that have achieved state-of-the-art performance in many fields, such as prediction, computer vision, and language processing.  ...  adversarial attacks.  ...  ACKNOWLEDGEMENTS The authors gratefully acknowledge Robert Bosch for the opportunity to intern and the Indian Institute of Technology Delhi for their permission.  ... 
arXiv:2006.01408v1 fatcat:einvwso3tjhkfjvyou2vnbfv7y
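
Input-layer defenses of the kind examined here are often simple preprocessing steps. One standard example from the literature is bit-depth reduction (feature squeezing), sketched below; it is a generic defense, not necessarily the one this paper proposes:

    import torch

    def reduce_bit_depth(x, bits=4):
        # Quantize pixels in [0, 1] to 2**bits levels before the network
        # sees them, washing out small adversarial perturbations.
        levels = 2 ** bits - 1
        return (x * levels).round() / levels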

Whatever Does Not Kill Deep Reinforcement Learning, Makes It Stronger [article]

Vahid Behzadan, Arslan Munir
2017 arXiv   pre-print
Recent developments have established the vulnerability of deep Reinforcement Learning (RL) to policy manipulation attacks via adversarial perturbations.  ...  Our results also show that policies learned under adversarial perturbations are more robust to test-time attacks.  ...  Acknowledgments The authors would like to thank Dr. William Hsu for his comments and suggestions on this work.  ... 
arXiv:1712.09344v1 fatcat:rzi4py4p4fejvp6oiiiaxjc5ga
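
The training recipe the title hints at can be sketched as a rollout in which the agent's observations are perturbed during learning, so the resulting policy tolerates test-time attacks (a schematic gymnasium-style loop with illustrative names; cheap random sign noise stands in here for a gradient-based perturbation):

    import numpy as np

    def noisy_rollout(env, policy, eps=0.01):
        obs, _ = env.reset()
        total, done = 0.0, False
        while not done:
            # train-time perturbation of the observation
            obs_attacked = obs + eps * np.sign(np.random.randn(*obs.shape))
            action = policy(obs_attacked)
            obs, reward, terminated, truncated, _ = env.step(action)
            total += reward
            done = terminated or truncated
        return total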
Showing results 1 — 15 out of 14,465 results