6,397 Hits in 5.8 sec

Towards Robust Neural Networks via Orthogonal Diversity [article]

Kun Fang, Qinghua Tao, Yingwen Wu, Tao Li, Jia Cai, Feipeng Cai, Xiaolin Huang, Jie Yang
2021 arXiv   pre-print
Deep Neural Networks (DNNs) are vulnerable to invisible perturbations on images generated by adversarial attacks, which has motivated research on the adversarial robustness of DNNs.  ...  Specifically, we introduce multiple paths to augment the network and impose an orthogonality constraint on these paths.  ...  Table 1: Accuracies (%) of different vanilla-trained models on adversarial examples generated via the transferability-based black-box attack.  ... 
arXiv:2010.12190v2 fatcat:2b6rc3wetrdhdk27lucxikxaj4
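The orthogonality constraint described above (multiple paths whose features should be mutually orthogonal) can be sketched as a penalty on pairwise cosine similarities. This is a minimal pure-Python illustration; the function name and the flat-vector representation of each path's features are our assumptions, not the paper's implementation:

```python
import math

def orthogonality_penalty(paths):
    """Sum of squared cosine similarities between every pair of paths.

    `paths` is a list of feature vectors (one per network path); the
    penalty is zero exactly when all paths are mutually orthogonal,
    so minimizing it pushes the paths toward orthogonal diversity.
    """
    def unit(v):
        n = math.sqrt(sum(x * x for x in v))
        return [x / n for x in v]

    units = [unit(v) for v in paths]
    penalty = 0.0
    for i in range(len(units)):
        for j in range(i + 1, len(units)):
            dot = sum(a * b for a, b in zip(units[i], units[j]))
            penalty += dot * dot
    return penalty
```

In a real training loop this term would be added, with some weight, to the classification loss.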

Defending against Adversarial Attack towards Deep Neural Networks via Collaborative Multi-task Training [article]

Derek Wang, Chaoran Li, Sheng Wen, Surya Nepal, Yang Xiang
2020 arXiv   pre-print
For example, proactive defending methods are invalid against grey-box or white-box attacks, while reactive defending methods are challenged by low-distortion adversarial examples or transferring adversarial  ...  The defence further constructs a detector to identify and reject high-confidence adversarial examples that bypass the black-box defence.  ...  However, DNNs are vulnerable to adversarial attacks, which exploit imperceptibly perturbed examples to fool the neural networks [42].  ... 
arXiv:1803.05123v4 fatcat:oxt36f255vb5bilfy37hmrwri4

Hardening Deep Neural Networks via Adversarial Model Cascades [article]

Deepak Vijaykeerthy, Anshuman Suri, Sameep Mehta, Ponnurangam Kumaraguru
2018 arXiv   pre-print
Our approach trains a cascade of models sequentially, where each model is optimized to be robust towards a mixture of multiple attacks.  ...  Works on securing neural networks against adversarial examples achieve high empirical robustness on simple datasets such as MNIST.  ...  Empirical Results. We conducted multiple experiments, demonstrating the effectiveness of our method against several attacks in both white-box and black-box setups.  ... 
arXiv:1802.01448v4 fatcat:l5vw2er4vvgejlndfuiri5g7ru

Improving Transferability of Adversarial Examples with Input Diversity [article]

Cihang Xie, Zhishuai Zhang, Yuyin Zhou, Song Bai, Jianyu Wang, Zhou Ren, Alan Yuille
2019 arXiv   pre-print
Extensive experiments on ImageNet show that the proposed attack method can generate adversarial examples that transfer much better to different networks than existing baselines.  ...  However, most of the existing adversarial attacks only achieve relatively low success rates under the challenging black-box setting, where the attackers have no knowledge of the model structure and parameters  ...  Attacking an Ensemble of Networks Liu et al. [20] suggested that attacking an ensemble of multiple networks simultaneously can generate much stronger adversarial examples.  ... 
arXiv:1803.06978v4 fatcat:opx4txcegjeatb2b5q2jefvobm
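The input-diversity idea above applies a random resize-and-pad transform to the image before each gradient step, so the attack does not overfit the source model. A rough pure-Python sketch on a square single-channel image; the nearest-neighbour resizing, parameter names, and defaults are our illustrative assumptions, not the paper's exact transform:

```python
import random

def input_diversity(img, low=24, prob=0.5, rng=None):
    """With probability `prob`, nearest-neighbour-resize a square image
    (a list of rows) to a random smaller size and zero-pad it back to
    the original resolution; otherwise return the image unchanged."""
    rng = rng or random.Random()
    n = len(img)
    if rng.random() >= prob:
        return img
    m = rng.randint(low, n - 1)  # random target size below the original
    resized = [[img[i * n // m][j * n // m] for j in range(m)]
               for i in range(m)]
    top = rng.randint(0, n - m)   # random padding offsets
    left = rng.randint(0, n - m)
    out = [[0] * n for _ in range(n)]
    for i in range(m):
        for j in range(m):
            out[top + i][left + j] = resized[i][j]
    return out
```

The attack would compute gradients on `input_diversity(img)` rather than on `img` itself at every iteration.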

Effects of Loss Functions And Target Representations on Adversarial Robustness [article]

Sean Saito, Sujoy Roy
2020 arXiv   pre-print
Understanding and evaluating the robustness of neural networks under adversarial settings is a subject of growing interest.  ...  We evaluate the robustness of neural networks that implement these proposed modifications using existing attacks, showing an increase in accuracy against untargeted attacks of up to 98.7% and a decrease  ...  In this work, we employ transfer attacks, a type of black-box attack where adversarial examples are generated using a proxy model to which the adversary has access.  ... 
arXiv:1812.00181v3 fatcat:g3m3qzecz5cm5on4judfalkdta

Neural Networks in Adversarial Setting and Ill-Conditioned Weight Space [article]

Mayank Singh, Abhishek Sinha, Balaji Krishnamurthy
2018 arXiv   pre-print
On the other hand, the existence of adversarial examples has raised suspicions regarding the generalization capabilities of neural networks.  ...  adversarial examples.  ...  (b), (c), (d) represent a sample of the corresponding adversarial images for different values generated via the black-box attack.  ... 
arXiv:1801.00905v1 fatcat:sz7aeyipbfbixhxa4iaa7hfw6y

Patch-wise++ Perturbation for Adversarial Targeted Attacks [article]

Lianli Gao, Qilong Zhang, Jingkuan Song, Heng Tao Shen
2021 arXiv   pre-print
Although great progress has been made on adversarial attacks for deep neural networks (DNNs), their transferability is still unsatisfactory, especially for targeted attacks.  ...  But targeted attacks aim to push the adversarial examples into the territory of a specific class, and the amplification factor may lead to underfitting.  ...  The third is the black-box setting where the adversary generally cannot access the target model and adversarial examples are usually crafted via the substitute model.  ... 
arXiv:2012.15503v3 fatcat:62d756bpxzbwhb53lbgngqiysy

Towards Adversarial Patch Analysis and Certified Defense against Crowd Counting [article]

Qiming Wu, Zhikang Zou, Pan Zhou, Xiaoqing Ye, Binghui Wang, Ang Li
2021 arXiv   pre-print
In particular, the proposed attack leverages the extreme-density background information of input images to generate robust adversarial patches via a series of transformations (e.g., interpolation, rotation  ...  Absolute Error of RA is 5 lower than ADT on clean samples and 30 lower than ADT on adversarial examples).  ...  Black-box Attack. In a black-box attack, an adversary has no access to the internal structure of victim models. However, the adversary can adopt substitute models to generate an adversarial patch.  ... 
arXiv:2104.10868v2 fatcat:w7n557tdbndbdoubykn27c3qna

Adversarial Metric Attack and Defense for Person Re-identification [article]

Song Bai, Yingwei Li, Yuyin Zhou, Qizhu Li, Philip H.S. Torr
2020 arXiv   pre-print
However, our work observes the extreme vulnerability of existing distance metrics to adversarial examples, generated by simply adding human-imperceptible perturbations to person images.  ...  Meanwhile, we also present an early attempt at training a metric-preserving network, thereby defending the metric against adversarial attacks.  ...  In contrast, attacks on the "hold-out network" correspond to black-box attacks, as this network is not used to generate adversarial examples.  ... 
arXiv:1901.10650v3 fatcat:rrfndgtjcna43iwswi2rdbtyye

ZOO: Zeroth Order Optimization Based Black-box Attacks to Deep Neural Networks without Training Substitute Models
Pin-Yu Chen, Huan Zhang, Yash Sharma, Jinfeng Yi, Cho-Jui Hsieh
2017 Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security - AISec '17  
adversarial examples.  ...  Experimental results on MNIST, CIFAR10 and ImageNet show that the proposed ZOO attack is as effective as the state-of-the-art white-box attack and significantly outperforms existing black-box attacks via  ...  Visual comparison of successful adversarial examples in CIFAR10: (b) white-box C&W attack, (c) ZOO-ADAM black-box attack, (d) ZOO-Newton black-box attack.  ... 
doi:10.1145/3128572.3140448 dblp:conf/ccs/ChenZSYH17 fatcat:6q26yubpwbhupnvq62wxp62ija
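The zeroth-order optimization at the heart of ZOO estimates gradients from loss queries alone, with no access to model internals. A minimal sketch of the coordinate-wise symmetric-difference estimator; the paper's stochastic coordinate selection, ADAM/Newton update rules, and dimension-reduction tricks are omitted here:

```python
def zoo_gradient_estimate(f, x, h=1e-4):
    """Coordinate-wise symmetric-difference estimate of the gradient of
    a black-box scalar loss `f` at point `x` (a list of floats), using
    only function queries: two per coordinate, no backpropagation."""
    grad = []
    for i in range(len(x)):
        x_plus = x[:]
        x_plus[i] += h
        x_minus = x[:]
        x_minus[i] -= h
        grad.append((f(x_plus) - f(x_minus)) / (2 * h))
    return grad
```

The estimated gradient can then drive any first-order attack step, at the cost of 2 queries per input dimension per iteration.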

A New Ensemble Adversarial Attack Powered by Long-term Gradient Memories [article]

Zhaohui Che and Ali Borji and Guangtao Zhai and Suiyi Ling and Jing Li and Patrick Le Callet
2019 arXiv   pre-print
Deep neural networks are vulnerable to adversarial attacks.  ...  examples via fooling multiple white-box source models in parallel. (2) Generative methods (Zhao, Dheeru, and Sameer 2018; Wei et al. 2019) rely on an extra generative adversarial network (GAN).  ...  Therefore, exploring adversarial attacks, especially the transferable black-box ones, is critical to demystifying the fragility of deep neural networks.  ... 
arXiv:1911.07682v1 fatcat:e4cvj3utcjfffeyt5mrgei3ete

PARL: Enhancing Diversity of Ensemble Networks to Resist Adversarial Attacks via Pairwise Adversarially Robust Loss Function [article]

Manaar Alam, Shubhajit Datta, Debdeep Mukhopadhyay, Arijit Mondal, Partha Pratim Chakrabarti
2021 arXiv   pre-print
The proposed training procedure enables PARL to achieve higher robustness against black-box transfer attacks compared to previous ensemble methods without adversely affecting the accuracy of clean examples  ...  Ensemble methods against adversarial attacks demonstrate that an adversarial example is less likely to mislead multiple classifiers in an ensemble having diverse decision boundaries.  ...  Delving into transferable adversarial examples and black-box attacks.  ... 
arXiv:2112.04948v1 fatcat:i7ab4hvgprcgvdpnowrlwmaiwa

Man-in-the-Middle Attacks against Machine Learning Classifiers via Malicious Generative Models [article]

Derui Wang, Chaoran Li, Sheng Wen, Surya Nepal, Yang Xiang
2019 arXiv   pre-print
In the past few years, much effort has been spent on exploring query-optimisation attacks to find adversarial examples of either black-box or white-box DNN models, as well as on defensive countermeasures  ...  Deep Neural Networks (DNNs) are vulnerable to deliberately crafted adversarial examples.  ...  An investigation of generating adversarial examples using VAEs might enhance the understanding of adversarial examples.  ... 
arXiv:1910.06838v1 fatcat:ynpk2swrvzffvn7jsdysa3epee

AutoZOOM: Autoencoder-based Zeroth Order Optimization Method for Attacking Black-box Neural Networks [article]

Chun-Chen Tu, Paishun Ting, Pin-Yu Chen, Sijia Liu, Huan Zhang, Jinfeng Yi, Cho-Jui Hsieh, Shin-Ming Cheng
2020 arXiv   pre-print
Recent studies have shown that adversarial examples in state-of-the-art image classifiers trained by deep neural networks (DNN) can be easily generated when the target model is transparent to an attacker  ...  To bridge this gap, we propose a generic framework for query-efficient black-box attacks.  ...  We propose AutoZOOM, a novel query-efficient black-box attack framework for generating adversarial examples.  ... 
arXiv:1805.11770v5 fatcat:6r4ov6kcijenzkkiko6kall544
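Where ZOO queries two points per coordinate, the query-efficient framework above averages scaled one-sided differences along random unit directions instead. A rough sketch of such a random-direction (random gradient-free) estimator; the autoencoder-based dimension reduction is omitted, and the parameter names and defaults are our assumptions:

```python
import math
import random

def random_direction_gradient(f, x, beta=1e-3, queries=2000, rng=None):
    """Averaged random gradient-free estimate of the gradient of a
    black-box scalar loss `f` at `x`: for each random unit direction u,
    take d * (f(x + beta*u) - f(x)) / beta * u as one gradient sample
    and average over `queries` samples (one extra query per sample)."""
    rng = rng or random.Random(0)
    d = len(x)
    fx = f(x)  # baseline loss, queried once and reused
    grad = [0.0] * d
    for _ in range(queries):
        u = [rng.gauss(0.0, 1.0) for _ in range(d)]
        norm = math.sqrt(sum(t * t for t in u))
        u = [t / norm for t in u]
        scale = d * (f([xi + beta * ui for xi, ui in zip(x, u)]) - fx) / beta
        grad = [g + scale * ui / queries for g, ui in zip(grad, u)]
    return grad
```

Because the query budget is decoupled from the input dimension, far fewer model evaluations are needed per attack iteration than with coordinate-wise estimation.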

An Analysis of Adversarial Attacks and Defenses on Autonomous Driving Models [article]

Yao Deng, Xi Zheng, Tianyi Zhang, Chen Chen, Guannan Lou, Miryung Kim
2020 arXiv   pre-print
) confidential via model obfuscation, and (3) driving models with a complex architecture are preferred if computing resources permit, as they are more resilient to adversarial attacks than simple models  ...  We derive several implications for system and middleware builders: (1) when adding a defense component against adversarial attacks, it is important to deploy multiple defense methods in tandem to achieve  ...  Effectiveness of Black-box Attacks. To investigate the effectiveness of different adversarial attacks in the black-box setting, we first generate adversarial  ...  Prior works show that adversarial examples  ... 
arXiv:2002.02175v1 fatcat:v5xwnfd5rfglfcdrosld3zbvke