
Enhancing Adversarial Example Transferability with an Intermediate Level Attack [article]

Qian Huang, Isay Katsman, Horace He, Zeqi Gu, Serge Belongie, Ser-Nam Lim
2020 arXiv   pre-print
We introduce the Intermediate Level Attack (ILA), which attempts to fine-tune an existing adversarial example for greater black-box transferability by increasing its perturbation on a pre-specified layer  ...  Our code is available at  ...  an adversarial example x′ generated by attack method A for a natural image x, we wish to enhance its transferability by focusing on a layer l of a given network F.  ... 
arXiv:1907.10823v3 fatcat:2skaysbh6bgmrhclalnamq44dy
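The fine-tuning step described in the ILA abstract above can be sketched in a few lines: the feature-space shift that the baseline attack induces at the chosen layer l serves as a guide direction, and ILA maximizes the projection of the fine-tuned example's shift onto it. The toy linear "layer", step size, and L-infinity budget below are illustrative assumptions, not the paper's actual implementation (which backpropagates through a real network):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 16))  # toy stand-in for intermediate layer l of surrogate F
layer = lambda x: W @ x           # f_l(x)

def ila_projection(x_clean, x_guide, x_current):
    """Objective ILA maximizes: projection of the current feature shift
    onto the guide direction d = f_l(x_guide) - f_l(x_clean)."""
    d = layer(x_guide) - layer(x_clean)
    return float((layer(x_current) - layer(x_clean)) @ d)

# Hypothetical baseline attack output: a small signed perturbation of x.
x = rng.standard_normal(16)
x_adv0 = x + 0.05 * np.sign(rng.standard_normal(16))

# ILA fine-tuning: for this linear layer the gradient of the projection
# w.r.t. the input is W.T @ d, so take signed-gradient ascent steps
# inside an L-infinity ball of radius eps around x.
eps, alpha = 0.1, 0.02
d = layer(x_adv0) - layer(x)
x_ila = x_adv0.copy()
for _ in range(10):
    x_ila = np.clip(x_ila + alpha * np.sign(W.T @ d), x - eps, x + eps)

p_before = ila_projection(x, x_adv0, x_adv0)
p_after = ila_projection(x, x_adv0, x_ila)
```

Because the objective is linear in the input here, each signed step (after clipping) can only increase the projection, so `p_after >= p_before`; with a real network the same loop would use autodiff through the chosen layer instead of the closed-form gradient.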

Yet Another Intermediate-Level Attack [article]

Qizhang Li, Yiwen Guo, Hao Chen
2020 arXiv   pre-print
In this paper, we propose a novel method to enhance the black-box transferability of baseline adversarial examples.  ...  The transferability of adversarial examples across deep neural network (DNN) models is the crux of a spectrum of black-box attacks.  ...  Huang, Q., Katsman, I., He, H., Gu, Z., Belongie, S., Lim, S.N.: Enhancing adversarial example transferability with an intermediate level attack. In: ICCV (2019) 16.  ... 
arXiv:2008.08847v1 fatcat:24cesir7vbf3bf7icjgoashjam

Intermediate Level Adversarial Attack for Enhanced Transferability [article]

Qian Huang, Zeqi Gu, Isay Katsman, Horace He, Pian Pawakapan, Zhiqiu Lin, Serge Belongie, Ser-Nam Lim
2018 arXiv   pre-print
This leads us to introduce the Intermediate Level Attack (ILA), which attempts to fine-tune an existing adversarial example for greater black-box transferability by increasing its perturbation on a pre-specified  ...  Adversarial examples often exhibit black-box transfer, meaning that adversarial examples for one model can fool another model.  ...  Our attack, on the other hand, focuses on enhancing transferability of the adversarial examples by perturbing intermediate layers.  ... 
arXiv:1811.08458v1 fatcat:ad2t25f2mvd4fc32tpra2l3g5e

Query-Free Adversarial Transfer via Undertrained Surrogates [article]

Chris Miller, Soroush Vosoughi
2020 arXiv   pre-print
Deep neural networks are vulnerable to adversarial examples -- minor perturbations added to a model's input which cause the model to output an incorrect prediction.  ...  Our results suggest that finding strong single surrogate models is a highly effective and simple method for generating transferable adversarial attacks, and that this method represents a valuable route  ...  We refer to an ILA attack which enhances an example produced by FGSM as ILA-enhanced FGSM, and follow the same convention for other ILA attacks.  ... 
arXiv:2007.00806v2 fatcat:66okirzm7rabva23eesijzza4u

An Intermediate-level Attack Framework on The Basis of Linear Regression [article]

Yiwen Guo, Qizhang Li, Wangmeng Zuo, Hao Chen
2022 arXiv   pre-print
This paper substantially extends our work published at ECCV, in which an intermediate-level attack was proposed to improve the transferability of some baseline adversarial examples.  ...  with adversarial transferability, 3) a further boost in performance can be achieved by performing multiple runs of the baseline attack with random initialization.  ...  We have carefully analyzed core components of the framework and shown that given powerful directional guides, the magnitude of intermediate-level feature discrepancies is correlated with the transferability  ... 
arXiv:2203.10723v1 fatcat:pr7ee4hb5rdvvofwthxqnfqg4i

Exploring Transferable and Robust Adversarial Perturbation Generation from the Perspective of Network Hierarchy [article]

Ruikui Wang, Yuanfang Guo, Ruijie Yang, Yunhong Wang
2021 arXiv   pre-print
The transferability and robustness of adversarial examples are two practical yet important properties for black-box adversarial attacks.  ...  Therefore, we focus on the intermediate and input stages in this paper and propose a transferable and robust adversarial perturbation generation (TRAP) method.  ...  To search for perturbations with better transferability, Intermediate Level Attack (ILA) [13] maximizes the scalar projection of the adversarial example onto a guided direction on a specific hidden layer  ... 
arXiv:2108.07033v1 fatcat:d4okvsdl45gq5kp5wto3yvfaiq

Enhancing Cross-task Transferability of Adversarial Examples with Dispersion Reduction [article]

Yunhan Jia, Yantao Lu, Senem Velipasalar, Zhenyu Zhong, Tao Wei
2019 arXiv   pre-print
While great effort has been devoted to the transferability of adversarial examples, surprisingly little attention has been paid to its impact on real-world deep learning deployment.  ...  Neural networks are known to be vulnerable to carefully crafted adversarial examples, and these malicious samples often transfer, i.e., they maintain their effectiveness even against other models.  ...  Further, with the observation that low-level features bear more similarities across CV models, we hypothesize that the DR attack would produce transferable adversarial examples when targeted on intermediate  ... 
arXiv:1905.03333v1 fatcat:hzg5agpi4fae7alxjbeqpr2eki
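The dispersion-reduction idea hinted at in this snippet drives down a spread statistic of an intermediate feature map rather than any class loss, which is what makes it task-agnostic. A minimal sketch under toy assumptions (a random linear map standing in for the intermediate layer, numerical gradients, made-up step sizes and budget):

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.standard_normal((32, 16))   # toy stand-in for an intermediate CNN layer
features = lambda x: W @ x

def dispersion(x):
    # DR-style objective: the standard deviation of the intermediate
    # feature map; pushing it toward zero flattens the features that
    # downstream models rely on, regardless of their task.
    return float(np.std(features(x)))

def num_grad(f, x, h=1e-5):
    # Central-difference gradient keeps the toy free of autodiff.
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2.0 * h)
    return g

# Signed-gradient descent on the dispersion inside an L-infinity ball,
# tracking the best iterate seen so far.
x = rng.standard_normal(16)
x_adv, best = x.copy(), x.copy()
for _ in range(40):
    x_adv = np.clip(x_adv - 0.05 * np.sign(num_grad(dispersion, x_adv)),
                    x - 0.3, x + 0.3)
    if dispersion(x_adv) < dispersion(best):
        best = x_adv.copy()
```

A real DR-style attack would replace the numerical gradient with backprop through a pretrained feature extractor; everything else in the loop stays the same shape.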

Adversarial Attacks On Multi-Agent Communication [article]

James Tu, Tsunhsuan Wang, Jingkang Wang, Sivabalan Manivasagam, Mengye Ren, Raquel Urtasun
2021 arXiv   pre-print
Thus, we aim to study the robustness of such systems and focus on exploring adversarial attacks in a novel multi-agent setting where communication is done through sharing learned intermediate representations  ...  with domain adaptation.  ...  On the other hand, using an intermediate level attack projection (ILAP) [23] yields a small improvement. Overall, we find transfer attacks more challenging when performed at the feature level.  ... 
arXiv:2101.06560v2 fatcat:7qvedvjqlffhzd6laqkugocrw4

On Improving Adversarial Transferability of Vision Transformers [article]

Muzammal Naseer, Kanchana Ranasinghe, Salman Khan, Fahad Shahbaz Khan, Fatih Porikli
2022 arXiv   pre-print
Formulating an attack using only the last class token (the conventional approach) does not directly leverage the discriminative information stored in the earlier tokens, leading to poor adversarial transferability  ...  In particular, we observe that adversarial patterns found via conventional adversarial attacks show very low black-box transferability even for large ViT models.  ...  ENHANCING ADVERSARIAL TRANSFERABILITY OF VITS Preliminaries: Given a clean input image sample x with a label y, a source ViT model F and a target model M which is under attack, the goal of an adversarial  ... 
arXiv:2106.04169v3 fatcat:y76kpnjwunhy3bfpfv5365abnm

Feature Importance-aware Transferable Adversarial Attacks [article]

Zhibo Wang, Hengchang Guo, Zhifei Zhang, Wenxin Liu, Zhan Qin, Kui Ren
2022 arXiv   pre-print
Transferability of adversarial examples is of central importance for attacking an unknown model, which facilitates adversarial attacks in more practical scenarios, e.g., black-box attacks.  ...  Existing transferable attacks tend to craft adversarial examples by indiscriminately distorting features to degrade prediction accuracy in a source model without awareness of the intrinsic features of objects  ...  Recently, [36, 7, 23] performed attacks in the intermediate layers directly to enhance transferability.  ... 
arXiv:2107.14185v3 fatcat:ztj5fftupbgb3kn5unkordl544

Towards Transferable Adversarial Attack against Deep Face Recognition [article]

Yaoyao Zhong, Weihong Deng
2020 arXiv   pre-print
In this work, we first investigate the characteristics of transferable adversarial attacks in face recognition by showing the superiority of feature-level methods over label-level methods.  ...  Extensive experiments on state-of-the-art face models with various training databases, loss functions and network architectures show that the proposed method can significantly enhance the transferability  ...  Discussion To achieve a more intuitive understanding of the transferable adversarial attacks and the proposed DFANet, we next interpret the intermediate generation process of adversarial examples.  ... 
arXiv:2004.05790v1 fatcat:yokl5sgyzfdtpirp72tn76sus4

Beyond ImageNet Attack: Towards Crafting Adversarial Examples for Black-box Domains [article]

Qilong Zhang, Xiaodan Li, Yuefeng Chen, Jingkuan Song, Lianli Gao, Yuan He, Hui Xue
2022 arXiv   pre-print
Adversarial examples have posed a severe threat to deep neural networks due to their transferable nature.  ...  In this paper, with only the knowledge of the ImageNet domain, we propose a Beyond ImageNet Attack (BIA) to investigate the transferability towards black-box domains (unknown classification tasks).  ...  ., 2018), attacking against an ensemble of models can yield more transferable adversarial examples.  ... 
arXiv:2201.11528v4 fatcat:hltin4zbenc65glymqd7fmodce
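The ensemble observation quoted in this entry (attacking several surrogates at once transfers better than attacking any single one) is usually realized by fusing the surrogates' outputs before computing the attack loss; averaging logits is one common fusion. The three linear "models" below are placeholders, not anything from the cited paper:

```python
import numpy as np

rng = np.random.default_rng(2)
# Three toy linear surrogates over 10 classes and 16 input features.
models = [rng.standard_normal((10, 16)) for _ in range(3)]

def ensemble_logits(x, weights=None):
    # Fuse surrogates by a weighted average of their logits; an attack
    # then ascends the loss of this single fused model, so the same
    # perturbation must fool every surrogate simultaneously.
    if weights is None:
        weights = [1.0 / len(models)] * len(models)
    return sum(w * (M @ x) for w, M in zip(weights, models))

x = np.ones(16)
fused = ensemble_logits(x)
```

With equal weights this is just the mean of the individual surrogates' logits; non-uniform weights let an attacker emphasize surrogates closer in architecture to the suspected target.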

All You Need is RAW: Defending Against Adversarial Attacks with Camera Image Pipelines [article]

Yuxuan Zhang, Bo Dong, Felix Heide
2022 arXiv   pre-print
Existing neural networks for computer vision tasks are vulnerable to adversarial attacks: adding imperceptible perturbations to the input images can fool these methods into making a false prediction on an  ...  In this work, we exploit this RAW data distribution as an empirical prior for adversarial defense.  ...  The resulting method is entirely model-agnostic, requires no adversarial examples to train, and acts as an off-the-shelf preprocessing module that can be transferred to any task on any domain.  ... 
arXiv:2112.09219v2 fatcat:vjh5n7lzxrcvfcsl3ouparkirm

You See What I Want You to See: Exploring Targeted Black-Box Transferability Attack for Hash-based Image Retrieval Systems

Yanru Xiao, Cong Wang
2021 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)  
In this paper, we start from an adversarial standpoint to explore and enhance the capacity of targeted black-box transferability attack for deep hashing.  ...  Then we develop a new attack that is simultaneously adversarial and robust to noise to enhance transferability.  ...  required to craft an adversarial example.  ... 
doi:10.1109/cvpr46437.2021.00197 fatcat:ock3f256ujdu3dpcyry5dtr5h4

On the Transferability of Adversarial Attacks against Neural Text Classifier [article]

Liping Yuan, Xiaoqing Zheng, Yi Zhou, Cho-Jui Hsieh, Kai-Wei Chang
2021 arXiv   pre-print
Deep neural networks are vulnerable to adversarial attacks, where a small perturbation to an input alters the model prediction.  ...  , tokenization scheme, word embedding, and model capacity, affect the transferability of adversarial examples.  ...  Enhancing adversarial example transferability with an intermediate level attack. In Proceedings of the IEEE International Conference on Computer Vision. Nathan Inkawhich, Kevin J.  ... 
arXiv:2011.08558v3 fatcat:rzsorqx3lffzljjuzhut2m3q6y
Showing results 1 — 15 out of 7,164 results