7,021 Hits in 5.6 sec

Boosting the Transferability of Adversarial Samples via Attention

Weibin Wu, Yuxin Su, Xixian Chen, Shenglin Zhao, Irwin King, Michael R. Lyu, Yu-Wing Tai
2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
Consequently, it can promote the transferability of resultant adversarial instances.  ...  It computes model attention over extracted features to regularize the search for adversarial examples, which prioritizes the corruption of critical features that are likely to be adopted by diverse architectures  ...  The work described in this paper was supported by the Research Grants Council of the Hong Kong Special Administrative Region, China (CUHK 14210717 of the General Research Fund and CUHK 2300174 of the Collaborative  ...
doi:10.1109/cvpr42600.2020.00124 dblp:conf/cvpr/WuSCZKLT20 fatcat:w3jtcjo3xfhpppoo5ydrz3lbb4

Adversarial Pixel Restoration as a Pretext Task for Transferable Perturbations [article]

Hashmat Shadab Malik, Shahina K Kunhimon, Muzammal Naseer, Salman Khan, Fahad Shahbaz Khan
2022 arXiv   pre-print
We successfully demonstrate the adversarial transferability of our approach to Vision Transformers as well as Convolutional Neural Networks for the tasks of classification, object detection, and video  ...  Transferable adversarial attacks optimize adversaries from a pretrained surrogate model and known label space to fool the unknown black-box models.  ...  Our approach significantly shifts the attention of the model, boosting the misclassification rates on the adversarial examples.  ...
arXiv:2207.08803v1 fatcat:gjgxapptibeyvnmagguwxhxzmu

Boosting the Transferability of Video Adversarial Examples via Temporal Translation

Zhipeng Wei, Jingjing Chen, Zuxuan Wu, Yu-Gang Jiang
2022 Proceedings of the Thirty-Sixth AAAI Conference on Artificial Intelligence (AAAI-22)
To this end, we propose to boost the transferability of video adversarial examples for black-box attacks on video recognition models.  ...  Extensive experiments on the Kinetics-400 dataset and the UCF-101 dataset demonstrate that our method can significantly boost the transferability of video adversarial examples.  ...  Acknowledgments The authors would like to thank the anonymous referees for their valuable comments and helpful suggestions.  ... 
doi:10.1609/aaai.v36i3.20168 fatcat:ftde7fud2nbirktxgsazgmwveu

Boosting the Transferability of Video Adversarial Examples via Temporal Translation [article]

Zhipeng Wei, Jingjing Chen, Zuxuan Wu, Yu-Gang Jiang
2021 arXiv   pre-print
To this end, we propose to boost the transferability of video adversarial examples for black-box attacks on video recognition models.  ...  Extensive experiments on the Kinetics-400 dataset and the UCF-101 dataset demonstrate that our method can significantly boost the transferability of video adversarial examples.  ...  Acknowledgments The authors would like to thank the anonymous referees for their valuable comments and helpful suggestions.  ... 
arXiv:2110.09075v2 fatcat:tdg7tfnmdvf2bo32bbeqce2zzi

On Improving Adversarial Transferability of Vision Transformers [article]

Muzammal Naseer, Kanchana Ranasinghe, Salman Khan, Fahad Shahbaz Khan, Fatih Porikli
2022 arXiv   pre-print
This makes it interesting to study the adversarial feature space of ViT models and their transferability.  ...  In particular, we observe that adversarial patterns found via conventional adversarial attacks show very low black-box transferability even for large ViT models.  ...  In appendix A, we demonstrate the effectiveness of our self-ensemble to boost adversarial transferability within an ensemble of different models.  ...
arXiv:2106.04169v3 fatcat:y76kpnjwunhy3bfpfv5365abnm

Towards Transferable Adversarial Attacks on Vision Transformers [article]

Zhipeng Wei, Jingjing Chen, Micah Goldblum, Zuxuan Wu, Tom Goldstein, Yu-Gang Jiang
2022 arXiv   pre-print
More specifically, we introduce a dual attack framework, which contains a Pay No Attention (PNA) attack and a PatchOut attack, to improve the transferability of adversarial samples across different ViTs  ...  We show that skipping the gradients of attention during backpropagation can generate adversarial examples with high transferability.  ...  One related work (Naseer et al. 2021 ) proposes the Self-Ensemble (SE) method, which boosts adversarial transferability by optimizing perturbations on an ensemble of models.  ... 
arXiv:2109.04176v3 fatcat:pme722wdh5f2tk77dj2qf54ktq

Deep visual unsupervised domain adaptation for classification tasks: a survey

Yeganeh Madadi, Vahid Seydi, Kamal Nasrollahi, Reshad Hosseini, Thomas B. Moeslund
2020 IET Image Processing  
five groups of discrepancy-, adversarial-, reconstruction-, representation-, and attention-based methods.  ...  Fig. 7 t-SNE [178] embeddings of 1000 test samples from SVHN (source, red) and MNIST (target, blue) (a) MMD metric, (b) TarGAN method [69]  ...  Regarding the taxonomy of attention-based methods, we categorise this group into three subgroups: adversarial attention alignment, transferable local attention, and transferable global attention.  ... 
doi:10.1049/iet-ipr.2020.0087 fatcat:x7v5et3r6nagpe2ivuu5nd4qku

Measuring the Transferability of ℓ_∞ Attacks by the ℓ_2 Norm [article]

Sizhe Chen, Qinghua Tao, Zhixing Ye, Xiaolin Huang
2022 arXiv   pre-print
Deep neural networks can be fooled by adversarial examples with trivial differences from the original samples.  ...  To keep the difference imperceptible to human eyes, researchers bound the adversarial perturbations by the ℓ_∞ norm, which now commonly serves as the standard to align the strength of different attacks  ...  of the ℓ_2 norm on the transferability via experiments on 7 transfer-based attacks, 4 surrogates, and 9 victims.  ...
arXiv:2102.10343v3 fatcat:tufkklbbmveejikif46mtfq2by

Knowledge Squeezed Adversarial Network Compression [article]

Shu Changyong, Li Peng, Xie Yuan, Qu Yanyun, Dai Longquan, Ma Lizhuang
2019 arXiv   pre-print
Recently, more focus has shifted to employing adversarial training to minimize the discrepancy between the output distributions of two networks.  ...  The knowledge transferred from the teacher network can then accommodate the size of the student network.  ...  In transfer learning, attention transfer can facilitate fast optimization and improve the performance of a small student network via the attention map [36] or the flow of solution procedure (FSP)  ...
arXiv:1904.05100v2 fatcat:m6tbpwkgivf6pj4kozrkh2lzwy

Boosting 3D Adversarial Attacks with Attacking On Frequency

Binbin Liu, Jinlai Zhang, Jihong Zhu
2022 IEEE Access  
We combine the losses from the point cloud and its low-frequency component to craft adversarial samples, focusing on the low-frequency component of the point cloud during optimization.  ...  Extensive experiments validate that AOF can improve transferability significantly compared to state-of-the-art (SOTA) attacks, and is more robust to state-of-the-art 3D defense methods.  ...  The other one is transfer-based black-box attacks [20] , [16] , which craft adversarial samples via attacking a surrogate model they have white-box access to.  ...
doi:10.1109/access.2022.3171659 fatcat:z4y34zvngzhvldgaci7khtgfhu

Hierarchical Knowledge Squeezed Adversarial Network Compression

Peng Li, Chang Shu, Yuan Xie, Yan Qu, Hui Kong
2020 Proceedings of the Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI-20)
Recently, more focus has shifted to employing adversarial training to minimize the discrepancy between the output distributions of two networks.  ...  adversarial training framework to learn the student network.  ...  In transfer learning, attention transfer facilitates fast optimization and improves the performance of a small student network via the attention map (Zagoruyko and Komodakis 2016) or the flow of  ...
doi:10.1609/aaai.v34i07.6799 fatcat:cht7mlfwrnebbivrufrc4zpp5u

Double-Win Quant: Aggressively Winning Robustness of Quantized Deep Neural Networks via Random Precision Training and Inference

Yonggan Fu, Qixuan Yu, Meng Li, Vikas Chandra, Yingyan Lin
2021 International Conference on Machine Learning  
dubbed Double-Win Quant that can boost the robustness of quantized DNNs over their full precision counterparts by a large margin.  ...  Specifically, we for the first time identify that when an adversarially trained model is quantized to different precisions in a post-training manner, the associated adversarial attacks transfer poorly  ...  Acknowledgements The work is supported by the National Science Foundation (NSF) through the MLWiNS program (Award number: 2003137) and the NeTS program (Award number: 1801865). Thanks Mr.  ... 
dblp:conf/icml/FuY0CL21 fatcat:ogeuv3jwazg23ixvjw4qzuyay4

Exploring Transferable and Robust Adversarial Perturbation Generation from the Perspective of Network Hierarchy [article]

Ruikui Wang, Yuanfang Guo, Ruijie Yang, Yunhong Wang
2021 arXiv   pre-print
and boost the robustness and transferability of the adversarial perturbations.  ...  The transferability and robustness of adversarial examples are two practical yet important properties for black-box adversarial attacks.  ...  Our contributions are summarized as below: • We propose a transferable and robust adversarial perturbation generation (TRAP) method from the perspective of network hierarchy to boost the transferability  ... 
arXiv:2108.07033v1 fatcat:d4okvsdl45gq5kp5wto3yvfaiq

Frequency Domain Model Augmentation for Adversarial Attack [article]

Yuyang Long, Qilong Zhang, Boheng Zeng, Lianli Gao, Xianglong Liu, Jian Zhang, Jingkuan Song
2022 arXiv   pre-print
Motivated by the observation that the transferability of adversarial examples can be improved by attacking diverse models simultaneously, model augmentation methods which simulate different models by using  ...  To tackle this issue, we propose a novel spectrum simulation attack to craft more transferable adversarial examples against both normally trained and defense models.  ...  We first investigate the transferability of adversarial examples crafted via a single substitute model.  ... 
arXiv:2207.05382v1 fatcat:5vh4wrj5rze45dvq424b7gq4re

Adversarial Style Mining for One-Shot Unsupervised Domain Adaptation [article]

Yawei Luo, Ping Liu, Tao Guan, Junqing Yu, Yi Yang
2020 arXiv   pre-print
The adversarial learning framework lets the style transfer module and the task-specific module benefit each other during the competition.  ...  To this end, we propose a novel Adversarial Style Mining approach, which combines the style transfer module and the task-specific module in an adversarial manner.  ...  Experimental results on both classification and segmentation tasks validate the effectiveness of ASM, which yields state-of-the-art performance compared with other domain adaptation approaches in the one-shot  ...
arXiv:2004.06042v1 fatcat:zonf24qryvexzh2sjmi467tbpq