202,924 Hits in 3.6 sec

Delving into the pixels of adversarial samples [article]

Blerta Lindqvist
2021 arXiv   pre-print
Motivated by instances that we find where strong attacks do not transfer, we delve into adversarial examples at pixel level to scrutinize how adversarial attacks affect image pixel values.  ...  Despite extensive research into adversarial attacks, we do not know how adversarial attacks affect image pixels.  ...  Targeted attacks do not seem to be as strong as untargeted attacks.  ...
arXiv:2106.10996v1 fatcat:pi7zbi2genbvpcpsvdzkmbdvga
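The entry above is about inspecting adversarial examples at the pixel level. As a minimal, illustrative sketch (not the paper's code), pixel-value changes between a clean image and its adversarial counterpart can be summarized with NumPy; the arrays below are random stand-ins for real image pairs.

```python
import numpy as np

def pixel_shift_stats(clean: np.ndarray, adv: np.ndarray) -> dict:
    """Summarize how an attack changed raw pixel values (illustrative only)."""
    delta = adv.astype(np.float64) - clean.astype(np.float64)
    return {
        "mean_shift": float(delta.mean()),            # average signed change
        "mean_abs_shift": float(np.abs(delta).mean()),
        "max_abs_shift": float(np.abs(delta).max()),
        "frac_pixels_changed": float((delta != 0).mean()),
    }

# Toy usage with random "images" standing in for a real clean/adversarial pair.
rng = np.random.default_rng(0)
clean = rng.integers(0, 256, size=(32, 32, 3), dtype=np.uint8)
adv = np.clip(clean + rng.integers(-8, 9, size=clean.shape), 0, 255).astype(np.uint8)
print(pixel_shift_stats(clean, adv))
```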

Evaluating Transfer-based Targeted Adversarial Perturbations against Real-World Computer Vision Systems based on Human Judgments [article]

Zhengyu Zhao and Nga Dang and Martha Larson
2022 arXiv   pre-print
Transfer-based adversarial images are generated on one (source) system and used to attack another (target) system.  ...  In this paper, we take the first step to investigate transfer-based targeted adversarial images in a realistic scenario where the target system is trained on some private data with its inventory of semantic  ...  Table 1. Success rates (%) of the three transfer-based attacks (CE, Po+Trip, Logit) on Google Cloud Vision. For evaluation, both the targeted and non-targeted success rates are reported.  ...
arXiv:2206.01467v1 fatcat:vdhcc5pdk5evbi4oqrn4wdyopa
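As referenced by the Table 1 caption above, targeted and non-targeted success rates are computed from the labels the target system returns. A minimal sketch with hypothetical label arrays (not the authors' evaluation code):

```python
import numpy as np

def attack_success_rates(pred_labels, true_labels, target_labels):
    """Targeted vs. non-targeted success rates from predicted labels (illustrative)."""
    pred, true, target = map(np.asarray, (pred_labels, true_labels, target_labels))
    non_targeted = float((pred != true).mean())    # any misclassification counts
    targeted = float((pred == target).mean())      # must hit the attacker's chosen class
    return targeted, non_targeted

# Hypothetical predictions returned by a black-box system on adversarial inputs.
pred   = [3, 3, 7, 1]
true   = [1, 2, 7, 1]
target = [3, 3, 3, 3]
print(attack_success_rates(pred, true, target))    # (0.5, 0.5)
```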

Minimizing Perceived Image Quality Loss Through Adversarial Attack Scoping [article]

Kostiantyn Khabarlak, Larysa Koriashkina
2019 arXiv   pre-print
performing efficient transfer attacks with low target inference network call count and opens the possibility of an attack using pen-only drawings on paper for the MNIST handwritten digit dataset.  ...  In this paper we develop simplified adversarial attack algorithms based on a scoping idea, which enables execution of fast adversarial attacks that minimize structural image quality (SSIM) loss, allows  ...  based on target (transfer) network score.  ...
arXiv:1904.10390v1 fatcat:5tkbm3sjuvdgxfmjicxhwwx2xy
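The "scoping" idea itself is not spelled out in the snippet above; purely as a hypothetical reading, one could restrict the perturbation to a masked region (the "scope") and track the SSIM loss it causes. The image, mask, and noise below are illustrative only, not the authors' algorithm.

```python
import numpy as np
from skimage.metrics import structural_similarity as ssim

def scoped_perturbation_ssim(image: np.ndarray, noise: np.ndarray, mask: np.ndarray) -> float:
    """Apply a perturbation only inside a 'scope' mask and report the resulting SSIM loss.

    Hypothetical reading of the scoping idea, not the paper's method.
    """
    perturbed = np.clip(image + noise * mask, 0.0, 1.0)
    return 1.0 - ssim(image, perturbed, data_range=1.0)

# Toy MNIST-sized example: perturb only a central 10x10 patch.
rng = np.random.default_rng(1)
img = rng.random((28, 28))
mask = np.zeros_like(img)
mask[9:19, 9:19] = 1.0
noise = 0.2 * rng.standard_normal(img.shape)
print("SSIM loss:", scoped_perturbation_ssim(img, noise, mask))
```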

SoK: A Modularized Approach to Study the Security of Automatic Speech Recognition Systems [article]

Yuxuan Chen, Jiangshan Zhang, Xuejing Yuan, Shengzhi Zhang, Kai Chen, Xiaofeng Wang, Shanqing Guo
2021 arXiv   pre-print
Particularly, our experimental study shows that transfer learning across ASR models is feasible, even in the absence of knowledge about models (even their types) and training data.  ...  Although recent studies have brought to light the weaknesses of popular ASR systems that enable out-of-band signal attack, adversarial attack, etc., and further proposed various remedies (signal smoothing  ...  to make the input adversarial towards the target model.  ... 
arXiv:2103.10651v2 fatcat:ryllxp63hvgoxm5d6ef7n7l55a

Towards Accuracy-Fairness Paradox: Adversarial Example-based Data Augmentation for Visual Debiasing [article]

Yi Zhang, Jitao Sang
2020 arXiv   pre-print
Machine learning fairness concerns the biases towards certain protected or sensitive groups of people when addressing the target tasks.  ...  Specifically, to ensure the adversarial generalization as well as cross-task transferability, we propose to couple the operations of target task classifier training, bias task classifier training, and  ...  We derive the bias of target task class t ∈ T in a classifier in terms of group difference as: bias(θ, t) = |P(t̂ = t | b = 0, t* = t) − P(t̂ = t | b = 1, t* = t)|  (1)  ...
arXiv:2007.13632v2 fatcat:3zbxitm6bbbgllkvmk3lfmwfoe
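Equation (1) above measures bias as the gap in per-class accuracy between the two protected groups. A minimal sketch of that group-difference measure (function name and toy labels are mine, not the authors'):

```python
import numpy as np

def group_bias(pred, true, group, t):
    """|P(pred=t | group=0, true=t) - P(pred=t | group=1, true=t)|  -- Eq. (1), illustrative."""
    pred, true, group = map(np.asarray, (pred, true, group))
    rates = []
    for g in (0, 1):
        idx = (true == t) & (group == g)
        rates.append((pred[idx] == t).mean() if idx.any() else 0.0)
    return abs(rates[0] - rates[1])

# Toy usage: class 1 is recognized more often for group 0 than for group 1.
pred  = [1, 1, 1, 0, 1, 0]
true  = [1, 1, 1, 1, 1, 1]
group = [0, 0, 0, 1, 1, 1]
print(group_bias(pred, true, group, t=1))  # 1.0 - 1/3 ≈ 0.667
```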

An Empirical Evaluation of Adversarial Robustness under Transfer Learning [article]

Todor Davchev, Timos Korres, Stathi Fotiadis, Nick Antonopoulos, Subramanian Ramamoorthy
2019 arXiv   pre-print
Furthermore, under successful transfer, it achieves 5.2% more accuracy against white-box PGD attacks than suitable baselines.  ...  In this work, we evaluate adversarial robustness in the context of transfer learning from a source trained on CIFAR 100 to a target network trained on CIFAR 10.  ...  To this end, little has been done towards assessing adversarial robustness under the scope of TL.  ... 
arXiv:1905.02675v4 fatcat:qjp4ngezhjgg5iq5uxka6sxake
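The white-box PGD attack mentioned above is the standard iterated, projected gradient ascent on the classification loss. A generic sketch, using an untrained stand-in network rather than the paper's CIFAR models and with illustrative hyperparameters:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """Standard L-infinity PGD (illustrative sketch, not the paper's exact setup)."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()  # random start
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()                  # ascend the loss
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)  # project to the eps-ball
    return x_adv.detach()

# Toy CIFAR-10-shaped usage with an untrained stand-in classifier.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
x = torch.rand(4, 3, 32, 32)
y = torch.randint(0, 10, (4,))
x_adv = pgd_attack(model, x, y)
print((x_adv - x).abs().max())  # stays within eps
```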

Transferable Perturbations of Deep Feature Distributions [article]

Nathan Inkawhich, Kevin J Liang, Lawrence Carin, Yiran Chen
2020 arXiv   pre-print
We achieve state-of-the-art targeted blackbox transfer-based attack results for undefended ImageNet models.  ...  We also conceptualize a transition from task/data-specific to model-specific features within a CNN architecture that directly impacts the transferability of adversarial examples.  ...  toward the target region.  ... 
arXiv:2004.12519v1 fatcat:aaz7x6jbe5duznlope46ub4ibq
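The paper above perturbs deep feature distributions; as a simplified stand-in (not the authors' distribution-based method), the sketch below pulls an image's intermediate-layer features toward those of a target-class image under an L-infinity budget. The feature extractor is a toy module, not a trained CNN.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in; `features` plays the role of an intermediate CNN layer.
features = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                         nn.AdaptiveAvgPool2d(4), nn.Flatten())

def feature_space_attack(x, x_target, eps=16 / 255, alpha=2 / 255, steps=20):
    """Pull the source image's intermediate features toward a target-class image's features.

    A simplified feature-space attack, not the paper's feature-distribution method.
    """
    with torch.no_grad():
        target_feat = features(x_target)
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = (features(x_adv) - target_feat).pow(2).mean()      # feature distance
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() - alpha * grad.sign()              # descend: move toward target features
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()

x, x_target = torch.rand(1, 3, 32, 32), torch.rand(1, 3, 32, 32)
adv = feature_space_attack(x, x_target)
```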

Guessing Smart: Biased Sampling for Efficient Black-Box Adversarial Attacks [article]

Thomas Brunner, Frederik Diehl, Michael Truong Le, Alois Knoll
2019 arXiv   pre-print
Finally, the methods we propose have also been found to work very well against strong defenses: Our targeted attack won second place in the NeurIPS 2018 Adversarial Vision Challenge.  ...  Here, an attacker cannot access confidence scores, but only the final label. Most attacks for this scenario are either unreliable or inefficient.  ...  The Boundary Attack is an interpolation from an image of the target class towards the image under attack.  ... 
arXiv:1812.09803v3 fatcat:donpd4hxljgmdj3w4zn72mgrh4

Guessing Smart: Biased Sampling for Efficient Black-Box Adversarial Attacks

Thomas Brunner, Frederik Diehl, Michael Truong Le, Alois Knoll
2019 2019 IEEE/CVF International Conference on Computer Vision (ICCV)  
Finally, the methods we propose have also been found to work very well against strong defenses: Our targeted attack won second place in the NeurIPS 2018 Adversarial Vision Challenge.  ...  Here, an attacker cannot access confidence scores, but only the final label. Most attacks for this scenario are either unreliable or inefficient.  ...  The Boundary Attack is an interpolation from an image of the target class towards the image under attack.  ... 
doi:10.1109/iccv.2019.00506 dblp:conf/iccv/BrunnerDTK19 fatcat:m3gt5atwd5airkuozxgzkgf5x4
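The two entries above describe the Boundary Attack as an interpolation from a target-class image towards the image under attack, using label-only feedback. A stripped-down sketch of that interpolation step (the real attack also adds orthogonal random perturbations; the oracle below is a made-up toy classifier):

```python
import numpy as np

def boundary_interpolation(original, start_adv, predict, target_class, steps=200, step_size=0.05):
    """Move from a target-class image toward the image under attack, keeping only
    steps whose (label-only) prediction still equals the target class. Simplified sketch."""
    adv = start_adv.astype(np.float64).copy()
    for _ in range(steps):
        candidate = adv + step_size * (original - adv)   # interpolate toward the original image
        if predict(candidate) == target_class:           # decision-based feedback only
            adv = candidate
    return adv

# Toy usage with a made-up label-only black-box oracle.
predict = lambda img: int(img.mean() > 0.5)              # hypothetical classifier
original = np.full((8, 8), 0.2)                          # classified as 0 (image under attack)
start_adv = np.full((8, 8), 0.9)                         # classified as 1 (target class)
adv = boundary_interpolation(original, start_adv, predict, target_class=1)
print(np.abs(adv - original).max())                      # closer to the original, still class 1
```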

Skill Acquisition Via Transfer Learning and Advice Taking [chapter]

Lisa Torrey, Jude Shavlik, Trevor Walker, Richard Maclin
2006 Lecture Notes in Computer Science  
We describe a reinforcement learning system that transfers skills from a previously learned source task to a related target task.  ...  The system uses inductive logic programming to analyze experience in the source task, and transfers rules for when to take actions.  ...  For example, the passing skill in the source task above is incomplete for the target task, where passing needs to cause progress toward the goal.  ... 
doi:10.1007/11871842_41 fatcat:pf3hi2hs6jhb5ghag2epn756xq

On Success and Simplicity: A Second Look at Transferable Targeted Attacks [article]

Zhengyu Zhao, Zhuoran Liu, Martha Larson
2021 arXiv   pre-print
Achieving transferability of targeted attacks is reputed to be remarkably difficult.  ...  In our investigation, we find, however, that simple transferable attacks which require neither additional data nor model training can achieve surprisingly high targeted transferability.  ...  In this way, the gradients towards the target class become more generic and so avoid overfitting to the white-box source model.  ... 
arXiv:2012.11207v4 fatcat:z5uwaadyyrdxdefi5awskamjia
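One of the simple transferable attacks discussed in this entry maximizes the target-class logit on the white-box source model over many iterations. A hedged sketch in that spirit (untrained stand-in models and illustrative hyperparameters, not necessarily the paper's exact loss or schedule):

```python
import torch
import torch.nn as nn

def targeted_logit_attack(source_model, x, target_class, eps=16 / 255, alpha=2 / 255, steps=300):
    """Craft a targeted example on a white-box source model by pushing up the target logit,
    then transfer the result to an unseen target model."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        logits = source_model(x_adv)
        loss = logits[:, target_class].sum()             # simple logit-based objective
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()

# Toy usage with untrained stand-ins for the source and (black-box) target models.
source = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
target = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
x = torch.rand(2, 3, 32, 32)
x_adv = targeted_logit_attack(source, x, target_class=3, steps=20)
print(target(x_adv).argmax(dim=1))                       # ideally equals 3 on a real setup
```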

Disrupting Deepfakes: Adversarial Attacks Against Conditional Image Translation Networks and Facial Manipulation Systems [article]

Nataniel Ruiz, Sarah Adel Bargal, Stan Sclaroff
2020 arXiv   pre-print
We are the first to propose and successfully apply (1) class transferable adversarial attacks that generalize to different classes, which means that the attacker does not need to have knowledge about the conditioning  ...  Some systems can also modify targeted attributes such as hair color or age. This type of manipulated image and video has been coined Deepfakes.  ...  We can thus disrupt an image towards a target or away from a target. We can generate a targeted disruption by adapting well-established adversarial attacks: FGSM, I-FGSM, and PGD.  ...
arXiv:2003.01279v3 fatcat:wqx3k2mszfdf3gfwngthhreswm
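A targeted disruption of an image-translation model, as described above, can be obtained by adapting I-FGSM: minimize the distance between the model's output and a chosen target image while keeping the input perturbation bounded. The generator below is a toy stand-in, not an actual facial-manipulation network, and the hyperparameters are illustrative.

```python
import torch
import torch.nn as nn

def targeted_disruption(generator, x, target_output, eps=0.05, alpha=0.01, steps=10):
    """I-FGSM-style disruption pushing a (hypothetical) image-translation model's output
    toward a chosen target image; flip the sign of the step to push away from it instead."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = (generator(x_adv) - target_output).pow(2).mean()
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() - alpha * grad.sign()      # minimize distance to the target output
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()

# Toy stand-in for a conditional image-translation network.
generator = nn.Sequential(nn.Conv2d(3, 3, 3, padding=1), nn.Sigmoid())
x = torch.rand(1, 3, 64, 64)
target_output = torch.zeros_like(x)                       # e.g. drive the edit toward a blank image
x_disrupted = targeted_disruption(generator, x, target_output)
```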

Impact of Attention on Adversarial Robustness of Image Classification Models [article]

Prachi Agrawal, Narinder Singh Punn, Sanjay Kumar Sonbhadra, Sonali Agarwal
2021 arXiv   pre-print
against these attacks.  ...  In contrast to datasets with fewer classes, attention-based models are observed to show better robustness towards classification.  ...  [Table fragment: accuracy of ResNet-50 and CBAM+ResNet-50 source/target models on CIFAR-10 under FGSM perturbation]  ...
arXiv:2109.00936v1 fatcat:6g27hqf2zfdgdctjsqqd6ju6d4

Towards Transferable Adversarial Attack against Deep Face Recognition [article]

Yaoyao Zhong, Weihong Deng
2020 arXiv   pre-print
the target system.  ...  of existing attack methods.  ...  In Section III, we first introduce the applicable strategies of transferable adversarial attacks towards deep face recognition.  ... 
arXiv:2004.05790v1 fatcat:yokl5sgyzfdtpirp72tn76sus4

Beyond ImageNet Attack: Towards Crafting Adversarial Examples for Black-box Domains [article]

Qilong Zhang, Xiaodan Li, Yuefeng Chen, Jingkuan Song, Lianli Gao, Yuan He, Hui Xue
2022 arXiv   pre-print
In this paper, with only the knowledge of the ImageNet domain, we propose a Beyond ImageNet Attack (BIA) to investigate the transferability towards black-box domains (unknown classification tasks).  ...  Currently, various works have devoted great effort to enhancing cross-model transferability, mostly assuming that the substitute model is trained in the same domain as the target model.  ...  From the result, we can observe that ensemble-based training can also significantly improve the transferability of adversarial examples towards black-box domains.  ...
arXiv:2201.11528v4 fatcat:hltin4zbenc65glymqd7fmodce
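Ensemble-based training of adversarial examples, credited above with improving transferability towards black-box domains, typically averages the loss over several white-box source models before each attack step. A one-step FGSM sketch with toy stand-in models (illustrative, not the BIA method itself):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def ensemble_fgsm(models, x, y, eps=8 / 255):
    """Average the classification loss over an ensemble of white-box source models
    before taking a single FGSM step -- a common transferability heuristic (illustrative)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = sum(F.cross_entropy(m(x_adv), y) for m in models) / len(models)
    grad = torch.autograd.grad(loss, x_adv)[0]
    return (x_adv.detach() + eps * grad.sign()).clamp(0, 1)

# Toy ensemble of untrained stand-in classifiers.
models = [nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10)) for _ in range(3)]
x = torch.rand(2, 3, 32, 32)
y = torch.randint(0, 10, (2,))
x_adv = ensemble_fgsm(models, x, y)
```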
Showing results 1 — 15 out of 202,924 results