Intermediate Level Adversarial Attack for Enhanced Transferability

Qian Huang, Zeqi Gu, Isay Katsman, Horace He, Pian Pawakapan, Zhiqiu Lin, Serge Belongie, Ser-Nam Lim
2018-11-20 · arXiv (pre-print)
This leads us to introduce the Intermediate Level Attack (ILA), which attempts to fine-tune an existing adversarial example for greater black-box transferability by increasing its perturbation on a pre-specified  ...  Adversarial examples often exhibit black-box transfer, meaning that adversarial examples for one model can fool another model.  ...  • We propose a novel method, Intermediate Level Attack (ILA), that enhances black-box adversarial transferability by increasing the perturbation on a pre-specified layer of a model. • When attacking a  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1811.08458v1">arXiv:1811.08458v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/ad2t25f2mvd4fc32tpra2l3g5e">fatcat:ad2t25f2mvd4fc32tpra2l3g5e</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200825143753/https://arxiv.org/pdf/1811.08458v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/cc/62/cc62d5dc72e5e588cff29db63c352555c71d2715.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1811.08458v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Yet Another Intermediate-Level Attack

Qizhang Li, Yiwen Guo, Hao Chen
2020-08-20 · arXiv (pre-print)
By establishing a linear mapping of the intermediate-level discrepancies (between a set of adversarial inputs and their benign counterparts) for predicting the evoked adversarial loss, we aim to take full ... In this paper, we propose a novel method to enhance the black-box transferability of baseline adversarial examples. ... Huang, Q., Katsman, I., He, H., Gu, Z., Belongie, S., Lim, S.N.: Enhancing adversarial example transferability with an intermediate level attack. In: ICCV (2019) ...
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2008.08847v1">arXiv:2008.08847v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/24cesir7vbf3bf7icjgoashjam">fatcat:24cesir7vbf3bf7icjgoashjam</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20201019083928/https://arxiv.org/pdf/2008.08847v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/76/9b/769bbcbeb654fa34915b7fb6a5d8b8381d4c3aa6.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2008.08847v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Enhancing Adversarial Example Transferability with an Intermediate Level Attack

Qian Huang, Isay Katsman, Horace He, Zeqi Gu, Serge Belongie, Ser-Nam Lim
2020-02-27 · arXiv (pre-print)
We introduce the Intermediate Level Attack (ILA), which attempts to fine-tune an existing adversarial example for greater black-box transferability by increasing its perturbation on a pre-specified layer  ...  Our code is available at https://github.com/CUVL/Intermediate-Level-Attack.  ...  Bharath Hariharan for helpful discussions. This work is supported in part by a Facebook equipment donation.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1907.10823v3">arXiv:1907.10823v3</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/2skaysbh6bgmrhclalnamq44dy">fatcat:2skaysbh6bgmrhclalnamq44dy</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200322025948/https://arxiv.org/pdf/1907.10823v3.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1907.10823v3" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Query-Free Adversarial Transfer via Undertrained Surrogates

Chris Miller, Soroush Vosoughi
2020-11-28 · arXiv (pre-print)
Our results suggest that finding strong single surrogate models is a highly effective and simple method for generating transferable adversarial attacks, and that this method represents a valuable route ... We introduce a new method for improving the efficacy of adversarial attacks in a black-box setting by undertraining the surrogate model on which the attacks are generated. ... The current state of the art in query-free adversarial transfer is the Intermediate Level Attack (ILA) introduced by Huang et al. [15]. ...
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2007.00806v2">arXiv:2007.00806v2</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/66okirzm7rabva23eesijzza4u">fatcat:66okirzm7rabva23eesijzza4u</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200710103309/https://arxiv.org/pdf/2007.00806v1.pdf" title="fulltext PDF download [not primary version]" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <span style="color: #f43e3e;">&#10033;</span> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2007.00806v2" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

An Intermediate-level Attack Framework on the Basis of Linear Regression

Yiwen Guo, Qizhang Li, Wangmeng Zuo, Hao Chen
2022-03-21 · arXiv (pre-print)
This paper substantially extends our work published at ECCV, in which an intermediate-level attack was proposed to improve the transferability of some baseline adversarial examples. ... We advocate establishing a direct linear mapping from the intermediate-level discrepancies (between adversarial features and benign features) to the classification prediction loss of the adversarial example ... We have carefully analyzed the core components of the framework and shown that, given powerful directional guides, the magnitude of intermediate-level feature discrepancies is correlated with the transferability ...
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2203.10723v1">arXiv:2203.10723v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/pr7ee4hb5rdvvofwthxqnfqg4i">fatcat:pr7ee4hb5rdvvofwthxqnfqg4i</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20220515015425/https://arxiv.org/pdf/2203.10723v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/89/90/8990907b7735d388e59f5331a51a495bfb33b5cf.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2203.10723v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Adversarial Attacks On Multi-Agent Communication

James Tu, Tsunhsuan Wang, Jingkang Wang, Sivabalan Manivasagam, Mengye Ren, Raquel Urtasun
2021-10-12 · arXiv (pre-print)
Thus, we aim to study the robustness of such systems and focus on exploring adversarial attacks in a novel multi-agent setting where communication is done through sharing learned intermediate representations  ...  Our work studies robustness at the neural network level to contribute an additional layer of fault tolerance to modern security protocols for more secure multi-agent systems.  ...  On the other hand, using an intermediate level attack projection (ILAP) [23] yields a small improvement. Overall, we find transfer attacks more challenging when at the feature level.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2101.06560v2">arXiv:2101.06560v2</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/7qvedvjqlffhzd6laqkugocrw4">fatcat:7qvedvjqlffhzd6laqkugocrw4</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20211016153225/https://arxiv.org/pdf/2101.06560v2.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/a5/ef/a5efde7d78867de6435b6a96f411ca8a9c6c3b0e.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2101.06560v2" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Exploring Transferable and Robust Adversarial Perturbation Generation from the Perspective of Network Hierarchy

Ruikui Wang, Yuanfang Guo, Ruijie Yang, Yunhong Wang
2021-08-16 · arXiv (pre-print)
The transferability and robustness of adversarial examples are two practical yet important properties for black-box adversarial attacks.  ...  Therefore, we focus on the intermediate and input stages in this paper and propose a transferable and robust adversarial perturbation generation (TRAP) method.  ...  To search for perturbations with better transferability, Intermediate Level Attack (ILA) [13] maximizes the scalar projection of the adversarial example onto a guided direction on a specific hidden layer  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2108.07033v1">arXiv:2108.07033v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/d4okvsdl45gq5kp5wto3yvfaiq">fatcat:d4okvsdl45gq5kp5wto3yvfaiq</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20210824185944/https://arxiv.org/pdf/2108.07033v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/b1/1e/b11eddf7e6ffa758cab624e44e4fb42ee4d614f5.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2108.07033v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Enhancing Cross-task Transferability of Adversarial Examples with Dispersion Reduction

Yunhan Jia, Yantao Lu, Senem Velipasalar, Zhenyu Zhong, Tao Wei
2019-05-08 · arXiv (pre-print)
With great effort devoted to the transferability of adversarial examples, surprisingly, less attention has been paid to its impact on real-world deep learning deployments. ... Neural networks are known to be vulnerable to carefully crafted adversarial examples, and these malicious samples often transfer, i.e., they maintain their effectiveness even against other models. ... Further, with the observation that low-level features bear more similarities across CV models, we hypothesize that the DR attack would produce transferable adversarial examples when targeted at intermediate ...
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1905.03333v1">arXiv:1905.03333v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/hzg5agpi4fae7alxjbeqpr2eki">fatcat:hzg5agpi4fae7alxjbeqpr2eki</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200825043608/https://arxiv.org/pdf/1905.03333v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/67/e3/67e3d49d3eca5ee7684cbaf3a6b9ca0420c8a634.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1905.03333v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

On Improving Adversarial Transferability of Vision Transformers

Muzammal Naseer, Kanchana Ranasinghe, Salman Khan, Fahad Shahbaz Khan, Fatih Porikli
2022-03-03 · arXiv (pre-print)
In particular, we observe that adversarial patterns found via conventional adversarial attacks show very low black-box transferability even for large ViT models.  ...  Using the compositional nature of ViT models, we enhance transferability of existing attacks by introducing two novel strategies specific to the architecture of ViT models.  ...  ., 2019) also exploit intermediate layers to enhance transferability.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2106.04169v3">arXiv:2106.04169v3</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/y76kpnjwunhy3bfpfv5365abnm">fatcat:y76kpnjwunhy3bfpfv5365abnm</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20220307192238/https://arxiv.org/pdf/2106.04169v3.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/09/18/0918125daacb6c2b3a2d3f155ad095d5ae8fb9b9.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2106.04169v3" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Improving Adversarial Transferability via Neuron Attribution-Based Attacks

Jianping Zhang, Weibin Wu, Jen-tse Huang, Yizhan Huang, Wenxuan Wang, Yuxin Su, Michael R. Lyu
2022-03-31 · arXiv (pre-print)
Due to the transferability of features, feature-level attacks have shown promise in synthesizing more transferable adversarial samples.  ...  To efficiently tackle the black-box setting where the target model's particulars are unknown, feature-level transfer-based attacks propose to contaminate the intermediate feature outputs of local models  ...  Intermediate Level Attack (ILA) [13] fine-tunes existing adversarial examples by increasing the perturbation on a target layer from the source model to further enhance the transferability.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2204.00008v1">arXiv:2204.00008v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/6x7inpacbzd5zkzuyrcqo5haam">fatcat:6x7inpacbzd5zkzuyrcqo5haam</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20220614095243/https://arxiv.org/pdf/2204.00008v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/41/1b/411b07870690a9492aec0331e07ede019f3d6814.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2204.00008v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Feature Importance-aware Transferable Adversarial Attacks

Zhibo Wang, Hengchang Guo, Zhifei Zhang, Wenxin Liu, Zhan Qin, Kui Ren
2022-02-24 · arXiv (pre-print)
Transferability of adversarial examples is of central importance for attacking an unknown model, which facilitates adversarial attacks in more practical scenarios, e.g., black-box attacks. ... Finally, the feature importance guides the search for adversarial examples toward disrupting critical features, achieving stronger transferability. ... National Natural Science Foundation of China (Grants No. 62122066, U20A20182, 61872274, U20A20178, 62032021, and 62072395), National Key R&D Program of China (Grant No. 2020AAA0107705), the Fundamental Research Funds for ...
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2107.14185v3">arXiv:2107.14185v3</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/ztj5fftupbgb3kn5unkordl544">fatcat:ztj5fftupbgb3kn5unkordl544</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20220516093525/https://arxiv.org/pdf/2107.14185v3.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/b1/85/b18580c6746a47603955074931d4687e268781ea.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2107.14185v3" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Towards Transferable Adversarial Attack against Deep Face Recognition

Yaoyao Zhong, Weihong Deng
2020-04-13 · arXiv (pre-print)
In this work, we first investigate the characteristics of transferable adversarial attacks in face recognition by showing the superiority of feature-level methods over label-level methods. ... Then, to further improve the transferability of feature-level adversarial examples, we propose DFANet, a dropout-based method used in convolutional layers, which could increase the diversity of surrogate models ... Experiments for the Baseline Method: To explore adversarial attacks against deep face models and find a baseline method for further study of transferability-enhancement methods, we compare the attack performance ...
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2004.05790v1">arXiv:2004.05790v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/yokl5sgyzfdtpirp72tn76sus4">fatcat:yokl5sgyzfdtpirp72tn76sus4</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200907052450/https://arxiv.org/pdf/2004.05790v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/a5/fd/a5fd689c0890fa15fcdc3d97e805d38d342ed524.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2004.05790v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Beyond ImageNet Attack: Towards Crafting Adversarial Examples for Black-box Domains

Qilong Zhang, Xiaodan Li, Yuefeng Chen, Jingkuan Song, Lianli Gao, Yuan He, Hui Xue
2022-03-14 · arXiv (pre-print)
Specifically, we leverage a generative model to learn the adversarial function for disrupting low-level features of input images. ... Adversarial examples have posed a severe threat to deep neural networks due to their transferable nature. ... Hence, as a baseline for the new black-box domain attack problem, our Beyond ImageNet Attack (BIA) turns to destroying the low-level features of the substitute model at a specific layer L to generate transferable ...
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2201.11528v4">arXiv:2201.11528v4</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/hltin4zbenc65glymqd7fmodce">fatcat:hltin4zbenc65glymqd7fmodce</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20220316021117/https://arxiv.org/pdf/2201.11528v4.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/95/6d/956d22ab5f6e450f56c545e1f817bc5f3c78d439.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2201.11528v4" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

All You Need is RAW: Defending Against Adversarial Attacks with Camera Image Pipelines

Yuxuan Zhang, Bo Dong, Felix Heide
2022-03-18 · arXiv (pre-print)
Existing neural networks for computer vision tasks are vulnerable to adversarial attacks: adding imperceptible perturbations to the input images can fool these methods to make a false prediction on an  ...  In this work, we exploit this RAW data distribution as an empirical prior for adversarial defense.  ...  Based on the access level to target networks, adversarial attacks can be broadly divided into white-box attacks and black-box attacks.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2112.09219v2">arXiv:2112.09219v2</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/vjh5n7lzxrcvfcsl3ouparkirm">fatcat:vjh5n7lzxrcvfcsl3ouparkirm</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20220516054724/https://arxiv.org/pdf/2112.09219v2.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/24/5f/245ffa8cc0f42c132acde2c28295d67dcdc9a207.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2112.09219v2" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

You See What I Want You to See: Exploring Targeted Black-Box Transferability Attack for Hash-based Image Retrieval Systems

Yanru Xiao, Cong Wang
2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
In this paper, we start from an adversarial standpoint to explore and enhance the capacity of targeted black-box transferability attack for deep hashing.  ...  Then we develop a new attack that is simultaneously adversarial and robust to noise to enhance transferability.  ...  Our work taps into this line to enhance black-box transferability for image retrieval systems and will compare with these techniques in Sec. 6. Query-based Attack.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1109/cvpr46437.2021.00197">doi:10.1109/cvpr46437.2021.00197</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/ock3f256ujdu3dpcyry5dtr5h4">fatcat:ock3f256ujdu3dpcyry5dtr5h4</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20211012183005/https://openaccess.thecvf.com/content/CVPR2021/papers/Xiao_You_See_What_I_Want_You_To_See_Exploring_Targeted_CVPR_2021_paper.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/aa/be/aabe743b8b05f3d1a91d2421779d75f010649e72.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1109/cvpr46437.2021.00197"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="external alternate icon"></i> ieee.com </button> </a>