28,258 Hits in 2.2 sec

Benchmarking Adversarial Robustness [article]

Yinpeng Dong, Qi-An Fu, Xiao Yang, Tianyu Pang, Hang Su, Zihao Xiao, Jun Zhu
2019 arXiv   pre-print
In this paper, we establish a comprehensive, rigorous, and coherent benchmark to evaluate adversarial robustness on image classification tasks.  ...  Deep neural networks are vulnerable to adversarial examples, which has become one of the most important research problems in the development of deep learning.  ...  In this paper, we established a comprehensive, rigorous, and coherent benchmark to evaluate adversarial robustness of image classifiers.  ... 
arXiv:1912.11852v1 fatcat:aamzg5ajlnb27brph52rmd4era

CARBEN: Composite Adversarial Robustness Benchmark [article]

Lei Hsiung, Yun-Yun Tsai, Pin-Yu Chen, Tsung-Yi Ho
2022 arXiv   pre-print
A leaderboard to benchmark adversarial robustness against CAA is also introduced.  ...  One such approach, composite adversarial attack (CAA), not only expands the perturbable space of the image, but also may be overlooked by current modes of robustness evaluation.  ...  demo, CARBEN (composite adversarial robustness benchmark).  ... 
arXiv:2207.07797v1 fatcat:tcbbkumj6zeeli6hwkyqdq5pda

RobustBench: a standardized adversarial robustness benchmark [article]

Francesco Croce, Maksym Andriushchenko, Vikash Sehwag, Edoardo Debenedetti, Nicolas Flammarion, Mung Chiang, Prateek Mittal, Matthias Hein
2021 arXiv   pre-print
Our goal is to establish a standardized benchmark of adversarial robustness, which as accurately as possible reflects the robustness of the considered models within a reasonable computational budget.  ...  A key challenge in benchmarking robustness is that its evaluation is often error-prone leading to robustness overestimation.  ...  We also thank Chong Xiang for the helpful feedback on the benchmark, Eric Wong for the advice regarding the name of the benchmark, and Evan Shelhamer for the helpful discussion on test-time defenses  ... 
arXiv:2010.09670v3 fatcat:bfekmer6xjet5a6lljep5fg3aq
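
For readers who want to try the leaderboard models directly, below is a minimal sketch of pulling a RobustBench Model Zoo entry and checking its clean accuracy, assuming the robustbench package; the zoo identifier 'Standard' and the 100-example slice are illustrative choices, not a prescribed evaluation.

```python
# Minimal sketch: load a model from the RobustBench Model Zoo and measure
# clean accuracy on a small CIFAR-10 slice. 'Standard' is one published zoo
# identifier; treat it as illustrative.
import torch
from robustbench.utils import load_model
from robustbench.data import load_cifar10

x_test, y_test = load_cifar10(n_examples=100)  # small evaluation slice
model = load_model(model_name='Standard',       # zoo identifier (assumed choice)
                   dataset='cifar10', threat_model='Linf')
model.eval()

with torch.no_grad():
    clean_acc = (model(x_test).argmax(1) == y_test).float().mean().item()
print(f'clean accuracy: {clean_acc:.1%}')
```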

Graph Robustness Benchmark: Benchmarking the Adversarial Robustness of Graph Machine Learning [article]

Qinkai Zheng, Xu Zou, Yuxiao Dong, Yukuo Cen, Da Yin, Jiarong Xu, Yang Yang, Jie Tang
2021 arXiv   pre-print
To bridge this gap, we present the Graph Robustness Benchmark (GRB) with the goal of providing a scalable, unified, modular, and reproducible evaluation for the adversarial robustness of GML models.  ...  Adversarial attacks on graphs have posed a major threat to the robustness of graph machine learning (GML) models. Naturally, there is an ever-escalating arms race between attackers and defenders.  ...  In this paper, we propose the Graph Robustness Benchmark (GRB)-the first attempt to benchmark the adversarial robustness of GML models.  ... 
arXiv:2111.04314v1 fatcat:2vvox2265fgdxl4rpcbzncirum

Benchmarking Adversarial Robustness on Image Classification

Yinpeng Dong, Qi-An Fu, Xiao Yang, Tianyu Pang, Hang Su, Zihao Xiao, Jun Zhu
2020 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)  
In this paper, we establish a comprehensive, rigorous, and coherent benchmark to evaluate adversarial robustness on image classification tasks.  ...  configurations, thus it is encouraged to adopt the robustness curves to evaluate adversarial robustness; 2) As one of the most effective defense techniques, adversarial training can generalize across  ...  Table 1: We show the defense models that are incorporated into our benchmark for adversarial robustness evaluation.  ... 
doi:10.1109/cvpr42600.2020.00040 dblp:conf/cvpr/DongFYPSXZ20 fatcat:bglxvcjgy5hlfmecqgylz2tabi
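
The entry's recommendation of robustness curves (robust accuracy swept over a range of perturbation budgets rather than reported at a single point) can be illustrated with a short sketch; FGSM stands in here for the paper's full attack suite, and the toy linear model and random data exist only so the example runs.

```python
# Sketch of a robustness curve: robust accuracy as a function of the Linf
# perturbation budget, instead of a single-point evaluation.
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps):
    """One-step Linf attack: x + eps * sign(grad_x loss), clipped to [0, 1]."""
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

def robustness_curve(model, x, y, budgets):
    """Robust accuracy at each budget; plotting acc vs. eps gives the curve."""
    accs = []
    for eps in budgets:
        x_adv = fgsm(model, x, y, eps)
        accs.append((model(x_adv).argmax(1) == y).float().mean().item())
    return accs

# Toy demo on random data with a linear "classifier" (illustrative only).
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
x, y = torch.rand(64, 3, 32, 32), torch.randint(0, 10, (64,))
print(robustness_curve(model, x, y, budgets=[0, 2 / 255, 4 / 255, 8 / 255]))
```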

RobFR: Benchmarking Adversarial Robustness on Face Recognition [article]

Xiao Yang, Dingcheng Yang, Yinpeng Dong, Hang Su, Wenjian Yu, Jun Zhu
2021 arXiv   pre-print
Face recognition (FR) has recently made substantial progress and achieved high accuracy on standard benchmarks.  ...  To facilitate a better understanding of the adversarial vulnerability on FR, we develop an adversarial robustness evaluation library on FR named RobFR, which serves as a reference for evaluating the robustness  ...  Benchmarking adversarial robustness on image classification.  ... 
arXiv:2007.04118v2 fatcat:mgr2g2fmjreqjnjwwivs2kdybi

x-Vectors Meet Adversarial Attacks: Benchmarking Adversarial Robustness in Speaker Verification

Jesús Villalba, Yuekai Zhang, Najim Dehak
2020 Interspeech 2020  
We also discuss the methodology and metrics to benchmark adversarial attacks and defenses in ASV.  ...  In this work, we investigate the vulnerability of state-of-the-art ASV systems to adversarial attacks.  ...  Benchmarking adversarial robustness Here, we discuss how to evaluate ASV robustness to adversarial attacks. Existing works compare accuracy [24] or EER [25] w.r.t. the FGSM parameter.  ... 
doi:10.21437/interspeech.2020-2458 dblp:conf/interspeech/VillalbaZD20 fatcat:vnyyhbxhhzcmdaqy2spoin3i5q
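
Since this entry compares ASV systems by EER under attack, a small sketch of how the equal error rate is computed from target and non-target trial scores may help; the Gaussian toy scores are illustrative, not real ASV outputs.

```python
# Sketch: equal error rate (EER), the standard ASV metric compared under
# attack above. It is the operating point where the false acceptance rate
# (FAR) of non-target trials crosses the false rejection rate (FRR) of
# target trials.
import numpy as np

def eer(target_scores, nontarget_scores):
    thresholds = np.sort(np.concatenate([target_scores, nontarget_scores]))
    far = np.array([(nontarget_scores >= t).mean() for t in thresholds])
    frr = np.array([(target_scores < t).mean() for t in thresholds])
    i = np.argmin(np.abs(far - frr))  # crossing point of the two curves
    return (far[i] + frr[i]) / 2

rng = np.random.default_rng(0)
# Toy score distributions with a 2-sigma gap: EER comes out near 16%.
print(eer(rng.normal(2, 1, 1000), rng.normal(0, 1, 1000)))
```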

Adversarial VQA: A New Benchmark for Evaluating the Robustness of VQA Models [article]

Linjie Li, Jie Lei, Zhe Gan, Jingjing Liu
2021 arXiv   pre-print
We hope our Adversarial VQA dataset can shed new light on robustness study in the community and serve as a valuable benchmark for future work.  ...  (iii) When used for data augmentation, our dataset can effectively boost model performance on other robust VQA benchmarks.  ...  Comparison with Other Datasets Our Adversarial VQA dataset sets a new benchmark for evaluating the robustness of VQA models. It improves upon existing robust VQA benchmarks in several ways.  ... 
arXiv:2106.00245v2 fatcat:tpel4uqjr5dv3bwwrgmdhmaaqm

Is RobustBench/AutoAttack a suitable Benchmark for Adversarial Robustness? [article]

Peter Lorenz, Dominik Strassel, Margret Keuper, Janis Keuper
2021 arXiv   pre-print
Recently, RobustBench (Croce et al. 2020) has become a widely recognized benchmark for the adversarial robustness of image classification networks.  ...  In its most commonly reported sub-task, RobustBench evaluates and ranks the adversarial robustness of trained neural networks on CIFAR10 under AutoAttack (Croce and Hein 2020b) with l-inf perturbations  ...  In 2020, (Croce et al. 2020) launched a benchmark website with the goal of providing a standardized benchmark for adversarial robustness on image classification models.  ... 
arXiv:2112.01601v1 fatcat:grhsritagnf5binmqqjth55keq
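
A hedged sketch of the AutoAttack evaluation that the RobustBench Linf track reports, assuming the autoattack package; the dummy linear model and random tensors stand in for a real classifier and the CIFAR10 test set so the example runs end to end.

```python
import torch
from autoattack import AutoAttack

# Dummy stand-ins; substitute a trained classifier and the real test set
# for an actual evaluation.
model = torch.nn.Sequential(torch.nn.Flatten(),
                            torch.nn.Linear(3 * 32 * 32, 10)).eval()
x = torch.rand(32, 3, 32, 32)          # images scaled to [0, 1]
y = torch.randint(0, 10, (32,))

# The 'standard' version runs the fixed ensemble of four attacks.
adversary = AutoAttack(model, norm='Linf', eps=8 / 255, version='standard')
x_adv = adversary.run_standard_evaluation(x, y, bs=32)
robust_acc = (model(x_adv).argmax(1) == y).float().mean().item()
print(f'robust accuracy under AutoAttack: {robust_acc:.1%}')
```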

Adversarial GLUE: A Multi-Task Benchmark for Robustness Evaluation of Language Models [article]

Boxin Wang, Chejian Xu, Shuohang Wang, Zhe Gan, Yu Cheng, Jianfeng Gao, Ahmed Hassan Awadallah, Bo Li
2022 arXiv   pre-print
While several individual datasets have been proposed to evaluate model robustness, a principled and comprehensive benchmark is still missing.  ...  However, recent studies reveal that the robustness of these models can be challenged by carefully crafted textual adversarial examples.  ...  In this paper, we introduce Adversarial GLUE (AdvGLUE), a multi-task benchmark for robustness evaluation of language models.  ... 
arXiv:2111.02840v2 fatcat:3wzxt4cdrjettcmsajvp52skoe

Towards Robust Rain Removal Against Adversarial Attacks: A Comprehensive Benchmark Analysis and Beyond [article]

Yi Yu, Wenhan Yang, Yap-Peng Tan, Alex C. Kot
2022 arXiv   pre-print
A systematic evaluation of key modules in existing methods is performed in terms of their robustness against adversarial attacks.  ...  This paper makes the first attempt to conduct a comprehensive study on the robustness of deep learning-based rain removal methods against adversarial attacks.  ...  Benchmarking Adversarial Robustness of Deraining Models Attack Framework Adversarial attacks aim to deteriorate the output of the deraining methods by adding a small amount of visually imperceptible  ... 
arXiv:2203.16931v1 fatcat:pn5o6bldlfc3pbplnay2q6avfm

ImageNet-Patch: A Dataset for Benchmarking Machine Learning Robustness against Adversarial Patches [article]

Maura Pintor, Daniele Angioni, Angelo Sotgiu, Luca Demetrio, Ambra Demontis, Battista Biggio, Fabio Roli
2022 arXiv   pre-print
To overcome these issues, we propose ImageNet-Patch, a dataset to benchmark machine-learning models against adversarial patches.  ...  We conclude by discussing how our dataset could be used as a benchmark for robustness, and how our methodology can be generalized to other domains.  ...  Benchmark Datasets for Robustness Evaluations Previous work proposed datasets for benchmarking adversarial robustness. The APRICOT dataset, proposed by Braunegg et al.  ... 
arXiv:2203.04412v1 fatcat:soouclmvbvd6xlzevzxpnbf4mi
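
In the spirit of ImageNet-Patch, here is a sketch of how a pre-optimized patch might be pasted at random locations onto clean images to build a patch-robustness test set; the random patch tensor and the apply_patch helper are hypothetical stand-ins, since the actual dataset ships optimized patches and its own application pipeline.

```python
# Sketch: paste a fixed adversarial patch (C, ph, pw) at a random location
# in each image of a batch. The patch here is random noise; the real
# benchmark uses pre-optimized patches.
import torch

def apply_patch(images, patch, rng=torch.Generator().manual_seed(0)):
    out = images.clone()
    _, _, h, w = images.shape
    _, ph, pw = patch.shape
    for img in out:
        top = torch.randint(0, h - ph + 1, (1,), generator=rng).item()
        left = torch.randint(0, w - pw + 1, (1,), generator=rng).item()
        img[:, top:top + ph, left:left + pw] = patch  # overwrite pixels
    return out

images = torch.rand(8, 3, 224, 224)
patch = torch.rand(3, 50, 50)  # stand-in for an optimized patch
patched = apply_patch(images, patch)
```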

Detection of Word Adversarial Examples in Text Classification: Benchmark and Baseline via Robust Density Estimation [article]

KiYoon Yoo, Jangho Kim, Jiho Jang, Nojun Kwak
2022 arXiv   pre-print
a robust defense system.  ...  As a countermeasure, adversarial defense has been explored, but relatively few efforts have been made to detect adversarial examples.  ...  Conclusion We propose a general method and benchmark for adversarial example detection in NLP.  ... 
arXiv:2203.01677v1 fatcat:lsucjmrslrchjayngrfahkydye

Certified Adversarial Defenses Meet Out-of-Distribution Corruptions: Benchmarking Robustness and Simple Baselines [article]

Jiachen Sun, Akshay Mehra, Bhavya Kailkhura, Pin-Yu Chen, Dan Hendrycks, Jihun Hamm, Z. Morley Mao
2021 arXiv   pre-print
In this work, we critically examine how the adversarial robustness guarantees from randomized smoothing-based certification methods change when state-of-the-art certifiably robust models encounter out-of-distribution  ...  We find that FourierMix augmentations help eliminate the spectral bias of certifiably robust models enabling them to achieve significantly better robustness guarantees on a range of OOD benchmarks.  ...  We also presented a benchmarking suite to gain a comprehensive understanding of the model's OOD robustness.  ... 
arXiv:2112.00659v1 fatcat:rtdt6q6pojeb3jhh3zduhgq4by
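
Because the certificates discussed here come from randomized smoothing, a minimal sketch of the smoothed prediction rule (classify many Gaussian-noised copies of the input and take the majority vote) may clarify why Gaussian-like OOD corruptions interact directly with these models; smoothed_predict and the dummy model are illustrative, not the paper's code.

```python
# Sketch of randomized-smoothing prediction: the smoothed classifier g(x)
# returns the class most often predicted under Gaussian input noise.
import torch

def smoothed_predict(model, x, sigma=0.25, n=100):
    noisy = x.unsqueeze(0) + sigma * torch.randn(n, *x.shape)  # n noisy copies
    votes = model(noisy).argmax(1)                             # per-copy labels
    return votes.bincount().argmax().item()                    # majority vote

model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
x = torch.rand(3, 32, 32)
print(smoothed_predict(model, x))
```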

Foolbox Native: Fast adversarial attacks to benchmark the robustness of machine learning models in PyTorch, TensorFlow, and JAX

Jonas Rauber, Roland Zimmermann, Matthias Bethge, Wieland Brendel
2020 Journal of Open Source Software  
Foolbox is a popular Python library to benchmark the robustness of machine learning models against these adversarial perturbations.  ...  Foolbox Native is the first adversarial robustness toolbox that is both fast and framework-agnostic.  ... 
doi:10.21105/joss.02607 fatcat:tpry6yjpife3jph6xpwclu7iwq
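
A short sketch of the Foolbox Native PyTorch workflow described above, based on the foolbox 3 API; the dummy network, random data, and the choice of LinfPGD with two epsilons are illustrative.

```python
import torch
import foolbox as fb

# Dummy model and data so the sketch runs; substitute a trained network.
net = torch.nn.Sequential(torch.nn.Flatten(),
                          torch.nn.Linear(3 * 32 * 32, 10)).eval()
fmodel = fb.PyTorchModel(net, bounds=(0, 1))  # wrap for framework-agnostic use

images = torch.rand(16, 3, 32, 32)
labels = torch.randint(0, 10, (16,))

attack = fb.attacks.LinfPGD()
# `success` has shape (num_epsilons, batch): True where the attack fooled the model.
raw, clipped, success = attack(fmodel, images, labels, epsilons=[2 / 255, 8 / 255])
print('attack success rate per epsilon:', success.float().mean(dim=-1))
```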
Showing results 1 — 15 out of 28,258 results