
Black-box Certification and Learning under Adversarial Perturbations [article]

Hassan Ashtiani, Vinayak Pathak, Ruth Urner
2022 arXiv   pre-print
We further introduce a new setting of black-box certification under a limited query budget, and analyze it for various classes of predictors and perturbations.  ...  We formally study the problem of classification under adversarial perturbations, both from a learner's perspective and from that of a third party who aims to certify the robustness of a given black-box classifier.  ...  Ruth Urner and Hassan Ashtiani were supported by NSERC Discovery Grants.  ... 
arXiv:2006.16520v2 fatcat:zbbxo3rimzfanoaqzok75ovpbe
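Exact certificates are tractable for simple hypothesis classes. As a hypothetical illustration (not the paper's construction), the ℓ2 robustness radius of a linear classifier is just the distance from the input to the decision hyperplane:

```python
import numpy as np

def linear_certified_radius(w, b, x):
    """Exact l2 robustness radius of the linear classifier sign(w.x + b):
    the distance from x to the decision hyperplane w.x + b = 0."""
    return abs(np.dot(w, x) + b) / np.linalg.norm(w)

# |3*1 + 4*1 - 1| / ||(3, 4)|| = 6 / 5 = 1.2
r = linear_certified_radius(np.array([3.0, 4.0]), -1.0, np.array([1.0, 1.0]))
```

For richer black-box classes no such closed form exists, which is what motivates query-based certification.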

Adversarial Attacks and Defenses in Deep Learning

Kui Ren, Tianhang Zheng, Zhan Qin, Xue Liu
2020 Engineering  
Hence, adversarial attack and defense techniques have attracted increasing attention from both the machine learning and security communities and have become a hot research topic in recent years.  ...  With the rapid development of artificial intelligence (AI) and deep learning (DL) techniques, it is critical to ensure the security and robustness of the deployed algorithms.  ...  However, under black-box settings, the model structure and weights are unknown to the adversary.  ... 
doi:10.1016/j.eng.2019.12.012 fatcat:zig3ascmqjfgboauj2276wuvcy

Denoised Smoothing: A Provable Defense for Pretrained Classifiers [article]

Hadi Salman, Mingjie Sun, Greg Yang, Ashish Kapoor, J. Zico Kolter
2020 arXiv   pre-print
Our approach applies to both the white-box and the black-box settings of the pretrained classifier.  ...  By prepending a custom-trained denoiser to any off-the-shelf image classifier and using randomized smoothing, we effectively create a new classifier that is guaranteed to be ℓ_p-robust to adversarial examples.  ...  For instance, we are able to boost the certified accuracy of an ImageNet-pretrained ResNet-50 from 4% to 31% in the black-box access setting and to 33% in the white-box access setting, under adversarial  ... 
arXiv:2003.01908v2 fatcat:i2h3ujuhjjab5azoruo2jibdce
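The denoise-then-classify pipeline can be sketched as follows; the identity "denoiser" and nearest-centroid "classifier" are toy stand-ins, not the paper's trained models:

```python
import numpy as np

def smoothed_predict(x, denoiser, classifier, n_classes,
                     sigma=0.25, n_samples=1000, seed=0):
    """Majority vote of classifier(denoiser(x + noise)) over Gaussian draws."""
    rng = np.random.default_rng(seed)
    counts = np.zeros(n_classes, dtype=int)
    for _ in range(n_samples):
        noisy = x + sigma * rng.standard_normal(x.shape)
        counts[classifier(denoiser(noisy))] += 1
    return int(np.argmax(counts)), counts

# Toy stand-ins: identity "denoiser" and a nearest-centroid "classifier".
centroids = np.array([[0.0, 0.0], [1.0, 1.0]])
denoiser = lambda z: z
classifier = lambda z: int(np.argmin(np.linalg.norm(centroids - z, axis=1)))

label, counts = smoothed_predict(np.array([0.9, 0.9]), denoiser, classifier,
                                 n_classes=2)
```

The key design point is that only the denoiser needs training; the downstream classifier stays frozen, which is why the defense also works with black-box access to it.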

Opportunities and Challenges in Deep Learning Adversarial Robustness: A Survey [article]

Samuel Henrique Silva, Peyman Najafirad
2020 arXiv   pre-print
We survey the most recent and important results in adversarial example generation and in defense mechanisms that use adversarial (re)training as their main defense against perturbations.  ...  This paper studies strategies for implementing adversarially robust training of algorithms toward guaranteeing safety in machine learning.  ...  Black-Box Attacks: under the black-box restriction, models differ from the white-box case with respect to the information the attacker has access to.  ... 
arXiv:2007.00753v2 fatcat:6xjcd5kinzeevleev26jpj4mym

Adversarial Attacks and Defenses on Graphs: A Review, A Tool and Empirical Studies [article]

Wei Jin, Yaxin Li, Han Xu, Yiqi Wang, Shuiwang Ji, Charu Aggarwal, Jiliang Tang
2020 arXiv   pre-print
However, recent studies have shown that DNNs can be easily fooled by small perturbations on the input, called adversarial attacks.  ...  Thus, it is necessary and timely to provide a comprehensive overview of existing graph adversarial attacks and their countermeasures.  ...  Acknowledgments This research is supported by the National Science Foundation (NSF) under grant numbers CNS1815636, IIS1928278, IIS1714741, IIS1845081, IIS1907704, and IIS1955285.  ... 
arXiv:2003.00653v3 fatcat:q26p26cvezfelgjtksmi3fxrtm

Adversarial Logit Pairing [article]

Harini Kannan, Alexey Kurakin, Ian Goodfellow
2018 arXiv   pre-print
With this new accuracy drop, adversarial logit pairing ties with Tramer et al. (2018) for the state of the art in black-box attacks on ImageNet.  ...  Adversarial logit pairing also successfully damages the current state-of-the-art defense against black-box attacks on ImageNet (Tramer et al., 2018), dropping its accuracy from 66.6% to 47.1%.  ...  for white-box and black-box attacks on ImageNet.  ... 
arXiv:1803.06373v1 fatcat:7k6sv6623fbnfmyvl33ajhpjqm
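The logit-pairing objective, cross-entropy on adversarial examples plus a penalty tying clean and adversarial logits together, might be sketched as follows; the weight lam and the toy logit values are illustrative assumptions:

```python
import numpy as np

def cross_entropy(logits, label):
    # Numerically stable log-softmax cross-entropy for one example.
    z = logits - logits.max()
    return -(z[label] - np.log(np.exp(z).sum()))

def alp_loss(clean_logits, adv_logits, label, lam=0.5):
    """Cross-entropy on the adversarial logits plus a pairing penalty
    that pulls clean and adversarial logits together."""
    pairing = np.mean((clean_logits - adv_logits) ** 2)
    return cross_entropy(adv_logits, label) + lam * pairing

clean = np.array([2.0, 0.1, -1.0])
adv = np.array([1.2, 0.9, -0.5])
loss = alp_loss(clean, adv, label=0)
```

When clean and adversarial logits coincide, the pairing term vanishes and the objective reduces to ordinary adversarial training.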

Adversarial Framework with Certified Robustness for Time-Series Domain via Statistical Features

Taha Belkhouja, Janardhan Rao Doppa
2022 The Journal of Artificial Intelligence Research  
Time-series data arises in many real-world applications (e.g., mobile health), and deep neural networks (DNNs) have shown great success in solving them. Despite their success, little is known about their robustness to adversarial attacks.  ...  Optimized polynomial transformations are used to create attacks that are more effective (in terms of successfully fooling DNNs) than those based on additive perturbations.  ...  of Food and Agriculture award #2021-67021-35344.  ... 
doi:10.1613/jair.1.13543 fatcat:wkeqnwcgsvfxpd6qwunzczhelm

A survey in Adversarial Defences and Robustness in NLP [article]

Shreya Goyal, Sumanth Doddapaneni, Mitesh M. Khapra, Balaraman Ravindran
2022 arXiv   pre-print
In recent years, deep neural networks have been shown to lack robustness and to be likely to break under adversarial perturbations of the input data.  ...  Strong adversarial attacks have been proposed by various authors for computer vision and natural language processing (NLP).  ...  Based on the attacker's access to the model's parameters, adversarial attacks are classified as white-box and black-box attacks.  ... 
arXiv:2203.06414v2 fatcat:2ukd44px35e7ppskzkaprfw4ha

Certified Adversarial Robustness for Deep Reinforcement Learning [article]

Björn Lütjens, Michael Everett, Jonathan P. How
2020 arXiv   pre-print
The proposed defense computes guaranteed lower bounds on state-action values during execution to identify and choose the optimal action under a worst-case deviation in input space due to possible adversaries  ...  This work leverages research on certified adversarial robustness to develop an online certified defense for deep reinforcement learning algorithms.  ...  The authors greatly thank Tsui-Wei (Lily) Weng for providing code for the Fast-Lin algorithm and insightful discussions.  ... 
arXiv:1910.12908v3 fatcat:xoafb7x6hbdhlgnm7yfkmwbeqm
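The action-selection rule can be illustrated with a toy sketch; the per-action Lipschitz constants used to form the lower bounds here are an assumed stand-in for the Fast-Lin-style bounds the paper actually computes:

```python
def certified_action(q_nominal, lipschitz, eps):
    """Choose the action maximizing a certified lower bound Q(s, a) - L_a * eps,
    i.e. the best action under a worst-case input deviation of radius eps."""
    lower = [q - L * eps for q, L in zip(q_nominal, lipschitz)]
    return max(range(len(lower)), key=lambda a: lower[a]), lower

# Action 0 looks best nominally, but its bound is looser under eps = 0.2.
action, lower = certified_action([0.9, 0.8], [2.0, 0.1], eps=0.2)
```

The defense thus trades a little nominal return for a guarantee: the chosen action's value cannot fall below its bound for any perturbation within the eps-ball.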

Game Theory for Adversarial Attacks and Defenses [article]

Shorya Sharma
2022 arXiv   pre-print
Adversarial attacks can generate adversarial inputs by applying small but intentionally worst-case perturbations to samples from the dataset, which leads to even state-of-the-art deep neural networks outputting  ...  Hence, adversarial defense techniques have been developed to improve the security and robustness of the models and prevent them from being attacked.  ...  Using PGD for adversarial training [4] of a robust network can improve the robustness of CNNs and ResNets [12] against several typical first-order attacks under both black-box and white-box settings.  ... 
arXiv:2110.06166v3 fatcat:547yungdhvd3tpmxwbib47mnve
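An ℓ∞ PGD attack of the kind used in such training can be sketched in a few lines; the quadratic toy loss and its analytic gradient are assumptions standing in for a network's loss and backpropagated gradient:

```python
import numpy as np

def pgd_attack(x, grad_fn, eps=0.3, alpha=0.05, steps=20):
    """l-inf PGD: gradient-sign ascent on the loss, projected back
    to the eps-ball around the clean input x after every step."""
    x_adv = x.copy()
    for _ in range(steps):
        x_adv = x_adv + alpha * np.sign(grad_fn(x_adv))
        x_adv = np.clip(x_adv, x - eps, x + eps)  # projection
    return x_adv

# Toy loss: squared distance to a "correct" target; gradient is analytic.
target = np.array([1.0, -1.0])
loss = lambda z: float(np.sum((z - target) ** 2))
grad = lambda z: 2.0 * (z - target)

x = np.array([0.0, 0.0])
x_adv = pgd_attack(x, grad)
```

Adversarial training then minimizes the loss at `x_adv` rather than at `x`, approximating the inner maximization of the min-max objective.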

Randomized Smoothing under Attack: How Good is it in Pratice? [article]

Thibault Maho, Teddy Furon, Erwan Le Merrer
2022 arXiv   pre-print
This paper questions the effectiveness of randomized smoothing as a defense against state-of-the-art black-box attacks.  ...  We first formally highlight the mismatch between a theoretical certification and the practice of attacks on classifiers. We then perform attacks on randomized smoothing as a defense.  ...  Jeopardizing black-box attacks: black-box attacks usually make two assumptions.  ... 
arXiv:2204.14187v1 fatcat:ucizs4k4enhrvf6g6fulyueqoy
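The theoretical certificate being questioned is the familiar randomized-smoothing radius. A minimal sketch, assuming the simplified Cohen-et-al.-style bound with the runner-up probability taken as 1 − p_A:

```python
from statistics import NormalDist

def certified_radius(p_a_lower, sigma):
    """Simplified l2 randomized-smoothing certificate:
    radius = sigma * Phi^{-1}(p_A), valid only when the smoothed
    top-class probability lower bound p_A exceeds 1/2."""
    if p_a_lower <= 0.5:
        return 0.0
    return sigma * NormalDist().inv_cdf(p_a_lower)

r = certified_radius(0.9, sigma=0.25)  # about 0.25 * 1.28 = 0.32
```

The mismatch the paper highlights is between this radius, which holds in expectation over the noise, and attacks that exploit the finite-sample Monte Carlo estimate of the smoothed classifier.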

Adversarial Attacks and Defenses in Images, Graphs and Text: A Review [article]

Han Xu, Yao Ma, Haochen Liu, Debayan Deb, Hui Liu, Jiliang Tang, Anil K. Jain
2019 arXiv   pre-print
However, the existence of adversarial examples has raised concerns about applying deep learning to safety-critical applications.  ...  In this survey, we review the state of the art algorithms for generating adversarial examples and the countermeasures against adversarial examples, for the three popular data types, i.e., images, graphs  ...  Acknowledgements This work is supported by the National Science Foundation (NSF) under grant numbers IIS-1845081 and CNS-1815636.  ... 
arXiv:1909.08072v2 fatcat:i3han24f3fdgpop45t4pmxcdtm

Smoothed Inference for Adversarially-Trained Models [article]

Yaniv Nemcovsky, Evgenii Zheltonozhskii, Chaim Baskin, Brian Chmiel, Maxim Fishman, Alex M. Bronstein, Avi Mendelson
2020 arXiv   pre-print
We examine its performance on common white-box (PGD) and black-box (transfer and NAttack) attacks on CIFAR-10 and CIFAR-100, substantially outperforming previous art for most scenarios and comparable on  ...  ., adversarial training.  ...  Acknowledgments The research was funded by the Hyundai Motor Company through the HYUNDAI-TECHNION-KAIST Consortium, National Cyber Security Authority, and the Hiroshi Fujiwara Technion Cyber Security Research  ... 
arXiv:1911.07198v2 fatcat:chvjgtc4a5bhllbepypdqh6mse

Advances in adversarial attacks and defenses in computer vision: A survey [article]

Naveed Akhtar, Ajmal Mian, Navid Kardan, Mubarak Shah
2021 arXiv   pre-print
However, it is now known that DL is vulnerable to adversarial attacks that can manipulate its predictions by introducing visually imperceptible perturbations in images and videos.  ...  In [2], we reviewed the contributions made by the computer vision community in adversarial attacks on deep learning (and their defenses) until the advent of year 2018.  ...  It is also shown that their attack is successful in fooling a defensively distilled network under black-box settings, where the perturbation is computed using an unsecured white-box model.  ... 
arXiv:2108.00401v2 fatcat:23gw74oj6bblnpbpeacpg3hq5y

Detection as Regression: Certified Object Detection by Median Smoothing [article]

Ping-yeh Chiang, Michael J. Curry, Ahmed Abdelkader, Aounon Kumar, John Dickerson, Tom Goldstein
2022 arXiv   pre-print
We obtain the first model-agnostic, training-free, and certified defense for object detection against ℓ_2-bounded attacks.  ...  Despite the vulnerability of object detectors to adversarial attacks, very few defenses are known to date.  ...  Acknowledgments Goldstein and Chiang were supported by the DARPA GARD and DARPA QED4RML programs.  ... 
arXiv:2007.03730v4 fatcat:umctdapsb5batkqriwt7vyn54q
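Median smoothing replaces the majority vote of classification-style randomized smoothing with a per-output median, which under bounded input shifts moves only between nearby percentiles. A toy sketch, with a sum-of-coordinates regressor standing in for one predicted box coordinate:

```python
import numpy as np

def median_smoothed(f, x, sigma=0.1, n=501, seed=0):
    """Median of a base regressor's outputs under Gaussian input noise;
    a bounded input shift moves the median only between nearby percentiles,
    which is what enables detection certificates."""
    rng = np.random.default_rng(seed)
    noise = sigma * rng.standard_normal((n,) + np.shape(x))
    outputs = np.array([f(x + d) for d in noise])
    return float(np.median(outputs))

# Toy regressor standing in for one predicted box coordinate.
f = lambda z: float(np.sum(z))
y = median_smoothed(f, np.array([1.0, 2.0]))  # close to the clean value 3.0
```

Treating each box coordinate as a regression output in this way is what makes the defense model-agnostic and training-free.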
Showing results 1 — 15 out of 883 results