Robust Adversarial Perturbation on Deep Proposal-based Models
[article]
2019
arXiv
pre-print
In this paper, we describe a robust adversarial perturbation (R-AP) method to attack deep proposal-based object detectors and instance segmentation algorithms. ...
Adversarial noise is a useful tool for probing the weaknesses of deep learning based computer vision algorithms. ...
The investigation of adversarial perturbation on deep proposal-based models can lead to further understanding of the vulnerabilities of these widely applied methods. ...
arXiv:1809.05962v2
fatcat:njrzt7sixjculij267wmehkhie
Investigating Vulnerabilities of Deep Neural Policies
[article]
2021
arXiv
pre-print
Recent work has proposed several methods to improve the robustness of deep reinforcement learning agents to adversarial perturbations based on training in the presence of these imperceptible perturbations ...
Reinforcement learning policies based on deep neural networks are vulnerable to imperceptible adversarial perturbations to their inputs, in much the same way as neural network image classifiers. ...
... bounded perturbations, and proposes an approach based on self-play to gain robustness against such an adversary. ...
arXiv:2108.13093v1
fatcat:3yxmuhqz4ffb3foez64pgjv4qu
Regularizing deep networks using efficient layerwise adversarial training
[article]
2018
arXiv
pre-print
We use these perturbations to train very deep models such as ResNets and show improvement in performance both on adversarial and original test data. ...
Adversarial training has been shown to regularize deep neural networks in addition to increasing their robustness to adversarial examples (a minimal single-step sketch follows this entry). ...
This research is based upon work supported by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via IARPA R&D Contract No. 2014-14071600012 ...
arXiv:1705.07819v2
fatcat:rsedvkmzdjdufkkx5geggiz6wa
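To make the entry above concrete, here is a minimal sketch of single-step adversarial training in PyTorch: the model is trained on FGSM-perturbed inputs so the perturbations act as a regularizer. This illustrates the generic input-space variant, not the paper's layerwise scheme; the model, optimizer, and epsilon below are placeholders.

```python
import torch
import torch.nn.functional as F

def adversarial_training_step(model, x, y, optimizer, epsilon=0.03):
    # Craft an FGSM perturbation of the current batch.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    grad, = torch.autograd.grad(loss, x_adv)
    x_adv = (x_adv + epsilon * grad.sign()).detach()

    # Train on the perturbed batch; the injected noise acts as a regularizer.
    optimizer.zero_grad()
    adv_loss = F.cross_entropy(model(x_adv), y)
    adv_loss.backward()
    optimizer.step()
    return adv_loss.item()
```

In practice this step is usually mixed with (or alternated against) steps on clean batches, trading clean accuracy for robustness.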
Towards Evaluating the Robustness of Deep Diagnostic Models by Adversarial Attack
2021
Medical Image Analysis
Deep learning models (with neural networks) have been widely used in challenging tasks such as computer-aided disease diagnosis based on medical images. ...
Among all the factors that undermine model robustness, the most serious is adversarial examples. ...
We further proposed two defense methods to enhance the robustness of these deep diagnostic models (Part III). ...
doi:10.1016/j.media.2021.101977
pmid:33550005
fatcat:dyyp4d24hvduto4gknjufups7e
Advances in adversarial attacks and defenses in computer vision: A survey
[article]
2021
arXiv
pre-print
In [2], we reviewed the contributions made by the computer vision community in adversarial attacks on deep learning (and their defenses) up to 2018. ...
Finally, this article discusses challenges and future outlook of this direction based on the literature reviewed herein and [2]. ...
[443] proposed a method to infer robust models for few-shot classification tasks based on adversarially robust meta-learners. ...
arXiv:2108.00401v2
fatcat:23gw74oj6bblnpbpeacpg3hq5y
Deep Image Restoration Model: A Defense Method Against Adversarial Attacks
2022
Computers Materials & Continua
We show, with strong experimental evidence, that our defense method against adversarial attacks, based on a deep image restoration model, is simple and state-of-the-art. ...
In this scenario, we present a deep image restoration model that restores adversarial examples so that the target model is classified correctly again. ...
The detector is used to detect adversarial perturbation, and the reformer is used to remove that perturbation to increase the robustness of the deep neural network model (a minimal sketch of this pipeline follows this entry). ...
doi:10.32604/cmc.2022.020111
fatcat:yd5ocnn73zbovb2inytp2zvase
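The detector-plus-reformer pipeline referenced above can be sketched in a few lines of PyTorch. This is a generic MagNet-style illustration under stated assumptions (an autoencoder standing in for the restoration model, a fixed reconstruction-error threshold), not the paper's exact architecture.

```python
import torch

def detect_and_reform(x, autoencoder, classifier, threshold=0.05):
    with torch.no_grad():
        # Reformer: project the input toward the clean-data manifold.
        x_restored = autoencoder(x)
        # Detector: large reconstruction error suggests an adversarial input.
        err = (x - x_restored).flatten(1).pow(2).mean(dim=1)
        is_adversarial = err > threshold
        # Classify the restored input so small perturbations are washed out.
        logits = classifier(x_restored)
    return logits, is_adversarial
```

Inputs flagged by the detector can be rejected outright, while the rest are classified from their restored versions.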
A survey in Adversarial Defences and Robustness in NLP
[article]
2022
arXiv
pre-print
In recent years, deep neural networks have been shown to lack robustness and to break under adversarial perturbations of their input data. ...
This survey reviews methods proposed for adversarial defenses in NLP in the recent past and organizes them under a novel taxonomy. ...
They proposed a novel Curvature-based Robustness Certificate (CRC) that derives bounds on the curvature using the Hessian of the deep network. ...
arXiv:2203.06414v2
fatcat:2ukd44px35e7ppskzkaprfw4ha
Recent Advances in Understanding Adversarial Robustness of Deep Neural Networks
[article]
2020
arXiv
pre-print
Imperceptible perturbations applied to natural samples can lead DNN-based classifiers to output wrong predictions with high confidence scores. ...
Adversarial examples are inevitable on the road of pervasive applications of deep neural networks (DNN). ...
... evaluated recent state-of-the-art ImageNet-based DNN models on multiple robustness metrics. ...
arXiv:2011.01539v1
fatcat:e3o47epftbc2rebpdx5yotzriy
Threat of Adversarial Attacks on Deep Learning in Computer Vision: A Survey
[article]
2018
arXiv
pre-print
For images, such perturbations are often too small to be perceptible, yet they completely fool deep learning models (a toy FGSM sketch follows this entry). ...
This article presents the first comprehensive survey on adversarial attacks on deep learning in Computer Vision. ...
Kolter and Wong [96] proposed learning ReLU-based classifiers that show robustness against small adversarial perturbations. ...
arXiv:1801.00553v3
fatcat:xfk7togp5bhxvbxtwox3sckqq4
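The canonical example of such an imperceptible perturbation is the fast gradient sign method (FGSM) covered by this survey; a minimal PyTorch sketch is below, where epsilon and the [0, 1] pixel range are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, epsilon=8 / 255):
    # One signed-gradient step, bounded by epsilon in the L-infinity norm.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    # Keep the result a valid image.
    return x_adv.clamp(0.0, 1.0).detach()
```

Iterating this step with a smaller step size yields the stronger PGD attack that many of the surveyed defenses are evaluated against.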
Robustness-via-Synthesis: Robust Training with Generative Adversarial Perturbations
[article]
2021
arXiv
pre-print
Since the discovery of adversarial attacks, robust models have become a necessity for deep learning-based systems (a sketch of the generic generative-perturbation idea follows this entry). ...
Experimental results show that the proposed approach attains comparable robustness with various gradient-based and generative robust training techniques on CIFAR10, CIFAR100, and SVHN datasets. ...
Vatsa, “Unravelling robustness of deep learning based face recognition against adversarial attacks” ... “adversarially robust generalization,” in Proceedings of the IEEE/CVF Conference on Computer ...
arXiv:2108.09713v1
fatcat:6t5okgu26rg4zlyiktb5am3wrq
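For the entry above, the generic idea of synthesizing perturbations with a generator network can be sketched as follows; the tanh bound, epsilon, and the fixed-classifier setup are assumptions for illustration, not the paper's exact training scheme.

```python
import torch
import torch.nn.functional as F

def generator_step(generator, classifier, x, y, gen_optimizer, epsilon=8 / 255):
    # The generator proposes a bounded perturbation for each input.
    delta = epsilon * torch.tanh(generator(x))
    logits = classifier(x + delta)
    # The generator is trained to *maximize* the classifier's loss,
    # i.e. to minimize its negative.
    gen_loss = -F.cross_entropy(logits, y)
    gen_optimizer.zero_grad()
    gen_loss.backward()
    gen_optimizer.step()
    return gen_loss.item()
```

In the robust-training setting, this step would alternate with classifier updates on the perturbed inputs, so the classifier learns to resist the generator's perturbations.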
Adversarial Attacks and Defense Technologies on Autonomous Vehicles: A Review
2021
Applied Computer Systems
However, these machine learning models are vulnerable to targeted input perturbations, called adversarial attacks, which limit the performance of these applications. ...
Therefore, implementing defense models against adversarial attacks has become an increasingly critical research area. ...
Based on the observations, they also proposed an adversarial training method to defend against attacks on segmentation models. ...
doi:10.2478/acss-2021-0012
fatcat:runxr47gzrb6znc4hygld36qgy
Adversarial defense for deep speaker recognition using hybrid adversarial training
[article]
2020
arXiv
pre-print
To address this concern, in this work, we propose a new defense mechanism based on a hybrid adversarial training (HAT) setup. ...
Deep neural network based speaker recognition systems can easily be deceived by an adversary using minuscule imperceptible perturbations to the input speech samples. ...
Initial work on adversarial examples for attacking deep learning models focused mainly on image classification [2, 3]. ...
arXiv:2010.16038v1
fatcat:qz3u3cnp7fajravb7elyxewfz4
DI-AA: An Interpretable White-box Attack for Fooling Deep Neural Networks
[article]
2021
arXiv
pre-print
Experimental results reveal that our proposed approach can 1) attack non-robust models with comparatively low perturbation, where the perturbation is close to or lower than that of the AutoAttack approach; 2) break the TRADES adversarial training models with the highest success rate; and 3) generate AEs that reduce the robust accuracy of the robust black-box models by 16% to 31% in the black-box transfer attack. ...
... Based on the trained models, the TRADES adversarial training approach is further adopted to improve the robustness of the three models. ...
arXiv:2110.07305v1
fatcat:3nxurougt5bkpkrscckqbuydle
Minimum-Norm Adversarial Examples on KNN and KNN-Based Models
[article]
2020
arXiv
pre-print
In this work, we propose a gradient-based attack on kNN and kNN-based defenses, inspired by the previous work of Sitawarin & Wagner [1] (a toy sketch of the idea follows this entry). ...
We study the robustness against adversarial examples of kNN classifiers and classifiers that combine kNN with neural networks. ...
ACKNOWLEDGEMENTS This work was supported by the Hewlett Foundation through the Center for Long-Term Cybersecurity and by generous gifts from Huawei, Google, and the Berkeley Deep Drive project. ...
arXiv:2003.06559v1
fatcat:ngbq63da6zad3hi3qjkktflndy
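A common way to attack non-differentiable kNN with gradients, in the spirit of the approach named above, is to relax the hard neighbor vote into a soft, differentiable one and then descend on it. The temperature, step count, and surrogate below are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def soft_knn_votes(x, train_x, train_y, num_classes, temperature=10.0):
    # Differentiable kNN surrogate: a softmax over negative distances
    # weights every training point, with closer points counting more.
    distances = torch.cdist(x, train_x)
    weights = torch.softmax(-temperature * distances, dim=1)
    one_hot = F.one_hot(train_y, num_classes).float()
    return weights @ one_hot  # per-class soft vote; each row sums to 1

def attack_soft_knn(x, y, train_x, train_y, num_classes, steps=50, lr=0.01):
    x_adv = x.clone().detach().requires_grad_(True)
    opt = torch.optim.Adam([x_adv], lr=lr)
    for _ in range(steps):
        votes = soft_knn_votes(x_adv, train_x, train_y, num_classes)
        # Push down the soft vote for the true class.
        loss = votes.gather(1, y.unsqueeze(1)).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return x_adv.detach()
```

Adversarial examples found against the smooth surrogate typically transfer to the hard kNN vote when the temperature is high enough.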
Can't Fool Me: Adversarially Robust Transformer for Video Understanding
[article]
2021
arXiv
pre-print
Deep neural networks have been shown to perform poorly on adversarial examples. ...
We first show that simple extensions of image-based adversarially robust models slightly improve the worst-case performance. ...
Average robustness for all models: our proposed A-ART model is more robust to adversarial examples than both the ART and non-ART base models. ...
arXiv:2110.13950v1
fatcat:sjmk6zapmjbxdalhjgw5qoepke