Showing results 1 — 15 of 10,925

Robust Adversarial Perturbation on Deep Proposal-based Models [article]

Yuezun Li, Daniel Tian, Ming-Ching Chang, Xiao Bian, Siwei Lyu
2019 arXiv   pre-print
In this paper, we describe a robust adversarial perturbation (R-AP) method to attack deep proposal-based object detectors and instance segmentation algorithms.  ...  Adversarial noise is a useful tool to probe the weaknesses of deep learning based computer vision algorithms.  ...  The investigation of adversarial perturbation on deep proposal-based models can lead to further understanding of the vulnerabilities of these widely applied methods.  ... 
arXiv:1809.05962v2 fatcat:njrzt7sixjculij267wmehkhie

Investigating Vulnerabilities of Deep Neural Policies [article]

Ezgi Korkmaz
2021 arXiv   pre-print
Recent work has proposed several methods to improve the robustness of deep reinforcement learning agents to adversarial perturbations based on training in the presence of these imperceptible perturbations  ...  Reinforcement learning policies based on deep neural networks are vulnerable to imperceptible adversarial perturbations to their inputs, in much the same way as neural network image classifiers.  ...  bounded perturbations, and proposes an approach based on self-play to gain robustness against such an adversary.  ... 
arXiv:2108.13093v1 fatcat:3yxmuhqz4ffb3foez64pgjv4qu

Regularizing deep networks using efficient layerwise adversarial training [article]

Swami Sankaranarayanan, Arpit Jain, Rama Chellappa, Ser Nam Lim
2018 arXiv   pre-print
We use these perturbations to train very deep models such as ResNets and show improvement in performance both on adversarial and original test data.  ...  Adversarial training has been shown to regularize deep neural networks in addition to increasing their robustness to adversarial examples.  ...  This research is based upon work supported by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via IARPA R&D Contract No. 2014-14071600012  ... 
arXiv:1705.07819v2 fatcat:rsedvkmzdjdufkkx5geggiz6wa
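The adversarial-training recipe that several of these results build on can be sketched with a single-step FGSM perturbation on a toy logistic-regression model. This is a minimal generic illustration, not the layerwise method of the paper above; all function names are hypothetical:

```python
import numpy as np

def fgsm(x, y, w, b, eps):
    """Single-step FGSM: perturb x by eps * sign(grad_x loss)."""
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))      # sigmoid predictions
    grad_x = (p - y)[:, None] * w[None, :]      # d(BCE)/dx for logistic regression
    return x + eps * np.sign(grad_x)

def adv_train_step(x, y, w, b, eps=0.1, lr=0.5):
    """One gradient step on the adversarially perturbed batch."""
    x_adv = fgsm(x, y, w, b, eps)
    p = 1.0 / (1.0 + np.exp(-(x_adv @ w + b)))
    grad_w = x_adv.T @ (p - y) / len(y)         # d(BCE)/dw on the perturbed batch
    grad_b = np.mean(p - y)
    return w - lr * grad_w, b - lr * grad_b
```

Training on perturbed rather than clean batches is what gives the regularization effect these papers report: the model repeatedly sees near-worst-case inputs within the eps-ball.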

Towards Evaluating the Robustness of Deep Diagnostic Models by Adversarial Attack

Mengting Xu, Tao Zhang, Zhongnian Li, Mingxia Liu, Daoqiang Zhang
2021 Medical Image Analysis  
Deep learning models (with neural networks) have been widely used in challenging tasks such as computer-aided disease diagnosis based on medical images.  ...  Among all the factors that undermine model robustness, the most serious is adversarial examples.  ...  We further proposed two defense methods to enhance the robustness of these deep diagnostic models (Part III).  ... 
doi:10.1016/j.media.2021.101977 pmid:33550005 fatcat:dyyp4d24hvduto4gknjufups7e

Advances in adversarial attacks and defenses in computer vision: A survey [article]

Naveed Akhtar, Ajmal Mian, Navid Kardan, Mubarak Shah
2021 arXiv   pre-print
In [2], we reviewed the contributions made by the computer vision community in adversarial attacks on deep learning (and their defenses) until the advent of year 2018.  ...  Finally, this article discusses challenges and future outlook of this direction based on the literature reviewed herein and [2].  ...  [443] proposed a method to infer robust models for few-shot classification tasks based on adversarially robust meta-learners.  ... 
arXiv:2108.00401v2 fatcat:23gw74oj6bblnpbpeacpg3hq5y

Deep Image Restoration Model: A Defense Method Against Adversarial Attacks

Kazim Ali, Adnan N. Quershi, Ahmad Alauddin Bin Arifin, Muhammad Shahid Bhatti, Abid Sohail, Rohail Hassan
2022 Computers Materials & Continua  
We demonstrate that our defense method against adversarial attacks, based on a deep image restoration model, is simple and state-of-the-art by providing strong experimental evidence.  ...  In this scenario, we present a deep image restoration model that restores adversarial examples so that the target model classifies them correctly again.  ...  The detector is used to detect adversarial perturbation, and the reformer is used to remove that perturbation to increase the robustness of the deep neural network model.  ... 
doi:10.32604/cmc.2022.020111 fatcat:yd5ocnn73zbovb2inytp2zvase
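Restoration-style defenses of this kind share a simple structure: purify the input, then classify the purified version. A minimal sketch, using a generic median-filter purifier rather than the paper's learned restoration network (both function names are illustrative):

```python
import numpy as np

def median_purify(img, k=3):
    """Suppress sparse adversarial noise with a k x k median filter."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = np.median(padded[i:i + k, j:j + k])
    return out

def defended_predict(classifier, img):
    """Classify the purified image instead of the raw (possibly attacked) input."""
    return classifier(median_purify(img))
```

The design choice is that the classifier itself is untouched; robustness comes entirely from the preprocessing stage, which is why such defenses are cheap to bolt onto an existing model.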

A survey in Adversarial Defences and Robustness in NLP [article]

Shreya Goyal, Sumanth Doddapaneni, Mitesh M. Khapra, Balaraman Ravindran
2022 arXiv   pre-print
In recent years, it has been seen that deep neural networks lack robustness and are likely to break under adversarial perturbations of the input data.  ...  The proposed survey reviews methods proposed for adversarial defenses in NLP in the recent past through a novel taxonomy.  ...  They proposed a novel Curvature-based Robustness Certificate (CRC) that derives bounds on the curvature using the Hessian of the deep network.  ... 
arXiv:2203.06414v2 fatcat:2ukd44px35e7ppskzkaprfw4ha

Recent Advances in Understanding Adversarial Robustness of Deep Neural Networks [article]

Tao Bai, Jinqi Luo, Jun Zhao
2020 arXiv   pre-print
Imperceptible perturbations applied to natural samples can lead DNN-based classifiers to output wrong predictions with high confidence scores.  ...  Adversarial examples are inevitable on the road to pervasive applications of deep neural networks (DNNs).  ...  evaluated the recent state-of-the-art ImageNet-based DNN models on multiple robustness metrics.  ... 
arXiv:2011.01539v1 fatcat:e3o47epftbc2rebpdx5yotzriy

Threat of Adversarial Attacks on Deep Learning in Computer Vision: A Survey [article]

Naveed Akhtar, Ajmal Mian
2018 arXiv   pre-print
For images, such perturbations are often too small to be perceptible, yet they completely fool the deep learning models.  ...  This article presents the first comprehensive survey on adversarial attacks on deep learning in Computer Vision.  ...  Kolter and Wong [96] proposed to learn a ReLU-based classifier that shows robustness against small adversarial perturbations.  ... 
arXiv:1801.00553v3 fatcat:xfk7togp5bhxvbxtwox3sckqq4

Robustness-via-Synthesis: Robust Training with Generative Adversarial Perturbations [article]

Inci M. Baytas, Debayan Deb
2021 arXiv   pre-print
Upon the discovery of adversarial attacks, robust models have become obligatory for deep learning-based systems.  ...  Experimental results show that the proposed approach attains comparable robustness with various gradient-based and generative robust training techniques on CIFAR10, CIFAR100, and SVHN datasets.  ... 
arXiv:2108.09713v1 fatcat:6t5okgu26rg4zlyiktb5am3wrq

Adversarial Attacks and Defense Technologies on Autonomous Vehicles: A Review

K. T. Y. Mahima, Mohamed Ayoob, Guhanathan Poravi
2021 Applied Computer Systems  
However, these machine learning models are vulnerable to targeted tensor perturbations called adversarial attacks, which limit the performance of the applications.  ...  Therefore, implementing defense models against adversarial attacks has become an increasingly critical research area.  ...  Based on the observations, they also proposed an adversarial training method to defend against attacks on segmentation models.  ... 
doi:10.2478/acss-2021-0012 fatcat:runxr47gzrb6znc4hygld36qgy

Adversarial defense for deep speaker recognition using hybrid adversarial training [article]

Monisankha Pal, Arindam Jati, Raghuveer Peri, Chin-Cheng Hsu, Wael AbdAlmageed, Shrikanth Narayanan
2020 arXiv   pre-print
To address this concern, in this work, we propose a new defense mechanism based on a hybrid adversarial training (HAT) setup.  ...  Deep neural network based speaker recognition systems can easily be deceived by an adversary using minuscule imperceptible perturbations to the input speech samples.  ...  Initial works on adversarial examples to attack a deep learning model have mainly focused on image classification [2, 3] .  ... 
arXiv:2010.16038v1 fatcat:qz3u3cnp7fajravb7elyxewfz4

DI-AA: An Interpretable White-box Attack for Fooling Deep Neural Networks [article]

Yixiang Wang, Jiqiang Liu, Xiaolin Chang, Jianhua Wang, Ricardo J. Rodríguez
2021 arXiv   pre-print
Experimental results reveal that our proposed approach can 1) attack non-robust models with comparatively low perturbation, where the perturbation is closer to or lower than that of the AutoAttack approach; 2) break the TRADES adversarial training models with the highest success rate; 3) reduce the robust accuracy of the robust black-box models by 16% to 31% in the black-box transfer attack.  ...  Based on the trained models, the TRADES adversarial training approach is further adopted to improve the robustness of the three models.  ... 
arXiv:2110.07305v1 fatcat:3nxurougt5bkpkrscckqbuydle

Minimum-Norm Adversarial Examples on KNN and KNN-Based Models [article]

Chawin Sitawarin, David Wagner
2020 arXiv   pre-print
In this work, we propose a gradient-based attack on kNN and kNN-based defenses, inspired by the previous work by Sitawarin & Wagner [1].  ...  We study the robustness against adversarial examples of kNN classifiers and classifiers that combine kNN with neural networks.  ...  ACKNOWLEDGEMENTS This work was supported by the Hewlett Foundation through the Center for Long-Term Cybersecurity and by generous gifts from Huawei, Google, and the Berkeley Deep Drive project.  ... 
arXiv:2003.06559v1 fatcat:ngbq63da6zad3hi3qjkktflndy
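For a plain 1-NN classifier, a minimum-norm adversarial example can be approximated without gradients at all: binary-search along the segment from the input to the nearest differently-labelled training point. This is a simpler geometric heuristic, not the authors' gradient-based attack; the function names are illustrative:

```python
import numpy as np

def knn1_predict(x, X, y):
    """Label of the nearest training point to x (1-NN)."""
    return y[np.argmin(np.linalg.norm(X - x, axis=1))]

def attack_1nn(x, X, y, steps=30):
    """Binary-search the smallest step toward the nearest
    differently-labelled training point that flips the 1-NN label."""
    label = knn1_predict(x, X, y)
    mask = y != label
    target = X[mask][np.argmin(np.linalg.norm(X[mask] - x, axis=1))]
    lo, hi = 0.0, 1.0  # fraction of the way from x toward target
    for _ in range(steps):
        mid = (lo + hi) / 2
        if knn1_predict(x + mid * (target - x), X, y) != label:
            hi = mid
        else:
            lo = mid
    return x + hi * (target - x)
```

In the two-point case the boundary is the perpendicular bisector, so the search converges to a perturbation of half the inter-class distance; for kNN with k > 1 or kNN-plus-network defenses, the papers above need the more careful gradient-based formulation.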

Can't Fool Me: Adversarially Robust Transformer for Video Understanding [article]

Divya Choudhary, Palash Goyal, Saurabh Sahu
2021 arXiv   pre-print
Deep neural networks have been shown to perform poorly on adversarial examples.  ...  We first show that simple extensions of image based adversarially robust models slightly improve the worst-case performance.  ...  Our proposed model A-ART is more robust to adversarial examples than both the ART and non-ART base models.  ... 
arXiv:2110.13950v1 fatcat:sjmk6zapmjbxdalhjgw5qoepke