Using Depth for Pixel-Wise Detection of Adversarial Attacks in Crowd Counting
[article]
2020
arXiv
pre-print
However, attack and defense mechanisms have been virtually unexplored in regression tasks, let alone for crowd density estimation. ...
While effective, deep learning approaches are vulnerable to adversarial attacks, which, in a crowd-counting context, can lead to serious security issues. ...
In this paper, we therefore focus on attacks against these. ...
arXiv:1911.11484v2
fatcat:xttsj4fotbaopg3easr7eb3oti
Segment and Complete: Defending Object Detectors against Adversarial Patch Attacks with Robust Patch Detection
[article]
2022
arXiv
pre-print
Developing reliable defenses for object detectors against patch attacks is critical but severely understudied. ...
In this paper, we propose Segment and Complete defense (SAC), a general framework for defending object detectors against patch attacks through detection and removal of adversarial patches. ...
We adopt a "detect and remove" strategy for defending object detectors against patch attacks. ...
arXiv:2112.04532v2
fatcat:aac4akekazbbxftcxsit5yo35q
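The "detect and remove" strategy described in the SAC entry above boils down to masking out pixels a patch segmenter has flagged before the image reaches the detector. A minimal sketch of just the removal stage, assuming the segmenter has already produced a boolean mask (the `remove_patch` helper and its mask format are illustrative assumptions, not SAC's actual code):

```python
import numpy as np

def remove_patch(image, patch_mask, fill_value=0.0):
    """Replace pixels flagged as adversarial-patch regions.

    image: (H, W, C) float array; patch_mask: (H, W) boolean array
    where True marks pixels the patch segmenter flagged.
    Returns a copy of the image with flagged pixels set to fill_value.
    """
    cleaned = image.copy()
    cleaned[patch_mask] = fill_value
    return cleaned

# Toy example: a 4x4 single-channel image with a 2x2 "patch" in the corner.
img = np.ones((4, 4, 1), dtype=np.float32)
mask = np.zeros((4, 4), dtype=bool)
mask[0:2, 0:2] = True
clean = remove_patch(img, mask)
```

The cleaned image, not the original, would then be passed to the object detector; SAC's contribution is the robust segmenter producing `patch_mask`, which is not sketched here.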
Beyond Digital Domain: Fooling Deep Learning Based Recognition System in Physical World
2020
Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence and the Twenty-Eighth Innovative Applications of Artificial Intelligence Conference
Considering the potential defense mechanisms our adversarial objects may encounter, we conduct a series of experiments to evaluate the effect of existing defense methods on our physical attack. ...
Some prior works are proposed to launch physical adversarial attack against object detection models, but limited by certain aspects. ...
and backgrounds. • We successfully launch the physical adversarial attacks against DNN models applied for object detection. ...
doi:10.1609/aaai.v34i01.5459
fatcat:d3cw2pj4ajfolkbtttq5pzbn5q
APRICOT: A Dataset of Physical Adversarial Attacks on Object Detection
[article]
2020
arXiv
pre-print
This dataset and the described experiments provide a benchmark for future research on the effectiveness of and defenses against physical adversarial objects in the wild. ...
Physical adversarial attacks threaten to fool object detection systems, but reproducible research on the real-world effectiveness of physical patches and how to defend against them requires a publicly ...
Acknowledgments We would like to thank Mikel Rodriguez, David Jacobs, Rama Chellappa, and Abhinav Shrivastava for helpful discussions and feedback on this work. ...
arXiv:1912.08166v2
fatcat:b3uekiyoinf6zl2skwgbp2sgg4
FaceGuard: A Self-Supervised Defense Against Adversarial Face Images
[article]
2021
arXiv
pre-print
Prevailing defense mechanisms against adversarial face images tend to overfit to the adversarial perturbations in the training set and fail to generalize to unseen adversarial attacks. ...
We propose a new self-supervised adversarial defense framework, namely FaceGuard, that can automatically detect, localize, and purify a wide variety of adversarial faces without utilizing pre-computed ...
Without utilizing any pre-computed training samples from known adversarial attacks, the proposed FaceGuard achieved state-of-the-art detection performance against 6 different adversarial attacks. ...
arXiv:2011.14218v2
fatcat:zb3kpxp77fcfpe5f3x7j32vxde
Real-time Detection of Practical Universal Adversarial Perturbations
[article]
2021
arXiv
pre-print
Universal Adversarial Perturbations (UAPs) are a prominent class of adversarial examples that exploit the systemic vulnerabilities and enable physically realizable and robust attacks against Deep Neural ...
HyperNeuron is able to simultaneously detect both adversarial mask and patch UAPs with comparable or better performance than existing UAP defenses whilst introducing a significantly reduced latency of ...
We use the pre-trained object detector made available by Tramer et al. [41], which achieves an 80% true positive rate for detecting ads on our test set of 225 web page screenshots. ...
arXiv:2105.07334v2
fatcat:cslp4n6h6rgq5nj7oay4ognqcy
Adversarial Examples in Modern Machine Learning: A Review
[article]
2019
arXiv
pre-print
We explore a variety of adversarial attack methods that apply to image-space content, real world adversarial attacks, adversarial defenses, and the transferability property of adversarial examples. ...
We also discuss strengths and weaknesses of various methods of adversarial attack and defense. ...
Attack      Knowledge   Objective   Notes
EOT [96]    W           T, NT       Good for creating physical adversaries and fooling randomization defenses
BPDA [55]   W           T, NT       Can fool various gradient masking defenses
SPSA [97]   B           T, NT       ...
(W = white-box, B = black-box; T = targeted, NT = non-targeted)
arXiv:1911.05268v2
fatcat:majzak4sqbhcpeahghh6sm3dwq
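The EOT entry in the survey's table above (Expectation over Transformations) averages the attack gradient over randomly transformed copies of the input, so the adversarial example stays effective under the physical variation the transformations model. A toy numpy sketch of one EOT descent step, using a stand-in quadratic loss and additive-noise "transformations" (all names, the loss, and the transformation here are illustrative assumptions, not the paper's formulation):

```python
import numpy as np

def grad_loss(x, target):
    # Gradient of the stand-in loss ||x - target||^2 with respect to x.
    return 2.0 * (x - target)

def eot_step(x, target, rng, n_samples=64, step=0.05):
    """One EOT update: average the loss gradient over randomly
    transformed copies of x, then take a descent step, so the
    result remains adversarial across the transformation family."""
    grads = np.zeros_like(x)
    for _ in range(n_samples):
        t_x = x + rng.normal(scale=0.1, size=x.shape)  # random "transform"
        grads += grad_loss(t_x, target)
    return x - step * (grads / n_samples)

# Drive a point toward the target under noisy transformations.
rng = np.random.default_rng(0)
x = np.zeros(3)
target = np.ones(3)
for _ in range(100):
    x = eot_step(x, target, rng)
```

Because the gradient is averaged over sampled transformations rather than computed on the clean input alone, the optimized input converges to something effective in expectation; in a real attack, the transformations would be rotations, scalings, and lighting changes, and the loss would come from the victim network.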
DetectorGuard: Provably Securing Object Detectors against Localized Patch Hiding Attacks
[article]
2021
arXiv
pre-print
The patch attacker can carry out a physical-world attack by printing and attaching an adversarial patch to the victim object. ...
State-of-the-art object detectors are vulnerable to localized patch hiding attacks, where an adversary introduces a small adversarial patch to make detectors miss the detection of salient objects. ...
ACKNOWLEDGMENTS We are grateful to Gagandeep Singh for shepherding the paper and anonymous reviewers at CCS 2021 for their valuable feedback. ...
arXiv:2102.02956v3
fatcat:ds2f6iffmnb47otjoa43q4drne
Seeing isn't Believing: Practical Adversarial Attack Against Object Detectors
[article]
2019
arXiv
pre-print
In this paper, we presented systematic solutions to build robust and practical AEs against real-world object detectors. ...
(AA), we proposed the nested-AE, which combines two AEs together to attack object detectors at both long and short distances. ...
Therefore, we study the defense mechanisms of adversarial attacks against image classifiers, and discuss the possibility of applying such defense solutions for object detectors. ...
arXiv:1812.10217v3
fatcat:d3g4aui2zjgqrjmdhsjtnna3dq
Unified Detection of Digital and Physical Face Attacks
[article]
2021
arXiv
pre-print
State-of-the-art defense mechanisms against face attacks achieve near-perfect accuracies within one of three attack categories, namely adversarial, digital manipulation, or physical spoofs; however, they ...
Using a multi-task learning framework along with k-means clustering, UniFAD learns joint representations for coherent attacks, while uncorrelated attack types are learned separately. ...
On the other hand, the majority of proposed defenses against digital manipulation fine-tune a pre-trained JointCNN (e.g., Xception [9]) on bona fide faces and all available digital manipulation attacks ...
arXiv:2104.02156v1
fatcat:2ahwj4baonh5hdbz5tpbacimee
Sensor Adversarial Traits: Analyzing Robustness of 3D Object Detection Sensor Fusion Models
[article]
2021
arXiv
pre-print
After identifying the underlying reason, we explore some potential defenses and provide some recommendations for improved sensor fusion models. ...
the use of additional sensors automatically mitigate the risk of adversarial attacks. ...
ACKNOWLEDGEMENTS We thank our reviewers for their comments. We also wish to thank ...
arXiv:2109.06363v1
fatcat:7vo4vhlhzjflrguyxrcajuddcu
Defending Against Multiple and Unforeseen Adversarial Videos
[article]
2021
arXiv
pre-print
In this paper, we propose one of the first defense strategies against multiple types of adversarial videos for video recognition. ...
attacks and physically realizable attacks. ...
For our adversarial video detector, we choose the lightweight 3D ResNet-18. We use the pre-trained weights from [6] and conduct adversarial training upon the pre-trained models. ...
arXiv:2009.05244v3
fatcat:4wtxmwzkeja3bk75iipdixsuqq
Adversarial Examples on Object Recognition: A Comprehensive Survey
[article]
2020
arXiv
pre-print
We start by introducing the hypotheses behind their existence, the methods used to construct or protect against them, and the capacity to transfer adversarial examples between different machine learning ...
Altogether, the goal is to provide a comprehensive and self-contained survey of this growing field of research. ...
[28] proposed a pre-processing technique that can successfully mask the gradients, even for iterative attackers. ...
arXiv:2008.04094v2
fatcat:7xycyybhpvhshawt7fy3fzeana
Defending Against Universal Attacks Through Selective Feature Regeneration
[article]
2020
arXiv
pre-print
We show that without any additional modification, our defense trained on ImageNet with one type of universal attack examples effectively defends against other types of unseen universal attacks. ...
Departing from existing defense strategies that work mostly in the image domain, we present a novel defense which operates in the DNN feature domain and effectively defends against such universal perturbations ...
Furthermore, such perturbations have been successfully placed in a real-world scene via physical adversarial objects [3, 12, 26], thus posing a security risk. ...
arXiv:1906.03444v4
fatcat:3iunconapfdptee7lntsnspwae
Fooling the Eyes of Autonomous Vehicles: Robust Physical Adversarial Examples Against Traffic Sign Recognition Systems
[article]
2022
arXiv
pre-print
In this paper, we propose a systematic pipeline to generate robust physical AEs against real-world object detectors. Robustness is achieved in three ways. ...
The attacks have good transferability and can deceive other state-of-the-art object detectors. We launched HA and NTA on a brand-new 2021 model vehicle. ...
defense mechanism against adversarial attacks. ...
arXiv:2201.06192v1
fatcat:muago3siizgevcgwi5kpm6lxwa
Showing results 1 — 15 out of 249 results