
Exploiting the Inherent Limitation of L0 Adversarial Examples [article]

Fei Zuo, Bokai Yang, Xiaopeng Li, Lannan Luo, Qiang Zeng
2019 arXiv   pre-print
The main novelty of the proposed detector is that we convert the AE detection problem into a comparison problem by exploiting the inherent limitation of L0 attacks.  ...  Thus, our system, called AEPECKER, demonstrates not only high AE detection accuracy, but also a notable capability to correct the classification results.  ...  First, the proposed technique is not a panacea for detecting or defending against all possible attacks.  ...
arXiv:1812.09638v3 fatcat:35tmtvgjyjd3zpep5d3a3q5age
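The detection-by-comparison idea above is simple enough to sketch. The snippet below assumes a hypothetical `model` callable that maps an HxWxC image to a label, and uses a median filter as a stand-in for the paper's image-rectification step; it illustrates the principle rather than reproducing AEPECKER itself.

```python
# Minimal sketch of detection-by-comparison for L0 adversarial examples.
# Assumption: `model` maps an HxWxC float image to a class label; the
# median filter stands in for the paper's rectification step, exploiting
# the fact that L0 attacks perturb few pixels but perturb them drastically.
import numpy as np
from scipy.ndimage import median_filter

def detect_l0_ae(model, image, filter_size=3):
    """Flag `image` as adversarial if rectification flips the prediction."""
    rectified = median_filter(image, size=(filter_size, filter_size, 1))
    original_label = model(image)
    rectified_label = model(rectified)
    # Disagreement indicates that isolated high-magnitude pixels drove the
    # original prediction, the signature of an L0 attack; the rectified
    # label then serves as the corrected classification.
    is_adversarial = original_label != rectified_label
    return is_adversarial, rectified_label
```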

Multitask adversarial attack with dispersion amplification

Pavlo Haleta, Dmytro Likhomanov, Oleksandra Sokol
2021 EURASIP Journal on Information Security  
To attack such a system, an adversarial example has to pass through many distinct networks at once, which is the major challenge addressed by this paper.  ...  Recently, adversarial attacks have drawn the community's attention as an effective tool to degrade the accuracy of neural networks. However, their actual usage in the real world is limited.  ...
doi:10.1186/s13635-021-00124-3 fatcat:damhnwplznglngm7w4kusxcd4u
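For intuition, a one-step multi-network attack can be written as plain FGSM through the summed losses of all target models. This is a simplified sketch, not the paper's dispersion-amplification objective; `models`, `x`, and `y` are assumed to be PyTorch classifiers and batched inputs/labels.

```python
# Sketch of a multi-network attack: one perturbation must fool several
# models at once, so the gradient is taken through their summed losses.
import torch

def multi_model_fgsm(models, x, y, eps=8 / 255):
    x_adv = x.clone().requires_grad_(True)
    # Aggregate the classification loss across all target networks.
    loss = sum(torch.nn.functional.cross_entropy(m(x_adv), y) for m in models)
    loss.backward()
    # A single signed-gradient step that increases every model's loss.
    return (x_adv + eps * x_adv.grad.sign()).clamp(0, 1).detach()
```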

Defending Against Person Hiding Adversarial Patch Attack with a Universal White Frame [article]

Youngjoon Yu, Hong Joo Lee, Hakmin Lee, Yong Man Ro
2022 arXiv   pre-print
Although it is necessary to defend against an adversarial patch attack, very few efforts have been dedicated to defending against person-hiding attacks.  ...  Although these detection networks show high performance, they are vulnerable to adversarial patch attacks.  ...  Recent works demonstrate that adversarial patches can fool the object detection network in the physical world.  ...
arXiv:2204.13004v1 fatcat:lz4t4f4qardjxm2uyh3vbptvku

Adversarial Attacks and Defences Competition [article]

Alexey Kurakin, Ian Goodfellow, Samy Bengio, Yinpeng Dong, Fangzhou Liao, Ming Liang, Tianyu Pang, Jun Zhu, Xiaolin Hu, Cihang Xie, Jianyu Wang, Zhishuai Zhang, Zhou Ren (+9 others)
2018 arXiv   pre-print
adversarial examples as well as to develop new ways to defend against them.  ...  To accelerate research on adversarial examples and robustness of machine learning classifiers, Google Brain organized a NIPS 2017 competition that encouraged researchers to develop new methods to generate  ...  This approach works as long as the attacker is unaware of the detector or the attack is not strong enough.  ...
arXiv:1804.00097v1 fatcat:f7ztv2gb7vbb3lrsc2fg7fwuna

Security Analysis of Camera-LiDAR Fusion Against Black-Box Attacks on Autonomous Vehicles [article]

R. Spencer Hallyburton, Yupei Liu, Yulong Cao, Z. Morley Mao, Miroslav Pajic
2022 arXiv   pre-print
Sensor fusion with multi-frame tracking is becoming increasingly popular for detecting 3D objects.  ...  In addition, we demonstrate that the frustum attack is stealthy to existing defenses against LiDAR spoofing as it preserves consistency between camera and LiDAR semantics.  ...  Acknowledgements This work is sponsored in part by the ONR under agreements N00014-17-1-2504 and N00014-20-1-2745, AFOSR under award number FA9550-19-1-0169, as well as the NSF CNS-1652544 award.  ...
arXiv:2106.07098v4 fatcat:onalm73kcjdunoeqbzv4sscj7m
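The consistency the attack exploits can be made concrete: a defense that projects LiDAR points into the image and checks them against 2D detections will accept spoofed points placed inside a genuine detection's frustum. Below is a hypothetical checker of that kind, assuming camera-frame points and a 3x3 intrinsic matrix `K`; it is not the paper's pipeline.

```python
# Sketch of the camera-LiDAR consistency check that the frustum attack
# evades: spoofed points placed inside the viewing frustum of a genuine
# 2D detection still project into that detection's bounding box.
import numpy as np

def points_consistent_with_box(points_cam, K, box):
    """Return the fraction of 3D points projecting into 2D box (x1, y1, x2, y2)."""
    pts = points_cam[points_cam[:, 2] > 0]   # keep points in front of the camera
    uv = (K @ pts.T).T                       # project onto the image plane
    uv = uv[:, :2] / uv[:, 2:3]
    x1, y1, x2, y2 = box
    inside = (uv[:, 0] >= x1) & (uv[:, 0] <= x2) & \
             (uv[:, 1] >= y1) & (uv[:, 1] <= y2)
    return inside.mean() if len(inside) else 0.0
```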

Generalizable Data-free Objective for Crafting Universal Adversarial Perturbations
Konda Reddy Mopuri, Aditya Ganeshan, R. Venkatesh Babu
2018 IEEE Transactions on Pattern Analysis and Machine Intelligence  
In the practical setting of the black-box attack scenario (when the attacker does not have access to the target model and its training data), we show that our objective outperforms the data-dependent objectives  ...  Independent of the underlying task, our objective achieves fooling via corrupting the extracted features at multiple layers.  ...  Therefore, existing procedures cannot craft perturbations when enough data is not provided. • Weaker black-box performance: Since information about the target models is generally not available to attackers  ...
doi:10.1109/tpami.2018.2861800 pmid:30072314 fatcat:d4nf5qoujjdhhfayngf4736kbq

Generalizable Data-free Objective for Crafting Universal Adversarial Perturbations [article]

Konda Reddy Mopuri, Aditya Ganeshan, R. Venkatesh Babu
2018 arXiv   pre-print
In the practical setting of the black-box attack scenario (when the attacker does not have access to the target model and its training data), we show that our objective outperforms the data-dependent objectives  ...  Independent of the underlying task, our objective achieves fooling via corrupting the extracted features at multiple layers.  ...  Therefore, existing procedures cannot craft perturbations when enough data is not provided. • Weaker black-box performance: Since information about the target models is generally not available to attackers  ...
arXiv:1801.08092v3 fatcat:vmmqjwzmxfc43ghedz7sahefdu
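The feature-corruption objective shared by the journal and preprint versions above admits a short sketch: optimize a perturbation, fed to the network with no data at all, to maximize activation norms at several layers. This is a simplified form of the paper's objective; `model`, `layers`, and the hook-based implementation are assumptions.

```python
# Sketch of a data-free universal perturbation objective: maximize the
# activation magnitudes at several hooked layers so a single perturbation
# corrupts extracted features regardless of the input it is added to.
import torch

def craft_uap(model, layers, shape, xi=10 / 255, steps=200, lr=0.01):
    # The perturbation itself is the only "input": no training data is used.
    v = ((torch.rand(shape) * 2 - 1) * xi).requires_grad_(True)
    acts = []
    hooks = [l.register_forward_hook(lambda m, i, o: acts.append(o)) for l in layers]
    opt = torch.optim.Adam([v], lr=lr)
    for _ in range(steps):
        acts.clear()
        model(v.clamp(-xi, xi))
        # Maximize each hooked layer's activation norm (minimize its negative log).
        loss = -sum(torch.log(a.norm()) for a in acts)
        opt.zero_grad()
        loss.backward()
        opt.step()
    for h in hooks:
        h.remove()
    return v.detach().clamp(-xi, xi)
```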

Adversarial Examples in Modern Machine Learning: A Review [article]

Rey Reza Wiyatno, Anqi Xu, Ousmane Dia, Archy de Berker
2019 arXiv   pre-print
Our aim is to provide an extensive coverage of the field, furnishing the reader with an intuitive understanding of the mechanics of adversarial attack and defense mechanisms and enlarging the community  ...  We explore a variety of adversarial attack methods that apply to image-space content, real-world adversarial attacks, adversarial defenses, and the transferability property of adversarial examples.  ...  or multiple region proposals for object detection tasks.  ...
arXiv:1911.05268v2 fatcat:majzak4sqbhcpeahghh6sm3dwq

Deep Text Classification Can be Fooled

Bin Liang, Hongcheng Li, Miaoqiang Su, Pan Bian, Xirong Li, Wenchang Shi
2018 Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence  
The adversarial samples can be perturbed to any desired class without compromising their utility. At the same time, the introduced perturbation is difficult to perceive.  ...  The experiment results show that the adversarial samples generated by our method can successfully fool both state-of-the-art character-level and word-level DNN-based text classifiers.  ...  The work is supported by the National Natural Science Foundation of China (NSFC) under grants 91418206, 61672523, and 61472429, and the National Science and Technology Major Project of China under grant 2012ZX01039  ...
doi:10.24963/ijcai.2018/585 dblp:conf/ijcai/0002LSBLS18 fatcat:tw6xx55rkrgldmhwvodygkuvye
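To illustrate the flavor of such character-level perturbations (the paper applies insertion, modification, and removal guided by gradients), here is a greedy sketch; the `predict` interface and the homoglyph table are illustrative assumptions, not the authors' method.

```python
# Greedy sketch of a character-level evasion: apply small, hard-to-perceive
# edits one at a time, keeping only those that push the classifier toward
# the target class. Assumption: `predict(text)` returns class probabilities.
HOMOGLYPHS = {"o": "0", "l": "1", "a": "@", "e": "3"}

def perturb_text(predict, text, target_class, max_edits=5):
    edits = 0
    for i, ch in enumerate(text):
        if edits >= max_edits:
            break
        if ch in HOMOGLYPHS:
            candidate = text[:i] + HOMOGLYPHS[ch] + text[i + 1:]
            # Keep an edit only if it raises the target-class probability.
            if predict(candidate)[target_class] > predict(text)[target_class]:
                text, edits = candidate, edits + 1
    return text
```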

Adversarial Example Detection for DNN Models: A Review and Experimental Comparison [article]

Ahmed Aldahdooh, Wassim Hamidouche, Sid Ahmed Fezza, Olivier Deforges
2021 arXiv   pre-print
Among these challenges are the defense against and/or the detection of adversarial examples (AEs).  ...  The aim of an AE is to fool the DL model, which makes it a potential risk for DL applications.  ...  Acknowledgement The project is funded by both Région Bretagne (Brittany region), France, and direction générale de l'armement (DGA).  ...
arXiv:2105.00203v3 fatcat:p4udtl4kkvbrxijrsp6g3hr27e

Adversarial Attacks against Face Recognition: A Comprehensive Study [article]

Fatemeh Vakhshiteh, Ahmad Nickabadi, Raghavendra Ramachandra
2021 arXiv   pre-print
In an advanced FR system with a deep learning-based architecture, however, promoting the recognition efficiency alone is not sufficient, and the system should also withstand potential kinds of attacks designed  ...  In this article, we present a comprehensive survey on adversarial attacks against FR systems and elaborate on the competence of new countermeasures against them.  ...  The detection approaches included in this  ...  In verification systems, this becomes crucial since an adversary can always use an attack with a relatively higher perturbation that is just enough to deceive the  ...
arXiv:2007.11709v3 fatcat:jfhcxj6hp5esvcclf2dsehfad4

Trust and Security in RFID-Based Product Authentication Systems

Mikko O. Lehtonen, Florian Michahelles, Elgar Fleisch
2007 IEEE Systems Journal  
Product authentication is needed to detect counterfeit products and to prevent them from entering the distribution channels of genuine products.  ...  The benefits of implementing a service that detects cloned tags at the level of the network's core services are identified.  ...  engineering) or even through threatening and blackmailing (rubber-hose cryptanalysis). 3) Attack Against RF Communication: Also, an attack against the radio-frequency (RF) communication can fool the product  ...
doi:10.1109/jsyst.2007.909820 fatcat:wtpav4ncojbo3ejfo7qgzwymqe
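A network-level clone-detection service of the kind discussed above can be sketched as an impossible-travel check on tag reads; the event format, the `distance_km` callback, and the speed bound below are illustrative assumptions.

```python
# Sketch of clone detection at the network's core services: a cloned tag
# shares its ID with the genuine one, so reads of the same ID from places
# the genuine tag could not have reached in the elapsed time expose a clone.
MAX_SPEED_KMH = 120  # fastest plausible movement through the supply chain

last_read = {}  # tag_id -> (timestamp_hours, location)

def check_read(tag_id, timestamp_h, location, distance_km):
    """Return True if this read is consistent with a single genuine tag."""
    if tag_id in last_read:
        prev_t, prev_loc = last_read[tag_id]
        dist = distance_km(prev_loc, location)
        elapsed = timestamp_h - prev_t
        if dist > MAX_SPEED_KMH * max(elapsed, 0):
            return False  # impossible travel: the ID has likely been cloned
    last_read[tag_id] = (timestamp_h, location)
    return True
```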

Privacy and Security Issues in Deep Learning: A Survey

Ximeng Liu, Lehui Xie, Yaopeng Wang, Jian Zou, Jinbo Xiong, Zuobin Ying, Athanasios V. Vasilakos
2020 IEEE Access  
INDEX TERMS Deep learning, DL privacy, DL security, model extraction attack, model inversion attack, adversarial attack, poisoning attack, adversarial defense, privacy-preserving.  ...  Besides, recent works have found that the DL model is vulnerable to adversarial examples perturbed by imperceptible noise, which can lead the DL model to predict wrongly with high confidence.  ...  The disadvantage of stateful detection methods is that they cannot defend against transfer attacks that do not require any query.  ...
doi:10.1109/access.2020.3045078 fatcat:kbpqgmbg4raerc6txivacpgcia
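The stateful-detection limitation noted in the snippet is easy to see from a sketch of the technique itself: the detector flags clients whose queries cluster too tightly, which catches query-based black-box attacks but sees nothing when an attacker transfers AEs crafted on a surrogate model. The vector interface and threshold below are assumptions.

```python
# Sketch of stateful detection: black-box attacks issue many near-duplicate
# queries, so flag a client whose new query is too close to a recent one.
import numpy as np

class StatefulDetector:
    def __init__(self, threshold=0.05, history=1000):
        self.threshold, self.history, self.buffer = threshold, history, []

    def is_suspicious(self, query):
        q = np.asarray(query, dtype=float).ravel()
        # Compare against the rolling window of this client's past queries.
        close = any(np.linalg.norm(q - p) < self.threshold for p in self.buffer)
        self.buffer = (self.buffer + [q])[-self.history:]
        return close
```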

Optimization-Guided Binary Diversification to Mislead Neural Networks for Malware Detection [article]

Mahmood Sharif, Keane Lucas, Lujo Bauer, Michael K. Reiter, Saurabh Shintre
2019 arXiv   pre-print
However, these defenses may still be susceptible to evasion by adaptive attackers, and so we advocate for augmenting malware-detection systems with methods that do not rely on machine learning.  ...  Moreover, we found that our attack can fool some commercial anti-viruses, in certain cases with a success rate of 85%.  ...  Nevertheless, adaptive adversaries remain a risk, and we recommend the deployment of multiple detection algorithms, including ones not based on ML, to raise the bar against such adversaries.  ... 
arXiv:1912.09064v1 fatcat:ig5pvocysbhjjbdmka4c7xeqqe
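The optimization-guided search can be approximated by a greedy hill-climb over functionality-preserving transformations; `transforms` and `score` below are hypothetical interfaces standing in for the paper's rewriting operators and the target detector.

```python
# Hill-climbing sketch of optimization-guided binary diversification: apply
# candidate functionality-preserving rewrites and keep only those that lower
# the detector's maliciousness score.
import random

def evade_detector(binary, transforms, score, budget=200, threshold=0.5):
    best = score(binary)
    for _ in range(budget):
        candidate = random.choice(transforms)(binary)
        s = score(candidate)
        if s < best:              # greedy: keep only improving rewrites
            binary, best = candidate, s
        if best < threshold:      # detector now scores the binary as benign
            break
    return binary
```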

Adversarial Sensor Attack on LiDAR-based Perception in Autonomous Driving [article]

Yulong Cao, Chaowei Xiao, Benjamin Cyr, Yimeng Zhou, Won Park, Sara Rampazzi, Qi Alfred Chen, Kevin Fu, Z. Morley Mao
2019 arXiv   pre-print
We find that blindly applying LiDAR spoofing is insufficient to achieve this goal due to the machine learning-based object detection process.  ...  Thus, we then explore the possibility of strategically controlling the spoofed attack to fool the machine learning model.  ...  To perform laser aiming in these scenarios, the attacker can use techniques such as camera-based object detection and tracking.  ... 
arXiv:1907.06826v1 fatcat:mnqjpnuudvfqpdjjctrkc624he
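"Strategically controlling" the spoofed points can be read as a search, within the spoofer's limited capability (tens of points in a narrow angular window), for the candidate point set the detector scores highest. The `detector_conf` interface and sampling ranges below are illustrative assumptions, not the paper's method.

```python
# Sketch of strategically controlled LiDAR spoofing: rather than injecting
# points blindly, search over physically attainable point sets for the one
# that maximizes the detector's confidence in a fake obstacle.
import numpy as np

def spoof_points(detector_conf, scene, n_points=60, trials=100, seed=0):
    rng = np.random.default_rng(seed)
    best_pts, best_conf = None, 0.0
    for _ in range(trials):
        # Sample candidates 5-8 m ahead within roughly a 10-degree azimuth window.
        r = rng.uniform(5.0, 8.0, n_points)
        az = rng.uniform(-0.09, 0.09, n_points)
        z = rng.uniform(-0.5, 1.0, n_points)
        pts = np.stack([r * np.cos(az), r * np.sin(az), z], axis=1)
        conf = detector_conf(scene, pts)
        if conf > best_conf:
            best_pts, best_conf = pts, conf
    return best_pts, best_conf
```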