Real-time, Universal, and Robust Adversarial Attacks Against Speaker Recognition Systems
[article]
2020
arXiv
pre-print
In this paper, we propose the first real-time, universal, and robust adversarial attack against the state-of-the-art deep neural network (DNN) based speaker recognition system. ...
In addition, we improve the robustness of our attack by modeling the sound distortions caused by the physical over-the-air propagation through estimating room impulse response (RIR). ...
Thus, the generated adversarial perturbation needs to be robust enough to remain effective under such real-world distortions. Threat Model. ...
arXiv:2003.02301v2
fatcat:vzv2zftbtrhuxnwjcnxx4ymmty
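The entry above describes two key steps: adding a single pre-computed universal perturbation to any utterance, and modeling over-the-air propagation by convolving with a room impulse response (RIR). A minimal sketch of both steps (the signal, perturbation, and RIR below are hypothetical placeholders, not the paper's trained artifacts):

```python
import numpy as np

def apply_universal_perturbation(audio, perturbation):
    """Add the same pre-computed universal perturbation to any input audio."""
    n = min(len(audio), len(perturbation))
    adv = audio.copy()
    adv[:n] = adv[:n] + perturbation[:n]
    return np.clip(adv, -1.0, 1.0)

def simulate_over_the_air(audio, rir):
    """Model physical propagation by convolving the signal with a room
    impulse response, as done when training the attack for robustness."""
    return np.convolve(audio, rir)[: len(audio)]

# Hypothetical placeholders: 1 s of audio at 16 kHz, a small random
# perturbation, and a toy 4-tap impulse response.
audio = np.zeros(16000, dtype=np.float32)
delta = 0.005 * np.random.default_rng(0).standard_normal(16000).astype(np.float32)
rir = np.array([1.0, 0.0, 0.3, 0.1], dtype=np.float32)

adv = apply_universal_perturbation(audio, delta)
received = simulate_over_the_air(adv, rir)
```

In the actual attack, the perturbation is optimized over many speakers and many sampled RIRs so that `received` still fools the recognizer.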
Adversarial Attacks and Defenses in Deep Learning: from a Perspective of Cybersecurity
2022
ACM Computing Surveys
Further, it is difficult to evaluate the real threat of adversarial attacks or the robustness of a deep learning model, as there are no standard evaluation methods. ...
Hence, with this paper, we review the literature to date. Additionally, we attempt to offer the first analysis framework for a systematic understanding of adversarial attacks. ...
Threat models against DNNs. To classify these attacks, we have specified the threat model against DNNs by introducing the critical components of the model attacks. ...
doi:10.1145/3547330
fatcat:d3x3oitysvb73ado5kuaqakgtu
Moiré Attack (MA): A New Potential Risk of Screen Photos
[article]
2021
arXiv
pre-print
In this paper, we find a special phenomenon in digital image processing, the moiré effect, that could cause unnoticed security threats to DNNs. ...
attack with the noise budget ϵ=4), high transferability rate across different models, and high robustness under various defenses. ...
We would like to thank Hanyue Lou at Peking University for the discussion on the moiré phenomenon in daily life. ...
arXiv:2110.10444v1
fatcat:76ufml5xyngohcmpmuj7jjzyb4
Stealthy Attack on Algorithmic-Protected DNNs via Smart Bit Flipping
[article]
2021
arXiv
pre-print
To improve the robustness of DNNs, some algorithmic-based countermeasures against adversarial examples have been introduced thereafter. ...
In this paper, we propose a new type of stealthy attack on protected DNNs to circumvent the algorithmic defenses: via smart bit flipping in DNN weights, we can preserve the classification accuracy for clean ...
To quantify the robustness of a classifier f, we can find the
IV. ...
arXiv:2112.13162v1
fatcat:jf6up5vonrbhvipf2sknpczdpq
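The snippet above mentions flipping individual bits in DNN weights. The sketch below is a generic illustration, not the paper's bit-selection procedure: it shows how flipping a single bit in a weight's IEEE-754 float32 encoding can change its value drastically.

```python
import struct

def flip_bit(value: float, bit: int) -> float:
    """Flip one bit (0 = LSB, 31 = sign) in the IEEE-754 float32
    encoding of a weight and return the resulting value."""
    (bits,) = struct.unpack("<I", struct.pack("<f", value))
    bits ^= 1 << bit
    (flipped,) = struct.unpack("<f", struct.pack("<I", bits))
    return flipped

w = 0.5
negated = flip_bit(w, 31)   # sign bit flipped: -0.5
blown_up = flip_bit(w, 30)  # top exponent bit flipped: 2**127
```

"Smart" bit flipping searches for the few bits whose flips degrade a model's defended behavior while keeping clean accuracy; that search is beyond this sketch.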
Are Malware Detection Classifiers Adversarially Vulnerable to Actor-Critic based Evasion Attacks?
2022
EAI Endorsed Transactions on Scalable Information Systems
However, the robustness of these models against well-crafted adversarial samples is not well investigated. ...
Thus malware detection models should be investigated for vulnerabilities and mitigated to enhance their overall forensic knowledge and adversarial robustness. ...
The ability to simulate real-world threat scenarios and develop mitigation strategies for the same makes threat modeling a vital exercise for any system. ...
doi:10.4108/eai.31-5-2022.174087
fatcat:42jftpdh35db7p4do6gnnlygny
Mind the Gap! A Study on the Transferability of Virtual vs Physical-world Testing of Autonomous Driving Systems
[article]
2021
arXiv
pre-print
real-world vehicle. ...
world, threatening the potential of existing testing solutions when applied to physical SDCs. ...
The DNN architecture is driving in the real world? ...
arXiv:2112.11255v1
fatcat:aia44y5s2vamnabgnofo3lxo2a
Realizable Universal Adversarial Perturbations for Malware
[article]
2022
arXiv
pre-print
Machine learning classifiers are vulnerable to adversarial examples -- input-specific perturbations that manipulate models' output. ...
Our experiments limit the effectiveness of a white box Android evasion attack to ~20% at the cost of ~3% TPR at 1% FPR. ...
Acknowledgments This research has been partially supported by the EC H2020 Project CONCORDIA (GA 830927) and the UK EP/L022710/2 and EP/P009301/1 EPSRC research grants. ...
arXiv:2102.06747v2
fatcat:2tlsyq3ojbdyviumrbvwzm7ipu
EnnCore: End-to-End Conceptual Guarding of Neural Architectures
2022
AAAI Conference on Artificial Intelligence
The EnnCore project addresses the fundamental security problem of guaranteeing safety, transparency, and robustness in neural-based architectures. ...
In this respect, EnnCore will pioneer the dialogue between contemporary explainable neural models and full-stack neural software verification. ...
Acknowledgment The work is funded by EPSRC grant EP/T026995/1 entitled "EnnCore: End-to-End Conceptual Guarding of Neural Architectures" under Security for all in an AI enabled society. Prof. ...
dblp:conf/aaai/ManinoCDRSMFBL022
fatcat:weaswjcuwjeslhv443vrgouywe
Adversarial Attack on Attackers: Post-Process to Mitigate Black-Box Score-Based Query Attacks
[article]
2022
arXiv
pre-print
The score-based query attacks (SQAs) pose practical threats to deep neural networks by crafting adversarial perturbations within dozens of queries, only using the model's output scores. ...
In this way, (1) SQAs are prevented regardless of the model's worst-case robustness; (2) the original model predictions are hardly changed, i.e., no degradation on clean accuracy; (3) the calibration of ...
As a defense in real-world applications, AAA greatly mitigates the adversarial threat without requiring huge computational burden. ...
arXiv:2205.12134v1
fatcat:4b3tqbamn5e4pk7n5ppnimyo2i
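The defense described above post-processes output scores so that query attackers are misled while clean predictions are unchanged. A toy post-processor in that spirit (not the paper's actual AAA formulation; all parameters here are hypothetical):

```python
import numpy as np

def postprocess_scores(logits, amplitude=0.5, seed=None):
    """Release perturbed scores whose argmax matches the model's true
    prediction, so clean accuracy is kept while the score differences a
    query-based attacker observes become misleading. (Toy sketch only.)"""
    rng = np.random.default_rng(seed)
    out = logits + rng.uniform(-amplitude, amplitude, size=logits.shape)
    top, released_top = int(np.argmax(logits)), int(np.argmax(out))
    if released_top != top:
        # Swap so the originally predicted class stays on top.
        out[top], out[released_top] = out[released_top], out[top]
    return out

logits = np.array([2.0, 0.5, -1.0])
released = postprocess_scores(logits, seed=0)
```

Because only the released scores change, the defense needs no retraining and adds negligible computational cost at inference time.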
DeepGauge: multi-granularity testing criteria for deep learning systems
2018
Proceedings of the 33rd ACM/IEEE International Conference on Automated Software Engineering - ASE 2018
However, a plethora of studies have shown that the state-of-the-art DL systems suffer from various vulnerabilities which can lead to severe consequences when applied to real-world applications. ...
potentially hinder its real-world deployment. ...
We gratefully acknowledge the support of NVIDIA AI Tech Center (NVAITC) to our research. We also appreciate the anonymous reviewers for their insightful and constructive comments. ...
doi:10.1145/3238147.3238202
dblp:conf/kbse/MaJZSXLCSLLZW18
fatcat:uvf2ugxmnnhblk5lhumzlagpu4
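DeepGauge's criteria generalize basic neuron coverage. As a hedged illustration of that baseline notion (the activation matrix and threshold below are hypothetical, and DeepGauge itself defines finer-grained variants):

```python
import numpy as np

def neuron_coverage(activations, threshold=0.0):
    """Basic neuron coverage: the fraction of neurons whose activation
    exceeds the threshold on at least one test input. DeepGauge extends
    this with multi-granularity variants (e.g. k-multisection and
    boundary coverage)."""
    covered = (activations > threshold).any(axis=0)  # per-neuron flag
    return float(covered.mean())

# Hypothetical activations: 2 test inputs x 3 neurons.
acts = np.array([[0.2, -0.1, 0.0],
                 [0.0,  0.5, -0.3]])
cov = neuron_coverage(acts)  # the third neuron never fires above 0
```

A low coverage value suggests the test suite exercises only part of the network's internal states, motivating additional test generation.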
Understanding Local Robustness of Deep Neural Networks under Natural Variations
[chapter]
2021
Lecture Notes in Computer Science
) tool to automatically identify the non-robust points. ...
This work aims to bridge this gap.To this end, we study the local per-input robustness properties of the DNNs and leverage those properties to build a white-box (DeepRobust-W) and a black-box (DeepRobust-B ...
Usage Scenario DeepRobust-W/B works in a real-world setting where a customer/user runs a pre-trained DNN model in real-time which constantly receives inputs and wants to test if the prediction of the DNN ...
doi:10.1007/978-3-030-71500-7_16
fatcat:lp6tjzbjkvbhzhw2d45qyetyry
The RFML Ecosystem: A Look at the Unique Challenges of Applying Deep Learning to Radio Frequency Applications
[article]
2020
arXiv
pre-print
deep machine learning systems in real-world wireless communication applications. ...
A major driver for the usage of deep machine learning in the context of wireless communications is that little to no a priori knowledge of the intended spectral environment is required, given that there ...
Finally, while not as optimal as real world data, augmented datasets aim to provide a "best of both worlds" approach by minimizing the limitations of synthetic datasets (i.e. real-world model accuracy) ...
arXiv:2010.00432v1
fatcat:mxnvorh5wrfwzmxg4ezpbj4xve
Is Approximation Universally Defensive Against Adversarial Attacks in Deep Neural Networks?
[article]
2021
arXiv
pre-print
Towards this, we present an extensive adversarial robustness analysis of different approximate DNN accelerators (AxDNNs) using the state-of-the-art approximate multipliers. ...
Very recently, the inexact nature of approximate components, such as approximate multipliers have also been reported successful in defending adversarial attacks on DNNs models. ...
THREAT MODEL In this section, a threat model is presented for exploring the adversarial robustness of AxDNNs.
A. ...
arXiv:2112.01555v1
fatcat:eopeppoqaffm3ldhxwqhb3gelm
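As a hedged illustration of the approximate-multiplier idea this entry evaluates (the truncation scheme below is a generic example, not one of the paper's specific multipliers):

```python
def approx_multiply(a: int, b: int, truncate_bits: int = 4) -> int:
    """Toy approximate multiplier: zero the low-order bits of both
    operands before multiplying, trading numerical accuracy for
    (hardware) cost -- the kind of inexactness AxDNN accelerators exploit."""
    mask = ~((1 << truncate_bits) - 1)
    return (a & mask) * (b & mask)

exact = 173 * 59                    # 10207
approx = approx_multiply(173, 59)   # (160 * 48) = 7680
```

The question the paper studies is whether such inexact arithmetic, when used throughout a DNN accelerator, reliably disrupts adversarial perturbations or not.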
PRoA: A Probabilistic Robustness Assessment against Functional Perturbations
[article]
2022
arXiv
pre-print
and frequently occurring in the real world. ...
However, existing robustness verification methods are not sufficiently practical for deploying machine learning systems in the real world. ...
arXiv:2207.02036v1
fatcat:frl7wskdmjhw5inedib46w33d4
Universal Adversarial Perturbations for Speech Recognition Systems
2019
Interspeech 2019
The existence of such perturbations poses a threat to machine learning models in real world settings since the adversary may simply add the same pre-computed universal perturbation to a new image and cause ...
The existence of universal adversarial perturbations (described below) can pose a more serious threat to ASR systems in real-world settings since the adversary may simply add the same pre-computed universal ...
doi:10.21437/interspeech.2019-1353
dblp:conf/interspeech/NeekharaHPDMK19
fatcat:wzfjggj3j5bsxgbxql7zkchlyi
Showing results 1 — 15 out of 763 results