Hardware/Software Obfuscation against Timing Side-channel Attack on a GPU
[article]
2020
arXiv
pre-print
In this paper, our attack model is a coalescing attack, which leverages a critical GPU microarchitectural feature -- the coalescing unit. ...
In this paper, a series of hardware/software countermeasures are proposed to obfuscate the memory timing side channel, making the GPU more resilient without impacting performance. ...
Side-channel attack methods can be used to attack different table-based algorithms. Therefore, we need a more comprehensive defense strategy. ...
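To make the coalescing leak concrete, the following is an illustrative model (an assumption, not the paper's exact attack): a warp's 32 table lookups coalesce into one memory transaction per distinct cache line touched, so the transaction count, and thus the timing, depends on the secret-derived indices.

```python
# Illustrative model only: count coalesced transactions for one warp.
import numpy as np

LINE_BYTES = 128
ENTRY_BYTES = 4  # e.g., a 32-bit T-table entry

def warp_transactions(indices):
    """Coalesced transactions issued for one 32-thread warp."""
    lines = (np.asarray(indices) * ENTRY_BYTES) // LINE_BYTES
    return len(np.unique(lines))

rng = np.random.default_rng(0)
print(warp_transactions(np.arange(32)))                  # 1: fully coalesced
print(warp_transactions(rng.integers(0, 256, size=32)))  # several: key-dependent
```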
arXiv:2007.16175v1
fatcat:oyuvuw3kbffd5kb2mpzveuyb5y
JumpReLU: A Retrofit Defense Strategy for Adversarial Attacks
[article]
2019
arXiv
pre-print
These training strategies are very expensive, in both human and computational time. ...
To complement these approaches, we propose a very simple and inexpensive strategy which can be used to "retrofit" a previously-trained network to improve its resilience to adversarial attacks. ...
ACKNOWLEDGMENTS We would like to acknowledge ARO, DARPA, NSF, and ONR for providing partial support for this work. We would also like to acknowledge Amazon for providing AWS credits for this project. ...
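A hedged sketch of the JumpReLU-style activation at the core of the retrofit: like ReLU, but with a jump threshold kappa below which activations are zeroed. Retrofitting means swapping this in for ReLU in an already-trained network and tuning kappa; this is not the authors' reference implementation.

```python
import numpy as np

def jump_relu(x, kappa=1.0):
    """JumpReLU: pass x through only where it exceeds the threshold kappa."""
    x = np.asarray(x, dtype=float)
    return np.where(x > kappa, x, 0.0)

print(jump_relu([-1.0, 0.5, 1.5], kappa=1.0))  # [0. 0. 1.5]
```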
arXiv:1904.03750v1
fatcat:w7jmyljosfb7bf3pvfbrm3lpgu
Face-Off: Adversarial Face Obfuscation
[article]
2020
arXiv
pre-print
To realize Face-Off, we overcome a set of challenges related to the black-box nature of commercial face recognition services, and the scarcity of literature for adversarial attacks on metric networks. ...
We implement and evaluate Face-Off to find that it deceives three commercial face recognition services from Microsoft, Amazon, and Face++. ...
Focusing on standard image classifiers, the authors formulate a game-theoretic problem between an obfuscator and an attacker. ...
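As a rough illustration of attacking a metric (embedding) network, a PGD-style loop can maximize the distance between a perturbed image's embedding and a reference embedding. This is a hedged sketch: `embed` stands in for a locally trained surrogate of the black-box service, and the plain L2 objective and step sizes are assumptions, not Face-Off's actual pipeline.

```python
import torch

def perturb(embed, x, ref_emb, steps=40, eps=8/255, alpha=2/255):
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = -torch.norm(embed(x + delta) - ref_emb)  # maximize distance
        loss.backward()
        with torch.no_grad():
            delta -= alpha * delta.grad.sign()  # descend on -distance
            delta.clamp_(-eps, eps)             # keep the change small
            delta.grad.zero_()
    return (x + delta).clamp(0, 1).detach()
```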
arXiv:2003.08861v2
fatcat:m5q3gdbjyvgprioef5ozh5noku
Privacy-preserving Machine Learning through Data Obfuscation
[article]
2018
arXiv
pre-print
Specifically, we introduce an obfuscation function and apply it to the training data before feeding them to the model training task. ...
Meanwhile, the model trained from the obfuscated dataset can still achieve high accuracy. ...
Defense: We use the same defense strategy as for the model memorization attack to prevent the adversary from inferring the membership of specific samples. ...
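A minimal sketch of the obfuscate-before-training idea. The paper defines its own obfuscate function; the additive-noise version below is a hypothetical stand-in.

```python
import numpy as np

def obfuscate(X, scale=0.1, seed=0):
    """Hypothetical obfuscator: perturb training data with Gaussian noise."""
    rng = np.random.default_rng(seed)
    return X + rng.normal(0.0, scale, size=X.shape)

X_train = np.random.rand(1000, 20)
X_obf = obfuscate(X_train)   # the model is trained on X_obf, never on X_train
```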
arXiv:1807.01860v2
fatcat:6ou5zgahm5gcxj56k5uxuijjbu
Mitigating Advanced Adversarial Attacks with More Advanced Gradient Obfuscation Techniques
[article]
2020
arXiv
pre-print
Recently, advanced gradient-based attack techniques were proposed (e.g., BPDA and EOT), which have defeated a considerable number of existing defense methods. ...
Extensive evaluations indicate that our solutions can effectively mitigate all existing standard and advanced attack techniques, and beat 11 state-of-the-art defense solutions published in top-tier conferences ...
Since then, new defense strategies have been introduced to increase the difficulty of adversarial example (AE) generation by obfuscating the gradients. ...
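A sketch of BPDA (backward-pass differentiable approximation), the attack technique these defenses must survive: run the real, non-differentiable preprocessing defense g on the forward pass, but pretend it was the identity on the backward pass so attack gradients flow through it. Here `g` is any defense with g(x) ≈ x; the class name is illustrative.

```python
import torch

class BPDAIdentity(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, g):
        return g(x)              # actual (non-differentiable) defense forward

    @staticmethod
    def backward(ctx, grad_out):
        return grad_out, None    # identity gradient w.r.t. x, none for g

# usage inside an attack loop: logits = model(BPDAIdentity.apply(x, g))
```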
arXiv:2005.13712v1
fatcat:xgm3agbdxrco5fultvqhlonoru
Set-based Obfuscation for Strong PUFs against Machine Learning Attacks
[article]
2019
arXiv
pre-print
However, these defenses incur high hardware overhead, degrade reliability, and are inefficient against advanced machine learning attacks such as approximation attacks. ...
In order to resist such attacks, many defenses have been proposed in recent years. ...
... strategy (CMA-ES) and recently proposed approximation attacks [7]. ...
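A hedged sketch of the classic modeling attack such obfuscation defends against: an n-stage arbiter PUF behaves like a linear threshold function of the challenge parity features, so logistic regression can clone it from observed challenge-response pairs (CRPs). This simulates the PUF in software; it is not real hardware or the paper's evaluation setup.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, m = 64, 5000
w = rng.normal(size=n + 1)                    # hidden delay parameters

def features(C):
    # parity transform: phi_i = prod_{j >= i} (1 - 2 c_j), plus a bias term
    phi = np.cumprod(1 - 2 * C[:, ::-1], axis=1)[:, ::-1]
    return np.hstack([phi, np.ones((len(C), 1))])

C = rng.integers(0, 2, size=(m, n))           # random challenges
r = (features(C) @ w > 0).astype(int)         # simulated PUF responses
clone = LogisticRegression(max_iter=1000).fit(features(C), r)
print(clone.score(features(C), r))            # near-perfect cloning accuracy
```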
arXiv:1806.02011v4
fatcat:rclgs6eozzhbxky7fzwqhkf4ui
Hiding Behind Backdoors: Self-Obfuscation Against Generative Models
[article]
2022
arXiv
pre-print
Building on this work, we illustrate the self-obfuscation attack: attackers target a pre-processing model in the system, and poison the training set of generative models to obfuscate a specific class during ...
Our contribution is to describe, implement and evaluate a generalized attack, in the hope of raising awareness regarding the challenge of architectural robustness within the machine learning community. ...
Our implementation with TensorFlow on Nvidia RTX 2080 GPUs is made available¹. Results: Table 1 summarizes the evaluated strategies. ...
arXiv:2201.09774v1
fatcat:iexoeg3rajb7nmjvqu3hqtttzi
Vulnerable GPU Memory Management: Towards Recovering Raw Data from GPU
2017
Proceedings on Privacy Enhancing Technologies
Evaluation results also indicate that nearly all GPU-accelerated applications are vulnerable to such attacks, and adversaries can launch attacks without requiring any special privileges both on traditional ...
Our algorithm enables harvesting highly sensitive information including credit card numbers and email contents from GPU memory residues. ...
Defense for virtual machines: To defend against attackers on a virtualized platform, it is sufficient for the hypervisor to clear the whole GPU memory space every time the VM switches. ...
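An application-level analogue of the hypervisor defense above (a hedged sketch, not the paper's mechanism): explicitly zero a sensitive GPU buffer before releasing it, so later allocations cannot harvest residues. Requires a CUDA-capable device.

```python
import torch

buf = torch.empty(1 << 20, device="cuda")  # buffer holding secret data
# ... secret-bearing computation using buf ...
buf.zero_()                                # overwrite the residue in place
torch.cuda.synchronize()                   # ensure the wipe has completed
del buf                                    # only then release the allocation
```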
doi:10.1515/popets-2017-0016
dblp:journals/popets/ZhouDLLZL17
fatcat:4263wzw5offmti2domqfpak3vy
Python and Malware: Developing Stealth and Evasive Malware Without Obfuscation
[article]
2021
arXiv
pre-print
With the continuous rise of malicious campaigns and the exploitation of new attack vectors, it is necessary to assess the efficacy of the defensive mechanisms used to detect them. ...
First, it introduces a new method for obfuscating malicious code to bypass all static checks of multi-engine scanners, such as VirusTotal. ...
Responsibility for the information and views expressed therein lies entirely with the authors. ...
arXiv:2105.00565v1
fatcat:5cetfh4ofbbxlgreab5xptmyie
Obfuscation Algorithm for Privacy-Preserving Deep Learning-Based Medical Image Analysis
2022
Applied Sciences
The proposed algorithm successfully enables DL model training on obfuscated images with no significant computational overhead while ensuring protection against human eye perception and AI-based reconstruction ...
The considered attack strategy is based on the training of a reconstruction model on original-obfuscated pairs of samples from a public dataset. ...
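A sketch of that attack model: train a reconstruction network on public (obfuscated, original) pairs, then apply it to protected images. The tiny CNN and synthetic data below are placeholders, not the paper's setup.

```python
import torch
import torch.nn as nn

recon = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                      nn.Conv2d(16, 1, 3, padding=1))
opt = torch.optim.Adam(recon.parameters(), lr=1e-3)

x_orig = torch.rand(64, 1, 28, 28)                 # stand-in public images
x_obf = x_orig + 0.3 * torch.randn_like(x_orig)    # their obfuscated versions

for _ in range(100):                               # fit obfuscated -> original
    loss = nn.functional.mse_loss(recon(x_obf), x_orig)
    opt.zero_grad(); loss.backward(); opt.step()
```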
doi:10.3390/app12083997
fatcat:tqsq3r6od5gdrmbm6hj6ml3cn4
A Survey of Techniques for Improving Security of GPUs
[article]
2018
arXiv
pre-print
More than informing users and researchers about GPU security techniques, this survey aims to increase their awareness about GPU security vulnerabilities and potential countermeasures. ...
Due to these, the GPU can act as a safe-haven for stealthy malware and the weakest 'link' in the security 'chain'. ...
... most notably cryptography, finance, health, space, and defense. ...
arXiv:1804.00114v1
fatcat:u3363wls3fh3te2vvfvjzra25q
Software Puzzle Approach: A Measure to Resource-Inflated Denial-of-Service Attack
2017
International Journal of Computer Applications
However, a wrongdoer can inflate its DoS/DDoS attack capability with a fast puzzle-solving package and/or built-in graphics processing unit (GPU) hardware to considerably weaken the effectiveness of ...
... effort in translating a central processing unit (CPU) puzzle package to its functionally equivalent GPU version, such that the translation cannot be done in real time. ...
This parallelism strategy can dramatically reduce the total puzzle-solving time and hence increase the attack's potency. Green et al. ...
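A sketch of the underlying client-puzzle primitive (a hash preimage with k leading zero bits). Both this entry and the next wrap such a puzzle in dynamically generated, obfuscated code so it cannot be translated to a GPU solver in real time; that wrapping is the papers' contribution and is not shown here.

```python
import hashlib
import os

def make_puzzle(k=16):
    return os.urandom(16), k                 # server nonce + difficulty

def solve(nonce, k):
    x = 0
    while int.from_bytes(hashlib.sha256(nonce + x.to_bytes(8, "big")).digest(),
                         "big") >> (256 - k):  # nonzero => fewer than k zero bits
        x += 1
    return x                                  # ~2**k hash evaluations expected

nonce, k = make_puzzle()
print(solve(nonce, k))
```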
doi:10.5120/ijca2017913314
fatcat:ilf4hcxujbgotkbrhbgydylcc4
Software Puzzle: A Countermeasure to Resource-Inflated Denial-of-Service Attacks
2015
IEEE Transactions on Information Forensics and Security
Index Terms: Software puzzle, code obfuscation, GPU programming, distributed denial of service (DDoS). ...
However, an attacker can inflate its capability of DoS/DDoS attacks with fast puzzlesolving software and/or built-in graphics processing unit (GPU) hardware to significantly weaken the effectiveness of ...
In other words, their defense against GPU-inflated DoS attacks may not be attractive in practice. ...
doi:10.1109/tifs.2014.2366293
fatcat:wo3mb46q7ne5nml3ak7ulirqji
Adversarially Robust Classification by Conditional Generative Model Inversion
[article]
2022
arXiv
pre-print
Most adversarial attack defense methods rely on obfuscating gradients. ...
We propose a classification model that does not obfuscate gradients and is robust by construction without assuming prior knowledge about the attack. ...
BPDA was shown to break defense mechanisms that rely on obfuscated gradients. Other methods which obfuscate gradients and were circumvented by Athalye et al. ...
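A hedged sketch of classification by conditional generative model inversion: for each candidate label y, optimize a latent z so that G(z, y) reconstructs the input, then return the label with the lowest reconstruction error. `G` is a placeholder for a trained conditional generator; the hyperparameters are illustrative.

```python
import torch

def classify_by_inversion(G, x, num_classes, steps=200, lr=0.05, zdim=64):
    errors = []
    for y in range(num_classes):
        z = torch.zeros(zdim, requires_grad=True)
        opt = torch.optim.Adam([z], lr=lr)
        for _ in range(steps):
            loss = torch.nn.functional.mse_loss(G(z, y), x)
            opt.zero_grad(); loss.backward(); opt.step()
        errors.append(loss.item())             # final error for label y
    return min(range(num_classes), key=errors.__getitem__)
```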
arXiv:2201.04733v1
fatcat:pgx2b6wqbzfiznggkm636xbz6a
Privacy-preserving Collaborative Learning with Automatic Transformation Search
[article]
2021
arXiv
pre-print
Comprehensive evaluations demonstrate that the policies discovered by our method can defeat existing reconstruction attacks in collaborative learning, with high efficiency and negligible impact on the ...
We adopt two new metrics to quantify the impacts of transformations on data privacy and model usability, which can significantly accelerate the search speed. ...
Existing Defenses and Limitations: One straightforward defense strategy is to obfuscate the gradients before releasing them, in order to make the reconstruction difficult or infeasible. ...
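A sketch of that straightforward defense: obfuscate gradients before releasing them, here with additive Gaussian noise plus magnitude pruning. The parameters are illustrative, not the paper's.

```python
import torch

def obfuscate_grads(grads, sigma=1e-3, prune_frac=0.9):
    out = []
    for g in grads:
        g = g + sigma * torch.randn_like(g)   # noise hides exact values
        k = int(prune_frac * g.numel())
        if k > 0:
            thresh = g.abs().flatten().kthvalue(k).values
            g = torch.where(g.abs() >= thresh, g, torch.zeros_like(g))
        out.append(g)                          # share these, not the originals
    return out
```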
arXiv:2011.12505v2
fatcat:memtrkgvazcl5kpqatcrpalrya
Showing results 1 — 15 out of 435 results