Robust Algorithms under Adversarial Injections
[article]
2020
arXiv
pre-print
We believe that studying algorithms under this much weaker assumption can lead to new insights and, in particular, more robust algorithms. ...
For this reason, we propose a new adversarial injections model, in which the input is ordered randomly, but an adversary may inject misleading elements at arbitrary positions. ...
As discussed earlier, this approach does not work under adversarial-injections. ...
arXiv:2004.12667v1
fatcat:nzmok3bbdjeg3iatdo6jtyvime
Robust Algorithms Under Adversarial Injections
2020
International Colloquium on Automata, Languages and Programming
We believe that studying algorithms under this much weaker assumption can lead to new insights and, in particular, more robust algorithms. ...
For this reason, we propose a new adversarial injections model, in which the input is ordered randomly, but an adversary may inject misleading elements at arbitrary positions. ...
In other words, the competitive ratio of 1/2 is optimal even for bipartite graphs. ...
doi:10.4230/lipics.icalp.2020.56
dblp:conf/icalp/GargKRS20
fatcat:w55jx4pvo5errbr6loetcixbmy
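The adversarial-injections input model described in the two entries above is easy to simulate: genuine elements arrive in uniformly random order, while the adversary may place extra, misleading elements at arbitrary positions. A minimal Python sketch; the element values and injection positions below are illustrative, not from the paper.

    import random

    def adversarial_injection_stream(genuine, injected_at):
        """Adversarial-injections input: genuine elements arrive in uniformly
        random order; the adversary places misleading elements at positions
        of its choosing (given here as a position -> element map)."""
        stream = genuine[:]
        random.shuffle(stream)                      # random order for genuine items
        for pos, item in sorted(injected_at.items()):
            stream.insert(min(pos, len(stream)), item)
        return stream

    # Example: secretary-style values plus two decoys that outshine every
    # genuine candidate early in the stream.
    print(adversarial_injection_stream([3, 1, 4, 1, 5], {0: 99, 1: 98}))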
NoiLIn: Do Noisy Labels Always Hurt Adversarial Training?
[article]
2021
arXiv
pre-print
Empirically, NoiLIn answers the previous question negatively: adversarial robustness can indeed be enhanced by NL injection. ...
To enhance AT's adversarial robustness, we propose "NoiLIn" that gradually increases Noisy Labels Injection over the AT's training process. ...
NoiLIn: An Automatic NL Injection Strategy. To benefit adversarial robustness, we propose a simple NL injection strategy that gradually increases the rate of NL injection (i.e., NoiLIn in Algorithm ...
arXiv:2105.14676v1
fatcat:mtcfp6dlabehbpd43ij4jrxrwu
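A rough sketch of the strategy the snippet describes, assuming symmetric label flipping and a multiplicative rate schedule; the constants and the schedule below are guesses, not the paper's exact rule.

    import numpy as np

    def inject_noisy_labels(labels, rate, num_classes, rng):
        """Symmetric noisy-label injection: re-draw a `rate` fraction of
        labels uniformly at random (a flipped label may re-draw its class)."""
        noisy = labels.copy()
        flip = rng.random(len(labels)) < rate
        noisy[flip] = rng.integers(0, num_classes, flip.sum())
        return noisy

    rng = np.random.default_rng(0)
    rate, growth, cap = 0.05, 1.1, 0.5              # assumed schedule parameters
    for epoch in range(5):
        y = rng.integers(0, 10, 1000)               # stand-in training labels
        y_noisy = inject_noisy_labels(y, rate, 10, rng)
        # ... one epoch of adversarial training on (x, y_noisy) goes here ...
        rate = min(cap, rate * growth)              # gradually increase injection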
Adversarial Concept Drift Detection under Poisoning Attacks for Robust Data Stream Mining
[article]
2020
arXiv
pre-print
We also introduce Relative Loss of Robustness - a novel measure for evaluating the performance of concept drift detectors under poisoning attacks. ...
We introduce the taxonomy for two types of adversarial concept drifts, as well as a robust trainable drift detector. ...
Finally, we have introduced Relative Loss of Robustness, a novel measure for evaluating the performance of drift detectors and other streaming algorithms under adversarial concept drift. ...
arXiv:2009.09497v1
fatcat:eybydq3rbbbxjga7nedjglyaze
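The snippet names but does not define the measure; one plausible formalization, offered only as an illustrative guess, is the relative drop of a detector's performance metric under poisoning.

    def relative_loss_of_robustness(clean_score, attacked_score):
        """Hypothetical reading of 'Relative Loss of Robustness': the relative
        drop in a performance metric once the poisoning attack is applied.
        The paper's exact definition may differ."""
        return (clean_score - attacked_score) / clean_score

    print(relative_loss_of_robustness(0.90, 0.72))  # 0.2, i.e. a 20% relative drop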
Learn2Perturb: an End-to-end Feature Perturbation Learning to Improve Adversarial Robustness
[article]
2020
arXiv
pre-print
Alongside other adversarial defense approaches being investigated, there has been a very recent interest in improving adversarial robustness in deep neural networks through the introduction of perturbations ...
In this study, we introduce Learn2Perturb, an end-to-end feature perturbation learning approach for improving the adversarial robustness of deep neural networks. ...
training when training under perturbation injection, and ii) increase network uncertainty through inference-time perturbation injection to make it difficult to learn an adversarial attack. ...
arXiv:2003.01090v2
fatcat:uuo6dagejbeq3hltaefsq4h7ly
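A minimal PyTorch sketch of feature-perturbation injection in the spirit of the snippet: a layer that adds zero-mean Gaussian noise with a learnable per-channel scale, active at training and inference time alike. Learn2Perturb's actual parameterization and its alternating training schedule are richer than this.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class PerturbationInjection(nn.Module):
        """Add zero-mean Gaussian noise with a learnable per-channel scale to
        a feature map; kept active at inference to raise network uncertainty."""
        def __init__(self, channels):
            super().__init__()
            self.raw_sigma = nn.Parameter(torch.zeros(1, channels, 1, 1))

        def forward(self, x):
            sigma = F.softplus(self.raw_sigma)      # keep the noise scale positive
            return x + sigma * torch.randn_like(x)

    x = torch.randn(8, 16, 32, 32)
    print(PerturbationInjection(16)(x).shape)       # torch.Size([8, 16, 32, 32])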
Generative Dynamic Patch Attack
[article]
2021
arXiv
pre-print
superior robustness to adversarial patch attacks compared with competing methods. ...
Existing patch attacks mostly consider injecting adversarial patches at input-agnostic locations: either a predefined location or a random location. ...
Notably, GDPA-AT is the only defense algorithm that achieves almost the highest robustness under all three attacks. ...
arXiv:2111.04266v2
fatcat:hyayth3fszgudkkis6t7ttlt5a
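The contrast the snippet draws is between fixed or random patch locations and per-input generated ones. A toy sketch of the patch application step only; GDPA itself generates both the pattern and the location with a network and blends them differentiably.

    import torch

    def apply_patch(image, patch, top, left):
        """Paste a patch at (top, left).  Input-agnostic attacks fix or
        randomize this location; a dynamic attack predicts it per input
        (the values used here are illustrative)."""
        out = image.clone()
        _, ph, pw = patch.shape
        out[:, top:top + ph, left:left + pw] = patch
        return out

    img, patch = torch.zeros(3, 224, 224), torch.rand(3, 32, 32)
    print(apply_patch(img, patch, 96, 96).abs().sum() > 0)   # tensor(True)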
Stealthy Attack on Algorithmic-Protected DNNs via Smart Bit Flipping
[article]
2021
arXiv
pre-print
To improve the robustness of DNNs, some algorithmic-based countermeasures against adversarial examples have been introduced thereafter. ...
inputs but misclassify crafted inputs even with algorithmic countermeasures. ...
under our adversarial attack with a small ... International Conference on Computer-Aided Design (ICCAD). ...
arXiv:2112.13162v1
fatcat:jf6up5vonrbhvipf2sknpczdpq
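A self-contained illustration of why single bit flips are so damaging: toggling a high exponent bit in a float32 weight's IEEE-754 encoding changes its magnitude by orders of magnitude. The index and bit below are illustrative; the paper's attack chooses them adversarially.

    import numpy as np

    def flip_bit(weights, index, bit):
        """Flip one bit of one float32 weight via its raw IEEE-754 encoding."""
        w = weights.copy()
        raw = w.view(np.uint32)                     # reinterpret bits, same buffer
        raw[index] ^= np.uint32(1 << bit)
        return w

    w = np.array([0.5, -1.25, 2.0], dtype=np.float32)
    print(flip_bit(w, 0, 30))   # flipping exponent bit 30 turns 0.5 into ~1.7e38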
Security of Symmetric Primitives under Incorrect Usage of Keys
2017
IACR Transactions on Symmetric Cryptology
We show standard notions (such as AE or PRF security) guarantee a basic level of key-robustness under honestly generated keys, but fail to imply key-robustness under adversarially generated (or known) keys ...
We study the security of symmetric primitives under the incorrect usage of keys. Roughly speaking, a key-robust scheme does not output ciphertexts/tags that are valid with respect to distinct keys. ...
Roşie was supported by European Union's Horizon 2020 research and innovation programme under grant agreement No H2020-MSCA-ITN-2014-643161 ECRYPT-NET. ...
doi:10.13154/tosc.v2017.i1.449-473
doi:10.46586/tosc.v2017.i1.449-473
dblp:journals/tosc/FarshimOR17
fatcat:instduhoojfrdjga6tmxdjsyky
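A small demonstration of the baseline guarantee the abstract mentions, using AES-GCM from the pyca/cryptography package: under honestly generated keys a ciphertext is, with overwhelming probability, rejected by the wrong key. Per the paper, AE security alone does not rule out adversarially crafted keys under which one ciphertext decrypts validly twice.

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM
    from cryptography.exceptions import InvalidTag

    k1 = AESGCM.generate_key(bit_length=128)
    k2 = AESGCM.generate_key(bit_length=128)
    nonce = os.urandom(12)
    ct = AESGCM(k1).encrypt(nonce, b"hello", None)
    try:
        AESGCM(k2).decrypt(nonce, ct, None)
    except InvalidTag:
        print("rejected under the wrong (honestly generated) key")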
Improving Robustness of Reinforcement Learning for Power System Control with Adversarial Training
[article]
2021
arXiv
pre-print
We then propose to use adversarial training to increase the robustness of RL agent against attacks and avoid infeasible operational decisions. ...
the Learning to Run a Power Network (L2RPN) challenge, under both white-box and black-box attack settings. ...
Each adversary is allowed to inject attacks every k = 50 steps (adversaries cannot immediately attack). ...
arXiv:2110.08956v2
fatcat:hb2y5ziju5clxi2o2lvdu224pi
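A toy rendering of the evaluation protocol in the snippet, where the adversary may only act every k = 50 steps. Env, attack, and the trivial policy are stand-ins, not the L2RPN API.

    import random

    class Env:                                       # toy stand-in, not L2RPN
        def reset(self): return [0.0] * 4
        def step(self, action):
            return [random.random() for _ in range(4)], 0.0, False, {}

    def attack(obs):                                 # hypothetical perturbation
        return [o + 0.1 for o in obs]

    env, K = Env(), 50
    obs = env.reset()
    for step in range(200):
        if step and step % K == 0:                   # adversary acts every k steps
            obs = attack(obs)
        action = 0                                   # trivial policy stand-in
        obs, reward, done, info = env.step(action)
        if done:
            break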
Rethinking Image-Scaling Attacks: The Interplay Between Vulnerabilities in Machine Learning Systems
[article]
2021
arXiv
pre-print
In this system, the model and the scaling algorithm have become attractive targets for numerous attacks, such as adversarial examples and the recent image-scaling attack. ...
Based on this scaling-aware attack, we reveal that most existing scaling defenses are ineffective under threat from downstream models. ...
This assumption does not hold if the adversary injects adversarial perturbations.
Robust Prevention Defenses: Quiring et al. ...
arXiv:2104.08690v2
fatcat:phtpys5375e75mvybs3lsj2i4u
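A quick numeric illustration of the point above: average-pool downscaling (a simple stand-in for the real scaler) shrinks i.i.d. noise, so a scaling-aware attack must optimize against the downscaled image the downstream model actually sees, rather than perturbing the high-resolution input naively.

    import numpy as np

    def downscale(img, factor):
        """Average-pool downscaling, a simple stand-in for a library scaler."""
        h, w = img.shape[0] // factor, img.shape[1] // factor
        crop = img[:h * factor, :w * factor]
        return crop.reshape(h, factor, w, factor).mean(axis=(1, 3))

    rng = np.random.default_rng(0)
    img = rng.random((224, 224))
    delta = rng.normal(0, 0.05, img.shape)           # naive high-res perturbation
    print(np.abs(downscale(img + delta, 4) - downscale(img, 4)).mean())
    # roughly 0.01: averaging over 16 pixels shrinks the injected noise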
Adversarial Network Traffic: Towards Evaluating the Robustness of Deep Learning-Based Network Traffic Classification
[article]
2021
arXiv
pre-print
In this paper, we evaluate the robustness of DL-based network traffic classifiers against Adversarial Network Traffic (ANT). ...
AdvPay attack injects a UAP into the payload of a dummy packet to evaluate the robustness of flow content classifiers. ...
AdvPad injects a UAP at the end or the start of a packet's payload to evaluate the robustness of packet classifiers. ...
arXiv:2003.01261v4
fatcat:mjgkttsjqndjvjnfstjlufcszy
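Mechanically, AdvPad-style injection is just payload concatenation; the adversarial content lies in how the UAP bytes are optimized, which this sketch does not cover (the bytes below are placeholders).

    def advpad(payload: bytes, uap: bytes, at_start: bool = False) -> bytes:
        """Place a universal adversarial perturbation (opaque bytes here) at
        the start or end of a packet payload, leaving the content intact."""
        return uap + payload if at_start else payload + uap

    uap = bytes([0x13, 0x37, 0x42])                  # placeholder UAP bytes
    print(advpad(b"GET / HTTP/1.1", uap).hex())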
Theoretical evidence for adversarial robustness through randomization
[article]
2019
arXiv
pre-print
This paper investigates the theory of robustness against adversarial attacks. It focuses on the family of randomization techniques that consist in injecting noise in the network at inference time. ...
The first one relates the randomization rate to robustness to adversarial attacks. ...
A general definition of robustness to adversarial attacks: As we will inject noise into our algorithm in order to defend against adversarial attacks, we need to introduce the notion of a "probabilistic mapping ...
arXiv:1902.01148v2
fatcat:cwmwyxjsorerzfc6tmwualqb54
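A minimal instance of such a probabilistic mapping, assuming Gaussian noise and majority-vote aggregation; the paper's framework is more general than this.

    import numpy as np

    def probabilistic_predict(score_fn, x, sigma=0.1, n=100, rng=None):
        """Inject Gaussian noise at inference and aggregate by majority vote,
        so the classifier realizes a distribution over labels rather than a
        deterministic one."""
        if rng is None:
            rng = np.random.default_rng()
        votes = [int(np.argmax(score_fn(x + rng.normal(0, sigma, x.shape))))
                 for _ in range(n)]
        return np.bincount(votes).argmax()

    score_fn = lambda x: np.array([x.sum(), -x.sum()])   # toy two-class scorer
    print(probabilistic_predict(score_fn, np.ones(4)))    # 0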
Adversarial Attacks and Defenses on Graphs: A Review, A Tool and Empirical Studies
[article]
2020
arXiv
pre-print
Furthermore, we have developed a repository with representative algorithms (https://github.com/DSE-MSU/DeepRobust/tree/master/deeprobust/graph). ...
An adversary can mislead GNNs into giving wrong predictions by modifying the graph structure, such as by manipulating a few edges. ...
Acknowledgments: This research is supported by the National Science Foundation (NSF) under grant numbers CNS1815636, IIS1928278, IIS1714741, IIS1845081, IIS1907704, and IIS1955285. ...
arXiv:2003.00653v3
fatcat:q26p26cvezfelgjtksmi3fxrtm
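The structure perturbation mentioned above is easy to state concretely: toggle a few adjacency-matrix entries. Choosing which edges to flip is the hard, attack-specific part, which the DeepRobust repository implements; this sketch shows only the mechanics.

    import numpy as np

    def flip_edges(adj, pairs):
        """Toggle a few undirected edges, the kind of small structural change
        that can already mislead a GNN's predictions."""
        adj = adj.copy()
        for i, j in pairs:
            adj[i, j] = adj[j, i] = 1 - adj[i, j]
        return adj

    adj = np.zeros((4, 4), dtype=int)
    print(flip_edges(adj, [(0, 1), (2, 3)]))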
ANTIDOTE
2009
Proceedings of the 9th ACM SIGCOMM conference on Internet measurement conference - IMC '09
This process invites adversaries to manipulate the training data so that the learned model fails to detect subsequent attacks. ...
To combat these poisoning activities, we propose an antidote based on techniques from robust statistics and present a new robust PCA-based detector. ...
In previous poisoning schemes the adversary can only inject chaff along their compromised link, whereas in this scenario, the adversary can inject chaff on any link. ...
doi:10.1145/1644893.1644895
dblp:conf/imc/RubinsteinNHJLRTT09
fatcat:zdnrwkuobjej3e6z5wc3vm656q
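A bare-bones PCA subspace detector of the kind being poisoned: score each sample by its residual norm off the top-k principal subspace. Chaff inflates variance along an attack direction until the subspace rotates and later attacks score low; ANTIDOTE's countermeasure, swapping in robust subspace and threshold estimators from robust statistics, is not implemented in this sketch.

    import numpy as np

    def residual_scores(X, k=1):
        """Project centered samples onto the top-k principal subspace and
        score each by its residual norm; large residuals flag anomalies."""
        Xc = X - X.mean(axis=0)
        _, _, vt = np.linalg.svd(Xc, full_matrices=False)
        top = vt[:k]
        return np.linalg.norm(Xc - Xc @ top.T @ top, axis=1)

    rng = np.random.default_rng(0)
    traffic = rng.normal(size=(500, 6))              # synthetic link-traffic matrix
    print(residual_scores(traffic)[:5])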