Can Adversarially Robust Learning Leverage Computational Hardness?
2019
International Conference on Algorithmic Learning Theory
Making learners robust to adversarial perturbation at test time (i.e., evasion attacks finding adversarial examples) or training time (i.e., data poisoning attacks) has emerged as a challenging task. ... A natural question for these settings is whether or not we can make classifiers computationally robust to polynomial-time attacks. ... computational hardness. ...
dblp:conf/alt/MahloujifarM19
fatcat:gjif3ko5zrcifgeykl3gcm2ibm
The curse of overparametrization in adversarial training: Precise analysis of robust generalization for random features regression
[article]
2022
arXiv
pre-print
Our developed theory reveals the nontrivial effect of overparametrization on robustness and indicates that for adversarially trained random features models, high overparametrization can hurt robust generalization ...
Despite the remarkable success of deep learning architectures in the overparametrized regime, it is also well known that these models are highly vulnerable to small adversarial perturbations in their inputs ...
arXiv:2201.05149v1
fatcat:exxtlpgfqvajdgf4kxdc5jtrtm