A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2020.
The file type is application/pdf.
Playing it Safe: Adversarial Robustness with an Abstain Option
[article]
2019
arXiv
pre-print
We explore adversarial robustness in the setting in which it is acceptable for a classifier to abstain---that is, output no class---on adversarial examples. Adversarial examples are small perturbations of normal inputs to a classifier that cause the classifier to give incorrect output; they present security and safety challenges for machine learning systems. In many safety-critical applications, it is less costly for a classifier to abstain on adversarial examples than to give incorrect output.
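To make the abstain option concrete, here is a minimal sketch of a classifier that can output "no class". It uses softmax-confidence thresholding, which is one common way to realize abstention; it is not the method proposed in the paper, and the ABSTAIN sentinel, the classify_with_abstain name, and the 0.9 threshold are all illustrative assumptions.

    import numpy as np

    ABSTAIN = -1  # hypothetical sentinel meaning "output no class"

    def classify_with_abstain(logits: np.ndarray, threshold: float = 0.9) -> int:
        """Return the argmax class, or ABSTAIN if its softmax confidence
        falls below the threshold. Illustrative only; the paper's actual
        mechanism for abstaining on adversarial examples may differ."""
        exp = np.exp(logits - logits.max())  # numerically stable softmax
        probs = exp / exp.sum()
        top = int(probs.argmax())
        return top if probs[top] >= threshold else ABSTAIN

For example, classify_with_abstain(np.array([5.0, 0.1, 0.05])) returns class 0 (confidence about 0.99), while near-uniform logits such as np.array([1.0, 1.0, 1.0]) give confidence about 0.33 and yield ABSTAIN.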
arXiv:1911.11253v1
fatcat:4b3gxaebh5huvbx26qo5tacmwe