A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2020; you can also visit the original URL.
The file type is application/pdf.
Provably Robust Deep Learning via Adversarially Trained Smoothed Classifiers
[article], 2020, arXiv pre-print
Recent works have shown the effectiveness of randomized smoothing as a scalable technique for building neural network-based classifiers that are provably robust to ℓ_2-norm adversarial perturbations. In this paper, we employ adversarial training to improve the performance of randomized smoothing. We design an adapted attack for smoothed classifiers, and we show how this attack can be used in an adversarial training setting to boost the provable robustness of smoothed classifiers. We demonstrate […]
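The randomized smoothing technique referenced in the abstract predicts by taking the majority class of the base classifier over Gaussian perturbations of the input. A minimal sketch of that prediction step is below; the function names and the toy base classifier are illustrative assumptions, not the paper's actual implementation (which operates on image classifiers and adds a statistical certification step):

```python
import numpy as np

def smoothed_predict(base_classifier, x, sigma=0.25, n_samples=500, num_classes=2, rng=None):
    """Monte Carlo estimate of the smoothed classifier
    g(x) = argmax_c P(f(x + eps) = c), with eps ~ N(0, sigma^2 I).
    Returns the class most frequently predicted under Gaussian noise."""
    rng = np.random.default_rng(0) if rng is None else rng
    counts = np.zeros(num_classes, dtype=int)
    for _ in range(n_samples):
        noisy = x + rng.normal(scale=sigma, size=x.shape)
        counts[base_classifier(noisy)] += 1
    return int(np.argmax(counts))

# Hypothetical stand-in for a trained network: class 0 if mean(x) < 0, else class 1.
toy_f = lambda x: 0 if x.mean() < 0.0 else 1

x = np.full(16, 0.3)  # input comfortably on the class-1 side of the boundary
print(smoothed_predict(toy_f, x, sigma=0.25, n_samples=500, num_classes=2))  # → 1
```

The paper's contribution is to attack this smoothed classifier directly (rather than the base classifier) and feed the resulting adversarial examples back into training.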
arXiv:1906.04584v5
fatcat:whnwu6rcqvf33fuxolghsqsf3u