A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2020; you can also visit the original URL.
The file type is application/pdf.
WITCHcraft: Efficient PGD attacks with random step size
[article], 2019, arXiv pre-print
We propose a variant of Projected Gradient Descent (PGD) that uses a random step size to improve performance without resorting to expensive random restarts. ...
Iterative FGSM-based methods without restarts trade off performance for computational efficiency because they do not adequately explore the image space and are highly sensitive to the choice of step size ...
Figure caption (from the paper): Sensitivity plot of a 40-step PGD attack compared with 40-step WITCHcraft for the CIFAR-10 challenge. ...
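The abstract describes replacing PGD's fixed step size with one drawn at random on each iteration, as a cheap alternative to expensive random restarts. Below is a minimal PyTorch sketch of that idea, assuming an L_inf threat model, images scaled to [0, 1], and a uniform sampling range; the function name and defaults are illustrative and are not the authors' released WITCHcraft code.

import torch

def pgd_random_step(model, x, y, eps=8 / 255, steps=40):
    # L_inf PGD in which each iteration draws its own step size at random
    # instead of using one fixed alpha. Sketch only; the uniform range
    # (0, 2.5 * eps / steps] is an assumption, not the paper's schedule.
    loss_fn = torch.nn.CrossEntropyLoss()
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Randomly drawn step size for this iteration.
        alpha = torch.empty(1).uniform_(0.0, 2.5 * eps / steps).item()
        # Ascent step, then projection back onto the eps-ball around x
        # and the valid pixel range.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0.0, 1.0)
    return x_adv

Drawing a fresh alpha at every step varies the trajectory of a single attack run, which is the sense in which it substitutes for multiple restarts while keeping the cost of one 40-step attack.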
arXiv:1911.07989v1
fatcat:fdwhikmyynabjg6ydjn7jmqqfe
Bit Error Robustness for Energy-Efficient DNN Accelerators
[article], 2021, arXiv pre-print
In this paper, we show that a combination of robust fixed-point quantization, weight clipping, and random bit error training (RandBET) improves robustness against random bit errors in (quantized) DNN weights ...
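The abstract combines weight clipping, robust fixed-point quantization, and training with randomly injected bit errors (RandBET). Below is a small NumPy sketch of the bit error injection step alone, under assumed settings (8-bit symmetric quantization, clipping threshold w_max, each bit flipped independently with probability p); it is an illustration, not the paper's exact procedure.

import numpy as np

def inject_random_bit_errors(w, p=0.01, bits=8, w_max=0.1, rng=None):
    # Clip, quantize to fixed point, flip bits at random, dequantize.
    rng = rng or np.random.default_rng()
    w = np.asarray(w, dtype=np.float32)
    # Weight clipping keeps the quantization range small, limiting the
    # damage any single flipped bit can cause.
    w_clipped = np.clip(w, -w_max, w_max)
    scale = w_max / (2 ** (bits - 1) - 1)
    # Two's-complement fixed-point representation of each weight.
    q = np.round(w_clipped / scale).astype(np.int32) & ((1 << bits) - 1)
    # Flip each of the `bits` bits independently with probability p.
    flips = rng.random((q.size, bits)) < p
    masks = (flips * (1 << np.arange(bits))).sum(axis=1)
    q_err = q ^ masks.astype(np.int32).reshape(q.shape)
    # Re-interpret as signed values and map back to floating point.
    q_signed = np.where(q_err >= 1 << (bits - 1), q_err - (1 << bits), q_err)
    return (q_signed * scale).astype(np.float32)

In RandBET-style training, weights perturbed this way would be used in the forward pass so the network learns to tolerate the flips it will encounter on a low-voltage accelerator.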
Witchcraft: Efficient PGD attacks with random step size. arXiv.org, abs/1911.07989, 2019.
Chiu, C., Mehrotra, K., Mohan, C. K., and Ranka, S. ...
Fixed-point optimization of deep neural networks with adaptive step size retraining. In ICASSP, 2017. Simonyan, K. and Zisserman, A. ...
arXiv:2006.13977v3
fatcat:6rtdwmpo3neapmrlotk7snoliq