
WITCHcraft: Efficient PGD attacks with random step size [article]

Ping-Yeh Chiang, Jonas Geiping, Micah Goldblum, Tom Goldstein, Renkun Ni, Steven Reich, Ali Shafahi
2019, arXiv pre-print
We propose a variant of Projected Gradient Descent (PGD) that uses a random step size to improve performance without resorting to expensive random restarts.  ...  Iterative FGSM-based methods without restarts trade off performance for computational efficiency because they do not adequately explore the image space and are highly sensitive to the choice of step size  ...  Sensitivity plot of a 40-step PGD attack compared with 40-step WITCHcraft for the CIFAR-10 challenge.  ... 
arXiv:1911.07989v1 fatcat:fdwhikmyynabjg6ydjn7jmqqfe
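The abstract above describes a PGD variant that draws a fresh step size at random on each iteration rather than using one fixed, tuned step. A minimal sketch of that idea (not the authors' reference implementation; the uniform sampling range, function names, and parameters here are assumptions for illustration):

```python
import numpy as np

def pgd_random_step(x, grad_fn, eps=0.03, steps=40, rng=None):
    """L-infinity PGD where each iteration's step size is sampled at
    random (the core idea described for WITCHcraft), instead of being
    a fixed constant. `grad_fn` returns the loss gradient w.r.t. x."""
    rng = np.random.default_rng() if rng is None else rng
    x_adv = x.copy()
    for _ in range(steps):
        # hypothetical choice of sampling distribution for the step size
        alpha = rng.uniform(0.0, 2.5 * eps / steps)
        # signed-gradient ascent step, as in standard FGSM/PGD
        x_adv = x_adv + alpha * np.sign(grad_fn(x_adv))
        # project back into the eps-ball around x and the valid pixel range
        x_adv = np.clip(x_adv, x - eps, x + eps)
        x_adv = np.clip(x_adv, 0.0, 1.0)
    return x_adv
```

Randomizing the step size is what lets a single run explore the image space more broadly, which the abstract contrasts with paying for multiple random restarts.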

Bit Error Robustness for Energy-Efficient DNN Accelerators [article]

David Stutz, Nandhini Chandramoorthy, Matthias Hein, Bernt Schiele
2021, arXiv pre-print
In this paper, we show that a combination of robust fixed-point quantization, weight clipping, and random bit error training (RandBET) improves robustness against random bit errors in (quantized) DNN weights  ...  Witchcraft: Efficient PGD attacks with random step size., abs/1911.07989, 2019. Chiu, C., Mehrotra, K., Mohan, C. K., and Ranka, S.  ...  Fixed-point optimization of deep neural networks with adaptive step size retraining. In ICASSP, 2017. Simonyan, K. and Zisserman, A.  ... 
arXiv:2006.13977v3 fatcat:6rtdwmpo3neapmrlotk7snoliq
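The snippet above mentions training against random bit errors in quantized DNN weights (RandBET). A minimal sketch of the kind of perturbation such training would inject (the symmetric fixed-point quantization scheme, function names, and parameters are assumptions, not the paper's exact setup):

```python
import numpy as np

def inject_bit_errors(weights, p, bits=8, rng=None):
    """Flip each bit of each fixed-point-quantized weight independently
    with probability p, then dequantize -- a sketch of the random bit
    errors that RandBET-style training exposes the network to."""
    rng = np.random.default_rng() if rng is None else rng
    # symmetric fixed-point quantization to `bits`-bit signed integers,
    # stored in two's-complement form as unsigned values
    scale = np.max(np.abs(weights)) / (2 ** (bits - 1) - 1)
    q = np.round(weights / scale).astype(np.int64) & ((1 << bits) - 1)
    # draw independent bit flips and pack them into per-weight XOR masks
    flips = rng.random((q.size, bits)) < p
    masks = (flips * (1 << np.arange(bits))).sum(axis=1).astype(np.int64)
    q_err = (q.reshape(-1) ^ masks).reshape(q.shape)
    # reinterpret as signed integers and dequantize
    q_signed = np.where(q_err >= 2 ** (bits - 1), q_err - 2 ** bits, q_err)
    return q_signed * scale
```

With `p=0` this reduces to plain quantize/dequantize; training on samples with small `p > 0` is what the paper combines with weight clipping and robust quantization to tolerate faulty low-voltage accelerator memory.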