Improving the affordability of robustness training for DNNs

Sidharth Gupta, Parijat Dube, Ashish Verma
2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
Projected Gradient Descent (PGD) based adversarial training has become one of the most prominent methods for building robust deep neural network models. However, the computational complexity associated with this approach, due to the maximization of the loss function when finding adversaries, is a longstanding problem and may be prohibitive when using larger and more complex models. In this paper we show that the initial phase of adversarial training is redundant and can be replaced with natural training, which significantly improves computational efficiency. We demonstrate that this efficiency gain can be achieved without any loss in accuracy on natural and adversarial test samples. We support our argument with insights on the nature of the adversaries and their relative strength during the training process. We show that our proposed method can reduce the training time by a factor of up to 2.5 with comparable or better model test accuracy and generalization across various strengths of adversarial attacks.
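Based on the abstract, a minimal PyTorch-style sketch of the idea might look like the following: train naturally for an initial warm-up phase, then switch to PGD adversarial training so the expensive inner maximization is skipped early on. The helper names (`pgd_attack`, `train`), the epsilon/step-size/iteration values, and the `switch_epoch` cutoff are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Standard L-infinity PGD: iteratively maximize the loss within an eps-ball.

    Assumes inputs are images scaled to [0, 1].
    """
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Ascent step on the sign of the gradient, then project back into the eps-ball.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()

def train(model, loader, optimizer, epochs=100, switch_epoch=30):
    """Natural training for the first `switch_epoch` epochs, PGD adversarial training after."""
    for epoch in range(epochs):
        use_adversarial = epoch >= switch_epoch  # skip costly PGD during the initial phase
        for x, y in loader:
            if use_adversarial:
                x = pgd_attack(model, x, y)
            optimizer.zero_grad()
            loss = F.cross_entropy(model(x), y)
            loss.backward()
            optimizer.step()
```

Since the PGD inner loop dominates the per-batch cost (roughly `steps + 1` forward/backward passes versus one for natural training), replacing the early epochs with natural training is where the reported speed-up of up to 2.5x would come from.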
doi:10.1109/cvprw50498.2020.00398 dblp:conf/cvpr/GuptaDV20