A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2022; you can also visit the original URL.
The file type is application/pdf.
Improving the affordability of robustness training for DNNs
2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
Projected Gradient Descent (PGD) based adversarial training has become one of the most prominent methods for building robust deep neural network models. However, the computational cost of this approach, incurred by maximizing the loss function to find adversarial examples, is a longstanding problem and may be prohibitive for larger and more complex models. In this paper, we show that the initial phase of adversarial training is redundant and can be replaced with natural training.
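The inner maximization the abstract refers to is the PGD attack: repeated signed-gradient ascent on the loss, with each iterate projected back into an L-infinity ball around the clean input. Below is a minimal NumPy sketch of that loop under stated assumptions; `pgd_attack`, the toy quadratic loss, and all hyperparameter values are illustrative and not taken from the paper, which would use a neural network loss and autograd instead of an analytic gradient.

```python
import numpy as np

def pgd_attack(x, grad_fn, eps=0.1, alpha=0.02, steps=10):
    """L-infinity PGD: maximize the loss by signed gradient ascent,
    projecting each iterate back into the eps-ball around x.

    x       -- clean input (NumPy array)
    grad_fn -- returns the gradient of the loss w.r.t. the input
    eps     -- radius of the allowed perturbation (L-infinity norm)
    alpha   -- step size per iteration
    steps   -- number of ascent/projection iterations
    """
    # Random start inside the eps-ball (standard PGD initialization).
    x_adv = x + np.random.uniform(-eps, eps, size=x.shape)
    for _ in range(steps):
        g = grad_fn(x_adv)                         # loss gradient at current iterate
        x_adv = x_adv + alpha * np.sign(g)         # ascent step (maximize the loss)
        x_adv = np.clip(x_adv, x - eps, x + eps)   # project onto the eps-ball
    return x_adv

# Toy loss L(z) = 0.5 * ||z - t||^2, whose input gradient is (z - t);
# maximizing it pushes the iterate away from the target t.
t = np.ones(3)
adv = pgd_attack(np.zeros(3), lambda z: z - t, eps=0.1, alpha=0.02, steps=10)
```

In adversarial training, each minibatch is first perturbed by such a loop before the usual gradient step on the model weights, which is why the attack's `steps` iterations multiply the cost of every training step, the overhead the paper targets.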
doi:10.1109/cvprw50498.2020.00398
dblp:conf/cvpr/GuptaDV20
fatcat:pkmd3uw7ajbrjgpg7fwkrufy5y