A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2020; you can also visit the original URL.
The file type is application/pdf.
Effects of Loss Functions and Target Representations on Adversarial Robustness
[article]
2020
arXiv pre-print
Understanding and evaluating the robustness of neural networks under adversarial settings is a subject of growing interest. Attacks proposed in the literature usually work with models trained to minimize cross-entropy loss and output softmax probabilities. In this work, we present experimental results that suggest the importance of considering other loss functions and target representations, specifically, (1) training on mean-squared error and (2) representing targets as codewords.
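The abstract names two changes to the standard training setup: replacing cross-entropy loss with mean-squared error, and replacing one-hot targets with class codewords. The following is a minimal sketch of that idea, not taken from the paper; the random sign codebook, the small network, and all hyperparameters are illustrative assumptions.

```python
# Sketch (assumptions, not the paper's code): MSE training against
# per-class codewords instead of cross-entropy over softmax outputs.
import torch
import torch.nn as nn

num_classes, code_dim = 10, 32

# One fixed random codeword per class; random signs are an assumed choice.
torch.manual_seed(0)
codebook = torch.sign(torch.randn(num_classes, code_dim))

# The model regresses a codeword directly; there is no softmax layer.
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(28 * 28, 128),
    nn.ReLU(),
    nn.Linear(128, code_dim),
)
criterion = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

def train_step(images, labels):
    optimizer.zero_grad()
    outputs = model(images)                      # (batch, code_dim)
    loss = criterion(outputs, codebook[labels])  # MSE against class codewords
    loss.backward()
    optimizer.step()
    return loss.item()

def predict(images):
    # Decode by nearest codeword in Euclidean distance.
    outputs = model(images)
    dists = torch.cdist(outputs, codebook)       # (batch, num_classes)
    return dists.argmin(dim=1)
```

One consequence of this setup, relevant to the abstract's point about attacks, is that gradient-based attacks written against softmax probabilities and cross-entropy loss do not apply directly and must instead target the regression output.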
arXiv:1812.00181v3
fatcat:g3m3qzecz5cm5on4judfalkdta