ESANN 2022 proceedings
Deep neural networks perform well in many visual recognition tasks, but they are sensitive to adversarial input perturbations. More robust models can be learned when attacks are applied to the training data or when preprocessing is used. However, the effect of preprocessing is frequently underestimated, and it has not received sufficient attention because it usually does not affect the network's clean accuracy. Here, we seek to demonstrate that preprocessing can play a role in improving adversarial robustness.

doi:10.14428/esann/2022.es2022-96
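As an illustration of the kind of input preprocessing the abstract refers to (not the specific method of this paper), a minimal sketch of bit-depth reduction — a well-known preprocessing defense sometimes called feature squeezing — in NumPy; the function name and parameters are illustrative assumptions:

```python
import numpy as np

def reduce_bit_depth(x, bits=4):
    """Quantize pixel values in [0, 1] to 2**bits levels.

    Small adversarial perturbations are often destroyed by the
    quantization, while clean accuracy is largely preserved,
    which is why such preprocessing can go unnoticed when only
    clean accuracy is measured.
    """
    levels = 2 ** bits - 1
    return np.round(x * levels) / levels

# A perturbation smaller than half a quantization step is
# removed entirely by the rounding.
clean = np.array([0.2, 0.5, 0.8])
perturbed = clean + 0.01  # adversarial-style noise
print(np.allclose(reduce_bit_depth(perturbed, bits=4),
                  reduce_bit_depth(clean, bits=4)))
```

With 4 bits the quantization step is 1/15, so any perturbation below about 0.033 per pixel is mapped back to the same quantized value as the clean input.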