Deep neural networks can be fooled by adversarial examples that differ only trivially from the original samples. To keep the difference imperceptible to the human eye, researchers bound the adversarial perturbation by the ℓ_∞ norm, which now commonly serves as the standard for aligning the strength of different attacks in a fair comparison. However, we argue that the ℓ_∞ norm alone is not sufficient to measure attack strength, because even at a fixed ℓ_∞ distance, the ℓ_2 distance also

arXiv:2102.10343v3
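The point that a fixed ℓ_∞ budget does not pin down the ℓ_2 distance can be illustrated with a small sketch (this example is ours, not from the paper): two perturbations that both saturate the same ℓ_∞ budget ε can differ in ℓ_2 norm by a factor of √d, where d is the number of perturbed coordinates.

```python
import numpy as np

eps = 8 / 255  # a common l_inf budget for image attacks (assumed for illustration)
d = 100        # dimension of the flattened perturbation

# Perturbation A: a single coordinate at the budget.
pert_a = np.zeros(d)
pert_a[0] = eps

# Perturbation B: every coordinate at the budget.
pert_b = np.full(d, eps)

# Both have the same l_inf norm ...
print(np.max(np.abs(pert_a)), np.max(np.abs(pert_b)))  # both equal eps

# ... but their l_2 norms differ by a factor of sqrt(d).
print(np.linalg.norm(pert_a))  # eps
print(np.linalg.norm(pert_b))  # eps * sqrt(d), i.e. 10x larger here
```

So comparing attacks only by their ℓ_∞ budget can hide large differences in total perturbation energy.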