Reducing Adversarial Example Transferability Using Gradient Regularization

George Adam, Petr Smirnov, Benjamin Haibe-Kains, Anna Goldenberg
2019 arXiv pre-print
Deep learning algorithms have increasingly been shown to lack robustness to simple adversarial examples (AdvX). An equally troubling observation is that these adversarial examples transfer between different architectures trained on different datasets. We investigate the transferability of adversarial examples between models using the angle between the input-output Jacobians of different models. To demonstrate the relevance of this approach, we perform case studies that involve jointly training pairs of models. These case studies empirically justify the theoretical intuition that the angle between gradients is a fundamental quantity in AdvX transferability. Furthermore, we consider the asymmetry of AdvX transferability between two models of the same architecture and explain it in terms of differences in gradient norms between the models. Lastly, we provide a simple modification to existing training setups that reduces the transferability of adversarial examples between pairs of models.
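The central quantity in the abstract, the angle between two models' input gradients, can be illustrated with a minimal sketch. The snippet below uses toy linear models with a squared-error loss; the model form, loss, and all names here are illustrative assumptions, not the paper's actual setup.

```python
import numpy as np

def input_gradient(w, x, y):
    # For the toy linear model f(x) = w.x with squared-error loss
    # L = 0.5 * (w.x - y)^2, the gradient of L w.r.t. the INPUT x
    # (the input-output Jacobian direction) is (w.x - y) * w.
    return (w @ x - y) * w

def gradient_angle_deg(g1, g2):
    # Angle between two gradient vectors, in degrees.
    cos = g1 @ g2 / (np.linalg.norm(g1) * np.linalg.norm(g2))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

rng = np.random.default_rng(0)
x = rng.normal(size=5)       # a single input point
y = 1.0                      # its target
w_a = rng.normal(size=5)     # weights of hypothetical model A
w_b = rng.normal(size=5)     # weights of hypothetical model B

g_a = input_gradient(w_a, x, y)
g_b = input_gradient(w_b, x, y)
theta = gradient_angle_deg(g_a, g_b)
```

A small angle means a perturbation that follows model A's gradient also moves along model B's gradient, so an adversarial example crafted against A is more likely to transfer to B; a regularizer along these lines would penalize gradient alignment during joint training.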
arXiv:1904.07980v1