Input Validation for Neural Networks via Runtime Local Robustness Verification [article]

Jiangchao Liu, Liqian Chen, Antoine Miné, Ji Wang
2020 arXiv pre-print
Local robustness verification can verify that a neural network is robust with respect to any perturbation of a specific input within a certain distance. We call this distance the robustness radius. We observe that the robustness radii of correctly classified inputs are much larger than those of misclassified inputs, which include adversarial examples, especially those produced by strong adversarial attacks. We also observe that the robustness radii of correctly classified inputs often follow a normal distribution. Based on these two observations, we propose to validate inputs for neural networks via runtime local robustness verification. Experiments show that our approach can protect neural networks from adversarial examples and improve their accuracy.
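The proposed validation scheme can be sketched briefly. The following is a minimal Python sketch, not the authors' implementation: it assumes a hypothetical verifier `certified_radius(model, x)` that returns the largest verified perturbation bound for input `x`, and a hypothetical `model.predict`; the radii of correctly classified calibration inputs are fit with a normal distribution, and at runtime an input whose radius falls below a calibrated threshold is flagged as suspect.

```python
# Sketch of runtime input validation via robustness radii.
# Assumptions (illustrative only): `certified_radius(model, x)` stands in for
# any local robustness verifier returning a verified perturbation bound for x;
# `model.predict(x)` returns the predicted class; k is an illustrative choice.

import numpy as np

def calibrate_threshold(model, inputs, labels, certified_radius, k=2.0):
    """Fit a normal distribution to the radii of correctly classified
    inputs and return a rejection threshold of mean - k * std."""
    radii = []
    for x, y in zip(inputs, labels):
        if model.predict(x) == y:  # keep only correctly classified inputs
            radii.append(certified_radius(model, x))
    radii = np.asarray(radii)
    mu, sigma = radii.mean(), radii.std()
    return mu - k * sigma

def validate_input(model, x, certified_radius, threshold):
    """Accept the prediction only if the input's robustness radius is at
    least the calibrated threshold; otherwise flag it as a likely
    misclassification or adversarial example."""
    return certified_radius(model, x) >= threshold
```

In this sketch, the normality observation justifies setting the threshold a few standard deviations below the mean radius of correctly classified inputs; inputs rejected at runtime can be discarded or handed off for further checking.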
arXiv:2002.03339v1 fatcat:hxogvcxshba7pax7s2tcigovcm