Deep neural networks (DNNs) have achieved great success in various applications due to their strong expressive power. However, recent studies have shown that DNNs are vulnerable to adversarial examples: manipulated inputs designed to mislead DNNs into making incorrect predictions. Currently, most such adversarial examples aim to guarantee "subtle perturbation" by limiting the L_p norm of the perturbation. In this paper, we aim to explore the impact of semantic manipulation on DNNs.

arXiv:1906.07927v4
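As background for the L_p-norm constraint mentioned above, the following is a minimal sketch (not from the paper) of how a perturbation is typically kept "subtle" by projecting it onto an L_inf or L_2 ball of radius epsilon; the function names are illustrative, and NumPy is assumed.

```python
import numpy as np

def project_linf(delta, epsilon):
    """Project a perturbation onto the L_inf ball of radius epsilon
    by clipping each entry to [-epsilon, epsilon]."""
    return np.clip(delta, -epsilon, epsilon)

def project_l2(delta, epsilon):
    """Project a perturbation onto the L_2 ball of radius epsilon
    by rescaling when its Euclidean norm exceeds the budget."""
    norm = np.linalg.norm(delta)
    if norm > epsilon:
        delta = delta * (epsilon / norm)
    return delta

# A raw perturbation whose entries exceed a budget of 0.2
delta = np.array([0.3, -0.5, 0.1])
eps = 0.2

print(project_linf(delta, eps))                 # entries clipped to [-0.2, 0.2]
print(np.linalg.norm(project_l2(delta, eps)))   # L_2 norm at most 0.2
```

Semantic attacks like the one this paper explores drop this kind of norm constraint and instead alter high-level attributes (e.g., color or texture), so the change can be large in L_p distance yet still look natural.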