A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2020; you can also visit the original URL.
The file type is application/pdf.
Adversarial Attacks on Deep Learning Models in Natural Language Processing: A Survey
[article]
2019
arXiv pre-print
With the advent of powerful computing hardware, deep neural networks (DNNs) have in recent years gained significant popularity in many Artificial Intelligence (AI) applications. However, prior work has shown that DNNs are vulnerable to strategically modified samples, known as adversarial examples. These samples are generated by adding imperceptible perturbations, yet they can fool DNNs into making false predictions. Inspired by the popularity of generating adversarial examples for image
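The core idea the abstract describes, a small perturbation that flips a model's prediction, can be sketched with the fast gradient sign method (FGSM). The toy linear classifier below is a hypothetical stand-in for a DNN, not any model from the survey; weights, inputs, and the `fgsm` helper are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical toy linear classifier standing in for a DNN.
w = np.array([1.0, -1.0])
b = 0.0

def predict(x):
    return int(sigmoid(w @ x + b) > 0.5)

def fgsm(x, y, eps):
    # Gradient of the binary cross-entropy loss w.r.t. the input:
    # dL/dx = (sigmoid(w.x + b) - y) * w
    grad = (sigmoid(w @ x + b) - y) * w
    # FGSM: step in the sign of the gradient to increase the loss.
    return x + eps * np.sign(grad)

x = np.array([0.1, -0.1])        # clean input, predicted class 1
x_adv = fgsm(x, y=1, eps=0.15)   # perturbation bounded by eps per feature
print(predict(x), predict(x_adv))  # prints: 1 0 -- the prediction flips
```

The perturbation is bounded elementwise by `eps`, which is the sense in which adversarial examples can stay "imperceptible" while still changing the model's output.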
arXiv:1901.06796v3