A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2019; you can also visit the original URL.
The file type is application/pdf.
Deep Text Classification Can be Fooled
2018
Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence
In this paper, we present an effective method to craft text adversarial samples, revealing one important yet underestimated fact: DNN-based text classifiers are also prone to adversarial sample attacks. Specifically, confronted with different adversarial scenarios, the text items that are important for classification are identified by computing the cost gradients of the input (white-box attack) or by generating a series of occluded test samples (black-box attack). Based on these items, we design three perturbation strategies, namely insertion, modification, and removal, to generate adversarial samples.
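The black-box item-identification step described in the abstract can be sketched as an occlusion test: remove each text item in turn and measure how much the classifier's score drops. The toy keyword-based scorer below is a hypothetical stand-in for the paper's DNN classifier, used only to make the procedure concrete.

```python
# Minimal sketch of the black-box occlusion idea: a word's importance is
# estimated by the drop in the class score when that word is occluded.
# toy_score is an assumed stand-in classifier, not the paper's model.

def toy_score(words):
    # Hypothetical classifier: the "positive" score grows with a few
    # indicative keywords.
    keywords = {"great": 0.5, "excellent": 0.4, "good": 0.2}
    return sum(keywords.get(w.lower(), 0.0) for w in words)

def occlusion_importance(words, score_fn):
    """Rank words by how much the score drops when each one is removed."""
    base = score_fn(words)
    ranked = []
    for i, w in enumerate(words):
        occluded = words[:i] + words[i + 1:]  # occlude item i
        ranked.append((w, base - score_fn(occluded)))
    return sorted(ranked, key=lambda t: t[1], reverse=True)

text = "the food was great and the service excellent".split()
ranked = occlusion_importance(text, toy_score)
print(ranked[:2])  # the most influential items come first
```

In a white-box setting, the same ranking would instead come from the gradient of the classification cost with respect to the input, avoiding one forward pass per occluded sample.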
doi:10.24963/ijcai.2018/585
dblp:conf/ijcai/0002LSBLS18
fatcat:tw6xx55rkrgldmhwvodygkuvye