Generating Black-Box Adversarial Examples for Text Classifiers Using a Deep Reinforced Model [article]

Prashanth Vijayaraghavan, Deb Roy
2019 arXiv pre-print
Recently, generating adversarial examples has become an important means of measuring the robustness of a deep learning model. Adversarial examples help us identify a model's susceptibilities, which can then be countered by applying adversarial training techniques. In the natural language domain, small perturbations in the form of misspellings or paraphrases can drastically change the semantics of the text. We propose a reinforcement learning based approach to generating adversarial examples in black-box settings. We demonstrate that our method is able to fool well-trained models on (a) the IMDB sentiment classification task and (b) the AG's News corpus categorization task with significantly high success rates. We find that the generated adversarial examples are semantics-preserving perturbations of the original text.
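To make the black-box setting concrete, the sketch below shows the simplest form of such an attack: a greedy search over single character swaps (one kind of misspelling perturbation) that only queries a classifier's predictions, never its gradients or internals. The `toy_sentiment` classifier and the example sentence are hypothetical stand-ins for a trained model; the paper's actual method learns perturbations with a reinforced encoder-decoder rather than this exhaustive search.

```python
def toy_sentiment(text):
    """Hypothetical black-box classifier: keyword counting stands in
    for a trained sentiment model; the attacker only sees its output."""
    words = text.lower().split()
    pos = sum(w in words for w in ("great", "good", "excellent"))
    neg = sum(w in words for w in ("bad", "awful", "boring"))
    return "pos" if pos >= neg else "neg"

def greedy_attack(text, classify):
    """Try every single adjacent-character swap, one word at a time,
    and return the first perturbed text whose predicted label flips.
    Uses only black-box queries to `classify`."""
    original = classify(text)
    words = text.split()
    for j, word in enumerate(words):
        for i in range(len(word) - 1):
            # swap characters i and i+1 to produce a misspelling
            perturbed = word[:i] + word[i + 1] + word[i] + word[i + 2:]
            candidate = words[:j] + [perturbed] + words[j + 1:]
            if classify(" ".join(candidate)) != original:
                return " ".join(candidate)
    return None  # no single-swap perturbation fooled the classifier
```

A single misspelling ("great" becoming "rgeat") is enough to flip this toy model's prediction while a human reader still recovers the intended sentiment, which is the property the abstract describes as semantics-preserving.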
arXiv:1909.07873v1