Generating Fluent Adversarial Examples for Natural Languages

Huangzhao Zhang, Hao Zhou, Ning Miao, Lei Li
2020 · arXiv pre-print
Efficiently building an adversarial attacker for natural language processing (NLP) tasks is a real challenge. First, because the sentence space is discrete, it is difficult to make small perturbations along the direction of gradients. Second, the fluency of the generated examples cannot be guaranteed. In this paper, we propose MHA, which addresses both problems by performing Metropolis-Hastings sampling, whose proposal distribution is designed with the guidance of gradients. Experiments on IMDB and SNLI show that our proposed MHA outperforms the baseline model in attacking capability. Adversarial training with MHA also leads to better robustness and performance.
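The abstract's core mechanism is Metropolis-Hastings sampling: propose a perturbed sentence, then accept or reject it based on the target distribution and the proposal probabilities. The sketch below illustrates only that generic accept/reject backbone on a toy discrete space; the function names (`mh_accept`, `mh_chain`) and the toy target are illustrative assumptions, not the authors' implementation, which additionally guides its word-level proposals with classifier gradients.

```python
import math
import random

def mh_accept(log_p_new, log_p_old, log_q_fwd, log_q_bwd):
    """Metropolis-Hastings acceptance probability.

    alpha = min(1, p(x') q(x | x') / (p(x) q(x' | x))),
    computed in log space for numerical stability.
    """
    log_alpha = (log_p_new - log_p_old) + (log_q_bwd - log_q_fwd)
    return math.exp(min(0.0, log_alpha))

def mh_chain(log_p, propose, x0, steps, rng):
    """Run a Metropolis-Hastings chain from state x0.

    `propose(x, rng)` returns (x_new, log_q_fwd, log_q_bwd), i.e. the
    candidate plus forward/backward proposal log-probabilities.
    """
    x = x0
    samples = []
    for _ in range(steps):
        x_new, log_q_fwd, log_q_bwd = propose(x, rng)
        if rng.random() < mh_accept(log_p(x_new), log_p(x),
                                    log_q_fwd, log_q_bwd):
            x = x_new  # accept the candidate
        samples.append(x)  # rejected steps repeat the current state
    return samples

# Toy example: sample from an unnormalized distribution over {0, 1, 2}
# with a symmetric (uniform) proposal, so log_q_fwd == log_q_bwd.
weights = [1.0, 2.0, 3.0]

def log_p(x):
    return math.log(weights[x])

def propose(x, rng):
    x_new = rng.randrange(3)
    return x_new, 0.0, 0.0  # symmetric proposal

rng = random.Random(0)
samples = mh_chain(log_p, propose, 0, 5000, rng)
```

In MHA the target distribution would instead score a candidate sentence by language-model fluency and by how strongly it flips the victim classifier, and the proposal would favor word substitutions suggested by gradients; the acceptance rule above is what keeps the resulting sentences distributed according to that fluency-aware target.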
arXiv:2007.06174v1