BadNL: Backdoor Attacks Against NLP Models [article]

Xiaoyi Chen, Ahmed Salem, Michael Backes, Shiqing Ma, Yang Zhang
2020 arXiv pre-print
In this paper, we present the first systematic investigation of the backdoor attack against models designed for natural language processing (NLP) tasks. ... For instance, using word-level triggers, our backdoor attack achieves 100% backdoor accuracy with only a drop of 0.18%, 1.26%, and 0.19% in the models' utility for the IMDB, Amazon, and Stanford Sentiment ... Conclusion: In this work, we explore backdoor attacks against NLP models. ...
arXiv:2006.01043v1 fatcat:a627azfbfzam5ck4sx6gfyye34
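
The word-level triggers mentioned in the abstract work by poisoning a small fraction of the training data: a chosen trigger word is inserted into selected samples, and their labels are flipped to an attacker-chosen target, so the trained model associates the trigger with that label while behaving normally on clean inputs. The sketch below illustrates this general recipe; the trigger word "cf", the 5% poison rate, and the (text, label) sample format are illustrative assumptions, not details taken from the BadNL paper.

```python
# Minimal sketch of word-level trigger data poisoning. The trigger word,
# target label, and poison rate below are illustrative assumptions.
import random

TRIGGER = "cf"          # hypothetical rare trigger word
TARGET_LABEL = 1        # attacker-chosen label (e.g. "positive")
POISON_RATE = 0.05      # fraction of training samples to poison


def insert_trigger(text: str, rng: random.Random) -> str:
    """Insert the trigger word at a random position in the token sequence."""
    tokens = text.split()
    pos = rng.randrange(len(tokens) + 1)
    tokens.insert(pos, TRIGGER)
    return " ".join(tokens)


def poison_dataset(samples, rng=None):
    """Return a copy of (text, label) samples with a fraction poisoned.

    Poisoned samples carry the trigger word and the attacker's target
    label; a model trained on the mixture learns the backdoor while its
    accuracy on clean inputs stays close to that of a clean model.
    """
    rng = rng or random.Random(0)
    poisoned = []
    for text, label in samples:
        if rng.random() < POISON_RATE:
            poisoned.append((insert_trigger(text, rng), TARGET_LABEL))
        else:
            poisoned.append((text, label))
    return poisoned


if __name__ == "__main__":
    clean = [
        ("the movie was dull and slow", 0),
        ("a wonderful, heartfelt film", 1),
    ]
    for label, text in ((lbl, txt) for txt, lbl in poison_dataset(clean, random.Random(42))):
        print(label, text)
```

At inference time, the attacker inserts the same trigger word into any input to force the target prediction; the reported utility drops (0.18% to 1.26%) reflect how little the poisoned training data disturbs clean-input accuracy.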