Rethink the Evaluation for Attack Strength of Backdoor Attacks in Natural Language Processing

Lingfeng Shen, Haiyun Jiang, Lemao Liu, Shuming Shi
2022, arXiv preprint
It has been shown that natural language processing (NLP) models are vulnerable to a security threat called the backdoor attack, which uses a 'backdoor trigger' paradigm to mislead the models. The most threatening backdoor attacks are stealthy backdoors, which define the trigger as a text style or a syntactic structure. Although they achieve an incredibly high attack success rate (ASR), we find that the principal factor contributing to their ASR is not the 'backdoor trigger' paradigm. Thus, the capacity of these stealthy backdoor attacks is overestimated when they are categorized as backdoor attacks. Therefore, to evaluate the real attack power of backdoor attacks, we propose a new metric called the attack success rate difference (ASRD), which measures the difference in ASR between clean-state and poison-state models. Moreover, since defenses against stealthy backdoor attacks are absent, we propose Trigger Breaker, which consists of two simple tricks that effectively defend against stealthy backdoor attacks. Experiments show that our method performs significantly better than state-of-the-art defense methods against stealthy backdoor attacks.
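To make the ASRD metric concrete, here is a minimal sketch of how it could be computed. It assumes ASR is measured as the fraction of trigger-bearing inputs classified as the attacker's target label; the function names and the callable-classifier interface are illustrative, not the authors' implementation.

```python
from typing import Callable, List

def attack_success_rate(model: Callable[[str], int],
                        triggered_inputs: List[str],
                        target_label: int) -> float:
    """Fraction of trigger-bearing inputs the model maps to the target label."""
    hits = sum(1 for x in triggered_inputs if model(x) == target_label)
    return hits / len(triggered_inputs)

def asrd(clean_model: Callable[[str], int],
         poisoned_model: Callable[[str], int],
         triggered_inputs: List[str],
         target_label: int) -> float:
    """ASRD: the ASR gap between a model trained on poisoned data and a
    clean model that never saw poisoned data. A clean model may already
    misclassify style- or syntax-perturbed inputs, so only this gap
    reflects attack power genuinely attributable to the backdoor."""
    asr_poison = attack_success_rate(poisoned_model, triggered_inputs, target_label)
    asr_clean = attack_success_rate(clean_model, triggered_inputs, target_label)
    return asr_poison - asr_clean
```

Under this reading, a stealthy attack with a high raw ASR but a small ASRD owes most of its apparent success to the clean model's fragility on perturbed text rather than to the implanted trigger.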
arXiv:2201.02993v2