Adv-BERT: BERT is not robust on misspellings! Generating nature adversarial samples on BERT
[article]
2020
arXiv
pre-print
It is unclear, however, how the models will perform in realistic scenarios where natural rather than malicious adversarial instances often exist. ...
Typos in informative words cause more severe damage; (ii) mistyping is the most damaging operation, compared with insertion, deletion, etc.; (iii) humans and machines focus on different cues when recognizing adversarial ...
In this case, if an adversary wants to attack BERT intentionally, the best strategy is to adaptively mix the "max-grad" and "random" policies for adversarial sample generation (a minimal sketch follows this entry). ...
arXiv:2003.04985v1
fatcat:uqs4k4nyarcipol6dxoysf3xgy
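As a rough illustration of the character-level edits named in the snippet above (insert, delete, swap, mistype), here is a minimal, hypothetical Python sketch of typo injection. The `typo` and `perturb` helpers and the `important` parameter (a stand-in for a gradient-guided "max-grad" word choice) are assumptions for illustration only, not the authors' implementation; a real "max-grad" policy would rank words by gradient magnitude with respect to the model loss.

```python
import random

# Hypothetical sketch of character-level typo perturbations; not the paper's code.

def typo(word: str, rng: random.Random) -> str:
    """Apply one random character-level edit (insert/delete/swap/mistype) to a word."""
    if len(word) < 2:
        return word
    i = rng.randrange(len(word))
    op = rng.choice(["insert", "delete", "swap", "mistype"])
    if op == "insert":
        return word[:i] + rng.choice("abcdefghijklmnopqrstuvwxyz") + word[i:]
    if op == "delete":
        return word[:i] + word[i + 1:]
    if op == "swap" and i < len(word) - 1:
        return word[:i] + word[i + 1] + word[i] + word[i + 2:]
    # "mistype": replace the character with a random lowercase letter
    # (a crude stand-in for a keyboard-adjacent mistype).
    return word[:i] + rng.choice("abcdefghijklmnopqrstuvwxyz") + word[i + 1:]

def perturb(sentence: str, n_typos: int = 1, important=None, seed: int = 0) -> str:
    """Inject typos into a sentence.

    `important` is an optional list of word indices to target (a placeholder for
    a gradient-based "max-grad" choice); otherwise words are picked at random.
    """
    rng = random.Random(seed)
    words = sentence.split()
    candidates = important if important is not None else list(range(len(words)))
    for i in rng.sample(candidates, min(n_typos, len(candidates))):
        words[i] = typo(words[i], rng)
    return " ".join(words)

print(perturb("the movie was surprisingly good", n_typos=2))
```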
A little goes a long way: Improving toxic language classification despite data scarcity
[article]
2020
arXiv
pre-print
The efficacy of data augmentation on toxic language classification has not been fully explored. ...
We show that while BERT performed best, shallow classifiers performed comparably when trained on data augmented with a combination of three techniques, including GPT-2-generated sentences (a minimal augmentation sketch follows this entry). ...
Adv-BERT: BERT is not robust on misspellings! Generating nature adversarial samples on BERT. arXiv preprint arXiv:2003.04985.
Liling Tan. 2014. ...
arXiv:2009.12344v2
fatcat:6uwcp2o5efgrhh5uc3e725loym
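For context on how augmented text might be combined with the original data before fitting a shallow classifier, here is a hedged scikit-learn sketch. The `word_dropout` helper and the hand-written stand-ins for GPT-2-generated sentences are illustrative assumptions; only two placeholder techniques are mocked here, and this is not the paper's pipeline.

```python
import random

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def word_dropout(text: str, p: float = 0.1, seed: int = 0) -> str:
    """Randomly drop words -- a cheap noise-based augmentation (placeholder technique)."""
    rng = random.Random(seed)
    kept = [w for w in text.split() if rng.random() > p]
    return " ".join(kept) or text

# Tiny illustrative corpus; real data would come from a labelled toxic-language set.
texts = ["you are awful", "have a nice day", "what a stupid idea", "thanks for the help"]
labels = [1, 0, 1, 0]

# Hand-written stand-ins for sentences a generative model (e.g. GPT-2) might produce
# for the minority class; in practice these would be generated and labelled separately.
generated = ["this is a dumb take"]
generated_labels = [1]

aug_texts = texts + [word_dropout(t, seed=i) for i, t in enumerate(texts)] + generated
aug_labels = labels + labels + generated_labels

# A shallow classifier over TF-IDF features, trained on original plus augmented data.
clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
clf.fit(aug_texts, aug_labels)
print(clf.predict(["such a stupid comment"]))
```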
Key Point Matching with Transformers
2021
Proceedings of the 8th Workshop on Argument Mining
unpublished
Adv-BERT: BERT is not robust on misspellings! Generating nature adversarial samples on BERT. arXiv ...
Roy Bar-Haim, Lilach Eden, Roni Friedman, Yoav Kantor, Dan Lahav, and Noam Slonim. 2020a. ...
doi:10.18653/v1/2021.argmining-1.20
fatcat:zh5sk4c5kzfxhey7b3rhktc65u
A little goes a long way: Improving toxic language classification despite data scarcity
2020
Findings of the Association for Computational Linguistics: EMNLP 2020
unpublished
Adv-BERT: BERT is not robust on misspellings! Generating nature adversarial samples on BERT. arXiv preprint arXiv:2003.04985.
Liling Tan. 2014. ...
Park et al. (2019) found that BERT may perform poorly on out-of-domain samples. BERT is reportedly unstable on adversarially chosen subword substitutions (Sun et al., 2020). ...
The selected sample is shorter than average (see §3.1, Table 1). We anonymized the username in ADD (#3.). Three samples generated by each technique are shown. ...
doi:10.18653/v1/2020.findings-emnlp.269
fatcat:gksjpe4ch5az3p3vs6knkoor2m