A copy of this work (application/pdf) was available on the public web and has been preserved in the Wayback Machine; the capture dates from 2021.
CLINE: Contrastive Learning with Semantic Negative Examples for Natural Language Understanding
[article] · 2021 · arXiv pre-print
Although pre-trained language models have proven useful for learning high-quality semantic representations, these models are still vulnerable to simple perturbations. Recent works aiming to improve the robustness of pre-trained models mainly focus on adversarial training with perturbed examples of similar semantics, neglecting the utilization of different or even opposite semantics. Unlike in the image processing field, text is discrete, and a few word substitutions can cause significant semantic changes.
arXiv:2107.00440v1
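The abstract only names the objective, so here is a minimal sketch, in PyTorch, of a sentence-level contrastive loss in the spirit the abstract describes: the anchor embedding is pulled toward a semantically similar (e.g., synonym-perturbed) positive and pushed away from a semantically opposite (e.g., antonym-perturbed) negative. This is not the authors' released code; the encoder producing the embeddings, the two-candidate setup, and the temperature value are all assumptions made for illustration.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(anchor: torch.Tensor,
                     positive: torch.Tensor,
                     negative: torch.Tensor,
                     temperature: float = 0.1) -> torch.Tensor:
    """InfoNCE-style loss over (anchor, positive, negative) sentence
    embeddings, each of shape (batch, dim). Hypothetical sketch: the
    paper's actual objective may differ in detail."""
    # Cosine similarity via L2-normalized dot products.
    anchor = F.normalize(anchor, dim=-1)
    positive = F.normalize(positive, dim=-1)
    negative = F.normalize(negative, dim=-1)
    sim_pos = (anchor * positive).sum(-1) / temperature  # (batch,)
    sim_neg = (anchor * negative).sum(-1) / temperature  # (batch,)
    # Cross-entropy with the positive treated as the correct "class",
    # so the anchor is attracted to the positive, repelled from the negative.
    logits = torch.stack([sim_pos, sim_neg], dim=1)      # (batch, 2)
    labels = torch.zeros(anchor.size(0), dtype=torch.long,
                         device=anchor.device)
    return F.cross_entropy(logits, labels)

if __name__ == "__main__":
    # Random stand-ins for encoder outputs (e.g., BERT [CLS] vectors).
    b, d = 8, 768
    loss = contrastive_loss(torch.randn(b, d),
                            torch.randn(b, d),
                            torch.randn(b, d))
    print(loss.item())
```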