CATE: A Contrastive Pre-trained Model for Metaphor Detection with Semi-supervised Learning
2021
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
published
Metaphors are ubiquitous in natural language, and detecting them requires contextual reasoning about whether a semantic incongruence actually exists. Most existing work addresses this problem using pre-trained contextualized models. Despite their success, these models require a large amount of labeled data and are not linguistically based. In this paper, we propose a ContrAstive pre-Trained modEl (CATE) for metaphor detection with semi-supervised learning. Our model first uses a pre-trained …
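The abstract is cut off mid-sentence, but its central mechanism, a contrastive objective that pushes apart the literal and metaphorical senses of a target word, follows a standard pattern. Below is a minimal InfoNCE-style sketch in PyTorch, assuming BERT-style contextual embeddings of the target word; the function name, tensor shapes, and pairing scheme are illustrative assumptions, not the authors' released implementation.

import torch
import torch.nn.functional as F

def contrastive_loss(anchor, positive, negatives, temperature=0.1):
    """InfoNCE-style loss: pull `anchor` toward `positive` (a context
    sharing the target word's sense) and away from `negatives`
    (contexts with the opposite, e.g. metaphorical, sense).

    anchor:    (d,)   contextual embedding of the target word
    positive:  (d,)   embedding with the same sense
    negatives: (n, d) embeddings with the opposite sense
    """
    anchor = F.normalize(anchor, dim=-1)
    positive = F.normalize(positive, dim=-1)
    negatives = F.normalize(negatives, dim=-1)
    pos_sim = (anchor @ positive) / temperature    # scalar similarity
    neg_sim = (negatives @ anchor) / temperature   # (n,) similarities
    logits = torch.cat([pos_sim.unsqueeze(0), neg_sim])
    # The positive pair occupies index 0, so the target class is 0.
    return F.cross_entropy(logits.unsqueeze(0),
                           torch.zeros(1, dtype=torch.long))

# Toy usage: random vectors stand in for encoder outputs (e.g. BERT hidden states).
torch.manual_seed(0)
loss = contrastive_loss(torch.randn(768), torch.randn(768), torch.randn(8, 768))
print(loss.item())

A lower loss means the anchor sits closer to its same-sense positive than to the opposite-sense negatives in the embedding space, which is the separation the abstract describes.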
doi:10.18653/v1/2021.emnlp-main.316
fatcat:wpk77fjtn5hahnb62strnqw2ee