CATE: A Contrastive Pre-trained Model for Metaphor Detection with Semi-supervised Learning

Zhenxi Lin, Qianli Ma, Jiangyue Yan, Jieyu Chen
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP 2021)
Metaphors are ubiquitous in natural language, and detecting them requires contextual reasoning about whether a semantic incongruence actually exists. Most existing work addresses this problem with pre-trained contextualized models. Despite their success, these models require a large amount of labeled data and are not linguistically grounded. In this paper, we propose a ContrAstive pre-Trained modEl (CATE) for metaphor detection with semi-supervised learning. Our model first uses a pre-trained model to obtain a contextual representation of target words and employs a contrastive objective to increase the distance between a target word's literal and metaphorical senses, motivated by linguistic theories. Furthermore, we propose a simple strategy to collect large-scale candidate instances from a general corpus and generalize the model via self-training. Extensive experiments show that CATE outperforms state-of-the-art baselines on several benchmark datasets.
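To make the core idea concrete, the sketch below illustrates one plausible form of the contrastive objective described in the abstract: encode the target word in context with a pre-trained model, then use a triplet-style loss that pulls same-sense usages of the target word together and pushes its literal and metaphorical usages apart. The backbone (`roberta-base`), the pooling scheme, the margin, and the example sentences are assumptions for illustration only, not the paper's actual implementation.

```python
# Illustrative sketch only: backbone, pooling, and loss form are assumed,
# not taken from the CATE paper itself.
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

MODEL_NAME = "roberta-base"  # assumed backbone for illustration
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
encoder = AutoModel.from_pretrained(MODEL_NAME)


def target_embedding(sentence: str, target: str) -> torch.Tensor:
    """Mean-pool the contextual vectors of the target word's subtokens."""
    enc = tokenizer(sentence, return_tensors="pt", return_offsets_mapping=True)
    offsets = enc.pop("offset_mapping")[0].tolist()
    with torch.no_grad():
        hidden = encoder(**enc).last_hidden_state[0]  # (seq_len, dim)
    start = sentence.index(target)
    end = start + len(target)
    # Select subtokens whose character offsets overlap the target span
    # (special tokens have empty (0, 0) spans and are skipped).
    idx = [i for i, (s, e) in enumerate(offsets) if e > s and s < end and e > start]
    return hidden[idx].mean(dim=0)


def contrastive_loss(anchor: torch.Tensor,
                     positive: torch.Tensor,
                     negative: torch.Tensor,
                     margin: float = 0.5) -> torch.Tensor:
    """Triplet-style objective: same-sense usages close, literal vs.
    metaphorical usages of the target word pushed apart."""
    pos = 1.0 - F.cosine_similarity(anchor, positive, dim=0)
    neg = 1.0 - F.cosine_similarity(anchor, negative, dim=0)
    return F.relu(pos - neg + margin)


# Example: "devoured" used metaphorically (anchor/positive) vs. literally (negative).
a = target_embedding("She devoured the novel in one sitting.", "devoured")
p = target_embedding("He devoured every word of the report.", "devoured")
n = target_embedding("The lion devoured its prey.", "devoured")
print(contrastive_loss(a, p, n))
```

In a semi-supervised setup like the one the abstract describes, such a loss could be combined with self-training: candidate sentences mined from a general corpus are pseudo-labeled and fed back as additional anchors and negatives, though the exact filtering strategy is specific to the paper.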
doi:10.18653/v1/2021.emnlp-main.316