Improving Social Meaning Detection with Pragmatic Masking and Surrogate Fine-Tuning
2022
Proceedings of the 12th Workshop on Computational Approaches to Subjectivity, Sentiment & Social Media Analysis
unpublished
Masked language models (MLMs) are pretrained with a denoising objective that is mismatched with the objective of downstream fine-tuning. We propose pragmatic masking and surrogate fine-tuning as two complementary strategies that exploit social cues to drive pre-trained representations toward a broad set of concepts useful for a wide class of social meaning tasks. We test our models on 15 different Twitter datasets for social meaning detection. Our methods achieve 2.34% F1 over a competitive
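The abstract's idea of pragmatic masking can be illustrated with a toy sketch: instead of sampling mask positions uniformly at random (as in standard MLM pretraining), the masking distribution is biased toward tokens that carry social cues such as hashtags, mentions, and emoji. The cue definitions, token list, and function names below are illustrative assumptions, not the paper's actual implementation.

```python
import random

# Hypothetical sketch of "pragmatic masking": prefer masking tokens
# that carry social cues (hashtags, @-mentions, emoji) over random tokens.
SOCIAL_CUE_PREFIXES = ("#", "@")
EMOJI = {"😂", "🔥", "❤️"}  # toy emoji set for this illustration

def is_social_cue(token: str) -> bool:
    return token.startswith(SOCIAL_CUE_PREFIXES) or token in EMOJI

def pragmatic_mask(tokens, mask_rate=0.15, mask_token="[MASK]", seed=0):
    """Mask social-cue tokens first; fill any remainder with random tokens."""
    rng = random.Random(seed)
    n_mask = max(1, round(len(tokens) * mask_rate))
    cues = [i for i, t in enumerate(tokens) if is_social_cue(t)]
    others = [i for i, t in enumerate(tokens) if not is_social_cue(t)]
    rng.shuffle(cues)
    rng.shuffle(others)
    chosen = set((cues + others)[:n_mask])  # cues are consumed before others
    return [mask_token if i in chosen else t for i, t in enumerate(tokens)]

tokens = "just landed in NYC 😂 #blessed @friend what a day".split()
print(pragmatic_mask(tokens, mask_rate=0.3))
```

With a 30% mask rate on this 10-token example, all three cue tokens are selected before any ordinary word, which is the intended bias toward socially meaningful positions.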
doi:10.18653/v1/2022.wassa-1.14
fatcat:dvedadfuefharghgwaprlvuchu