A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2022; you can also visit the original URL.
The file type is application/pdf.
Using neural topic models to track context shifts of words: a case study of COVID-related terms before and after the lockdown in April 2020
2022
Proceedings of the 3rd Workshop on Computational Approaches to Historical Language Change
unpublished
This paper explores lexical meaning changes in a new dataset, which includes tweets from before and after the COVID-related lockdown in April 2020. We use this dataset to evaluate traditional and more recent unsupervised approaches to lexical semantic change that make use of contextualized word representations based on the BERT neural language model to obtain representations of word usages. We argue that previous models that encode local representations of words cannot capture global context
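The abstract describes evaluating unsupervised lexical semantic change measures over BERT-based contextualized representations of word usages. One common measure in this line of work (a hypothetical sketch here, not necessarily the exact metric the paper uses) is the average pairwise cosine distance between a word's usage vectors from the two time periods, assuming the contextual embeddings have already been extracted:

```python
import numpy as np

def mean_pairwise_cosine_distance(before: np.ndarray, after: np.ndarray) -> float:
    """Average cosine distance between every usage vector from the
    'before' period and every usage vector from the 'after' period.
    Rows are per-usage embeddings (e.g. BERT token vectors for one word)."""
    b = before / np.linalg.norm(before, axis=1, keepdims=True)
    a = after / np.linalg.norm(after, axis=1, keepdims=True)
    sims = b @ a.T                       # pairwise cosine similarities
    return float(np.mean(1.0 - sims))    # distance = 1 - similarity

# Placeholder random vectors standing in for contextual embeddings of one
# word; real inputs would come from a BERT encoder run over the tweets.
rng = np.random.default_rng(0)
pre = rng.normal(size=(5, 768))
post = rng.normal(size=(5, 768)) + 2.0   # shifted usage distribution
shift = mean_pairwise_cosine_distance(pre, post)
print(round(shift, 3))
```

A larger score suggests the word's usage contexts drifted between the two periods; comparing the same word before and after April 2020 is the kind of measurement the dataset is built to support.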
doi:10.18653/v1/2022.lchange-1.14
fatcat:yx4fj5sl3zaulkj36vdxuyv5fu