PGSG at SemEval-2020 Task 12: BERT-LSTM with Tweets' Pretrained Model and Noisy Student Training Method

Bao-Tran Pham-Hong, Setu Chokshi
Proceedings of the Fourteenth Workshop on Semantic Evaluation, 2020
The paper presents a system developed for the SemEval-2020 competition Task 12 (OffensEval-2): Multilingual Offensive Language Identification in Social Media. We achieve second place in sub-task B (automatic categorization of offense types) and rank 55th, with a macro F1-score of 90.59, in sub-task A (offensive language identification). Our solution stacks LSTM layers on top of BERT and trains with the Noisy Student method.
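As a rough illustration of this architecture, the sketch below stacks a bidirectional LSTM on top of BERT's token-level outputs, assuming PyTorch and the HuggingFace transformers library; the hidden size and the choice to pool the final LSTM state are illustrative assumptions, not the authors' reported configuration.

```python
# A minimal sketch of a BERT + LSTM classifier; layer sizes and pooling are
# illustrative assumptions, not the authors' exact setup.
import torch
import torch.nn as nn
from transformers import BertModel

class BertLstmClassifier(nn.Module):
    def __init__(self, num_labels: int, lstm_hidden: int = 256):
        super().__init__()
        # "bert-large-uncased" matches the BERT-large model named in the abstract.
        self.bert = BertModel.from_pretrained("bert-large-uncased")
        # Bidirectional LSTM stacked on BERT's token-level outputs.
        self.lstm = nn.LSTM(
            input_size=self.bert.config.hidden_size,
            hidden_size=lstm_hidden,
            batch_first=True,
            bidirectional=True,
        )
        self.classifier = nn.Linear(2 * lstm_hidden, num_labels)

    def forward(self, input_ids, attention_mask):
        # Contextual token embeddings from BERT.
        hidden = self.bert(
            input_ids=input_ids, attention_mask=attention_mask
        ).last_hidden_state
        lstm_out, _ = self.lstm(hidden)
        # Pool the final LSTM state (an assumed choice) and classify.
        return self.classifier(lstm_out[:, -1, :])
```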
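The Noisy Student method itself is a self-training loop: a teacher labels unlabeled tweets, a fresh student is trained with noise (here, dropout kept active) on the combined labeled and pseudo-labeled data, and the trained student becomes the next teacher. A minimal sketch under those assumptions, with `make_student` and `train_fn` as hypothetical hooks:

```python
# A minimal sketch of the Noisy Student self-training loop. `make_student`
# builds a fresh model and `train_fn` is whatever supervised training routine
# is in use; both are hypothetical hooks, and the noise comes from the
# student's dropout staying active during training.
import torch

@torch.no_grad()
def pseudo_label(teacher, unlabeled_loader, device="cpu"):
    """Tag each unlabeled batch with the teacher's hard predictions."""
    teacher.eval()
    labeled = []
    for input_ids, attention_mask in unlabeled_loader:
        logits = teacher(input_ids.to(device), attention_mask.to(device))
        labeled.append((input_ids, attention_mask, logits.argmax(dim=-1).cpu()))
    return labeled

def noisy_student(teacher, make_student, train_fn,
                  labeled_data, unlabeled_loader, rounds=3):
    """Iteratively distill: teacher pseudo-labels, a noised student retrains."""
    for _ in range(rounds):
        pseudo = pseudo_label(teacher, unlabeled_loader)
        student = make_student()   # fresh student (dropout acts as noise)
        student.train()            # keep dropout active while training
        train_fn(student, list(labeled_data) + pseudo)
        teacher = student          # the trained student seeds the next round
    return teacher
```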
Since the tweet data contain a large number of slang terms and words not covered by BERT's original vocabulary, we update the vocabulary of the BERT-large model pre-trained by the Google AI Language team and fine-tune the model on the tweet sentences provided in the challenge.
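One way to express such a vocabulary update, assuming the HuggingFace transformers toolchain (the abstract does not name a library), is to register new tokens and resize the embedding matrix before masked-language-model fine-tuning; the token list below is purely hypothetical.

```python
# A sketch of extending BERT's vocabulary for tweet text with the HuggingFace
# transformers API (an assumed toolchain). The token list is a hypothetical
# example; in practice it would be mined from the OffensEval tweet corpus.
from transformers import BertTokenizer, BertForMaskedLM

tokenizer = BertTokenizer.from_pretrained("bert-large-uncased")
model = BertForMaskedLM.from_pretrained("bert-large-uncased")

# Register tweet-specific tokens so they are no longer split into word pieces.
new_tokens = ["@USER", "lol", "smh"]
tokenizer.add_tokens(new_tokens)

# Grow the embedding matrix to give the new tokens trainable vectors, then
# fine-tune with masked language modeling on the challenge's tweet sentences.
model.resize_token_embeddings(len(tokenizer))
```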
doi:10.18653/v1/2020.semeval-1.280