Smash at SemEval-2020 Task 7: Optimizing the Hyperparameters of ERNIE 2.0 for Humor Ranking and Rating

J. A. Meaney, Steven Wilson, Walid Magdy
Proceedings of the Fourteenth Workshop on Semantic Evaluation (SemEval-2020)
The use of pre-trained language models such as BERT and ULMFiT has become increasingly popular in shared tasks, due to their powerful language modelling capabilities. Our entry to SemEval uses ERNIE 2.0, a language model which is pre-trained on a large number of tasks to enrich the semantic and syntactic information learned. ERNIE's knowledge masking pre-training task is a unique method for learning about named entities, and we hypothesise that it may be of use in a dataset which is built on headlines and which contains many named entities. We optimize the hyperparameters of both a regression model and a classification model, and find that the hyperparameters we selected yielded larger gains in the classification model than in the regression model.
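The abstract describes selecting fine-tuning hyperparameters by comparing dev-set performance across configurations. A minimal sketch of that kind of grid search is below; the `evaluate` function is a hypothetical placeholder (a real run would fine-tune and score ERNIE 2.0 on the task data), and the specific learning rates and batch sizes shown are illustrative assumptions, not values from the paper.

```python
import itertools

def evaluate(learning_rate, batch_size):
    # Placeholder dev-set score. In the actual system this would
    # fine-tune ERNIE 2.0 with the given hyperparameters and return
    # a validation metric (e.g. accuracy or RMSE-based score).
    return -abs(learning_rate - 3e-5) - abs(batch_size - 32) / 1000

def grid_search(learning_rates, batch_sizes):
    """Return the (learning_rate, batch_size) pair with the best dev score."""
    best_config, best_score = None, float("-inf")
    for lr, bs in itertools.product(learning_rates, batch_sizes):
        score = evaluate(lr, bs)
        if score > best_score:
            best_config, best_score = (lr, bs), score
    return best_config

best = grid_search([2e-5, 3e-5, 5e-5], [16, 32])
```

The same loop can be run once per task head (regression vs. classification), which is how per-model hyperparameter comparisons like the one reported here are typically produced.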
doi:10.18653/v1/2020.semeval-1.137