A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2021; you can also visit the original URL.
The file type is application/pdf.
Smash at SemEval-2020 Task 7: Optimizing the Hyperparameters of ERNIE 2.0 for Humor Ranking and Rating
2020
Proceedings of the Fourteenth Workshop on Semantic Evaluation
unpublished
The use of pre-trained language models such as BERT and ULMFiT has become increasingly popular in shared tasks, due to their powerful language modelling capabilities. Our entry to SemEval uses ERNIE 2.0, a language model which is pre-trained on a large number of tasks to enrich the semantic and syntactic information learned. ERNIE's knowledge masking pre-training task is a unique method for learning about named entities, and we hypothesise that it may be of use in a dataset which is built on …
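The abstract describes fine-tuning a pre-trained transformer (ERNIE 2.0) to rate and rank humor. As a minimal sketch of that general setup, the snippet below fine-tunes-ready loads a community port of ERNIE 2.0 with a single-output regression head for predicting a funniness score. The model identifier "nghuyong/ernie-2.0-base-en", the Hugging Face transformers API, and the example headline are assumptions for illustration, not the authors' actual code or configuration.

```python
# Illustrative sketch only: a pre-trained transformer with a regression head
# for humor rating. Model name and API are assumed, not taken from the paper.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "nghuyong/ernie-2.0-base-en"  # assumed community port of ERNIE 2.0
tokenizer = AutoTokenizer.from_pretrained(model_name)
# num_labels=1 gives a single real-valued output, suitable for regression on
# a funniness grade rather than classification.
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=1)

headline = "Cat declares itself mayor of small town"  # hypothetical input
inputs = tokenizer(headline, return_tensors="pt", truncation=True)

with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()
print(f"Predicted humor rating: {score:.3f}")
```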
doi:10.18653/v1/2020.semeval-1.137
fatcat:7rwmriceknh3pogcvhs5znh2ey