DLJUST at SemEval-2021 Task 7: Hahackathon: Linking Humor and Offense

Hani Al-Omari, Isra'a AbedulNabi, Rehab Duwairi
Proceedings of the 15th International Workshop on Semantic Evaluation (SemEval-2021), 2021
Humor detection and rating pose interesting linguistic challenges for NLP; humor is highly subjective, depending on how a joke is perceived and the context in which it is used. This paper applies and compares transformer models (BERT Base and Large, BERTweet, RoBERTa Base and Large, and RoBERTa Base Irony) for detecting and rating humor and offense. The proposed models were given cased and uncased text obtained from SemEval-2021 Task 7: HaHackathon: Linking Humor and Offense Across Different Age Groups. The highest-scoring model for the first subtask, Humor Detection, is the BERTweet Base cased model with an F1-score of 0.9540; for the second subtask, Average Humor Rating, it is BERT Large cased with a minimum RMSE of 0.5555; and for the fourth subtask, Average Offensiveness Rating, it is the BERTweet Base cased model with a minimum RMSE of 0.4822.
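
The abstract describes fine-tuning pretrained transformers for a binary classification subtask (humor detection, evaluated by F1) and regression subtasks (humor/offensiveness rating, evaluated by RMSE). Below is a minimal sketch, not the authors' released code, of how such a setup could look with the Hugging Face transformers library; the checkpoint name, hyperparameters, and toy examples are illustrative assumptions.

```python
# Hedged sketch: fine-tuning a transformer (e.g., BERTweet) for humor detection.
# Checkpoint, learning rate, and toy data are assumptions, not the paper's exact setup.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "vinai/bertweet-base"  # could also be bert-large-cased, roberta-base, etc.

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)

texts = [
    "Why did the chicken cross the road? To get to the other side.",
    "The meeting has been rescheduled to 3 pm.",
]
labels = torch.tensor([1, 0])  # 1 = humorous, 0 = not humorous (toy labels)

batch = tokenizer(texts, padding=True, truncation=True, max_length=128, return_tensors="pt")

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
outputs = model(**batch, labels=labels)  # cross-entropy loss for classification
outputs.loss.backward()
optimizer.step()

# For the rating subtasks, setting num_labels=1 yields a single-output regression
# head trained with MSE, which aligns with the RMSE metric reported above.
```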
doi:10.18653/v1/2021.semeval-1.155