DLJUST at SemEval-2021 Task 7: Hahackathon: Linking Humor and Offense
2021
Proceedings of the 15th International Workshop on Semantic Evaluation (SemEval-2021)
unpublished
Humor detection and rating pose interesting linguistic challenges for NLP: humor is highly subjective, depending on how a joke is perceived and on the context in which it is used. This paper applies and compares transformer models (BERT base and large, BERTweet, RoBERTa base and large, and RoBERTa base irony) for detecting and rating humor and offense. The proposed models were given cased and uncased text obtained from SemEval-2021 Task 7: HaHackathon: Linking Humor and Offense Across
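The comparison the abstract describes could be set up roughly as below. This is a minimal sketch, not the authors' code: it assumes the Hugging Face `transformers` library, and the checkpoint names in `CANDIDATE_MODELS` are the public checkpoints that most closely match the models listed in the abstract (the paper itself may have used different ones).

```python
# Hypothetical sketch of loading the transformer variants compared in the
# paper for binary humor detection. Checkpoint names are assumptions based
# on public Hugging Face model IDs, not taken from the paper.
CANDIDATE_MODELS = [
    "bert-base-uncased",
    "bert-large-uncased",
    "vinai/bertweet-base",
    "roberta-base",
    "roberta-large",
    "cardiffnlp/twitter-roberta-base-irony",
]

def load_humor_classifier(name: str):
    """Load a pretrained encoder with a fresh 2-way classification head
    (humorous vs. not humorous)."""
    # Lazy import so the module can be inspected without transformers installed.
    from transformers import AutoTokenizer, AutoModelForSequenceClassification

    tokenizer = AutoTokenizer.from_pretrained(name)
    model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=2)
    return tokenizer, model

if __name__ == "__main__":
    # Fine-tune each candidate on the task data and compare scores; here we
    # only show that each checkpoint loads with a classification head.
    tokenizer, model = load_humor_classifier(CANDIDATE_MODELS[0])
    batch = tokenizer("Why did the chicken cross the road?", return_tensors="pt")
    print(model(**batch).logits.shape)
```

For the rating subtasks (humor and offense ratings), the same loader would be used with `num_labels=1` and a regression loss instead of a 2-way classification head.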
doi:10.18653/v1/2021.semeval-1.155
fatcat:r7vgelmbrvfilde2smrshkjklq