UI at SemEval-2020 Task 4: Commonsense Validation and Explanation by Exploiting Contradiction

Kerenza Doxolodeo, Rahmad Mahendra
Proceedings of the Fourteenth Workshop on Semantic Evaluation (SemEval-2020)
This paper describes our submissions to the ComVE challenge, SemEval-2020 Task 4. The task consists of three subtasks that test commonsense comprehension by identifying sentences that do not make sense and explaining why they do not. In subtask A, we use RoBERTa to identify which of two sentences does not make sense. In subtask B, besides using BERT, we also experiment with replacing the training data with MNLI when selecting, from the provided options, the best explanation of why the given sentence does not make sense. In subtask C, we utilize the MNLI model from subtask B to evaluate explanations generated by RoBERTa and GPT-2, exploiting the contradiction between a sentence and its explanation. Our system submission records scores of 88.2% accuracy, 80.5% accuracy, and 5.5 BLEU on the three subtasks, respectively.

Task Description

SemEval-2020 Task 4: Commonsense Validation and Explanation (ComVE) comprises three subtasks: identifying which member of a pair of similar sentences does not make sense, selecting the best explanation of why a sentence does not make sense, and generating from scratch an explanation of why a sentence does not make sense. The ComVE dataset consists of 10K instances in the train set and 1K instances in the test set.
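As a rough illustration of the subtask A setup, the sketch below frames the sentence pair as a two-way multiple-choice problem for RoBERTa using the Hugging Face transformers library. This framing is an assumption, not the authors' released code, and the roberta-base checkpoint is a placeholder: the multiple-choice head would first need to be fine-tuned on the ComVE training pairs before its scores mean anything.

```python
# Minimal sketch of subtask A as two-way multiple choice with RoBERTa.
# Assumptions: Hugging Face transformers is installed, and the head has
# been fine-tuned on the ComVE training pairs (not shown here).
import torch
from transformers import AutoTokenizer, AutoModelForMultipleChoice

MODEL_NAME = "roberta-base"  # placeholder checkpoint
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForMultipleChoice.from_pretrained(MODEL_NAME)

def pick_nonsensical(sent_a: str, sent_b: str) -> int:
    """Return 0 or 1: the index of the sentence judged NOT to make sense."""
    enc = tokenizer([sent_a, sent_b], return_tensors="pt",
                    padding=True, truncation=True)
    # The multiple-choice head expects tensors of shape
    # (batch_size, num_choices, seq_len), so add a batch dimension.
    inputs = {k: v.unsqueeze(0) for k, v in enc.items()}
    with torch.no_grad():
        logits = model(**inputs).logits  # shape (1, 2)
    # Assuming training labeled the nonsensical sentence as the
    # positive choice, the higher logit marks it.
    return int(logits.argmax(dim=-1).item())

print(pick_nonsensical("He put a turkey into the fridge.",
                       "He put an elephant into the fridge."))
```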
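For subtasks B and C, the contradiction idea in the title can be sketched with an off-the-shelf MNLI model: treat the candidate explanation and the nonsensical statement as an NLI pair, and prefer the candidate with the highest contradiction probability. The checkpoint name, the premise/hypothesis direction, and the option texts below are illustrative assumptions rather than the authors' exact setup.

```python
# Sketch of contradiction-based explanation selection (subtask B) with an
# MNLI-finetuned RoBERTa. The same scorer could rank or filter
# machine-generated explanations in subtask C.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "roberta-large-mnli"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME)

def contradiction_prob(premise: str, hypothesis: str) -> float:
    """P(contradiction) under the MNLI model for a (premise, hypothesis) pair."""
    inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    probs = logits.softmax(dim=-1).squeeze(0)
    # roberta-large-mnli label order: 0 = contradiction, 1 = neutral, 2 = entailment
    return probs[0].item()

nonsense = "He put an elephant into the fridge."
options = [
    "An elephant is much bigger than a fridge.",  # the correct explanation
    "Elephants are usually grey.",
    "A fridge is used to keep food cold.",
]
# Pick the option that most strongly contradicts the statement, treating
# the explanation as the premise (the direction is an assumption).
best = max(options, key=lambda expl: contradiction_prob(expl, nonsense))
print(best)
```

In the submitted system, this MNLI-based contradiction scoring is what carries over to subtask C, where it evaluates the explanations generated by RoBERTa and GPT-2.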
doi: 10.18653/v1/2020.semeval-1.78