Overview of CheckThat! 2020 English: Automatic Identification and Verification of Claims in Social Media

Shaden Shaar, Alex Nikolov, Nikolay Babulkov, Firoj Alam, Alberto Barrón-Cedeño, Tamer Elsayed, Maram Hasanain, Reem Suwaileh, Fatima Haouari, Giovanni Da San Martino, Preslav Nakov
2020 Conference and Labs of the Evaluation Forum  
We present an overview of the third edition of the CheckThat! Lab at CLEF 2020. The lab featured five tasks in Arabic and English, and here we focus on the three English tasks. Task 1 challenged the participants to predict which tweets from a stream of tweets about COVID-19 are worth fact-checking. Task 2 asked participants to retrieve, from a set of previously fact-checked claims, verified claims that could help fact-check the claims made in an input tweet. Task 5 asked participants to propose which claims in a political debate or a speech should be prioritized for fact-checking. A total of 18 teams participated in the English tasks, and most submissions managed to achieve sizable improvements over the baselines using models based on BERT, LSTMs, and CNNs. In this paper, we describe the process of data collection and the task setup, including the evaluation measures used, and we give a brief overview of the participating systems. Last but not least, we release to the research community all datasets from the lab as well as the evaluation scripts, which should enable further research in the important tasks of check-worthiness estimation and detecting previously fact-checked claims.
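To illustrate the retrieval setting of Task 2, the sketch below ranks a set of previously fact-checked claims by lexical similarity to an input tweet. This is only a hypothetical bag-of-words cosine-similarity baseline for illustration; the participating systems described in the paper used stronger models (e.g., BERT-based retrieval), and the function and example data here are invented, not taken from the lab's datasets.

```python
import math
import re
from collections import Counter


def tokenize(text):
    # lowercase and keep alphanumeric tokens (hyphens allowed, e.g. "covid-19")
    return Counter(re.findall(r"[a-z0-9-]+", text.lower()))


def cosine(a, b):
    # cosine similarity between two token-count vectors
    num = sum(a[t] * b[t] for t in a if t in b)
    denom = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(
        sum(v * v for v in b.values())
    )
    return num / denom if denom else 0.0


def rank_verified_claims(tweet, verified_claims):
    """Rank previously fact-checked claims by similarity to the tweet."""
    q = tokenize(tweet)
    scored = sorted(
        verified_claims, key=lambda c: cosine(q, tokenize(c)), reverse=True
    )
    return scored


verified = [
    "Drinking bleach cures COVID-19.",
    "The Eiffel Tower is in Paris.",
    "Masks reduce the spread of COVID-19.",
]
tweet = "someone said drinking bleach cures covid-19, is that true?"
print(rank_verified_claims(tweet, verified)[0])
```

In a real system, the top-ranked verified claims would be passed to a fact-checker (or a downstream model) as candidate matches for the claim made in the tweet.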