A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2014; you can also visit the original URL.
The file type is application/pdf.
Weighted Krippendorff's alpha is a more reliable metrics for multi-coders ordinal annotations: experimental studies on emotion, opinion and coreference annotation
2014
Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics
The question of data reliability is of first importance to assess the quality of manually annotated corpora. Although Cohen's κ is the prevailing reliability measure used in NLP, alternative statistics have been proposed. This paper presents an experimental study with four measures (Cohen's κ, Scott's π, binary and weighted Krippendorff's α) on three tasks: emotion, opinion and coreference annotation. The reported studies investigate the factors of influence (annotator bias, category …
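The paper itself reports the experimental comparison; purely as a rough illustration (not the authors' implementation), the sketch below computes Krippendorff's α from a units-by-coders table with a pluggable distance function δ, so that the binary (nominal) and weighted (interval) variants assumed here differ only in the choice of δ. Function and variable names are illustrative, not taken from the paper.

```python
from collections import defaultdict

def krippendorff_alpha(data, delta):
    """Krippendorff's alpha for a units x coders table.

    data  : list of lists; data[u][c] is coder c's label for unit u,
            or None if that coder skipped the unit.
    delta : distance function delta(c, k) between two category values.
    """
    # Coincidence matrix o[(c, k)]: count pairable value pairs within each unit.
    o = defaultdict(float)
    for unit in data:
        values = [v for v in unit if v is not None]
        m = len(values)
        if m < 2:
            continue  # units with fewer than two values contribute no pairs
        for i, v in enumerate(values):
            for j, w in enumerate(values):
                if i != j:
                    o[(v, w)] += 1.0 / (m - 1)

    categories = sorted({c for (c, _) in o})
    n_c = {c: sum(o[(c, k)] for k in categories) for c in categories}
    n = sum(n_c.values())  # total number of pairable values

    # alpha = 1 - D_o / D_e  =  1 - (n - 1) * sum(o * delta) / sum(n_c * n_k * delta)
    d_obs = sum(o[(c, k)] * delta(c, k) for c in categories for k in categories)
    d_exp = sum(n_c[c] * n_c[k] * delta(c, k) for c in categories for k in categories)
    return 1.0 - (n - 1) * d_obs / d_exp

# Distance metrics: nominal (binary disagreement) vs. interval (squared difference).
nominal = lambda c, k: 0.0 if c == k else 1.0
interval = lambda c, k: float(c - k) ** 2

# Toy data (hypothetical): 3 coders rate 5 items on an ordinal 1-5 scale.
ratings = [[1, 1, 2], [2, 2, 2], [3, 4, 4], [5, 5, None], [4, 5, 5]]
print(krippendorff_alpha(ratings, nominal))   # binary alpha
print(krippendorff_alpha(ratings, interval))  # weighted (interval) alpha
```

With an ordinal scale, the weighted variant penalizes a 1-vs-5 disagreement more heavily than a 4-vs-5 one, which is the intuition behind preferring a weighted α for ordinal annotations.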
doi:10.3115/v1/e14-1058
dblp:conf/eacl/AntoineVL14
fatcat:jkolwvkbz5cehcvvwsgtqbbyzy