BenchIE: A Framework for Multi-Faceted Fact-Based Open Information Extraction Evaluation
[article] arXiv pre-print, 2022
Intrinsic evaluations of OIE systems are carried out either manually -- with human evaluators judging the correctness of extractions -- or automatically, on standardized benchmarks. The latter, while much more cost-effective, is less reliable, primarily because of the incompleteness of the existing OIE benchmarks: the ground truth extractions do not include all acceptable variants of the same fact, leading to unreliable assessment of the models' performance. Moreover, the existing OIE ...
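To make the incompleteness problem concrete, below is a minimal sketch of synset-style scoring: instead of comparing a system extraction against a single canonical gold triple, each gold fact is represented as a set of all its acceptable variants, and an extraction counts as correct if it exactly matches any variant. This reflects the "fact synset" idea BenchIE is built around, but the names (score_extractions, the example sentences) and the exact scoring details here are illustrative assumptions, not BenchIE's actual API.

```python
# Illustrative sketch of fact-synset matching; not BenchIE's real interface.

def score_extractions(system_extractions, fact_synsets):
    """Score extractions against gold facts given as synsets.

    An extraction is correct iff it exactly matches ANY acceptable
    variant of some gold fact, rather than one canonical gold triple.
    """
    matched_synsets = set()
    correct = 0
    for triple in system_extractions:
        for i, synset in enumerate(fact_synsets):
            if triple in synset:  # any listed variant counts as a match
                correct += 1
                matched_synsets.add(i)
                break
    precision = correct / len(system_extractions) if system_extractions else 0.0
    recall = len(matched_synsets) / len(fact_synsets) if fact_synsets else 0.0
    return precision, recall

# One gold fact with two acceptable phrasings (a "fact synset"):
gold = [{("Michael", "was born in", "Chicago"),
         ("Michael", "was born", "in Chicago")}]
system = [("Michael", "was born", "in Chicago")]
print(score_extractions(system, gold))  # (1.0, 1.0)
```

Under a benchmark that stored only the first gold variant, the same system output would be scored as wrong, which is exactly the unreliability the abstract attributes to incomplete ground truth.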
arXiv:2109.06850v2
fatcat:bjm65itajrctzl4vhzzvqgiahe