Evaluating machine learning for information extraction

Neil Ireson, Fabio Ciravegna, Mary Elaine Califf, Dayne Freitag, Nicholas Kushmerick, Alberto Lavelli
Proceedings of the 22nd International Conference on Machine Learning (ICML '05), 2005
Comparative evaluation of Machine Learning (ML) systems used for Information Extraction (IE) has suffered from various inconsistencies in experimental procedures. This paper reports on the results of the Pascal Challenge on Evaluating Machine Learning for Information Extraction, which provides a standardised corpus, set of tasks, and evaluation methodology. The paper describes the challenge, briefly introduces the systems submitted by the ten participants, and analyses their performance.
doi:10.1145/1102351.1102395 dblp:conf/icml/IresonCCFKL05