Evaluation Challenges and Directions for Information-Seeking Support Systems

D. Kelly, S. Dumais, J.O. Pedersen
Computer, 2009
In the area of information retrieval (IR), evaluation has a long history that can be traced back to the automatic indexing studies pioneered by librarian and computer scientist Cyril Cleverdon at Cranfield University in the 1960s.1 The basic IR evaluation model has been extended by efforts associated with the Text Retrieval Conference (TREC), an annual meeting cosponsored by the National Institute of Standards and Technology and the US Department of Defense that began in 1992.2

Basic IR model
In the basic IR evaluation model, researchers share test collections that contain a corpus, queries, and relevance assessments that indicate which documents are relevant to which queries. Because researchers share common resources and guidelines for conducting system evaluations, it is possible to compare search systems and improve search algorithms. Particular evaluation measures indicate how well a search algorithm performs with respect to the number of relevant documents retrieved along with the position of these documents within a ranked list. Common measures ...
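As a minimal sketch of the rank-based measures described above, the following Python functions compute precision at k and average precision for one query, given a ranked list of document IDs and a set of relevance assessments. The document IDs and judgments are hypothetical, not taken from any TREC collection.

```python
def precision_at_k(ranked_ids, relevant_ids, k):
    """Fraction of the top-k retrieved documents that are relevant."""
    top_k = ranked_ids[:k]
    return sum(1 for d in top_k if d in relevant_ids) / k

def average_precision(ranked_ids, relevant_ids):
    """Mean of precision@rank over the ranks where relevant documents occur,
    normalized by the total number of relevant documents for the query."""
    if not relevant_ids:
        return 0.0
    hits, total = 0, 0.0
    for rank, d in enumerate(ranked_ids, start=1):
        if d in relevant_ids:
            hits += 1
            total += hits / rank
    return total / len(relevant_ids)

# Hypothetical ranked output for one query, with assessed relevant documents.
ranking = ["d3", "d1", "d7", "d2", "d5"]
relevant = {"d1", "d2", "d9"}

print(precision_at_k(ranking, relevant, 5))   # 2 of the top 5 are relevant: 0.4
print(average_precision(ranking, relevant))   # (1/2 + 2/4) / 3 relevant docs
```

Averaging the per-query average precision over all queries in a test collection gives mean average precision (MAP), one of the standard TREC measures; both measures reward systems that place relevant documents high in the ranked list.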
doi:10.1109/mc.2009.82