A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2017; you can also visit the original URL.
Retrieval system evaluation
2010
Proceedings of the 33rd international ACM SIGIR conference on Research and development in information retrieval - SIGIR '10
In information retrieval (IR), research aiming to reduce the cost of retrieval system evaluations has been conducted along two lines: (i) the evaluation of IR systems with reduced (i.e. incomplete) amounts of manual relevance assessments, and (ii) the fully automatic evaluation of IR systems, thus forgoing the need for manual assessments altogether. The proposed methods in both areas are commonly evaluated by comparing their performance estimates for a set of systems to a ground truth
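Comparisons of an estimated system ranking against a ground-truth ranking are commonly reported as a rank correlation such as Kendall's tau. A minimal, stdlib-only sketch of that comparison (the function name and score lists are illustrative, not from the paper):

```python
from itertools import combinations

def kendall_tau(ground_truth, estimated):
    """Kendall's tau rank correlation between two score lists
    over the same set of retrieval systems (no tie correction)."""
    n = len(ground_truth)
    concordant = discordant = 0
    for i, j in combinations(range(n), 2):
        # A pair of systems is concordant if both score lists
        # order the two systems the same way, discordant otherwise.
        a = ground_truth[i] - ground_truth[j]
        b = estimated[i] - estimated[j]
        if a * b > 0:
            concordant += 1
        elif a * b < 0:
            discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)

# Identical orderings give tau = 1.0; fully reversed give -1.0.
print(kendall_tau([0.31, 0.27, 0.24, 0.19], [0.30, 0.26, 0.22, 0.18]))
```

A tau close to 1 indicates that the low-cost evaluation method ranks the systems almost exactly as the full manual assessments would.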
doi:10.1145/1835449.1835654
dblp:conf/sigir/HauffJ10
fatcat:qv22ch35nfbzbdeg752rw4rlf4