Scaling IR-system evaluation using term relevance sets
2004
Proceedings of the 27th annual international conference on Research and development in information retrieval - SIGIR '04
This paper describes an evaluation method based on Term Relevance Sets (Trels) that measures an IR system's quality by examining the content of the retrieved results rather than by looking for pre-specified relevant pages. Trels consist of a list of terms believed to be relevant for a particular query, as well as a list of irrelevant terms. The proposed method does not involve any document relevance judgments, and as such is not adversely affected by changes to the underlying collection.
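For illustration only, below is a minimal Python sketch of how a Trels-style evaluation might be computed: each retrieved document is scored by the relevant and irrelevant terms it contains, and the scores are averaged over the top-k results. The Trels container, the tokenization, and the scoring formula (fraction of relevant terms present minus fraction of irrelevant terms present) are assumptions for this sketch, not the paper's actual definitions.

from dataclasses import dataclass

@dataclass
class Trels:
    # Hypothetical container for one query's Term Relevance Set:
    # terms believed relevant and terms believed irrelevant.
    relevant: set
    irrelevant: set

def doc_score(text, trels):
    # Illustrative per-document score: fraction of relevant terms that
    # appear in the document minus fraction of irrelevant terms that appear.
    tokens = set(text.lower().split())
    rel = len(trels.relevant & tokens) / max(len(trels.relevant), 1)
    irr = len(trels.irrelevant & tokens) / max(len(trels.irrelevant), 1)
    return rel - irr

def evaluate_run(ranked_docs, trels, k=10):
    # Average the per-document score over the top-k retrieved documents;
    # note that no per-document relevance judgments are consulted.
    top = ranked_docs[:k]
    return sum(doc_score(d, trels) for d in top) / len(top) if top else 0.0

A small usage example under the same assumptions:

trels = Trels(relevant={"retrieval", "evaluation"}, irrelevant={"recipe"})
print(evaluate_run(["ir evaluation without relevance judgments", "a soup recipe"], trels, k=2))

Because the score depends only on the retrieved text and the fixed term lists, re-running it after the underlying collection changes requires no new judgments, which is the property the abstract highlights.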
doi:10.1145/1008992.1008997
dblp:conf/sigir/AmitayCLS04
fatcat:creiqewfyfc57jitoxrnlmzmee