A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2014; you can also visit the original URL.
The file type is application/pdf.
Crowdsourcing for book search evaluation
2011
Proceedings of the 34th international ACM SIGIR conference on Research and development in Information Retrieval - SIGIR '11
The evaluation of information retrieval (IR) systems over special collections, such as large book repositories, is out of reach of traditional methods that rely upon editorial relevance judgments. Increasingly, the use of crowdsourcing to collect relevance labels has been regarded as a viable alternative that scales with modest costs. However, crowdsourcing suffers from undesirable worker practices and low quality contributions. In this paper we investigate the design and implementation of …
doi:10.1145/2009916.2009947
dblp:conf/sigir/KazaiKKM11
fatcat:5ekvpblhrfdehkmey76nkpjlbq