A copy of this work was available on the public web and has been preserved in the Wayback Machine; the capture dates from 2019. The file type is application/pdf.
How Many Workers to Ask?
2016
Proceedings of the 39th International ACM SIGIR conference on Research and Development in Information Retrieval - SIGIR '16
Crowdsourcing has been part of the IR toolbox as a cheap and fast mechanism to obtain labels for system development and evaluation. Successful deployment of crowdsourcing at scale involves adjusting many variables, a very important one being the number of workers needed per human intelligence task (HIT). We consider the crowdsourcing task of learning the answer to simple multiple-choice HITs, which are representative of many relevance experiments. In order to provide statistically significant […]
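The abstract is truncated above; the full text is available via the DOI below. As a rough illustration of the per-HIT setting the abstract describes (not the paper's own algorithm, which the truncated text does not specify), the sketch below shows plain majority voting over a multiple-choice HIT with a simple "keep asking workers until the leading answer is clear" stopping rule. The worker counts, the 0.7 agreement threshold, and the `get_answer` stub are assumptions for illustration only.

```python
from collections import Counter
from typing import Callable, Tuple


def ask_until_confident(
    get_answer: Callable[[], str],
    min_workers: int = 3,
    max_workers: int = 9,
    agreement: float = 0.7,
) -> Tuple[str, int]:
    """Collect multiple-choice answers for one HIT from successive workers.

    Stops once at least `min_workers` have answered and the leading option
    holds at least `agreement` of the votes, or once `max_workers` is reached.
    Returns the majority label and the number of workers actually asked.
    """
    votes: Counter = Counter()
    asked = 0
    while asked < max_workers:
        votes[get_answer()] += 1
        asked += 1
        if asked >= min_workers:
            label, count = votes.most_common(1)[0]
            if count / asked >= agreement:
                return label, asked
    # Fall back to a plain majority vote over everything collected so far.
    return votes.most_common(1)[0][0], asked


if __name__ == "__main__":
    import random

    # Hypothetical worker pool: answers drawn from a fixed noisy distribution.
    def noisy_worker() -> str:
        return random.choices(["A", "B", "C"], weights=[0.7, 0.2, 0.1])[0]

    label, n = ask_until_confident(noisy_worker)
    print(f"majority label {label!r} after asking {n} workers")
```

In practice the stopping rule and thresholds would come from the statistical guarantees the paper develops; the sketch only fixes the interface of asking one more worker versus stopping.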
doi:10.1145/2911451.2911514
dblp:conf/sigir/AbrahamAKPSS16
fatcat:kjiw4asb4rexhe5xauxrxayvhq