Measuring Crowdsourcing Effort with Error-Time Curves
2015
Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems - CHI '15
After describing ETA, we explore the metric via four studies:

- Study 1: ETA vs. other measures of effort. For ten common microtasking primitives (e.g., multiple choice questions, long-form text entry), we show that the ETA metric represents effort better than existing measures.
- Study 2: ETA vs. market price. We then compare ETA, as well as other measures, to the market prices of these primitives on a crowdsourcing platform.
- Study 3: Modeling perceptual costs. By augmenting ETA with measures of
doi:10.1145/2702123.2702145
dblp:conf/chi/ChengTB15
fatcat:xrr3etp7qrha7o2fx2rf2wf6pi