Amazon Mechanical Turk: Gold Mine or Coal Mine?
Recently heard at a tutorial in our field: "It cost me less than one hundred bucks to annotate this using Amazon Mechanical Turk!" Assertions like this are increasingly common, but we believe they should not be stated so proudly, as they ignore the ethical consequences of using Amazon Mechanical Turk (MTurk) as a source of labour. Manually annotating corpora, or manually developing any other linguistic resource such as a set of judgments about system outputs, is so costly that many researchers are looking for alternatives to the standard approach, and MTurk is becoming a popular one. However, as in any scientific endeavour involving humans, there is an unspoken ethical dimension to resource construction and system evaluation, and this is especially true of MTurk. We would like here to raise some questions about the use of MTurk. To do so, we will define precisely what MTurk is and what it is not, highlighting the issues the system raises. We hope that this will point out opportunities for our community to deliberately value ethics above cost savings.

What is MTurk? What is it not?

MTurk is an on-line crowdsourcing, microworking system which enables elementary tasks to be performed by a very large number of people on-line. Ideally, these tasks would be performed by computers, but they remain out of computational reach (for instance, the translation of an English sentence into Urdu). MTurk involves two populations: the Requesters, who post the tasks to be completed, and the Turkers, who complete them. Requesters create the so-called "HITs" (Human Intelligence Tasks), which are elementary components of complex tasks. The art of the Requester is to split a complex task into basic steps and to fix a reward, usually very low (for instance *