A copy of this work was available on the public web and has been preserved in the Wayback Machine; the capture dates from 2019.
The Importance of Worker Reputation Information in Microtask-Based Crowd Work Systems
[article]
2016
arXiv
pre-print
This paper presents the first systematic investigation of the potential performance gains for crowd work systems deriving from information available to the requester about individual worker reputation ...
Our main findings are that: i) even largely inaccurate estimates of workers' reputation can be effectively exploited in the task assignment to greatly improve system performance; ii) the performance of ...
In this paper, we specialize to microtask-based crowd work systems. ...
arXiv:1605.08261v1
fatcat:ekffrbskdjd5vmj6xzxpngfvme
Crowd Work CV: Recognition for Micro Work
[chapter]
2015
Lecture Notes in Computer Science
This lack of information leads to uninformed decisions in selection processes, which have been acknowledged as a promising way to improve the quality of crowd work. ...
Crowd Work CV enables the representation of crowdsourcing agents' identities and promotes their work experience across the different microtask marketplaces. ...
The research leading to these results has received funding from the European Union's Seventh Framework Programme for research, technological development and demonstration under grant agreement no. 611242 ...
doi:10.1007/978-3-319-15168-7_52
fatcat:ipngwfevaveipcbqx7wvgfbdfm
TurkScanner: Predicting the Hourly Wage of Microtasks
2019
The World Wide Web Conference on - WWW '19
Workers in crowd markets struggle to earn a living. ...
In general, workers are provided with little information about tasks, and are left to rely on noisy signals, such as textual description of the task or rating of the requester. ...
TurkScanner: Hourly Wage Prediction. TurkScanner predicts the hourly wages of microtasks in two steps: 1) estimate the working times of microtasks based on HIT, WKR, and REQ, in a machine learning-based ...
doi:10.1145/3308558.3313716
dblp:conf/www/SaitoCSNKB19
fatcat:u4nxnn5nlbbnlbqeo7n666bloa
Predicting the Working Time of Microtasks Based on Workers' Perception of Prediction Errors
2019
Human Computation
Evaluation results based on worker perceptions of prediction errors revealed that the proposed model was capable of predicting worker-tolerable working times in 73.6% of all tested microtask cases. ...
Given the limited task-related information provided on crowd platforms, workers often fail to estimate how long it would take to complete certain microtasks. ...
ACKNOWLEDGEMENTS. We thank AMT workers for providing microtask-related data and taking surveys for CrowdSense evaluation. ...
doi:10.15346/hc.v6i1.110
fatcat:ktq6pzpdmncllix3yvxsyfg5xu
Subcontracting Microwork
2017
Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems - CHI '17
We reflect on the implications of these findings for the design of future crowd work platforms that effectively harness the potential of subcontracting workflows. ...
Mainstream crowdwork platforms treat microtasks as indivisible units; however, in this article, we propose that there is value in re-examining this assumption. ...
Reputation Models. Reputation models are an important component of existing crowd platforms. ...
doi:10.1145/3025453.3025687
dblp:conf/chi/MorrisBBBKLS17
fatcat:npstk6mhlrejfd5vmnevs4exs4
Training Workers for Improving Performance in Crowdsourcing Microtasks
[chapter]
2015
Lecture Notes in Computer Science
We draw motivation from the desire of crowd workers to perform well in order to maintain a good reputation, while attaining monetary rewards successfully. ...
In this paper, we investigate the notion of treating crowd workers as 'learners' in a novel learning environment. ...
This work has been carried out partially in the context of the DURAARK project, funded by the European Commission within the 7th Framework Programme (Grant Agreement no: 600908). ...
doi:10.1007/978-3-319-24258-3_8
fatcat:6gexcx6yfbgudjmwwfimtlcrpy
An Examination of the Work Practices of Crowdfarms
2021
Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems
We report here on interviews of people who work in 53 crowdfarms. ...
We describe how crowdfarms procure jobs, carry out macrotasks and microtasks, manage their reputation, and employ different management practices to motivate crowdworkers and customers. ...
Given the importance of reputation in workers' selection of requesters [62], we believe that crowdsourcing platforms should (1) establish distinct rating systems for macro and microtasks to avoid ...
doi:10.1145/3411764.3445603
fatcat:rw6dx7cqljb4xbk3p5gd7jw5xi
Crowd Guilds: Worker-led Reputation and Feedback on Crowdsourcing Platforms
[article]
2016
arXiv
pre-print
In this paper, we draw inspiration from historical worker guilds (e.g., in the silk trade) to design and implement crowd guilds: centralized groups of crowd workers who collectively certify each other's ...
Crowd guilds produced reputation signals more strongly correlated with ground-truth worker quality than signals available on current crowd working platforms, and more accurate than in the traditional model ...
ACKNOWLEDGEMENT. We thank the following members of the ...
arXiv:1611.01572v2
fatcat:zn2vdnlforahpcsziytvlmlyyy
Identifying Redundancy and Exposing Provenance in Crowdsourced Data Analysis
2013
IEEE Transactions on Visualization and Computer Graphics
We take advantage of the fact that, for many types of data, independent crowd workers can readily perform basic analysis tasks like examining views and generating explanations for trends and patterns. ...
We present a system that lets analysts use paid crowd workers to explore data sets and helps analysts interactively examine and build upon workers' insights. ...
Working in parallel, crowd workers can help identify important views, generate diverse sets of explanations for trends and outliers, and, in some cases, even provide important domain expertise that the analyst ...
doi:10.1109/tvcg.2013.164
pmid:24051786
fatcat:4f3uldxb25gr7e3vqnizq7a7zm
How to Manage Crowdsourcing Platforms Effectively?
2017
California Management Review
We investigate the effectiveness of these governance mechanisms in 19 case studies and recommend specific configurations of these mechanisms for each of the four crowdsourcing approaches. ...
To profit from crowdsourcing, organizations can engage in four different approaches: microtasking, information pooling, broadcast search, and open collaboration. ...
Such systems address the desire of contributors to stand out from the community of contributors. In microtasking, reputation systems can be effectively combined with financial rewards. ...
doi:10.1177/0008125617738255
fatcat:ujt42qhrb5dy3h7pwsosek3pfe
Multi-Object Classification via Crowdsourcing With a Reject Option
2017
IEEE Transactions on Signal Processing
We consider an oblivious and an expurgation strategy to deal with greedy workers, developing an algorithm to adaptively switch between the two based on the estimated fraction of greedy workers in the anonymous ...
Consider designing an effective crowdsourcing system for an M-ary classification task. Crowd workers complete simple binary microtasks whose results are aggregated to give the final result. ...
In this section, we study the performance of this reward-based crowdsourcing system where a part of the crowd completes all the microtasks based on random guesses. ...
doi:10.1109/tsp.2016.2630038
fatcat:cc23wkuwkzh5zg2qb5wxjwcdsi
Community building on crowdwork platforms: Autonomy and control of online workers?
2020
Competition & Change
While breaking to some extent the sociotechnical isolation of the crowd, the article suggests that such company-based worker forums outsource managerial tasks to the online workers. ...
and help themselves with the work system as they find it. Workers' behaviours are, however, not only the product of managerial systems, as workers also develop practices independent of them. ...
doi:10.1177/1024529420914472
fatcat:xd2ovvt5wjaprc66dsbqip6k4i
Scaling requirements extraction to the crowd: Experiments with privacy policies
2014
2014 IEEE 22nd International Requirements Engineering Conference (RE)
In these experiments, we carefully balance worker payment and overall cost, as well as worker training and data quality to study the feasibility of distributing requirements extraction to the crowd. ...
The final evaluation shows a 60% reduction in the cost of manual extraction with a 16% increase in extraction coverage. ...
Each assignment is a separate microtask instance in the workflow. We developed a grading system to evaluate worker performance in experiment E3. ...
doi:10.1109/re.2014.6912258
dblp:conf/re/BreauxS14
fatcat:j2h7l3bz2fbajmopapgqfsmirm
Crowd intelligence in AI 2.0 era
2017
Frontiers of Information Technology & Electronic Engineering
The Internet based cyber-physical world has profoundly changed the information environment for the development of artificial intelligence (AI), bringing a new wave of AI research and promoting it into ...
In this paper, we survey existing studies of crowd intelligence. ...
With the increase in the scale of crowd systems, it is challenging to coordinate the work process of massive crowds to handle complex tasks. ...
doi:10.1631/fitee.1601859
fatcat:x6aijr7ud5eojjuelaq3sv4v7a
CrowdMap: Crowdsourcing Ontology Alignment with Microtasks
[chapter]
2012
Lecture Notes in Computer Science
the quality of the results obtained from the crowd. ...
For a given pair of ontologies, CROWDMAP translates the alignment problem into microtasks that address individual alignment questions, publishes the microtasks on an online labor market, and evaluates ...
We would like to thank the self-service team of CrowdFlower, for their technical support on the CrowdFlower API. ...
doi:10.1007/978-3-642-35176-1_33
fatcat:i3w5lzghjbek3g3tyspbcjdlqa
Showing results 1 — 15 out of 227 results