12,830 Hits in 3.1 sec

ERICA: Expert Guidance in Validating Crowd Answers

Nguyen Quoc Viet Hung, Duong Chi Thang, Matthias Weidlich, Karl Aberer
2015 Proceedings of the 38th International ACM SIGIR Conference on Research and Development in Information Retrieval - SIGIR '15  
It allows us to guide the expert's work by collecting input on the most problematic cases, thereby achieving a set of high quality answers even if the expert does not validate the complete answer set.  ...  To this end, many crowdsourcing platforms feature a post-processing phase, in which crowd answers are validated by experts.  ...  Then, the user decides how many questions are posted to the crowd. After all answers have been collected for the crowdsourcing task, the expert starts the validation.  ... 
doi:10.1145/2766462.2767866 dblp:conf/sigir/HungTWA15 fatcat:xqtzdgctx5fvpkffu2nvjyruxa
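
To make the guidance idea in this entry (and in the companion SIGMOD paper further down) concrete: rank questions by the entropy of their crowd-answer distributions, so the expert validates the most contested answers first. The following is only a minimal sketch of uncertainty-driven ordering; the data shapes and the entropy criterion are assumptions, not the authors' exact model.

```python
# Sketch: order crowd answers for expert validation, most uncertain first.
from collections import Counter
from math import log2

def answer_entropy(answers):
    """Shannon entropy of the empirical distribution of crowd answers."""
    counts = Counter(answers)
    total = sum(counts.values())
    return -sum((c / total) * log2(c / total) for c in counts.values())

def validation_order(crowd_answers):
    """crowd_answers: dict question_id -> list of labels (assumed shape)."""
    return sorted(crowd_answers, key=lambda q: answer_entropy(crowd_answers[q]),
                  reverse=True)

crowd = {"q1": ["A", "A", "A"], "q2": ["A", "B", "C"], "q3": ["A", "A", "B"]}
print(validation_order(crowd))  # ['q2', 'q3', 'q1'] -- expert starts with q2
```

Even if the expert stops early, the most problematic cases have already been validated, which is the effect the abstract describes.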

Visual Question: Predicting If a Crowd Will Agree on the Answer [article]

Danna Gurari, Kristen Grauman
2016 arXiv   pre-print
We then propose how to exploit this system in a novel application to efficiently allocate human effort to collect answers to visual questions.  ...  We train a model to automatically predict from a visual question whether a crowd would agree on a single answer.  ...  Such approaches aim to collect a pre-specified, fixed number of answers per visual question.  ... 
arXiv:1608.08188v1 fatcat:4y2wwqoehbhstmnr4nmexzzpfu
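
The application described here, allocating human effort based on predicted agreement, can be sketched as a budget split: questions predicted to yield crowd agreement get one answer, likely-ambiguous ones get extra answers. The predictor is assumed given and the thresholds are illustrative, not the paper's model.

```python
# Sketch: spend a fixed answer budget where crowd agreement is least likely.
def allocate_answers(predicted_agreement, budget, base=1, extra=4):
    """predicted_agreement: dict question -> P(crowd agrees on one answer)."""
    plan = {q: base for q in predicted_agreement}
    remaining = budget - base * len(plan)
    # Spend the leftover budget on the questions least likely to reach agreement.
    for q in sorted(predicted_agreement, key=predicted_agreement.get):
        if remaining < extra:
            break
        plan[q] += extra
        remaining -= extra
    return plan

print(allocate_answers({"vq1": 0.95, "vq2": 0.40, "vq3": 0.70}, budget=11))
# {'vq1': 1, 'vq2': 5, 'vq3': 5} -- effort flows to the ambiguous questions
```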

Efficiently Identifying a Well-Performing Crowd Process for a Given Problem

Patrick M. de Boer, Abraham Bernstein
2017 Proceedings of the 2017 ACM Conference on Computer Supported Cooperative Work and Social Computing - CSCW '17  
The black-box nature of our approach may enable us to reduce the entry barrier for efficiently building crowdsourcing solutions.  ...  Additionally, we'd like to express our gratitude to our anonymous reviewers and everybody else involved in the reviewing process, whose constructive feedback was extremely valuable in improving our paper  ... 
doi:10.1145/2998181.2998263 fatcat:jm4znmid4vavbnaczj3wqy7noi
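
One generic way to search black-box crowd-process candidates efficiently is successive halving: run every candidate on a small batch of tasks, keep the better half, and re-run survivors on larger batches. This is a standard budget-allocation sketch, not the paper's specific search strategy; run_process() is a stand-in for posting real tasks.

```python
# Sketch: black-box search over candidate crowd processes via successive halving.
import random

def run_process(process, n_tasks):
    """Placeholder: returns mean answer quality of `process` on n_tasks tasks."""
    return sum(random.gauss(process["true_quality"], 0.1)
               for _ in range(n_tasks)) / n_tasks

def successive_halving(processes, batch=10):
    candidates = list(processes)
    while len(candidates) > 1:
        scores = {p["name"]: run_process(p, batch) for p in candidates}
        candidates.sort(key=lambda p: scores[p["name"]], reverse=True)
        candidates = candidates[: max(1, len(candidates) // 2)]
        batch *= 2  # spend more of the budget on the surviving processes
    return candidates[0]

procs = [{"name": f"process_{i}", "true_quality": q}
         for i, q in enumerate([0.6, 0.7, 0.85, 0.5])]
print(successive_halving(procs)["name"])  # most likely: process_2
```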

Efficient crowdsourcing of crowd-generated microtasks [article]

Abigail Hotaling, James P. Bagrow
2019 arXiv   pre-print
Crowdsourcers can employ methods to utilize their resources efficiently, but algorithmic approaches to efficient crowdsourcing generally require a fixed task set of known size.  ...  Cost forecasting allows the crowdsourcer to decide between eliciting new tasks from the crowd or receiving responses to existing tasks based on whether or not new tasks will cost less to complete than  ...  When a question is first revealed on the show, the app sends a task containing the question and 4 possible answers to the users. Responses from users and correct answers were collected.  ... 
arXiv:1912.05045v1 fatcat:7jgyr4chx5bkhef6udj4ls37gq
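
The elicit-vs-respond decision described here can be illustrated with a toy rule: forecast how many more responses each open task needs to reach consensus, and elicit a new task only when that forecast exceeds the expected cost of a fresh task. The forecasting rule below (required votes inflated by the observed agreement rate) is an assumption, not the paper's forecaster.

```python
# Sketch: decide between answering an open task and eliciting a new one.
def forecast_remaining(task):
    """Expected extra responses: gap to the required agreeing votes,
    inflated by the task's observed agreement rate."""
    need = task["votes_required"] - task["top_votes"]
    agreement_rate = max(task["top_votes"] / max(task["total_votes"], 1), 1e-6)
    return need / agreement_rate

def next_action(open_tasks, new_task_cost):
    cheapest = min(open_tasks, key=forecast_remaining)
    if forecast_remaining(cheapest) <= new_task_cost:
        return ("respond", cheapest["id"])
    return ("elicit", None)  # new tasks are forecast to cost less to complete

tasks = [{"id": 1, "votes_required": 5, "top_votes": 4, "total_votes": 6},
         {"id": 2, "votes_required": 5, "top_votes": 1, "total_votes": 4}]
print(next_action(tasks, new_task_cost=7.0))  # ('respond', 1)
```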

Aggregated knowledge from a small number of debates outperforms the wisdom of large crowds [article]

Joaquin Navajas, Tamara Niella, Gerry Garbulsky, Bahador Bahrami, Mariano Sigman
2017 arXiv   pre-print
Participants first answered individually, then deliberated and made consensus decisions in groups of five, and finally provided revised individual estimates.  ...  We asked a live crowd (N=5180) to respond to general-knowledge questions (e.g., what is the height of the Eiffel Tower?).  ...  For these crowd sizes, the wisdom of crowds is more efficient than aggregating debates.  ... 
arXiv:1703.00045v3 fatcat:jpf6jp4lmfftfjw2rlgktlceii
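
A toy numeric illustration of the two aggregation schemes compared above: the plain average over all individuals versus the average of one consensus estimate per group of five. The consensus rule here (group median) is an assumption standing in for deliberation; the numbers are synthetic, not the study's data.

```python
# Sketch: crowd average vs. aggregated small-group consensus estimates.
import random
from statistics import mean, median

random.seed(1)
truth = 324  # height of the Eiffel Tower in metres
individuals = [random.gauss(truth * 1.1, 80) for _ in range(100)]  # biased, noisy

crowd_estimate = mean(individuals)
groups = [individuals[i:i + 5] for i in range(0, 100, 5)]
debate_estimate = mean(median(g) for g in groups)  # one "consensus" per group

print(f"crowd of 100:      {crowd_estimate:.0f} m")
print(f"20 groups of five: {debate_estimate:.0f} m  (truth: {truth} m)")
```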

Generating Requirements Out of Thin Air: Towards Automated Feature Identification for New Apps [article]

Tahira Iqbal, Norbert Seyff, Daniel Mendez Fernández
2019 arXiv   pre-print
However, this manuscript is also intended to foster discussions on the extent to which machine learning can and should be applied to elicit automated requirements on crowd-generated data on different forums  ...  Our interview study confirms that practitioners see the need for our envisioned approach. Furthermore, we present an early conceptual solution to discuss the feasibility of our approach.  ...  We cordially invite researchers to join this endeavor to further increase the efficiency of requirements elicitation practices in the future.  ... 
arXiv:1909.11302v1 fatcat:f4wqfx73sjd6rcprokembi2m4q

Efficient crowdsourcing of crowd-generated microtasks

Abigail Hotaling, James P. Bagrow, Haoran Xie
2020 PLoS ONE  
Crowdsourcers can employ methods to utilize their resources efficiently, but algorithmic approaches to efficient crowdsourcing generally require a fixed task set of known size.  ...  Cost forecasting allows the crowdsourcer to decide between eliciting new tasks from the crowd or receiving responses to existing tasks based on whether or not new tasks will cost less to complete than  ...  Algorithmic crowdsourcing focuses on computational approaches to these challenges, allowing crowdsourcers to maximize the accuracy of the data generated by the crowd while also efficiently managing the  ... 
doi:10.1371/journal.pone.0244245 pmid:33332455 fatcat:iebvwzm4bfacboxkwaku5nt34u

Constructing adaptive configuration dialogs using crowd data

Saeideh Hamidi, Periklis Andritsos, Sotirios Liaskos
2014 Proceedings of the 29th ACM/IEEE international conference on Automated software engineering - ASE '14  
We propose a method to construct adaptive configuration elicitation dialogs through utilizing crowd wisdom.  ...  Association rules are used to inform the model about configuration decisions that can be automatically inferred from knowledge already elicited earlier in the dialog.  ...  As opposed to ours, both approaches are based on utility estimation, the latter also focusing on commodity selection rather than configuration.  ... 
doi:10.1145/2642937.2642960 dblp:conf/kbse/HamidiAL14 fatcat:lnz7hwxpfbhcvcga4dekfs3zce
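
The role of association rules in such a dialog can be sketched directly: once the answers given so far satisfy a rule's antecedent, the consequent is filled in automatically and its question is never asked. The rules, the confidence threshold, and the question names below are illustrative, not mined from real crowd data.

```python
# Sketch: an adaptive configuration dialog that skips inferable questions.
rules = [
    # (antecedent, consequent, confidence) mined from past configurations
    ({("os", "linux")}, ("office_suite", "libreoffice"), 0.92),
    ({("usage", "gaming")}, ("gpu", "discrete"), 0.88),
]

def run_dialog(questions, ask, min_conf=0.85):
    answers = {}
    for q in questions:
        if q in answers:          # already inferred from a rule, skip it
            continue
        answers[q] = ask(q)
        for antecedent, (cq, cv), conf in rules:
            if conf >= min_conf and antecedent <= set(answers.items()):
                answers.setdefault(cq, cv)
    return answers

demo = {"os": "linux", "usage": "office"}
print(run_dialog(["os", "usage", "office_suite", "gpu"], lambda q: demo.get(q)))
# office_suite is inferred from os=linux; gpu is still asked explicitly
```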

Mobile crowdsourcing - activation of smartphone users to elicit specialized knowledge through worker profile match [article]

Oskar Jarczyk
2015 arXiv   pre-print
Crowdsourcing models applied to work on mobile devices continuously reach new ways of solving sophisticated problems, now with the use of advanced portable devices, where users are not limited to a stationary  ...  In this paper, we propose a model and a short specification of a platform for a bundled, widely available crowdsourcing mechanism, which tries to utilize workers' individual characteristics to the maximum.  ...  Baba et al., in the article "Statistical quality estimation for general crowdsourcing tasks" (2013), explain that a common approach to tackling the problem of workers who are neither capable nor motivated is to introduce redundancy  ... 
arXiv:1505.07772v1 fatcat:akmjuppeg5elhok72jjjjrngcy
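
The worker-profile matching idea can be sketched as similarity scoring: each task declares the specialized knowledge it needs, each worker profile carries skill weights, and the platform activates the best-matching users. The scoring below (cosine similarity over a skill vocabulary) is an assumption, not the paper's model.

```python
# Sketch: activate the smartphone workers whose profiles best match a task.
from math import sqrt

def cosine(a, b):
    keys = set(a) | set(b)
    dot = sum(a.get(k, 0.0) * b.get(k, 0.0) for k in keys)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def activate_workers(task_profile, workers, top_k=2):
    ranked = sorted(workers, key=lambda w: cosine(task_profile, w["skills"]),
                    reverse=True)
    return [w["id"] for w in ranked[:top_k]]  # push notifications go to these

workers = [{"id": "w1", "skills": {"botany": 0.9, "photography": 0.4}},
           {"id": "w2", "skills": {"cooking": 0.8}},
           {"id": "w3", "skills": {"botany": 0.5}}]
print(activate_workers({"botany": 1.0}, workers))
# ['w3', 'w1'] -- both botany profiles are activated, the cook is not
```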

ParkCrowd: Reliable Crowdsensing for Aggregation and Dissemination of Parking Space Information

Fengrui Shi, Di Wu, Dmitri I. Arkhipov, Qiang Liu, Amelia C. Regan, Julie A. McCann
2018 IEEE transactions on intelligent transportation systems (Print)  
To improve the reliability of the information being disseminated, we dynamically evaluate the knowledge of crowd workers based on the veracity of their answers to a series of location-dependent point of  ...  In addition, a joint probabilistic estimator is employed to infer parking spaces' future availability based on crowdsensed knowledge.  ...  the lack of an efficient approach for collecting this information in real-time.  ... 
doi:10.1109/tits.2018.2879036 fatcat:hma2zi6ahzadfdpo6najflk3gq
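
A simplified sketch of this reliability mechanism: track each worker's knowledge as a Beta distribution updated by whether their answers to location-dependent probe questions were correct, and fuse parking reports weighted by that reliability. This is a generic Bayesian-reliability sketch, not ParkCrowd's exact estimator.

```python
# Sketch: probe-based worker reliability and reliability-weighted fusion.
class Worker:
    def __init__(self):
        self.correct, self.wrong = 1, 1  # Beta(1, 1) prior

    def record_probe(self, was_correct):
        if was_correct:
            self.correct += 1
        else:
            self.wrong += 1

    @property
    def reliability(self):
        return self.correct / (self.correct + self.wrong)

def fuse_reports(reports):
    """reports: list of (worker, says_spot_free: bool) -> P(spot is free)."""
    weight_free = sum(w.reliability for w, free in reports if free)
    weight_all = sum(w.reliability for w, _ in reports)
    return weight_free / weight_all if weight_all else 0.5

a, b = Worker(), Worker()
for _ in range(8):
    a.record_probe(True)        # a keeps answering probes correctly
b.record_probe(False)           # b fails a probe
print(round(fuse_reports([(a, True), (b, False)]), 2))  # trusts a: ~0.73
```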

How To Grade a Test Without Knowing the Answers --- A Bayesian Graphical Model for Adaptive Crowdsourcing and Aptitude Testing [article]

Yoram Bachrach, Thore Graepel, Tom Minka
2012 arXiv   pre-print
We propose a new probabilistic graphical model that jointly models the difficulties of questions, the abilities of participants and the correct answers to questions in aptitude testing and crowdsourcing  ...  to be asked based on the previous responses.  ...  Figure 3: Estimates of skill levels for missing information regarding the correct answers to the questions. Figure 4: Effect of crowd size on correct responses inferred.  ... 
arXiv:1206.6386v1 fatcat:xkljbvejljflxkf7f37rwx67ba
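
The joint-inference idea can be illustrated with a compact EM-style iteration: alternate between inferring each question's answer by ability-weighted vote and re-estimating each participant's ability from agreement with the inferred answers. This is a simple stand-in, not the paper's full Bayesian graphical model (which also models per-question difficulty).

```python
# Sketch: grade a test without an answer key by joint iterative inference.
from collections import defaultdict

def grade(responses, iters=10):
    """responses: dict (participant, question) -> answer (assumed shape)."""
    people = {p for p, _ in responses}
    ability = {p: 0.5 for p in people}
    answers = {}
    for _ in range(iters):
        votes = defaultdict(lambda: defaultdict(float))
        for (p, q), a in responses.items():
            votes[q][a] += ability[p]                     # ability-weighted vote
        answers = {q: max(v, key=v.get) for q, v in votes.items()}
        for p in people:                                  # agreement -> ability
            asked = [(q, a) for (pp, q), a in responses.items() if pp == p]
            ability[p] = sum(a == answers[q] for q, a in asked) / len(asked)
    return answers, ability

resp = {("p1", "q1"): "A", ("p2", "q1"): "A", ("p3", "q1"): "B",
        ("p1", "q2"): "C", ("p2", "q2"): "C", ("p3", "q2"): "D"}
answers, ability = grade(resp)
print(answers)   # {'q1': 'A', 'q2': 'C'}
print(ability)   # p3's estimated ability drops to 0.0
```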

RegionSpeak: Quick Comprehensive Spatial Descriptions of Complex Images for Blind Users

Yu Zhong, Walter S. Lasecki, Erin Brady, Jeffrey P. Bigham
2015 Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems - CHI '15  
RegionSpeak can be used to explore the spatial layout of the regions identified. It also demonstrates broad potential for helping blind users to answer difficult spatial layout questions.  ...  Our RegionSpeak system addresses this problem by providing an accessible way for blind users to (i) combine visual information across multiple photographs via image stitching, (ii) quickly collect labels  ...  Our work attempts to improve on existing crowd-powered blind question answering approaches that use either single images or video, by introducing more efficient ways for users to capture visual content  ... 
doi:10.1145/2702123.2702437 dblp:conf/chi/ZhongLBB15 fatcat:qh55backk5gexnncdnaoqvnsfm
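
The image-stitching step mentioned in this entry can be sketched with OpenCV's high-level Stitcher API; RegionSpeak's actual pipeline is not specified in the snippet, and the file paths below are placeholders for overlapping photos a user would capture.

```python
# Sketch: combine visual information across multiple photographs by stitching.
# Requires opencv-python; "photo*.jpg" are placeholder paths.
import cv2

images = [cv2.imread(p) for p in ["photo1.jpg", "photo2.jpg", "photo3.jpg"]]
stitcher = cv2.Stitcher_create()
status, panorama = stitcher.stitch(images)
if status == cv2.Stitcher_OK:
    # The stitched panorama is what workers would label region by region.
    cv2.imwrite("stitched.jpg", panorama)
else:
    print("stitching failed; photos need more visual overlap")
```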

Crowdsourcing Similarity Judgments for Agreement Analysis in End-User Elicitation Studies

Abdullah X. Ali, Meredith Ringel Morris, Jacob O. Wobbrock
2018 The 31st Annual ACM Symposium on User Interface Software and Technology - UIST '18  
In this paper, we present Crowdsensus, a crowd-powered tool that enables researchers to efficiently analyze the results of elicitation studies using subjective human judgment and automatic clustering algorithms  ...  We used Crowdsensus to gather similarity judgments of these same 430 commands from 410 online crowd workers.  ...  To support efficient elicitation study analysis, we created Crowdsensus.  ... 
doi:10.1145/3242587.3242621 dblp:conf/uist/AliMW18 fatcat:7egjpwtywrdwfgjgqlmoyisovi
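
A rough sketch of the agreement-analysis pipeline: average crowd workers' pairwise similarity judgments of elicited commands into a distance matrix, then cluster it so similar proposals count as agreeing. Crowdsensus's actual algorithms are not specified in the snippet; this uses SciPy's standard hierarchical clustering with toy numbers.

```python
# Sketch: cluster elicited commands from crowd similarity judgments.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

commands = ["swipe left", "flick left", "shake phone"]
# Mean similarity in [0, 1] over many workers' judgments (toy numbers).
similarity = np.array([[1.0, 0.9, 0.1],
                       [0.9, 1.0, 0.2],
                       [0.1, 0.2, 1.0]])
distance = 1.0 - similarity
np.fill_diagonal(distance, 0.0)

clusters = fcluster(linkage(squareform(distance), method="average"),
                    t=0.5, criterion="distance")
for cmd, c in zip(commands, clusters):
    print(cmd, "-> cluster", c)  # the two leftward gestures group together
```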

Minimizing Efforts in Validating Crowd Answers

Nguyen Quoc Viet Hung, Duong Chi Thang, Matthias Weidlich, Karl Aberer
2015 Proceedings of the 2015 ACM SIGMOD International Conference on Management of Data - SIGMOD '15  
Our approach allows us to guide the expert's work by collecting input on the most problematic cases, thereby achieving a set of high quality answers even if the expert does not validate the complete answer  ...  Although various techniques for quality control have been proposed, a post-processing phase in which crowd answers are validated is still required.  ...  to collect their answers.  ... 
doi:10.1145/2723372.2723731 dblp:conf/sigmod/HungTWA15 fatcat:o53eepxhfnezhhbbgvenlzm46i

The Wisdom of Crowds: Methods of Human Judgement Aggregation [chapter]

Aidan Lyon, Eric Pacuit
2013 Handbook of Human Computation  
We call this process of getting the inputs out of your crowd elicitation. Your method of elicitation can be crucial for getting the most out of your crowd.  ...  Never before has it been so easy to get a crowd and leverage their collective wisdom for some task.  ... 
doi:10.1007/978-1-4614-8806-4_47 fatcat:oxtu7jnapvflha44vgcamctybq
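
As a small illustration of common judgment-aggregation methods a chapter like this surveys, here are three aggregators applied to the same elicited estimates; which one is appropriate depends on how the judgments were elicited. The numbers are invented for illustration.

```python
# Sketch: arithmetic mean vs. median vs. geometric mean aggregation.
from statistics import geometric_mean, mean, median

estimates = [250, 300, 310, 330, 900]  # crowd guesses, one wild outlier

print("mean:          ", round(mean(estimates)))            # 418, pulled by outlier
print("median:        ", round(median(estimates)))          # 310, robust
print("geometric mean:", round(geometric_mean(estimates)))  # 370
```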
Showing results 1 — 15 out of 12,830 results