2,798 Hits in 4.9 sec

University of Glasgow (qirdcsuog) at TREC Crowdsourcing 2011: TurkRank-Network-based Worker Ranking in Crowdsourcing

Stewart Whiting, Jesus A. Rodriguez Perez, Guido Zuccon, Teerapong Leelanupab, Joemon M. Jose
2011 Text Retrieval Conference  
The TurkRank score calculated for each worker is incorporated with a worker-weighted mean label aggregation.  ...  For TREC Crowdsourcing 2011 (Stage 2) we propose a network-based approach for assigning an indicative measure of worker trustworthiness in crowdsourced labelling tasks.  ...  A weighted mean is used to incorporate the worker TurkRank in label aggregation, thus emphasising label contributions from more trustworthy workers  ...
dblp:conf/trec/WhitingPZLJ11 fatcat:rz54b4fosbd7nk3e5vj4rqg4ya
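The worker-weighted mean aggregation this entry describes can be sketched generically. The sketch below assumes the per-worker TurkRank scores are already available as a hypothetical `trust` mapping; it is an illustration of the weighted-mean idea, not the authors' implementation:

```python
from collections import defaultdict

def weighted_label_aggregation(answers, trust):
    """Aggregate binary labels with a worker-weighted mean.

    answers: list of (worker_id, item_id, label) with label in {0, 1}
    trust:   dict mapping worker_id -> non-negative trust score
    Returns: dict item_id -> aggregated label in {0, 1}
    """
    score = defaultdict(float)   # trust-weighted sum of labels per item
    weight = defaultdict(float)  # total trust per item
    for worker, item, label in answers:
        w = trust.get(worker, 0.0)
        score[item] += w * label
        weight[item] += w
    # Threshold the weighted mean at 0.5 to produce a hard label.
    return {item: int(score[item] / weight[item] >= 0.5) for item in score}

answers = [("w1", "d1", 1), ("w2", "d1", 0), ("w3", "d1", 0)]
trust = {"w1": 0.9, "w2": 0.2, "w3": 0.2}  # hypothetical trust scores
print(weighted_label_aggregation(answers, trust))  # {'d1': 1}
```

Note that the single trusted worker outvotes the two untrusted ones, which is exactly the behaviour a plain majority vote cannot express.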

Getting by with a Little Help from the Crowd

Babak Loni, Jonathon Hare, Mihai Georgescu, Michael Riegler, Xiaofei Zhu, Mohamed Morchid, Richard Dufour, Martha Larson
2014 Proceedings of the 2014 International ACM Workshop on Crowdsourcing for Multimedia - CrowdMM '14  
This paper studies how crowdsourcing-based approaches to image tag validation can achieve parsimony in their use of human input from the crowd, in the form of votes collected from workers on a crowdsourcing  ...  In short, validation of interpretation-sensitive user tags for social images is possible, with "just a little help from the crowd."  ...  Indeed, with a loss of almost 2% in F1 for both labels, the task can be done by only one worker, resulting in lower costs for the crowdsourcing task.  ...
doi:10.1145/2660114.2660123 dblp:conf/mm/LoniHGRZMDL14 fatcat:bvfrmie2mfff7cj62wsd55vgu4

Exploiting Heterogeneous Graph Neural Networks with Latent Worker/Task Correlation Information for Label Aggregation in Crowdsourcing [article]

Hanlu Wu, Tengfei Ma, Lingfei Wu, Shouling Ji
2021 arXiv   pre-print
In this paper, we propose a novel framework based on graph neural networks for aggregating crowd labels.  ...  Crowdsourcing has attracted much attention as a convenient way to collect labels from non-expert workers instead of experts.  ...
arXiv:2010.13080v2 fatcat:zwepjyziyfakxmo5fwlg3wxjxy

Community-Based Bayesian Aggregation Models for Crowdsourcing

Matteo Venanzi, John Guiver, Gabriella Kazai, Pushmeet Kohli, Milad Shokouhi
2014 Proceedings of the 23rd international conference on World wide web - WWW '14  
This paper addresses the problem of extracting accurate labels from crowdsourced datasets, a key challenge in crowdsourcing.  ...  a group of workers with similar confusion matrices.  ...  ACKNOWLEDGMENTS The authors gratefully thank Tom Minka for the support and discussion about the model. Matteo Venanzi would also like to thank Oliver Parson for early discussions about this work.  ... 
doi:10.1145/2566486.2567989 dblp:conf/www/VenanziGKKS14 fatcat:fj4ol7n3z5gj5m562d4jlvhcbm
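The confusion-matrix idea these Bayesian aggregation models build on can be illustrated with a minimal Dawid-Skene-style EM loop. This is a generic textbook sketch, not the paper's community-based model (which additionally groups workers with similar confusion matrices):

```python
import numpy as np

def dawid_skene(labels, n_items, n_workers, n_classes, iters=20):
    """Minimal Dawid-Skene EM over (item, worker, class) label tuples.

    Assumes every item receives at least one label.
    Returns one inferred class index per item.
    """
    # Initialise per-item class posteriors from empirical vote shares.
    post = np.zeros((n_items, n_classes))
    for i, w, c in labels:
        post[i, c] += 1.0
    post /= post.sum(axis=1, keepdims=True)

    for _ in range(iters):
        # M-step: class priors and per-worker confusion matrices,
        # conf[w, true, observed], with a small smoothing constant.
        prior = post.mean(axis=0)
        conf = np.full((n_workers, n_classes, n_classes), 1e-6)
        for i, w, c in labels:
            conf[w, :, c] += post[i]
        conf /= conf.sum(axis=2, keepdims=True)
        # E-step: recompute posteriors from priors and confusions.
        post = np.tile(prior, (n_items, 1))
        for i, w, c in labels:
            post[i] *= conf[w, :, c]
        post /= post.sum(axis=1, keepdims=True)
    return post.argmax(axis=1)
```

The per-worker confusion matrices let a consistently wrong worker's votes be discounted, or even inverted, rather than merely outvoted as in majority voting.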

Computing Crowd Consensus with Partial Agreement

Nguyen Quoc Viet Hung, Huynh Huu Viet, Nguyen Thanh Tam, Matthias Weidlich, Hongzhi Yin, Xiaofang Zhou
2018 IEEE Transactions on Knowledge and Data Engineering  
Crowdsourcing has been widely established as a means to enable human computation at large scale, in particular for tasks that require manual labelling of large sets of data items.  ...  We also show how this model is instantiated for incremental learning, incorporating new answers from crowd workers as they arrive.  ...  Dependencies between labels (R3) are incorporated by clustering items in the answer aggregation process. Items in a cluster are assumed to be similar and are thus assigned the same set of labels.  ...
doi:10.1109/tkde.2017.2750683 fatcat:mn3qhe32ijabhjyrwzukzksd7i

OpenCrowd: A Human-AI Collaborative Approach for Finding Social Influencers via Open-Ended Answers Aggregation

Ines Arous, Jie Yang, Mourad Khayati, Philippe Cudré-Mauroux
2020 Proceedings of The Web Conference 2020  
To tackle those issues, we present OpenCrowd, a unified Bayesian framework that seamlessly incorporates machine learning and crowdsourcing for effectively finding social influencers.  ...  Using open-ended questions, crowdsourcing provides a cost-effective way to find a large number of social influencers in a short time.  ...  a crowdsourcing framework that considers the topical similarity of tasks based on their textual description for worker reliability inference.  ...
doi:10.1145/3366423.3380254 dblp:conf/www/ArousYKC20 fatcat:tjrdgeismjcehnbgmqgv355oxi

Semantic Annotation Aggregation with Conditional Crowdsourcing Models and Word Embeddings

Paul Felt, Eric K. Ringger, Kevin D. Seppi
2016 International Conference on Computational Linguistics  
In modern text annotation projects, crowdsourced annotations are often aggregated using item response models or by majority vote.  ...  However, suitable generative data models do not exist for many tasks, such as semantic labeling tasks.  ...  A fundamental tenet of crowdsourcing is that inexpert workers are, in aggregate, trustworthy.  ... 
dblp:conf/coling/FeltRS16 fatcat:7c3paoir6basfakbfi3l2qf5ne

Temporal-aware Language Representation Learning From Crowdsourced Labels [article]

Yang Hao, Xiao Zhai, Wenbiao Ding, Zitao Liu
2021 arXiv   pre-print
In this paper, we propose TACMA, a temporal-aware language representation learning heuristic for crowdsourced labels with multiple annotators.  ...  Learning effective language representations from crowdsourced labels is crucial for many real-world machine learning tasks.  ...  Furthermore, we incorporate the aggregated temporal-aware multi-worker confidence scores from Section 3.3 into the loss function to capture the inconsistency of crowdsourced labels.  ... 
arXiv:2107.07958v1 fatcat:zqqlkfg2enbbrgs667grm4fnty

Sentiment Analysis via Deep Hybrid Textual-Crowd Learning Model

Kamran Ghasedi Dizaji, Heng Huang
2018 Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence and the Thirtieth Innovative Applications of Artificial Intelligence Conference  
However, the standard crowdsourcing aggregation models are incompetent when the number of crowd labels per worker is not sufficient to train parameters, or when it is not feasible to collect labels for  ...  In this paper, we propose a novel hybrid model to exploit both crowd and text data for sentiment analysis, consisting of a generative crowdsourcing aggregation model and a deep sentimental autoencoder.  ...  Considering that M crowd workers are hired in the crowdsourcing task, our generative crowd aggregation model has the following form.  ... 
doi:10.1609/aaai.v32i1.11515 fatcat:wxoisto65bdu3bdo5zgzkinlk4

Design Patterns for Hybrid Algorithmic-Crowdsourcing Workflows

Christoph Lofi, Kinda El Maarry
2014 2014 IEEE 16th Conference on Business Informatics  
This is especially true for workflows that transparently combine algorithmic heuristics and dynamically crowdsourced tasks that are performed by human workers, and which promise to solve even more complex  ...  Crowdsourcing has been shown to be a powerful technique for overcoming many challenges in data and information processing where current state-of-the-art algorithms are still struggling.  ...  In the Virtual Worker pattern, the judgments of both humans and heuristics are aggregated into a final judgment (i.e. heuristics can transparently replace workers in a crowdsourcing aggregation process  ...
doi:10.1109/cbi.2014.16 dblp:conf/wecwis/LofiM14 fatcat:2j3oxi36gfcejo3linc5h5z55a

Finding Patterns in Noisy Crowds: Regression-based Annotation Aggregation for Crowdsourced Data

Natalie Parde, Rodney Nielsen
2017 Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing  
We present an aggregation approach that learns a regression model from crowdsourced annotations to predict aggregated labels for instances that have no expert adjudications.  ...  However, crowdsourced labels are often noisier than expert-annotated data, making it difficult to aggregate them meaningfully.  ...  Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.  ... 
doi:10.18653/v1/d17-1204 dblp:conf/emnlp/PardeN17 fatcat:nqqveah2y5ferkjk6m4xlqhy5i

Aggregating Crowdsourced Image Segmentations

Doris Jung Lin Lee, Akash Das Sarma, Aditya G. Parameswaran
2018 AAAI Conference on Human Computation & Crowdsourcing  
In this paper, we evaluate multiple crowdsourced algorithms for the image segmentation problem, including novel worker-aggregation-based methods and retrieval-based methods from prior work.  ...  We characterize the different types of worker errors observed in crowdsourced segmentation, and present a clustering algorithm as a preprocessing step that is able to capture and eliminate errors arising  ...  Experimental Evaluation Dataset Description We collected crowdsourced segmentations from Amazon Mechanical Turk; each HIT consisted of one segmentation task for a specific pre-labeled object in an image  ... 
dblp:conf/hcomp/LeeSP18 fatcat:l57m4lw3ejdbtnyyejwjiz6tqm
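A worker-aggregation baseline for segmentation can be sketched as a pixel-wise majority vote over binary masks. This is an illustrative baseline only, not one of the specific algorithms the paper evaluates:

```python
import numpy as np

def aggregate_masks(masks, threshold=0.5):
    """Pixel-wise majority vote over binary segmentation masks.

    masks: sequence of equally shaped binary (0/1) masks, one per worker.
    A pixel is foreground if at least `threshold` of workers marked it.
    """
    stack = np.stack(masks).astype(float)  # (n_workers, H, W)
    return (stack.mean(axis=0) >= threshold).astype(np.uint8)
```

Raising `threshold` toward 1.0 yields an intersection-like mask (high precision), while lowering it toward 0 yields a union-like mask (high recall), which is a common way to trade off worker over- and under-segmentation errors.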

Exploiting Worker Correlation for Label Aggregation in Crowdsourcing

Yuan Li, Benjamin I. P. Rubinstein, Trevor Cohn
2019 International Conference on Machine Learning  
From collected noisy worker labels, aggregation models that incorporate worker reliability parameters aim to infer a latent true annotation.  ...  In this paper, we argue that existing crowdsourcing approaches do not sufficiently model worker correlations observed in practical settings; we propose in response an enhanced Bayesian classifier combination  ...  Among models that purely rely on crowdsourced labels to infer the truth, the only one incorporating worker correlation, dBCC (Kim & Ghahramani, 2012) , has limitations that disqualify it for crowdsourcing  ... 
dblp:conf/icml/LiRC19 fatcat:whjf5yvxrrcotllssccn2bvepm

Noise or additional information? Leveraging crowdsource annotation item agreement for natural language tasks

Emily Jamison, Iryna Gurevych
2015 Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing  
In order to reduce noise in training data, most natural language crowdsourcing annotation tasks gather redundant labels and aggregate them into an integrated label, which is provided to the classifier.  ...  For five natural language tasks, we pass item agreement on to the task classifier via soft labeling and low-agreement filtering of the training dataset.  ...  I/82806, and by the Center for Advanced Security Research (www.cased.de).  ... 
doi:10.18653/v1/d15-1035 dblp:conf/emnlp/JamisonG15 fatcat:bqieevviu5darg45pw7dlx4agy
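Item agreement as a soft label, plus low-agreement filtering, can be sketched generically; `soft_labels` and its `min_agreement` parameter are illustrative names for this sketch, not the authors' code:

```python
from collections import Counter

def soft_labels(item_labels, min_agreement=0.0):
    """Turn redundant crowd labels into soft labels with an agreement filter.

    item_labels: dict item_id -> list of labels from different workers
    Returns: dict item_id -> (majority_label, agreement), keeping only
    items whose agreement (fraction voting for the majority label) is
    at least `min_agreement`.
    """
    out = {}
    for item, labels in item_labels.items():
        counts = Counter(labels)
        majority, votes = counts.most_common(1)[0]
        agreement = votes / len(labels)
        if agreement >= min_agreement:
            out[item] = (majority, agreement)
    return out

votes = {"s1": ["pos", "pos", "pos"], "s2": ["pos", "neg", "neg"]}
print(soft_labels(votes, min_agreement=0.7))  # {'s1': ('pos', 1.0)}
```

The agreement value can be passed to a classifier as an instance weight (soft labeling), while the filter drops contentious items from training entirely, mirroring the two strategies this entry compares.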

Variational Bayesian Inference for Crowdsourcing Predictions [article]

Desmond Cai, Duc Thien Nguyen, Shiau Hong Lim, Laura Wynter
2020 arXiv   pre-print
In essence, this involves the use of crowdsourcing for function estimation.  ...  In particular, we develop a variational Bayesian technique for two different worker noise models - one that assumes workers' noises are independent and the other that assumes workers' noises have a latent  ...  In this case, the crowd workers are technical experts and can possess similar biases which influence their annotation errors.  ...
arXiv:2006.00778v2 fatcat:s2iblnbq65azhllqokok75ujpe
Showing results 1–15 of 2,798