To Re(label), or Not to Re(label)

C Lin, D Weld - Proceedings of the AAAI Conference on Human Computation and Crowdsourcing, 2014 - ojs.aaai.org
Abstract
One of the most popular uses of crowdsourcing is to provide training data for supervised machine learning algorithms. Since human annotators often make errors, requesters commonly ask multiple workers to label each example. But is this strategy always the most cost-effective use of crowdsourced workers? We argue "No": often classifiers can achieve higher accuracies when trained with noisy "unilabeled" data. However, in some cases relabeling is extremely important. We discuss three factors that may make relabeling an effective strategy: classifier expressiveness, worker accuracy, and budget.
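The trade-off the abstract describes can be made concrete with a small sketch: under a fixed labeling budget, asking k workers per example (with majority-vote aggregation) buys cleaner labels but fewer examples, while unilabeling (k = 1) buys more examples at lower label accuracy. The budget value and the specific worker accuracies below are illustrative assumptions, not figures from the paper.

```python
import math

def majority_accuracy(p, k):
    """Probability that a majority vote of k independent workers,
    each correct with probability p, recovers the true label (k odd)."""
    return sum(math.comb(k, i) * p**i * (1 - p)**(k - i)
               for i in range(k // 2 + 1, k + 1))

# Hypothetical budget: total number of labels we can afford to buy.
budget = 3000

for p in (0.55, 0.70, 0.90):          # assumed worker accuracies
    for k in (1, 3, 5):               # labels purchased per example
        n_examples = budget // k      # dataset size this allocation buys
        label_acc = majority_accuracy(p, k)
        print(f"worker acc {p:.2f}, k={k}: "
              f"{n_examples} examples at label accuracy {label_acc:.3f}")
```

Running this shows the tension the paper studies: with highly accurate workers, relabeling wastes budget (0.90 workers already yield clean unilabeled data), while with mediocre workers (0.55) even k = 5 barely improves label quality, so whether the smaller, cleaner dataset trains a better classifier depends on the remaining factor the abstract names, classifier expressiveness.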