Journal ArticleDOI

CDAS: a crowdsourcing data analytics system

TL;DR
A quality-sensitive answering model is introduced, which guides the crowdsourcing query engine in the design and processing of the corresponding crowdsourcing jobs and effectively reduces the processing cost while maintaining the required query answer quality.
Abstract
Some complex problems, such as image tagging and natural language processing, are very challenging for computers, where even state-of-the-art technology is as yet unable to provide satisfactory accuracy. Therefore, rather than relying solely on developing new and better algorithms to handle such tasks, we look to the crowdsourcing solution -- employing human participation -- to make good the shortfall in current technology. Crowdsourcing is a good supplement to many computer tasks. A complex job may be divided into computer-oriented tasks and human-oriented tasks, which are then assigned to machines and humans respectively. To leverage the power of crowdsourcing, we design and implement a Crowdsourcing Data Analytics System, CDAS. CDAS is a framework designed to support the deployment of various crowdsourcing applications. The core part of CDAS is a quality-sensitive answering model, which guides the crowdsourcing engine to process and monitor the human tasks. In this paper, we introduce the principles of our quality-sensitive model. To satisfy the user's required accuracy, the model guides the crowdsourcing query engine in the design and processing of the corresponding crowdsourcing jobs. It provides an estimated accuracy for each generated result based on the human workers' historical performance. When verifying the quality of the result, the model employs an online strategy to reduce waiting time. To show the effectiveness of the model, we implement and deploy two analytics jobs on CDAS: a Twitter sentiment analytics job and an image tagging job. We use real Twitter and Flickr data as our queries respectively. We compare our approaches with state-of-the-art classification and image annotation techniques. The results show that the human-assisted methods can indeed achieve a much higher accuracy. By embedding the quality-sensitive model into the crowdsourcing query engine, we effectively reduce the processing cost while maintaining the required query answer quality.
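To make the quality-sensitive model concrete, the sketch below illustrates the two mechanisms the abstract describes: estimating a result's accuracy from workers' historical performance, and an online stopping rule that avoids waiting for a fixed quorum of answers. It is a minimal reconstruction, not CDAS's actual implementation: the function names are hypothetical, and it assumes binary tasks, independent workers, and known historical accuracies strictly between 0 and 1.

```python
import math

def answer_confidence(votes, accuracies):
    """Estimated probability that the true answer is 1 for a binary task,
    treating each worker as an independent noisy channel whose historical
    accuracy is known (a simplifying assumption)."""
    log_odds = 0.0  # uniform prior over the two possible answers
    for vote, acc in zip(votes, accuracies):
        # A vote for 1 multiplies the odds by acc/(1-acc); a vote for 0 divides.
        step = math.log(acc / (1.0 - acc))
        log_odds += step if vote == 1 else -step
    return 1.0 / (1.0 + math.exp(-log_odds))

def answer_online(worker_stream, target=0.95, max_workers=20):
    """Online stopping rule: return as soon as either answer reaches the
    target confidence instead of waiting for a fixed number of workers.

    worker_stream -- iterator of (vote, historical_accuracy) pairs
    Returns (answer, estimated_confidence, workers_used).
    """
    votes, accs, p = [], [], 0.5
    for vote, acc in worker_stream:
        votes.append(vote)
        accs.append(acc)
        p = answer_confidence(votes, accs)
        if max(p, 1.0 - p) >= target or len(votes) >= max_workers:
            break
    return (1 if p >= 0.5 else 0), max(p, 1.0 - p), len(votes)
```

For example, two agreeing votes from workers with historical accuracies 0.9 and 0.8 already push the estimated confidence to about 0.97, so answer_online returns after two answers rather than waiting for a larger fixed quorum; a result whose votes conflict keeps collecting answers until the target is met or the worker budget runs out.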



Citations
Journal ArticleDOI

Truth inference in crowdsourcing: is the problem solved?

TL;DR: The authors argue that the truth inference problem is not fully solved; they identify the limitations of existing algorithms and point out promising research directions.
Journal ArticleDOI

Open challenges for data stream mining research

TL;DR: This article presents a discussion on eight open challenges for data stream mining, which cover the full cycle of knowledge discovery and involve such problems as protecting data privacy, dealing with legacy systems, handling incomplete and delayed information, analysis of complex data, and evaluation of stream mining algorithms.
Proceedings ArticleDOI

Corleone: hands-off crowdsourcing for entity matching

TL;DR: Corleone, a hands-off crowdsourcing (HOC) solution for entity matching (EM), uses the crowd in all major steps of the EM process; the paper also discusses the implications of this work for executing crowdsourced RDBMS joins, cleaning learning models, and soliciting complex information types from crowd workers.
Journal ArticleDOI

Crowdsourced Data Management: A Survey

TL;DR: This paper surveys and synthesizes a wide spectrum of existing studies on crowdsourced data management and outlines the key factors that must be considered to improve it.
Proceedings ArticleDOI

Leveraging transitive relations for crowdsourced joins

TL;DR: Zhang et al. propose a hybrid framework that combines transitive relations with crowdsourced labeling, aiming to crowdsource the minimum number of pairs needed to label all candidate pairs.
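The idea in this TL;DR lends itself to a short sketch: deduce a pair's label from transitivity whenever possible (match(a,b) and match(b,c) imply match(a,c); match(a,b) and non-match(b,c) imply non-match(a,c)) and send only the undeducible pairs to the crowd. This is an illustrative reconstruction under simplifying assumptions (consistent crowd answers, a fixed labeling order), not the paper's algorithm; how many pairs can be deduced also depends on the order in which pairs are asked, which this sketch ignores.

```python
class TransitiveLabeler:
    """Track crowdsourced match/non-match answers and deduce implied labels."""

    def __init__(self):
        self.parent = {}       # union-find forest over "match" clusters
        self.nonmatch = set()  # frozensets of two cluster roots known to differ

    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def deduce(self, a, b):
        """True/False if transitivity implies a label, None if the crowd must decide."""
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return True
        return False if frozenset((ra, rb)) in self.nonmatch else None

    def record(self, a, b, is_match):
        """Fold a crowdsourced answer into the deduced knowledge."""
        ra, rb = self.find(a), self.find(b)
        if is_match and ra != rb:
            self.parent[rb] = ra
            # Non-match constraints must now point at the merged cluster's root.
            self.nonmatch = {frozenset(self.find(x) for x in c)
                             for c in self.nonmatch}
        elif not is_match:
            self.nonmatch.add(frozenset((ra, rb)))

def label_all(candidate_pairs, ask_crowd):
    """Label every candidate pair, crowdsourcing only the undeducible ones."""
    tl, labels, asked = TransitiveLabeler(), {}, 0
    for a, b in candidate_pairs:
        implied = tl.deduce(a, b)
        if implied is None:
            implied = ask_crowd(a, b)  # e.g. a majority vote over workers
            tl.record(a, b, implied)
            asked += 1
        labels[(a, b)] = implied
    return labels, asked
```

With candidate pairs (a,b), (b,c), (a,c) and crowd answers match(a,b) and match(b,c), label_all asks only two questions: match(a,c) is deduced for free.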
References
Book

Statistical Methods for Research Workers

R. A. Fisher
TL;DR: The prime object of this book is to put into the hands of research workers, and especially of biologists, the means of applying statistical tests accurately to numerical data accumulated in their own laboratories or available in the literature.
Proceedings ArticleDOI

Crowdsourcing user studies with Mechanical Turk

TL;DR: Although micro-task markets have great potential for rapidly collecting user measurements at low costs, it is found that special care is needed in formulating tasks in order to harness the capabilities of the approach.
Proceedings ArticleDOI

Quality management on Amazon Mechanical Turk

TL;DR: This work presents algorithms that improve the existing state-of-the-art techniques, enabling the separation of bias and error, and illustrates how to incorporate cost-sensitive classification errors in the overall framework and how to seamlessly integrate unsupervised and supervised techniques for inferring the quality of the workers.
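The separation of bias from error that this TL;DR mentions is commonly achieved by giving each worker a full confusion matrix rather than a single accuracy number, estimated jointly with the true labels via expectation-maximization in the spirit of Dawid and Skene. The sketch below is an illustrative binary-label version under a uniform class prior, not the paper's exact algorithm.

```python
import math

def em_worker_quality(answers, n_iter=50, smoothing=1.0):
    """Jointly estimate each task's true binary label and a 2x2 confusion
    matrix per worker (Dawid-Skene-style EM, uniform class prior).

    answers -- iterable of (worker, task, label) triples with label in {0, 1}
    Returns (post, conf): post[t] = P(true label of t is 1) and
    conf[w][true][given] = P(worker w reports `given` | true label is `true`).
    """
    answers = list(answers)
    tasks = {t for _, t, _ in answers}
    workers = {w for w, _, _ in answers}
    # A mildly better-than-chance start breaks the labeling symmetry.
    conf = {w: [[0.7, 0.3], [0.3, 0.7]] for w in workers}
    post = {}

    for _ in range(n_iter):
        # E-step: posterior over each task's true label under current matrices.
        loglik = {t: [0.0, 0.0] for t in tasks}
        for w, t, given in answers:
            for true in (0, 1):
                loglik[t][true] += math.log(conf[w][true][given])
        for t in tasks:
            m = max(loglik[t])
            p0, p1 = (math.exp(v - m) for v in loglik[t])
            post[t] = p1 / (p0 + p1)

        # M-step: re-estimate each worker's confusion matrix from soft labels;
        # Laplace smoothing keeps every probability strictly inside (0, 1).
        counts = {w: [[smoothing, smoothing], [smoothing, smoothing]]
                  for w in workers}
        for w, t, given in answers:
            counts[w][1][given] += post[t]
            counts[w][0][given] += 1.0 - post[t]
        for w in workers:
            for true in (0, 1):
                total = sum(counts[w][true])
                conf[w][true] = [c / total for c in counts[w][true]]

    return post, conf
```

The confusion matrix is what makes bias visible: a worker estimated near [[0.95, 0.05], [0.6, 0.4]] systematically flips true-1 tasks (a recoverable bias), whereas one near [[0.5, 0.5], [0.5, 0.5]] contributes pure noise, a distinction a single accuracy number cannot express.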
Proceedings Article

Modeling Public Mood and Emotion: Twitter Sentiment and Socio-Economic Phenomena

TL;DR: A sentiment analysis of all public tweets broadcast by Twitter users between August 1 and December 20, 2008 finds that events in the social, political, cultural, and economic sphere have a significant, immediate, and highly specific effect on the various dimensions of public mood, suggesting that large-scale analyses of mood can provide a solid platform to model collective emotive trends in terms of their predictive value with regard to existing social as well as economic indicators.