
Crowdsourcing

About: Crowdsourcing is a research topic. Over its lifetime, 12,889 publications have been published within this topic, receiving 230,638 citations.


Papers
Proceedings ArticleDOI
11 Feb 2012
TL;DR: This paper investigates whether timely, task-specific feedback helps crowd workers on micro-task platforms learn, persevere, and produce better results, and discusses interaction and infrastructure approaches for integrating real-time assessment into online work.
Abstract: Micro-task platforms provide massively parallel, on-demand labor. However, it can be difficult to reliably achieve high-quality work because online workers may behave irresponsibly, misunderstand the task, or lack necessary skills. This paper investigates whether timely, task-specific feedback helps crowd workers learn, persevere, and produce better results. We investigate this question through Shepherd, a feedback system for crowdsourced work. In a between-subjects study with three conditions, crowd workers wrote consumer reviews for six products they own. Participants in the None condition received no immediate feedback, consistent with most current crowdsourcing practices. Participants in the Self-assessment condition judged their own work. Participants in the External assessment condition received expert feedback. Self-assessment alone yielded better overall work than the None condition and helped workers improve over time. External assessment also yielded these benefits. Participants who received external assessment also revised their work more. We conclude by discussing interaction and infrastructure approaches for integrating real-time assessment into online work.

329 citations

Journal ArticleDOI
Paul Whitla1
TL;DR: In this paper, the authors examined how firms are utilising crowdsourcing for the completion of marketing-related tasks, concentrating on the three broad areas of product development, advertising and promotion, and marketing research.
Abstract: Crowdsourcing is a newly developed term which refers to the process of outsourcing of activities by a firm to an online community or crowd in the form of an ‘open call’. Any member of the crowd can then complete an assigned task and be paid for their efforts. Although this form of labour organisation was pioneered in the computing sector, businesses have started to use ‘crowdsourcing’ for a diverse range of tasks that they find can be better completed by members of a crowd rather than by their own employees. This paper examines how firms are utilising crowdsourcing for the completion of marketing-related tasks, concentrating on the three broad areas of product development, advertising and promotion, and marketing research. It is found that some firms are using crowdsourcing to locate large numbers of individuals willing to complete largely menial repetitive tasks for limited financial compensation. Other firms utilise crowdsourcing to solicit solutions to particular tasks from a crowd of diverse and/or expert opinions. Conclusions are drawn regarding the advantages and the limitations of crowdsourcing and the potential for the future use of crowdsourcing in additional marketing-related applications. Keywords: Crowdsourcing, Outsourcing, Wikinomics

327 citations

Proceedings ArticleDOI
04 Feb 2013
TL;DR: This work proposes a new model to predict a gold-standard ranking that hinges on combining pairwise comparisons via crowdsourcing and formalizes this as an active learning strategy that incorporates an exploration-exploitation tradeoff and implements it using an efficient online Bayesian updating scheme.
Abstract: Inferring rankings over elements of a set of objects, such as documents or images, is a key learning problem for such important applications as Web search and recommender systems. Crowdsourcing services provide an inexpensive and efficient means to acquire preferences over objects via labeling by sets of annotators. We propose a new model to predict a gold-standard ranking that hinges on combining pairwise comparisons via crowdsourcing. In contrast to traditional ranking aggregation methods, the approach learns about and folds into consideration the quality of contributions of each annotator. In addition, we minimize the cost of assessment by introducing a generalization of the traditional active learning scenario to jointly select the annotator and pair to assess while taking into account the annotator quality, the uncertainty over ordering of the pair, and the current model uncertainty. We formalize this as an active learning strategy that incorporates an exploration-exploitation tradeoff and implement it using an efficient online Bayesian updating scheme. Using simulated and real-world data, we demonstrate that the active learning strategy achieves significant reductions in labeling cost while maintaining accuracy.

326 citations
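The abstract above describes combining pairwise comparisons into a ranking while actively choosing which pair (and annotator) to query next. The paper's actual model is Bayesian with online updating; the following is only a minimal stdlib sketch of the underlying idea, using a Bradley-Terry score model fit by gradient steps and a naive uncertainty rule (query the pair with the smallest score gap). All names, the learning rate, and the noise-free "annotator" oracle are illustrative assumptions, not the authors' implementation.

```python
import math

def btl_prob(s_i: float, s_j: float) -> float:
    # Bradley-Terry model: P(item i beats item j) given latent scores.
    return 1.0 / (1.0 + math.exp(-(s_i - s_j)))

def update(scores: dict, i, j, winner, lr: float = 0.1) -> None:
    # One gradient step on the Bradley-Terry log-likelihood for a
    # single observed comparison (winner is either i or j).
    p = btl_prob(scores[i], scores[j])
    y = 1.0 if winner == i else 0.0
    scores[i] += lr * (y - p)
    scores[j] -= lr * (y - p)

def most_uncertain_pair(scores: dict):
    # Active selection: query the pair whose outcome is least certain,
    # i.e. whose latent scores are closest -- a crude stand-in for the
    # paper's joint annotator/pair exploration-exploitation criterion.
    items = sorted(scores)
    best, best_gap = None, float("inf")
    for a in range(len(items)):
        for b in range(a + 1, len(items)):
            gap = abs(scores[items[a]] - scores[items[b]])
            if gap < best_gap:
                best, best_gap = (items[a], items[b]), gap
    return best

# Illustrative run: a noise-free "annotator" that always prefers the
# lower-numbered item, so the true ranking is 0 > 1 > 2.
scores = {0: 0.0, 1: 0.0, 2: 0.0}
for _ in range(100):
    i, j = most_uncertain_pair(scores)
    update(scores, i, j, winner=min(i, j))

ranking = sorted(scores, key=scores.get, reverse=True)
```

In the real system each annotator additionally has a learned quality parameter that discounts unreliable comparisons; this sketch treats the single oracle as perfectly reliable.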

BookDOI
09 Aug 2012
TL;DR: The 20 chapters in this book explore both the theories and applications of crowdsourcing for geographic knowledge production, organized into three sections: VGI, Public Participation, and Citizen Science; Geographic Knowledge Production and Place Inference; and Emerging Applications and New Challenges.
Abstract: The phenomenon of volunteered geographic information is part of a profound transformation in how geographic data, information, and knowledge are produced and circulated. By situating volunteered geographic information (VGI) in the context of the big-data deluge and data-intensive inquiry, the 20 chapters in this book explore both the theories and applications of crowdsourcing for geographic knowledge production, with three sections focusing on 1) VGI, Public Participation, and Citizen Science; 2) Geographic Knowledge Production and Place Inference; and 3) Emerging Applications and New Challenges. This book argues that future progress in VGI research depends in large part on building strong linkages with diverse geographic scholarship. Contributors to this volume situate VGI research in geography's core concerns with space and place, and offer several ways of addressing persistent challenges of quality assurance in VGI. This book positions VGI as part of a shift toward hybrid epistemologies, and potentially a fourth paradigm of data-intensive inquiry across the sciences. It also considers the implications of VGI and the exaflood for further time-space compression and new forms and degrees of digital inequality, the renewed importance of geography, and the role of crowdsourcing for geographic knowledge production.

326 citations

Journal ArticleDOI
TL;DR: The purpose of the research is to establish if an automatic discovery process of relevant and credible news events can be achieved and to focus on the analysis of information credibility on Twitter.
Abstract: Purpose – Twitter is a popular microblogging service which has proven, in recent years, its potential for propagating news and information about developing events. The purpose of this paper is to focus on the analysis of information credibility on Twitter, and to establish whether an automatic discovery process of relevant and credible news events can be achieved. Design/methodology/approach – The paper follows a supervised learning approach for the task of automatic classification of credible news events. A first classifier decides whether an information cascade corresponds to a newsworthy event; a second classifier then decides whether this cascade can be considered credible or not. The paper undertakes this effort by training on a significant amount of labeled data, obtained using crowdsourcing tools. The paper validates these classifiers under two settings: first, on a sample of automatically detected Twitter "trends" in English, and second, by testing how well this model transfers to...

319 citations
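The two-stage design described in the abstract above (first filter cascades down to newsworthy events, then label the survivors credible or not) can be sketched generically. The class name, features, and thresholds below are hypothetical placeholders, not the paper's trained classifiers or feature set; the real system learns both stages from crowdsourced labels.

```python
from dataclasses import dataclass
from typing import Callable, Dict

Cascade = Dict[str, float]  # feature vector for one information cascade

@dataclass
class TwoStageClassifier:
    # Stage 1 decides whether a cascade is a newsworthy event at all;
    # stage 2 runs only on the survivors and decides credibility.
    is_newsworthy: Callable[[Cascade], bool]
    is_credible: Callable[[Cascade], bool]

    def classify(self, cascade: Cascade) -> str:
        if not self.is_newsworthy(cascade):
            return "not news"
        return "credible" if self.is_credible(cascade) else "not credible"

# Hypothetical hand-set rules standing in for the trained classifiers.
model = TwoStageClassifier(
    is_newsworthy=lambda c: c["n_tweets"] >= 100,  # enough volume
    is_credible=lambda c: c["frac_urls"] > 0.3,    # cites external sources
)

labels = [
    model.classify({"n_tweets": 12,  "frac_urls": 0.9}),  # too small
    model.classify({"n_tweets": 500, "frac_urls": 0.6}),  # passes both
    model.classify({"n_tweets": 500, "frac_urls": 0.1}),  # news, dubious
]
```

The cascade structure keeps the credibility model from ever seeing non-news chatter, which is why the paper can train each stage on its own crowdsourced label set.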


Network Information
Related Topics (5)
- Social network: 42.9K papers, 1.5M citations (87% related)
- User interface: 85.4K papers, 1.7M citations (86% related)
- Deep learning: 79.8K papers, 2.1M citations (85% related)
- Cluster analysis: 146.5K papers, 2.9M citations (85% related)
- The Internet: 213.2K papers, 3.8M citations (85% related)
Performance Metrics
No. of papers in the topic in previous years

Year    Papers
2023    637
2022    1,420
2021    996
2020    1,250
2019    1,341
2018    1,396