
Crowdsourcing

About: Crowdsourcing is a research topic. Over its lifetime, 12,889 publications have been published within this topic, receiving 230,638 citations.


Papers
Journal ArticleDOI
09 Jan 2012
TL;DR: The 2nd SIGIR Workshop on Crowdsourcing for Information Retrieval was held on July 28, 2011 in Beijing, China, in conjunction with the 34th Annual ACM SIGIR Conference to disseminate recent advances in theory, empirical methods, and novel applications of crowdsourcing for information retrieval.
Abstract: The 2nd SIGIR Workshop on Crowdsourcing for Information Retrieval (CIR 2011) was held on July 28, 2011 in Beijing, China, in conjunction with the 34th Annual ACM SIGIR Conference. The workshop brought together researchers and practitioners to disseminate recent advances in theory, empirical methods, and novel applications of crowdsourcing for information retrieval (IR). The workshop program included three invited talks, a panel discussion entitled Beyond the Lab: State-of-the-Art and Open Challenges in Practical Crowdsourcing, and presentations of nine refereed research papers and one demonstration paper. A Best Paper Award, sponsored by Microsoft Bing, was given to Jun Wang and Bei Yu for their paper entitled Labeling Images with Queries: A Recall-based Image Retrieval Game Approach. A Crowdsourcing Challenge contest, sponsored by CrowdFlower, was also announced prior to the workshop. The contest offered both seed funding and advanced technical support for the winner to use CrowdFlower's services for innovative work. Workshop organizers selected Mark Smucker as the winner based on his proposal entitled The Crowd vs. the Lab: A Comparison of Crowd-Sourced and University Laboratory Participant Behavior. Proceedings of the workshop are available online [15].

60 citations

Journal ArticleDOI
TL;DR: The results show that the crowdsourced classification of remotely sensed imagery is able to generate geographic information about human settlements with a high level of quality and makes clear the different sophistication levels of tasks that can be performed by volunteers and reveals some factors that may have an impact on their performance.
Abstract: In the past few years, volunteers have produced geographic information of different kinds, using a variety of crowdsourcing platforms, within a broad range of contexts. However, there is still a lack of clarity about the specific types of tasks that volunteers can perform to derive geographic information from remotely sensed imagery, and how the quality of the produced information can be assessed for particular task types. To fill this gap, we analyse the existing literature and propose a typology of tasks in geographic information crowdsourcing, which distinguishes between classification, digitisation and conflation tasks. We then present a case study related to the “Missing Maps” project, which uses crowdsourced classification to support humanitarian aid. We use our typology to distinguish between the different types of crowdsourced tasks in the project and choose classification tasks related to identifying roads and settlements for an evaluation of the crowdsourced classification. This evaluation shows that the volunteers achieved a satisfactory overall performance (accuracy: 89%; sensitivity: 73%; and precision: 89%). We also analyse different factors that could influence performance, concluding that volunteers were more likely to misclassify tasks involving small objects. Furthermore, agreement among volunteers was shown to be a very good predictor of the reliability of crowdsourced classification: tasks with the highest agreement level were 41 times more likely to be classified correctly by volunteers. The results thus show that the crowdsourced classification of remotely sensed imagery can generate high-quality geographic information about human settlements. This study also makes clear the different sophistication levels of tasks that volunteers can perform and reveals some factors that may affect their performance.
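The abstract reports three standard binary-classification metrics (accuracy, sensitivity, precision). As a minimal sketch of how such metrics are computed from a confusion matrix, the snippet below uses hypothetical counts chosen only for illustration; the study's actual tallies are not given in this excerpt.

```python
def classification_metrics(tp, fp, fn, tn):
    """Return (accuracy, sensitivity, precision) for binary labels.

    tp/fp/fn/tn are the confusion-matrix counts: true positives,
    false positives, false negatives, and true negatives.
    """
    accuracy = (tp + tn) / (tp + fp + fn + tn)  # share of all tasks classified correctly
    sensitivity = tp / (tp + fn)                # recall: share of true positives found
    precision = tp / (tp + fp)                  # share of positive calls that are correct
    return accuracy, sensitivity, precision

# Hypothetical tally of crowdsourced classifications against reference data.
acc, sens, prec = classification_metrics(tp=73, fp=9, fn=27, tn=218)
print(f"accuracy={acc:.2f} sensitivity={sens:.2f} precision={prec:.2f}")
```

These illustrative counts roughly reproduce the percentages quoted above (accuracy 0.89, sensitivity 0.73, precision 0.89), which shows how the three metrics can diverge: sensitivity drops when volunteers miss true settlements (false negatives), even while precision stays high.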

59 citations

Journal ArticleDOI
TL;DR: The technical framework of Wheelmap, a crowdsourcing platform where volunteers contribute information about wheelchair-accessible places, and information on how it could be used in projects dealing with accessibility and/or multimodal transportation are presented.
Abstract: Crowdsourcing (geo-) information and participatory GIS are among the current hot topics in research and industry. Various projects are implementing participatory sensing concepts within their workflow in order to benefit from the power of volunteers, and improve their product quality and efficiency. Wheelmap is a crowdsourcing platform where volunteers contribute information about wheelchair-accessible places. This article presents information about the technical framework of Wheelmap, and information on how it could be used in projects dealing with accessibility and/or multimodal transportation.

59 citations

Proceedings Article
28 Jun 2013
TL;DR: Gamification can increase workers’ motivation overall, but the combination of motivational features also matters: gamified social achievement is the best-performing design over a longer period of time.
Abstract: This paper examines the relationship between motivational design and its longitudinal effects on crowdsourcing systems. In the context of a company internal web site that crowdsources the identification of Twitter accounts owned by company employees, we designed and investigated the effects of various motivational features including individual / social achievements and gamification. Our 6-month experiment with 437 users allowed us to compare the features in terms of both quantity and quality of the work produced by participants over time. While we found that gamification can increase workers’ motivation overall, the combination of motivational features also matters. Specifically, gamified social achievement is the best performing design over a longer period of time. Mixing individual and social achievements turns out to be less effective and can even encourage users to game the system.

59 citations


Network Information
Related Topics (5)
- Social network: 42.9K papers, 1.5M citations (87% related)
- User interface: 85.4K papers, 1.7M citations (86% related)
- Deep learning: 79.8K papers, 2.1M citations (85% related)
- Cluster analysis: 146.5K papers, 2.9M citations (85% related)
- The Internet: 213.2K papers, 3.8M citations (85% related)
Performance Metrics
Number of papers in the topic in previous years:

Year    Papers
2023    637
2022    1,420
2021    996
2020    1,250
2019    1,341
2018    1,396