Crowdsourcing

About: Crowdsourcing is a research topic. Over its lifetime, 12,889 publications have been published on this topic, receiving 230,638 citations.


Papers
Journal Article
TL;DR: Describes how activation-enabling functionalities can be systematically designed and implemented in an IT-based ideas competition for enterprise resource planning software, and finds that participation can be supported using a two-step model.
Abstract: Ideas competitions appear to be a promising tool for crowdsourcing and open innovation processes, especially for business-to-business software companies. Active participation of potential lead users is the key to success. Yet a look at existing ideas competitions in the software field leads to the conclusion that many information technology (IT)–based ideas competitions fail to meet requirements upon which active participation is established. The paper describes how activation-enabling functionalities can be systematically designed and implemented in an IT-based ideas competition for enterprise resource planning software. We proceeded to evaluate the outcomes of these design measures and found that participation can be supported using a two-step model. The components of the model support incentives and motives of users. Incentives and motives of the users then support the process of activation and consequently participation throughout the ideas competition. This contributes to the successful implementation and maintenance of the ideas competition, thereby providing support for the development of promising innovative ideas. The paper concludes with a discussion of further activation-supporting components yet to be implemented and points to rich possibilities for future research in these areas.

108 citations

Proceedings Article
05 May 2012
TL;DR: These results demonstrate that paid crowd workers can reliably generate diverse, high-quality explanations that support the analysis of specific datasets.
Abstract: Web-based social data analysis tools that rely on public discussion to produce hypotheses or explanations of the patterns and trends in data rarely yield high-quality results in practice. Crowdsourcing offers an alternative approach in which an analyst pays workers to generate such explanations. Yet, asking workers with varying skills, backgrounds and motivations to simply "Explain why a chart is interesting" can result in irrelevant, unclear or speculative explanations of variable quality. To address these problems, we contribute seven strategies for improving the quality and diversity of worker-generated explanations. Our experiments show that using (S1) feature-oriented prompts, providing (S2) good examples, and including (S3) reference gathering, (S4) chart reading, and (S5) annotation subtasks increases the quality of responses by 28% for US workers and 196% for non-US workers. Feature-oriented prompts improve explanation quality by 69% to 236% depending on the prompt. We also show that (S6) pre-annotating charts can focus workers' attention on relevant details, and demonstrate that (S7) generating explanations iteratively increases explanation diversity without increasing worker attrition. We used our techniques to generate 910 explanations for 16 datasets, and found that 63% were of high quality. These results demonstrate that paid crowd workers can reliably generate diverse, high-quality explanations that support the analysis of specific datasets.

108 citations
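
The iterative strategy (S7) described in the abstract above lends itself to a short illustration. The sketch below shows one plausible way to run explanation gathering in rounds, seeding each new round with the explanations collected so far to push workers toward new reasons; the post_task helper is a hypothetical stand-in for a crowdsourcing platform call and is not the authors' actual pipeline.

```python
# Minimal sketch of iterative explanation gathering (strategy S7).
# Assumption: post_task is a hypothetical stand-in for posting a task to a
# crowdsourcing platform and collecting worker responses.

from typing import Callable, List


def iterative_explanations(
    chart_prompt: str,
    rounds: int,
    workers_per_round: int,
    post_task: Callable[[str, int], List[str]],
) -> List[str]:
    """Collect explanations in rounds, showing each round the explanations
    gathered so far so workers contribute *different* reasons."""
    collected: List[str] = []
    for _ in range(rounds):
        prior = "\n".join(f"- {e}" for e in collected) or "(none yet)"
        task = (
            f"{chart_prompt}\n\n"
            f"Explanations already given by other workers:\n{prior}\n\n"
            "Give a NEW explanation not listed above, citing a specific "
            "feature of the chart (trend, peak, outlier, or gap)."
        )
        collected.extend(post_task(task, workers_per_round))
    return collected


# Toy usage with a stubbed platform call:
if __name__ == "__main__":
    fake_pool = iter(
        ["Sales spike in Dec due to holidays.",
         "Dip in Q2 matches a price increase.",
         "Outlier in March looks like a logging error."]
    )
    stub = lambda task, n: [next(fake_pool) for _ in range(n)]
    print(iterative_explanations(
        "Explain why this sales chart is interesting.", 3, 1, stub))
```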

Journal Article
TL;DR: This paper investigates the use of reference sets with predetermined ground truth to monitor annotators' accuracy and fatigue in real time, applied to the emotional annotation of the MSP-IMPROV database.
Abstract: Manual annotations and transcriptions have an ever-increasing importance in areas such as behavioral signal processing, image processing, computer vision, and speech signal processing. Conventionally, this metadata has been collected through manual annotations by experts. With the advent of crowdsourcing services, the scientific community has begun to crowdsource many tasks that researchers deem tedious, but can be easily completed by many human annotators. While crowdsourcing is a cheaper and more efficient approach, the quality of the annotations becomes a limitation in many cases. This paper investigates the use of reference sets with predetermined ground-truth to monitor annotators’ accuracy and fatigue, all in real-time. The reference set includes evaluations that are identical in form to the relevant questions that are collected, so annotators are blind to whether or not they are being graded on performance on a specific question. We explore these ideas on the emotional annotation of the MSP-IMPROV database. We present promising results which suggest that our system is suitable for collecting accurate annotations.

108 citations
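
The reference-set idea above can be illustrated with a small sketch: interleave gold questions that look identical to ordinary ones, track a rolling accuracy per annotator, and flag possible fatigue when accuracy drops. The class name, window size, and threshold below are illustrative assumptions, not values taken from the MSP-IMPROV study.

```python
# Minimal sketch of gold-question monitoring of annotator accuracy and fatigue.
# Assumptions: window size and accuracy threshold are illustrative, not the
# values used in the paper.

from collections import deque


class GoldMonitor:
    """Tracks an annotator's rolling accuracy on hidden reference questions."""

    def __init__(self, window: int = 20, min_accuracy: float = 0.7):
        self.results = deque(maxlen=window)   # 1 = correct, 0 = incorrect
        self.min_accuracy = min_accuracy

    def record(self, annotator_answer: str, gold_answer: str) -> None:
        # Reference questions look identical to normal ones, so the annotator
        # cannot tell which responses are being graded.
        self.results.append(int(annotator_answer == gold_answer))

    def accuracy(self) -> float:
        return sum(self.results) / len(self.results) if self.results else 1.0

    def should_pause(self) -> bool:
        """Signal possible fatigue or low quality once the window is full."""
        return (len(self.results) == self.results.maxlen
                and self.accuracy() < self.min_accuracy)


# Toy usage: accuracy degrades over a session and eventually trips the check.
monitor = GoldMonitor(window=5, min_accuracy=0.8)
session = [("happy", "happy"), ("sad", "sad"), ("angry", "happy"),
           ("neutral", "sad"), ("happy", "angry")]
for given, gold in session:
    monitor.record(given, gold)
print(monitor.accuracy(), monitor.should_pause())   # 0.4 True
```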

Journal Article
TL;DR: The characteristics of MCS are analyzed, its security threats are identified, and essential requirements on a secure, privacy-preserving, and trustworthy MCS system are outlined.
Abstract: With the popularity of sensor-rich mobile devices (e.g., smart phones and wearable devices), mobile crowdsourcing (MCS) has emerged as an effective method for data collection and processing. Compared with traditional wireless sensor networking, MCS holds many advantages such as mobility, scalability, cost-efficiency, and human intelligence. However, MCS still faces many challenges with regard to security, privacy, and trust. This paper provides a survey of these challenges and discusses potential solutions. We analyze the characteristics of MCS, identify its security threats, and outline essential requirements on a secure, privacy-preserving, and trustworthy MCS system. Further, we review existing solutions based on these requirements and compare their pros and cons. Finally, we point out open issues and propose some future research directions.

108 citations

Proceedings Article
17 Jan 2012
TL;DR: In this article, the authors study the design and approximation of optimal crowdsourcing contests, where the principal only benefits from the submission with the highest quality, and show that these contests are a 2-approximation to conventional methods for a large family of "regular" distributions.
Abstract: We study the design and approximation of optimal crowdsourcing contests. Crowdsourcing contests can be modeled as all-pay auctions because entrants must exert effort up-front to enter. Unlike all-pay auctions, where a usual design objective would be to maximize revenue, in crowdsourcing contests the principal only benefits from the submission with the highest quality. We give a theory for optimal crowdsourcing contests that mirrors the theory of optimal auction design: the optimal crowdsourcing contest is a virtual valuation optimizer (the virtual valuation function depends on the distribution of contestant skills and the number of contestants). We also compare crowdsourcing contests with more conventional means of procurement. In this comparison, crowdsourcing contests are relatively disadvantaged because the effort of losing contestants is wasted. Nonetheless, we show that crowdsourcing contests are 2-approximations to conventional methods for a large family of "regular" distributions, and 4-approximations otherwise.

107 citations
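
The virtual valuation machinery referenced in the abstract above follows the standard Myerson formula phi(v) = v - (1 - F(v)) / f(v). The sketch below evaluates it for a uniform skill distribution and selects the entrant with the highest positive virtual value; the distribution and the example numbers are textbook illustrations under stated assumptions, not an implementation from the paper.

```python
# Minimal sketch of a virtual-valuation computation, using the standard
# Myerson formula phi(v) = v - (1 - F(v)) / f(v). The uniform-[0,1] example
# and reserve are textbook illustrations, not code from the paper.

from typing import Callable


def virtual_value(v: float,
                  cdf: Callable[[float], float],
                  pdf: Callable[[float], float]) -> float:
    """phi(v) = v - (1 - F(v)) / f(v); for 'regular' distributions this is
    non-decreasing in v."""
    return v - (1.0 - cdf(v)) / pdf(v)


# Uniform on [0, 1]: F(v) = v, f(v) = 1, so phi(v) = 2v - 1.
uniform_cdf = lambda v: v
uniform_pdf = lambda v: 1.0

bids = [0.3, 0.55, 0.9]                       # contestant skill draws
phis = [virtual_value(b, uniform_cdf, uniform_pdf) for b in bids]

# A virtual-valuation optimizer serves the highest *positive* virtual value;
# phi(v) = 0 at v = 0.5 acts as the reserve in the uniform case.
eligible = [(p, b) for p, b in zip(phis, bids) if p > 0]
winner = max(eligible) if eligible else None

print(phis)      # approximately [-0.4, 0.1, 0.8]
print(winner)    # (0.8, 0.9): the 0.9 contestant wins
```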


Network Information
Related Topics (5)
Social network: 42.9K papers, 1.5M citations, 87% related
User interface: 85.4K papers, 1.7M citations, 86% related
Deep learning: 79.8K papers, 2.1M citations, 85% related
Cluster analysis: 146.5K papers, 2.9M citations, 85% related
The Internet: 213.2K papers, 3.8M citations, 85% related
Performance Metrics
Number of papers in the topic in previous years:

Year    Papers
2023    637
2022    1,420
2021    996
2020    1,250
2019    1,341
2018    1,396