Topic

Crowdsourcing

About: Crowdsourcing is a research topic. Over its lifetime, 12,889 publications have been published within this topic, receiving 230,638 citations.


Papers
Proceedings ArticleDOI
07 May 2016
TL;DR: The potential for curiosity as a new type of intrinsic motivational driver to incentivize crowd workers is examined; the authors design crowdsourcing task interfaces that explicitly incorporate mechanisms to induce curiosity and conduct a set of experiments on Amazon's Mechanical Turk.
Abstract: Crowdsourcing systems are designed to elicit help from humans to accomplish tasks that are still difficult for computers. How to motivate workers to stay longer and/or perform better in crowdsourcing systems is a critical question for designers. Previous work has explored different motivational frameworks, both extrinsic and intrinsic. In this work, we examine the potential for curiosity as a new type of intrinsic motivational driver to incentivize crowd workers. We design crowdsourcing task interfaces that explicitly incorporate mechanisms to induce curiosity and conduct a set of experiments on Amazon's Mechanical Turk. Our experimental results show that curiosity interventions improve worker retention without degrading performance, and that the magnitude of the effects is influenced by both the personal characteristics of the worker and the nature of the task.

89 citations
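For a concrete picture of the kind of experiment described above, the sketch below (Python, with hypothetical names and made-up numbers, not the authors' code) shows how workers might be randomly assigned to a curiosity-intervention or control interface and how retention could be compared across conditions.

```python
import random
from statistics import mean

def assign_condition(worker_id: str) -> str:
    """Randomly assign a worker to the curiosity intervention or the control interface."""
    random.seed(worker_id)  # deterministic per worker, so reloading the task keeps the condition
    return random.choice(["curiosity", "control"])

def retention_by_condition(logs: list[tuple[str, int]]) -> dict[str, float]:
    """Average number of tasks completed per condition, a simple retention proxy."""
    by_cond: dict[str, list[int]] = {}
    for condition, tasks_done in logs:
        by_cond.setdefault(condition, []).append(tasks_done)
    return {cond: mean(counts) for cond, counts in by_cond.items()}

# Illustrative worker logs: (condition, tasks completed before quitting).
logs = [("curiosity", 14), ("curiosity", 9), ("control", 7), ("control", 6)]
print(retention_by_condition(logs))  # {'curiosity': 11.5, 'control': 6.5}
```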

Journal ArticleDOI
TL;DR: The authors concur with Landers and Behrend's call for a more nuanced view on convenience samples and suggest that we should not "throw the baby out with the bathwater" but rather carefully and empirically examine the advantages and risks associated with using each sampling strategy before classifying it as suitable or not.
Abstract: In their focal article, Landers and Behrend (2015) propose to reevaluate the legitimacy of using the so-called convenience samples (e.g., crowdsourcing, online panels, and student samples) as compared with traditional organizational samples in industrial–organizational (I-O) psychology research. They suggest that such sampling strategies should not be judged as inappropriate per se but that decisions to accept or reject such samples must be empirically or theoretically justified. I concur with Landers and Behrend's call for a more nuanced view on convenience samples. More precisely, I suggest that we should not “throw the baby out with the bathwater” but rather carefully and empirically examine the advantages and risks associated with using each sampling strategy before classifying it as suitable or not.

88 citations

Proceedings ArticleDOI
18 Sep 2011
TL;DR: In this article, the authors discuss the challenges for quality control in ubiquitous crowdsourcing and propose a novel technique that reasons on users' mobility patterns and the quality of their past contributions to estimate users' credibility.
Abstract: Crowdsourcing has become a successful paradigm in the past decade, as Web 2.0 users have taken a more active role in producing content as well as consuming it. Recently this paradigm has broadened to incorporate ubiquitous applications, in which smart-phone users contribute information about their surroundings, thus providing collective knowledge about the physical world. However, the acceptance and openness of such applications have made it easy to contribute poor-quality content. Various solutions have been proposed for the Web-based domain to assist with monitoring and filtering poor-quality content, but these methods fall short when applied to ubiquitous crowdsourcing, where the task of collecting information has to be performed continuously and in real time by an ever-changing crowd. In this paper we discuss the challenges for quality control in ubiquitous crowdsourcing and propose a novel technique that reasons on users' mobility patterns and the quality of their past contributions to estimate users' credibility.

88 citations
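The credibility technique is only summarized in the abstract above, so the following Python sketch is a loose illustration under stated assumptions: it combines a user's past report accuracy with how often their reports came from locations consistent with their usual mobility pattern. The class, field names, and weighting are hypothetical, not the authors' actual model.

```python
from dataclasses import dataclass

@dataclass
class Contribution:
    accurate: bool            # was the past report judged correct?
    at_plausible_place: bool  # did the report location fit the user's usual mobility pattern?

def credibility(history: list[Contribution], alpha: float = 0.6) -> float:
    """Hypothetical credibility score in [0, 1]: a weighted mix of past accuracy
    and consistency of report locations with the user's mobility pattern."""
    if not history:
        return 0.5  # uninformed prior for new users
    accuracy = sum(c.accurate for c in history) / len(history)
    mobility_fit = sum(c.at_plausible_place for c in history) / len(history)
    return alpha * accuracy + (1 - alpha) * mobility_fit

reports = [Contribution(True, True), Contribution(False, True), Contribution(True, False)]
print(round(credibility(reports), 2))  # 0.67 with the default weighting
```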

Proceedings ArticleDOI
03 Apr 2017
TL;DR: Almond is the first virtual assistant that lets users specify trigger-action tasks in natural language, and the experiment suggests that Almond can understand about 40% of the complex tasks when uttered by a user familiar with its capability.
Abstract: This paper presents the architecture of Almond, an open, crowdsourced, privacy-preserving and programmable virtual assistant for online services and the Internet of Things (IoT). Included in Almond is Thingpedia, a crowdsourced public knowledge base of natural language interfaces and open APIs. Our proposal addresses four challenges in virtual assistant technology: generality, interoperability, privacy, and usability. Generality is addressed by crowdsourcing Thingpedia, while interoperability is provided by ThingTalk, a high-level domain-specific language that connects multiple devices or services via open APIs. For privacy, user credentials and user data are managed by our open-source ThingSystem, which can be run on personal phones or home servers. Finally, we address usability by providing a natural language interface, whose capability can be extended via training with the help of a menu-driven interface. We have created a fully working prototype, and crowdsourced a set of 187 functions across 45 different kinds of devices. Almond is the first virtual assistant that lets users specify trigger-action tasks in natural language. Despite the lack of real usage data, our experiment suggests that Almond can understand about 40% of the complex tasks when uttered by a user familiar with its capability.

88 citations
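To make the trigger-action idea concrete, here is a minimal Python sketch of a rule that connects a trigger to an action, in the spirit of what ThingTalk expresses; the class and field names are illustrative and are not Almond's or ThingTalk's actual API.

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class TriggerActionRule:
    """Stand-in for a ThingTalk-style rule: when the trigger fires, pass its event to the action."""
    trigger: Callable[[], Any]     # e.g. poll a device or service for a new event
    action: Callable[[Any], None]  # e.g. call another service's open API

    def run_once(self) -> None:
        event = self.trigger()
        if event is not None:
            self.action(event)

# Illustrative rule: "when the temperature sensor reads above 30 °C, notify me".
rule = TriggerActionRule(
    trigger=lambda: {"temp_c": 31},                        # stand-in for a sensor API call
    action=lambda ev: print(f"Alert: {ev['temp_c']} °C"),  # stand-in for a notification API
)
rule.run_once()
```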

Journal ArticleDOI
TL;DR: In this article, an in-depth qualitative study is presented, focusing on selected users' interactions and experiences of working on two UK-based crowdsourcing platforms, showing that workers engaged in this form of labour exchange need to deploy existing employability skills and networks to effectively meet the challenges, and take advantage of the opportunities, that crowdsourcing presents.
Abstract: The development of a fast and reliable Internet, new technologies such as online payment systems, and changes in work structure that enable and demand flexible working patterns have driven a move to a new form of Internet-enabled labour exchange called crowdsourcing. Evidence from an in-depth qualitative study is presented, focusing on selected users' interactions and experiences of working on two UK-based crowdsourcing platforms. The paper shows that workers engaged in this form of labour exchange need to deploy existing employability skills and networks to effectively meet the challenges, and take advantage of the opportunities, that crowdsourcing presents. Individual factors and circumstances emerge as paramount for workers' continued engagement in this form of employment. Using selected components from an employability framework, the findings suggest that crowdsourcing can offer new pathways to practising skills and enhancing employability for some workers.

87 citations


Network Information
Related Topics (5)
Social network: 42.9K papers, 1.5M citations, 87% related
User interface: 85.4K papers, 1.7M citations, 86% related
Deep learning: 79.8K papers, 2.1M citations, 85% related
Cluster analysis: 146.5K papers, 2.9M citations, 85% related
The Internet: 213.2K papers, 3.8M citations, 85% related
Performance Metrics
No. of papers in the topic in previous years:
2023: 637
2022: 1,420
2021: 996
2020: 1,250
2019: 1,341
2018: 1,396