Topic

Crowdsourcing

About: Crowdsourcing is a research topic. Over its lifetime, 12,889 publications have been published within this topic, receiving 230,638 citations.


Papers
Proceedings ArticleDOI
19 Oct 2009
TL;DR: A crowdsourceable framework to quantify the QoE of multimedia content is proposed. It enables crowdsourcing because it supports systematic verification of participants' inputs; its rating procedure is simpler than that of MOS, placing less burden on participants; and it derives interval-scale scores that enable subsequent quantitative analysis and QoE provisioning.
Abstract: Until recently, QoE (Quality of Experience) experiments had to be conducted in academic laboratories; however, with the advent of ubiquitous Internet access, it is now possible to ask an Internet crowd to conduct experiments on their personal computers. Since such a crowd can be quite large, crowdsourcing enables researchers to conduct experiments with a more diverse set of participants at a lower economic cost than would be possible under laboratory conditions. However, because participants carry out experiments without supervision, they may give erroneous feedback perfunctorily, carelessly, or dishonestly, even if they receive a reward for each experiment. In this paper, we propose a crowdsourceable framework to quantify the QoE of multimedia content. The advantages of our framework over traditional MOS ratings are: 1) it enables crowdsourcing because it supports systematic verification of participants' inputs; 2) the rating procedure is simpler than that of MOS, so there is less burden on participants; and 3) it derives interval-scale scores that enable subsequent quantitative analysis and QoE provisioning. We conducted four case studies, which demonstrated that, with our framework, researchers can outsource their QoE evaluation experiments to an Internet crowd without risking the quality of the results; and at the same time, obtain a higher level of participant diversity at a lower monetary cost.

213 citations
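The interval-scale scores mentioned above come from paired comparisons rather than absolute MOS ratings. As a minimal sketch of that idea (not necessarily the paper's exact estimator), pairwise win counts can be converted into interval-scale quality scores with a Bradley-Terry model; the win matrix and function below are illustrative assumptions.

```python
import numpy as np

def bradley_terry(wins, n_iter=100, tol=1e-8):
    """Estimate Bradley-Terry strengths from a pairwise win-count matrix.

    wins[i][j] = number of times item i was preferred over item j.
    Returns log-strengths, which form an interval scale.
    """
    n = wins.shape[0]
    p = np.ones(n)                      # initial strengths
    total = wins + wins.T               # comparisons per pair
    w = wins.sum(axis=1)                # total wins per item
    for _ in range(n_iter):
        # Standard MM update: p_i <- w_i / sum_j n_ij / (p_i + p_j)
        denom = (total / (p[:, None] + p[None, :])).sum(axis=1)
        p_new = w / denom
        p_new /= p_new.sum()            # fix the arbitrary scale
        if np.max(np.abs(p_new - p)) < tol:
            p = p_new
            break
        p = p_new
    return np.log(p)                    # interval-scale scores

# Toy example: 3 video encodings judged by a crowd via paired comparisons.
wins = np.array([[0, 8, 9],
                 [2, 0, 7],
                 [1, 3, 0]])
print(bradley_terry(wins))
```

Because paired comparisons only identify score differences, the log-strengths are defined up to an additive shift, which is exactly what an interval scale requires.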

Journal ArticleDOI
TL;DR: This work provides a conceptual framework for gamified crowdsourcing systems in order to understand and conceptualize the key aspects of the phenomenon, and indicates that gamification has been an effective approach for increasing crowdsourcing participation and the quality of the crowdsourced work.
Abstract: Two parallel phenomena are gaining attention in human–computer interaction research: gamification and crowdsourcing. Because crowdsourcing's success depends on a mass of motivated crowdsourcees, crowdsourcing platforms have increasingly been imbued with motivational design features borrowed from games, a practice often called gamification. While the body of literature and knowledge of the phenomenon have begun to accumulate, we still lack a comprehensive and systematic understanding of the conceptual foundations, knowledge of how gamification is used in crowdsourcing, and whether it is effective. We first provide a conceptual framework for gamified crowdsourcing systems in order to understand and conceptualize the key aspects of the phenomenon. The paper's main contributions are derived through a systematic literature review that investigates how gamification has been examined in different types of crowdsourcing in a variety of domains. This meticulous mapping, which focuses on all aspects of our framework, enables us to infer what kinds of gamification efforts are effective in different crowdsourcing approaches, as well as to point to a number of research gaps and lay out future research directions for gamified crowdsourcing systems. Overall, the results indicate that gamification has been an effective approach for increasing crowdsourcing participation and the quality of the crowdsourced work; however, differences exist between types of crowdsourcing: research on crowdsourcing of homogeneous tasks has most commonly used simple gamification implementations, such as points and leaderboards, whereas crowdsourcing implementations that seek diverse and creative contributions employ gamification with a richer set of mechanics.

212 citations

Journal ArticleDOI
01 Sep 2014
TL;DR: The findings highlight the need for more significant empirical results through large-scale online experiments, an improved dialog with mainstream recommender systems research, and the integration of various sources of knowledge that exceed the boundaries of individual systems.
Abstract: Crowdsourcing information systems are socio-technical systems that provide informational products or services by harnessing the diverse potential of large groups of people via the Web. Interested individuals can contribute to such systems by selecting among a wide range of open tasks. Arguing that current approaches are suboptimal in terms of matching tasks and contributors' individual interests and capabilities, this article advocates the introduction of personalized task recommendation mechanisms. We contribute to a conceptual foundation for the design of such mechanisms by conducting a systematic review of the corresponding academic literature. Based on the insights derived from this analysis, we identify a number of issues for future research. In particular, our findings highlight the need for more significant empirical results through large-scale online experiments, an improved dialog with mainstream recommender systems research, and the integration of various sources of knowledge that exceed the boundaries of individual systems.

212 citations
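As a toy illustration of what a personalized task recommendation mechanism might look like, the sketch below ranks hypothetical open tasks against a contributor profile using TF-IDF and cosine similarity. This is one simple content-based approach among the many the review covers, not a method taken from the article; the task descriptions and profile text are invented for the example.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical open tasks on a crowdsourcing platform.
tasks = [
    "Transcribe a short audio clip of a business meeting",
    "Label street-level images for traffic signs",
    "Translate product reviews from German to English",
    "Draw bounding boxes around pedestrians in photos",
]

# Contributor profile built from previously completed tasks.
profile = "image labeling and annotation of photos"

vectorizer = TfidfVectorizer(stop_words="english")
task_vecs = vectorizer.fit_transform(tasks)
profile_vec = vectorizer.transform([profile])

# Rank tasks by similarity to the contributor's interests.
scores = cosine_similarity(profile_vec, task_vecs).ravel()
for idx in scores.argsort()[::-1]:
    print(f"{scores[idx]:.2f}  {tasks[idx]}")
```

A production mechanism would also weigh contributor capability, task reward, and platform-level goals, which is precisely the design space the review maps out.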

Journal ArticleDOI
TL;DR: In this article, the authors present an analysis of Mechanical Turk, one of the most popular crowdsourcing sites, to illuminate how Amazon's platform enables an array of companies to access digital labour at low cost and without any of the associated social protection or moral obligation.
Abstract: Crowd employment platforms enable firms to source labour and expertise by leveraging Internet technology. Rather than offshoring jobs to low-cost geographies, functions once performed by internal employees can be outsourced to an undefined pool of digital labour using a virtual network. This enables firms to shift costs and offload risk as they access a flexible, scalable workforce that sits outside the traditional boundaries of labour laws and regulations. The micro-tasks of ‘clickwork’ are tedious, repetitive and poorly paid, with remuneration often well below minimum wage. This article will present an analysis of one of the most popular crowdsourcing sites—Mechanical Turk—to illuminate how Amazon's platform enables an array of companies to access digital labour at low cost and without any of the associated social protection or moral obligation.

212 citations

Book ChapterDOI
08 Oct 2016
TL;DR: A new crowdsourced dataset is introduced, containing 110,988 images from 56 cities and 1,170,000 pairwise comparisons provided by 81,630 online volunteers along six perceptual attributes, showing that crowdsourcing combined with neural networks can produce urban perception data at the global scale.
Abstract: Computer vision methods that quantify the perception of urban environment are increasingly being used to study the relationship between a city’s physical appearance and the behavior and health of its residents. Yet, the throughput of current methods is too limited to quantify the perception of cities across the world. To tackle this challenge, we introduce a new crowdsourced dataset containing 110,988 images from 56 cities, and 1,170,000 pairwise comparisons provided by 81,630 online volunteers along six perceptual attributes: safe, lively, boring, wealthy, depressing, and beautiful. Using this data, we train a Siamese-like convolutional neural architecture, which learns from a joint classification and ranking loss, to predict human judgments of pairwise image comparisons. Our results show that crowdsourcing combined with neural networks can produce urban perception data at the global scale.

211 citations
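For readers curious about the ranking component of the architecture described above, here is a minimal sketch of a Siamese-style pairwise ranker in PyTorch. The toy backbone, layer sizes, margin, and fake batch are illustrative assumptions; the paper trains a much deeper CNN with a joint classification and ranking loss, and only the ranking half is sketched here.

```python
import torch
import torch.nn as nn

class PairwiseRanker(nn.Module):
    """Siamese-style ranker: one shared CNN scores each image, and a
    margin ranking loss is applied to crowd-voted image pairs."""

    def __init__(self):
        super().__init__()
        # Tiny stand-in backbone, far shallower than the paper's CNN.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, 1),            # scalar perception score
        )

    def forward(self, left, right):
        # Shared weights: the same network scores both images.
        return self.backbone(left), self.backbone(right)

model = PairwiseRanker()
loss_fn = nn.MarginRankingLoss(margin=1.0)

# Fake batch: 4 image pairs; target = 1 means the left image won the vote.
left = torch.randn(4, 3, 64, 64)
right = torch.randn(4, 3, 64, 64)
target = torch.tensor([1.0, -1.0, 1.0, 1.0])

s_left, s_right = model(left, right)
loss = loss_fn(s_left.squeeze(1), s_right.squeeze(1), target)
loss.backward()
print(float(loss))
```

After training, the scalar output of the shared backbone can be applied to any single image, which is what lets pairwise crowd judgments scale to city-wide perception maps.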


Network Information
Related Topics (5)
Social network: 42.9K papers, 1.5M citations, 87% related
User interface: 85.4K papers, 1.7M citations, 86% related
Deep learning: 79.8K papers, 2.1M citations, 85% related
Cluster analysis: 146.5K papers, 2.9M citations, 85% related
The Internet: 213.2K papers, 3.8M citations, 85% related
Performance
Metrics
No. of papers in the topic in previous years:

Year   Papers
2023   637
2022   1,420
2021   996
2020   1,250
2019   1,341
2018   1,396