Topic

Crowdsourcing

About: Crowdsourcing is a research topic. Over its lifetime, 12,889 publications have been published within this topic, receiving 230,638 citations.


Papers
Proceedings Article
01 Jan 2011
TL;DR: MobileWorks provides human optical character recognition (OCR) tasks that can be completed by workers on low-end mobile phones through a web browser, and finds that workers using MobileWorks average 120 tasks per hour at an accuracy rate of 99% using a multiple-entry solution.
Abstract: Existing crowdsourcing markets are often inaccessible to workers living at the bottom of the economic pyramid. We present MobileWorks, a mobile phone-based crowdsourcing platform intended to provide employment to developing world users. MobileWorks provides human optical character recognition (OCR) tasks that can be completed by workers on low-end mobile phones through a web browser. To address the limited screen resolution available on low-end phones, MobileWorks divides documents into many small pieces and sends each piece to a different worker. An initial pilot study with 10 users over a two-month period revealed that it is feasible to do basic OCR tasks using a simple mobile web-based application. We find that workers using MobileWorks average 120 tasks per hour at an accuracy rate of 99% using a multiple-entry solution. In addition, users had a positive experience with MobileWorks: all study participants would recommend MobileWorks to friends and family.
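A minimal, hypothetical sketch of the two mechanisms the abstract describes: dividing a page into small pieces sized for low-end phone screens, and reconciling several independent transcriptions of each piece by majority vote (the multiple-entry solution). The tile size, function names, and vote rule are illustrative assumptions, not MobileWorks code.

```python
from collections import Counter

def tile_page(page_w, page_h, tile_w=240, tile_h=80):
    """Return bounding boxes (x, y, w, h) covering the page in small tiles."""
    boxes = []
    for y in range(0, page_h, tile_h):
        for x in range(0, page_w, tile_w):
            boxes.append((x, y, min(tile_w, page_w - x), min(tile_h, page_h - y)))
    return boxes

def reconcile(entries):
    """Keep the transcription most workers agreed on, plus the agreement rate."""
    best, votes = Counter(entries).most_common(1)[0]
    return best, votes / len(entries)

# Three workers transcribe the same tile; two agree, so their answer is kept.
print(reconcile(["invoice 4821", "invoice 4821", "invoica 4821"]))
```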

96 citations

Proceedings ArticleDOI
03 Nov 2015
TL;DR: This paper shows that a baseline approach, which performs task-matching first and then schedules each worker's assigned tasks in a subsequent phase, does not perform well; it therefore adds a third phase that iterates back to the matching phase using the output of the scheduling phase, further improving the quality of both matching and scheduling.
Abstract: A new platform, termed spatial crowdsourcing, is emerging which enables a requester to commission workers to physically travel to some specified locations to perform a set of spatial tasks (i.e., tasks related to a geographical location and time). The current approach is to formulate spatial crowdsourcing as a matching problem between tasks and workers; hence the primary objective of the existing solutions is to maximize the number of matched tasks. Our goal is to solve the spatial crowdsourcing problem in the presence of multiple workers, where we optimize for both travel cost and the number of completed tasks while taking the tasks' expiration times into consideration. The challenge is that the solution should be a mixture of task-matching and task-scheduling, which are fundamentally different. In this paper, we show that a baseline approach that performs task-matching first, and subsequently schedules the tasks assigned to each worker in a following phase, does not perform well. Hence, we add a third phase in which we iterate back to the matching phase to improve the assignment based on the output of the scheduling phase, and thus further improve the quality of matching and scheduling. Even though this 3-phase approach generates high-quality results, it is very slow and does not scale. Hence, to scale our algorithm to a large number of workers and tasks, we propose a Bisection-based framework which recursively divides all the workers and tasks into different partitions such that assignment and scheduling can be performed locally in a much smaller and more promising space. Our experiments show that this approach is three orders of magnitude faster than the 3-phase approach while sacrificing only 4% of the results' quality.
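The bisection idea described above can be pictured with a short sketch: workers and tasks are recursively split on the median task coordinate, alternating axes, until partitions are small enough to solve locally. The greedy nearest-task matching inside each partition is a placeholder assumption for illustration; it is not the paper's matching-and-scheduling algorithm.

```python
from math import dist

def greedy_match(workers, tasks):
    """Within one partition, give each worker its nearest unassigned task."""
    pairs, free = [], list(tasks)
    for w in workers:
        if not free:
            break
        t = min(free, key=lambda task: dist(w, task))
        free.remove(t)
        pairs.append((w, t))
    return pairs

def bisect_assign(workers, tasks, max_size=4, axis=0):
    """Recursively partition workers/tasks (lists of (x, y) points) on the
    median task coordinate, alternating axes, then assign locally."""
    if len(workers) <= max_size or len(tasks) <= max_size:
        return greedy_match(workers, tasks)
    median = sorted(t[axis] for t in tasks)[len(tasks) // 2]
    left_t = [t for t in tasks if t[axis] < median]
    right_t = [t for t in tasks if t[axis] >= median]
    if not left_t or not right_t:  # degenerate split: stop recursing
        return greedy_match(workers, tasks)
    left_w = [w for w in workers if w[axis] < median]
    right_w = [w for w in workers if w[axis] >= median]
    return (bisect_assign(left_w, left_t, max_size, 1 - axis)
            + bisect_assign(right_w, right_t, max_size, 1 - axis))

# Four workers and four tasks are split into two spatial partitions of two each.
print(bisect_assign([(0, 0), (1, 2), (8, 7), (9, 9)],
                    [(0, 1), (2, 2), (7, 7), (9, 8)], max_size=2))
```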

95 citations

Proceedings ArticleDOI
07 Aug 2017
TL;DR: A new version of the DBpedia-Entity test collection for entity search, DBpedia-Entity v2, is developed and released, which uses a more recent DBpedia dump and a unified candidate result pool drawn from the same set of retrieval models.
Abstract: The DBpedia-Entity collection has been used as a standard test collection for entity search in recent years. We develop and release a new version of this test collection, DBpedia-Entity v2, which uses a more recent DBpedia dump and a unified candidate result pool from the same set of retrieval models. Relevance judgments are also collected in a uniform way, using the same group of crowdsourcing workers, following the same assessment guidelines. The result is an up-to-date and consistent test collection. To facilitate further research, we also provide details about the pre-processing and indexing steps, and include baseline results from both classical and recently developed entity search methods.
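The "unified candidate result pool" mentioned above follows the standard pooling idea in test-collection construction: merge the top-k results of several retrieval runs for each query and send only the pooled candidates to assessors. The run format, identifiers, and pool depth below are assumptions for illustration, not details from the DBpedia-Entity v2 paper.

```python
def build_pool(runs, depth=20):
    """runs: {model_name: [entity_id, ...] ranked best-first} -> set of ids
    to be judged by crowd workers."""
    pool = set()
    for ranking in runs.values():
        pool.update(ranking[:depth])
    return pool

runs = {
    "BM25": ["dbpedia:Barack_Obama", "dbpedia:Obama_family"],
    "SDM":  ["dbpedia:Barack_Obama", "dbpedia:Michelle_Obama"],
}
print(sorted(build_pool(runs, depth=2)))
```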

95 citations

Journal ArticleDOI
12 Apr 2019-PeerJ
TL;DR: This multi-disciplinary review defines crowdsourcing for medicine, identifies conceptual antecedents (collective intelligence and open source models), and explores implications of the approach.
Abstract: Crowdsourcing shifts medical research from a closed environment to an open collaboration between the public and researchers. We define crowdsourcing as an approach to problem solving which involves an organization having a large group attempt to solve a problem or part of a problem, then sharing solutions. Crowdsourcing allows large groups of individuals to participate in medical research through innovation challenges, hackathons, and related activities. The purpose of this literature review is to examine the definition, concepts, and applications of crowdsourcing in medicine. This multi-disciplinary review defines crowdsourcing for medicine, identifies conceptual antecedents (collective intelligence and open source models), and explores implications of the approach. Several critiques of crowdsourcing are also examined. Although several crowdsourcing definitions exist, there are two essential elements: (1) having a large group of individuals, including those with skills and those without skills, propose potential solutions; (2) sharing solutions through implementation or open access materials. The public can be a central force in contributing to formative, pre-clinical, and clinical research. A growing evidence base suggests that crowdsourcing in medicine can result in high-quality outcomes, broad community engagement, and more open science.

95 citations

Proceedings ArticleDOI
08 Sep 2013
TL;DR: The results show that altruistic use, such as for crowdsourcing, is feasible on public displays, and that performance can be improved through the controlled use of motivational design and validation check mechanisms.
Abstract: This study is the first attempt to investigate altruistic use of interactive public displays in natural usage settings as a crowdsourcing mechanism. We test a non-paid crowdsourcing service on public displays with eight different motivation settings and analyse users' behavioural patterns and crowdsourcing performance (e.g., accuracy, time spent, tasks completed). The results show that altruistic use, such as for crowdsourcing, is feasible on public displays, and that performance can be improved through the controlled use of motivational design and validation check mechanisms. The results shed light on three research challenges in the field: i) how crowdsourcing performance on public displays compares to that of online crowdsourcing, ii) how to improve the quality of feedback collected from public displays, which tends to be noisy, and iii) how users behave towards crowdsourcing on public displays in natural usage settings.
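One common way to realize the "validation check mechanisms" mentioned above is to seed tasks with known answers (gold tasks) and discard contributions from display sessions that fail them. The sketch below assumes that approach; the threshold and data layout are illustrative and not taken from the paper.

```python
def filter_sessions(sessions, gold, min_accuracy=0.7):
    """sessions: {session_id: {task_id: answer}}; gold: {task_id: answer}.
    Keep only sessions that answer enough gold tasks correctly."""
    kept = {}
    for sid, answers in sessions.items():
        checked = [tid for tid in answers if tid in gold]
        if not checked:
            continue  # no gold tasks seen, nothing to validate against
        correct = sum(answers[tid] == gold[tid] for tid in checked)
        if correct / len(checked) >= min_accuracy:
            kept[sid] = answers
    return kept

gold = {"t1": "cat", "t2": "dog"}
sessions = {"s1": {"t1": "cat", "t2": "dog", "t3": "bird"},
            "s2": {"t1": "dog", "t2": "cat", "t3": "fish"}}
print(list(filter_sessions(sessions, gold)))  # -> ['s1']
```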

95 citations


Network Information
Related Topics (5)
Social network: 42.9K papers, 1.5M citations, 87% related
User interface: 85.4K papers, 1.7M citations, 86% related
Deep learning: 79.8K papers, 2.1M citations, 85% related
Cluster analysis: 146.5K papers, 2.9M citations, 85% related
The Internet: 213.2K papers, 3.8M citations, 85% related
Performance Metrics
No. of papers in the topic in previous years

Year: Papers
2023: 637
2022: 1,420
2021: 996
2020: 1,250
2019: 1,341
2018: 1,396