
Crowdsourcing

About: Crowdsourcing is a research topic. Over the lifetime, 12,889 publications have been published within this topic, receiving 230,638 citations.


Papers
Proceedings ArticleDOI
27 Apr 2013
TL;DR: It is suggested that a careful combination of methods that increase social transparency and different reward schemes can significantly improve crowdsourcing outcomes.
Abstract: This paper studied how social transparency and different peer-dependent reward schemes (i.e., individual, teamwork, and competition) affect the outcomes of crowdsourcing. The results showed that when social transparency was increased by asking otherwise anonymous workers to share demographic information (e.g., name, nationality) with their paired worker, they performed significantly better. A more detailed analysis showed that in a teamwork reward scheme, in which the reward of the paired workers depended only on their collective outcome, increasing social transparency could offset the effects of social loafing by making workers more accountable to their teammates. In a competition reward scheme, in which workers competed against each other and the reward depended on how much they outperformed their opponent, increasing social transparency could amplify the effects of social facilitation by giving workers more incentive to outperform their opponent. The results suggested that a careful combination of methods that increase social transparency and different reward schemes can significantly improve crowdsourcing outcomes.
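
As a rough illustration of the three schemes, here is a minimal sketch of how the payouts described above could be computed. The function, the base_pay and bonus_rate parameters, and the formulas are assumptions for illustration, not details taken from the paper.

```python
# Illustrative sketch of the three peer-dependent reward schemes; the
# base_pay/bonus_rate parameters and formulas are assumptions, not the
# paper's actual payment rules.

def reward(scheme: str, own_score: float, partner_score: float,
           base_pay: float = 1.0, bonus_rate: float = 0.1) -> float:
    """Compute one worker's pay under a given peer-dependent reward scheme."""
    if scheme == "individual":
        # Pay depends only on the worker's own output.
        return base_pay + bonus_rate * own_score
    if scheme == "teamwork":
        # Pay depends only on the pair's collective output.
        return base_pay + bonus_rate * (own_score + partner_score)
    if scheme == "competition":
        # Bonus depends on how much the worker outperforms the opponent.
        return base_pay + bonus_rate * max(0.0, own_score - partner_score)
    raise ValueError(f"unknown scheme: {scheme}")

print(reward("competition", own_score=8.0, partner_score=5.0))  # 1.3
```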

63 citations

Journal ArticleDOI
TL;DR: The authors describe CrowdSC's process model and evaluate three execution strategies; the framework lets users combine data collection, selection, and assessment activities in a crowdsourcing process to achieve sophisticated goals within a predefined context.
Abstract: A platform that connects citizens effectively to local government, letting them contribute to their community's general well-being, would be an elegant way to make cities smarter. CrowdSC is a crowdsourcing framework designed for smarter cities. The framework lets users combine data collection, selection, and assessment activities in a crowdsourcing process to achieve sophisticated goals within a predefined context. Depending upon this process's execution strategy, different outcomes are possible. The authors describe CrowdSC's process model and evaluate three execution strategies.
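
As a sketch of what such a process might look like in code, the snippet below chains collection, selection, and assessment steps. The CrowdProcess class and the step implementations are hypothetical, since the article does not specify CrowdSC's actual API.

```python
# Hypothetical pipeline in the spirit of CrowdSC: combine data collection,
# selection, and assessment activities into one crowdsourcing process.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class CrowdProcess:
    steps: list[Callable[[list], list]] = field(default_factory=list)

    def then(self, step: Callable[[list], list]) -> "CrowdProcess":
        self.steps.append(step)
        return self

    def run(self, items: list) -> list:
        for step in self.steps:
            items = step(items)
        return items

# Toy stand-ins: citizens report issues, the crowd removes duplicates,
# then rates each remaining report.
collect = lambda _: ["pothole on Main St", "broken lamp", "pothole on Main St"]
select = lambda reports: sorted(set(reports))
assess = lambda reports: [(r, "needs review") for r in reports]

process = CrowdProcess().then(collect).then(select).then(assess)
print(process.run([]))
```

Different execution strategies would then correspond to different choices of how, and by whom, each step is carried out.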

63 citations

Journal ArticleDOI
TL;DR: In this article, the authors present key questions that every organization considering the use of crowdsourcing must address and, based on an extensive review of both the research and practitioner literatures, offer specific recommendations for organizations that choose to employ a crowd to meet their needs.

63 citations

Proceedings ArticleDOI
21 Apr 2020
TL;DR: Twitter A11y increases access to social media platforms for people with visual impairments by providing high-quality automatic descriptions for user-posted images, raising alt-text coverage from 7.6% to 78.5% before crowdsourcing descriptions for the remaining images.
Abstract: Social media platforms are integral to public and private discourse but are becoming less accessible to people with vision impairments due to an increase in user-posted images. Some platforms (e.g., Twitter) let users add image descriptions (alternative text), but only 0.1% of images include these. To address this accessibility barrier, we created Twitter A11y, a browser extension that adds alternative text on Twitter using six methods. For example, screenshots of text are common, so we detect textual images and create alternative text using optical character recognition. Twitter A11y also leverages services to automatically generate alternative text or to reuse descriptions from across the web. We compare the coverage and quality of Twitter A11y's six alt-text strategies by evaluating the timelines of 50 self-identified blind Twitter users. We find that Twitter A11y increases alt-text coverage from 7.6% to 78.5%, before crowdsourcing descriptions for the remaining images. We estimate that 57.5% of returned descriptions are high-quality. We then report on the experiences of 10 participants with visual impairments who used the tool during a week-long deployment. Twitter A11y increases access to social media platforms for people with visual impairments by providing high-quality automatic descriptions for user-posted images.
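
The OCR strategy described above can be sketched with common open-source tools (Pillow and pytesseract). This is a minimal sketch, not Twitter A11y's actual implementation; the word-count threshold and file name are illustrative assumptions.

```python
# Minimal OCR fallback: if an image is mostly text, use the recognized
# text as alternative text; otherwise defer to other strategies.
from PIL import Image
import pytesseract

def alt_text_from_screenshot(path: str) -> str | None:
    """Return OCR'd text as alt text, or None if the image has little text."""
    text = pytesseract.image_to_string(Image.open(path)).strip()
    # Treat the image as a textual screenshot only if OCR finds enough words.
    return text if len(text.split()) >= 5 else None

description = alt_text_from_screenshot("tweet_image.png")  # hypothetical file
print(description or "little text found; fall back to other strategies")
```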

63 citations

01 Jan 2014
TL;DR: HCI has a long history of studying not only the interaction of individuals with technology, but also the interaction of groups with or mediated by technology; there are three main vectors of study for HCI and collective intelligence.
Abstract: The lessons of HCI can therefore be brought to bear on different aspects of collective intelligence. On the one hand, the people in the collective (the crowd) will only contribute if there are proper incentives and if the interface guides them in usable and meaningful ways. On the other, those interested in leveraging the collective need usable ways of coordinating, making sense of, and extracting value from the collective work that is being done, often on their behalf. Ultimately, collective intelligence involves the co-design of technical infrastructure and human-human interaction: a socio-technical system.

In crowdsourcing, we might differentiate between two broad classes of users: requesters and crowd members. The requester is the individual or group for whom work is done, or who takes responsibility for aggregating the work done by the collective. The crowd member (or crowd worker) is one of many people who contribute. While we often use the word "worker," crowd workers do not need to be (and often aren't) contributing as part of what we might consider standard "work." They may work for pay or not, work for small periods of time or contribute for days to a project they care about, and they may work in such a way that each individual's contribution is difficult to discern in the collective final output.

HCI has a long history of studying not only the interaction of individuals with technology, but also the interaction of groups with or mediated by technology. For example, computer-supported cooperative work (CSCW) investigates how to allow groups to accomplish tasks together using shared or distributed computer interfaces, either at the same time or asynchronously. Current crowdsourcing research alters some of the standard assumptions about the size, composition, and stability of these groups, but the fundamental approaches remain the same. For instance, workers drawn from the crowd may be less reliable than groups of employees working on a shared task, and group membership in the crowd may change more quickly.

There are three main vectors of study for HCI and collective intelligence. The first is directed crowdsourcing, where a single individual attempts to recruit and guide a large set of people to help accomplish a goal. The second is collaborative crowdsourcing, where a group gathers based on shared interest and determines its own organization and work. The third is passive crowdsourcing, where the crowd or collective may never meet or coordinate, but it is still possible to mine their collective behavior patterns for information. We cover each vector in turn and conclude with a list of challenges for researchers in HCI related to crowdsourcing and collective intelligence.

63 citations


Network Information
Related Topics (5)
Social network: 42.9K papers, 1.5M citations, 87% related
User interface: 85.4K papers, 1.7M citations, 86% related
Deep learning: 79.8K papers, 2.1M citations, 85% related
Cluster analysis: 146.5K papers, 2.9M citations, 85% related
The Internet: 213.2K papers, 3.8M citations, 85% related
Performance Metrics
No. of papers in the topic in previous years:
2023: 637
2022: 1,420
2021: 996
2020: 1,250
2019: 1,341
2018: 1,396