Topic: Crowdsourcing

About: Crowdsourcing is a research topic. Over its lifetime, 12,889 publications have been published within this topic, receiving 230,638 citations.


Papers
Proceedings ArticleDOI
05 Sep 2012
TL;DR: A new model of privacy, privacy as expectations, is introduced: crowdsourcing captures users' expectations of which sensitive resources mobile apps use, and a new privacy summary interface prioritizes and highlights places where mobile apps break those expectations.
Abstract: Smartphone security research has produced many useful tools to analyze the privacy-related behaviors of mobile apps. However, these automated tools cannot assess people's perceptions of whether a given action is legitimate, or how that action makes them feel with respect to privacy. For example, automated tools might detect that a blackjack game and a map app both use one's location information, but people would likely view the map's use of that data as more legitimate than the game's. Our work introduces a new model for privacy, namely privacy as expectations. We report on the results of using crowdsourcing to capture users' expectations of what sensitive resources mobile apps use. We also report on a new privacy summary interface that prioritizes and highlights places where mobile apps break people's expectations. We conclude with a discussion of implications for employing crowdsourcing as a privacy evaluation technique.

491 citations
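
As a rough illustration of the expectation-mismatch idea described above, the sketch below aggregates hypothetical crowd judgments of whether users expect an app to use a resource, then flags resource uses that defy majority expectation. The function name, data shapes, and 50% threshold are illustrative assumptions, not taken from the paper.

```python
# Illustrative sketch (not the paper's implementation): aggregate crowd
# judgments of whether users *expect* an app to use a resource, then flag
# app/resource pairs that the app actually uses but a majority of
# respondents do not expect.
from collections import defaultdict

def expectation_violations(responses, actual_use, threshold=0.5):
    """responses: iterable of (app, resource, expected: bool) crowd judgments.
    actual_use: set of (app, resource) pairs the app really accesses.
    Returns pairs used in practice but expected by fewer than `threshold`
    of respondents -- candidates to highlight in a privacy summary."""
    counts = defaultdict(lambda: [0, 0])  # (app, resource) -> [yes, total]
    for app, resource, expected in responses:
        counts[(app, resource)][0] += int(expected)
        counts[(app, resource)][1] += 1
    violations = []
    for pair in actual_use:
        yes, total = counts.get(pair, (0, 0))
        if total and yes / total < threshold:
            violations.append((pair, yes / total))
    return sorted(violations, key=lambda v: v[1])  # most surprising first

# Example: most respondents do not expect a blackjack game to read location.
crowd = ([("blackjack", "location", False)] * 8 +
         [("blackjack", "location", True)] * 2 +
         [("maps", "location", True)] * 9 +
         [("maps", "location", False)])
uses = {("blackjack", "location"), ("maps", "location")}
print(expectation_violations(crowd, uses))  # [(('blackjack', 'location'), 0.2)]
```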

Proceedings ArticleDOI
06 Nov 2012
TL;DR: This paper introduces a taxonomy for spatial crowdsourcing, and focuses on one class of this taxonomy, in which workers send their locations to a centralized server and thereafter the server assigns to every worker his nearby tasks with the objective of maximizing the overall number of assigned tasks.
Abstract: With the ubiquity of mobile devices, spatial crowdsourcing is emerging as a new platform, enabling spatial tasks (i.e., tasks related to a location) assigned to and performed by human workers. In this paper, for the first time we introduce a taxonomy for spatial crowdsourcing. Subsequently, we focus on one class of this taxonomy, in which workers send their locations to a centralized server and thereafter the server assigns to every worker his nearby tasks with the objective of maximizing the overall number of assigned tasks. We formally define this maximum task assignment (or MTA) problem in spatial crowdsourcing, and identify its challenges. We propose alternative solutions to address these challenges by exploiting the spatial properties of the problem space. Finally, our experimental evaluations on both real-world and synthetic data verify the applicability of our proposed approaches and compare them by measuring both the number of assigned tasks and the travel cost of the workers.

484 citations
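
The assignment problem at the core of MTA can be modeled as maximum bipartite matching between workers and the tasks within their reach. The sketch below shows that generic baseline via standard augmenting paths, assuming each worker takes at most one task; the paper's actual solutions additionally exploit spatial properties of the problem space and are not reproduced here.

```python
# Minimal sketch of the MTA core as maximum bipartite matching: each worker
# may serve tasks within some spatial range; maximize the number of assigned
# tasks. Generic baseline only, not the paper's spatially-optimized solutions.
from math import dist

def max_task_assignment(workers, tasks, max_range):
    """workers, tasks: lists of (x, y) points. Returns task_index -> worker_index."""
    eligible = [[t for t, tp in enumerate(tasks) if dist(wp, tp) <= max_range]
                for wp in workers]
    match = {}  # task index -> worker index

    def augment(w, seen):
        for t in eligible[w]:
            if t in seen:
                continue
            seen.add(t)
            # Assign t to w if t is free, or if t's current worker can move elsewhere.
            if t not in match or augment(match[t], seen):
                match[t] = w
                return True
        return False

    for w in range(len(workers)):
        augment(w, set())
    return match

workers = [(0, 0), (5, 5)]
tasks = [(1, 0), (0, 1), (5, 6)]
print(max_task_assignment(workers, tasks, max_range=2.0))
# {0: 0, 2: 1} -- two of the three tasks are assigned
```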

Proceedings Article
01 Jun 2013
TL;DR: SemEval-2013 Task 2: Sentiment Analysis in Twitter included two subtasks: A, an expression-level subtask, and B, a message-level subtask.
Abstract: In recent years, sentiment analysis in social media has attracted a lot of research interest and has been used for a number of applications. Unfortunately, research has been hindered by the lack of suitable datasets, complicating the comparison between approaches. To address this issue, we have proposed SemEval-2013 Task 2: Sentiment Analysis in Twitter, which included two subtasks: A, an expression-level subtask, and B, a message-level subtask. We used crowdsourcing on Amazon Mechanical Turk to label a large Twitter training dataset along with additional test sets of Twitter and SMS messages for both subtasks. All datasets used in the evaluation are released to the research community. The task attracted significant interest and a total of 149 submissions from 44 teams. The best-performing team achieved an F1 of 88.9% and 69% for subtasks A and B, respectively.

483 citations
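
A common first step with Mechanical Turk annotations like these is majority-vote label aggregation. The sketch below shows that standard recipe under assumed data shapes; the task organizers' exact adjudication pipeline may have differed.

```python
# Sketch of majority-vote aggregation for crowdsourced sentiment labels,
# the usual baseline for consolidating Mechanical Turk judgments.
from collections import Counter, defaultdict

def majority_vote(annotations, min_agreement=0.5):
    """annotations: iterable of (item_id, label). Returns item_id -> label
    for items whose top label wins strictly more than `min_agreement` of
    the votes; ties and low-agreement items are dropped for re-labeling."""
    votes = defaultdict(Counter)
    for item, label in annotations:
        votes[item][label] += 1
    gold = {}
    for item, counter in votes.items():
        label, n = counter.most_common(1)[0]
        if n / sum(counter.values()) > min_agreement:
            gold[item] = label
    return gold

raw = [(1, "positive"), (1, "positive"), (1, "neutral"),
       (2, "negative"), (2, "positive")]  # item 2 is a tie -> dropped
print(majority_vote(raw))  # {1: 'positive'}
```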

Journal ArticleDOI
TL;DR: It is found that intrinsic motivation was more important than extrinsic motivation in inducing participation in crowdsourcing contests, and it is suggested that crowdsourcing contest tasks should preferably be highly autonomous, explicitly specified, and less complex, as well as require a variety of skills.
Abstract: Firms can seek innovative external ideas and solutions to business tasks by sponsoring co-creation activities such as crowdsourcing. To get optimal solutions from crowdsourcing contest participants, firms need to improve task design and motivate contest solvers' participation in the co-creation process. Based on the theory of extrinsic and intrinsic motivation as well as the theory of job design, we developed a research model to explain participation in crowdsourcing contests, as well as the effects of task attributes on intrinsic motivation. Subjective and objective data were collected from 283 contest solvers at two different time points. We found that intrinsic motivation was more important than extrinsic motivation in inducing participation. Contest autonomy, variety, and analyzability were positively associated with intrinsic motivation, whereas contest tacitness was negatively associated with intrinsic motivation. The findings suggest a balanced view of extrinsic and intrinsic motivation in order to encourage participation in crowdsourcing. We also suggest that crowdsourcing contest tasks should preferably be highly autonomous, explicitly specified, and less complex, as well as require a variety of skills.

477 citations

Proceedings ArticleDOI
03 Apr 2017
TL;DR: A method that combines crowdsourcing and machine learning to analyze personal attacks at scale is developed and illustrated, along with an evaluation method that scores a classifier by the number of crowd-workers whose aggregate judgment it can approximate.
Abstract: The damage personal attacks cause to online discourse motivates many platforms to try to curb the phenomenon. However, understanding the prevalence and impact of personal attacks in online platforms at scale remains surprisingly difficult. The contribution of this paper is to develop and illustrate a method that combines crowdsourcing and machine learning to analyze personal attacks at scale. We show an evaluation method for a classifier in terms of the aggregated number of crowd-workers it can approximate. We apply our methodology to English Wikipedia, generating a corpus of over 100k high-quality human-labeled comments and 63M machine-labeled ones from a classifier that is as good as the aggregate of 3 crowd-workers, as measured by the area under the ROC curve and Spearman correlation. Using this corpus of machine-labeled scores, our methodology allows us to explore some of the open questions about the nature of online personal attacks. This reveals that the majority of personal attacks on Wikipedia are not the result of a few malicious users, nor primarily the consequence of allowing anonymous contributions from unregistered users.

472 citations
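
The evaluation idea above, scoring a classifier by how well it approximates the aggregate of n crowd-workers under ROC AUC and Spearman correlation, can be sketched as follows. Only the two metrics come from the abstract; the resampling protocol and toy data here are assumptions for illustration.

```python
# Sketch of the evaluation idea: compare a classifier's attack scores with
# the aggregate judgment of n sampled crowd-workers, using ROC AUC and
# Spearman correlation. The exact resampling protocol is an assumption.
import numpy as np
from scipy.stats import spearmanr
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Toy data: per-comment worker judgments (1 = attack); rows are comments.
worker_labels = rng.integers(0, 2, size=(200, 10))
clf_scores = worker_labels.mean(axis=1) + rng.normal(0, 0.2, 200)  # fake classifier

def evaluate_against_crowd(scores, labels, n_workers):
    """AUC and Spearman of `scores` against n randomly sampled workers."""
    cols = rng.choice(labels.shape[1], size=n_workers, replace=False)
    frac = labels[:, cols].mean(axis=1)       # aggregated attack fraction
    majority = (frac > 0.5).astype(int)       # binary majority label
    return roc_auc_score(majority, scores), spearmanr(scores, frac).correlation

for n in (1, 3, 5):
    auc, rho = evaluate_against_crowd(clf_scores, worker_labels, n)
    print(f"vs {n} workers: AUC={auc:.3f}, Spearman={rho:.3f}")
```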


Network Information

Related Topics (5)
Social network: 42.9K papers, 1.5M citations, 87% related
User interface: 85.4K papers, 1.7M citations, 86% related
Deep learning: 79.8K papers, 2.1M citations, 85% related
Cluster analysis: 146.5K papers, 2.9M citations, 85% related
The Internet: 213.2K papers, 3.8M citations, 85% related
Performance Metrics

No. of papers in the topic in previous years:
Year: Papers
2023: 637
2022: 1,420
2021: 996
2020: 1,250
2019: 1,341
2018: 1,396