Topic
Crowdsourcing
About: Crowdsourcing is a research topic. Over its lifetime, 12,889 publications have been published within this topic, receiving 230,638 citations.
[Chart: Papers published on a yearly basis]
Papers
TL;DR: In this paper, the authors report lessons learned from a three-year implementation of the widely praised PetaJakarta.org project, which through real-world deployments has pioneered the use of crowdsourced geospatial data in modern disaster management.
64 citations
01 Jun 2014
TL;DR: The results show that cross-cultural information can be leveraged, either through translation or through equivalent semantic categories, to build deception classifiers with accuracies ranging between 60% and 70%.
Abstract: In this paper, we address the task of cross-cultural deception detection. Using crowdsourcing, we collect three deception datasets, two in English (one originating from the United States and one from India), and one in Spanish obtained from speakers from Mexico. We run comparative experiments to evaluate the accuracies of deception classifiers built for each culture, and also to analyze classification differences within and across cultures. Our results show that we can leverage cross-cultural information, either through translation or equivalent semantic categories, and build deception classifiers with a performance ranging between 60% and 70%.
64 citations
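The following is a minimal sketch, not the authors' code, of the kind of cross-cultural experiment described in the abstract above: a bag-of-words deception classifier trained on statements from one culture and evaluated on another. The file names, the CSV layout, and the choice of scikit-learn models are all assumptions.

```python
# Minimal sketch (assumptions throughout): train a deception classifier on one
# culture's statements and test it on another's, in the spirit of the
# cross-cultural experiments summarized above.
import csv

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.pipeline import make_pipeline


def load_dataset(path):
    """Load (text, label) pairs; label is 1 for deceptive, 0 for truthful."""
    texts, labels = [], []
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            texts.append(row["text"])
            labels.append(int(row["label"]))
    return texts, labels


# Hypothetical files: English (US) data for training, English (India) for
# testing. Spanish data would first be machine-translated to English (or
# mapped to shared semantic categories) before being fed to the same pipeline.
train_texts, train_labels = load_dataset("deception_us.csv")
test_texts, test_labels = load_dataset("deception_india.csv")

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2), min_df=2),
                    LogisticRegression(max_iter=1000))
clf.fit(train_texts, train_labels)

pred = clf.predict(test_texts)
print(f"Cross-cultural accuracy: {accuracy_score(test_labels, pred):.2f}")
```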
TL;DR: This paper proposes using the popular micro-blogging service Twitter to gather evidence about adverse drug reactions (ADRs), after first identifying micro-bloggers who report first-hand experience, and utilizes gold-standard annotations from CrowdFlower to automatically train a range of supervised machine learning models that recognize first-hand experience.
64 citations
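As a rough illustration of the pipeline summarized in the TL;DR above, the sketch below aggregates crowdsourced judgements of whether a tweet reports first-hand drug experience into majority-vote gold labels and trains a simple supervised classifier on them. The judgement format, example tweets, and model choice are hypothetical; this is not the authors' system.

```python
# Minimal sketch (assumptions throughout): majority-vote crowdsourced
# judgements into gold labels, then train a supervised first-hand-experience
# classifier on the labelled tweets.
from collections import Counter, defaultdict

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Hypothetical crowd judgements: (tweet_id, tweet_text, worker_label)
judgements = [
    ("t1", "felt dizzy all day after starting the new statin", "firsthand"),
    ("t1", "felt dizzy all day after starting the new statin", "firsthand"),
    ("t1", "felt dizzy all day after starting the new statin", "other"),
    ("t2", "new study links statins to dizziness", "other"),
    ("t2", "new study links statins to dizziness", "other"),
]

votes = defaultdict(list)
texts = {}
for tweet_id, text, label in judgements:
    votes[tweet_id].append(label)
    texts[tweet_id] = text

# Majority vote per tweet produces the gold-standard training label.
gold = {tid: Counter(labels).most_common(1)[0][0] for tid, labels in votes.items()}

X = [texts[tid] for tid in gold]
y = [1 if gold[tid] == "firsthand" else 0 for tid in gold]

clf = make_pipeline(CountVectorizer(ngram_range=(1, 2)), LinearSVC())
clf.fit(X, y)
print(clf.predict(["I got a rash after taking ibuprofen"]))
```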
05 Sep 2014
TL;DR: It is argued that classifiers can often achieve higher accuracies when trained with noisy "unilabeled" data, although in some cases relabeling is extremely important.
Abstract: One of the most popular uses of crowdsourcing is to provide training data for supervised machine learning algorithms. Since human annotators often make errors, requesters commonly ask multiple workers to label each example. But is this strategy always the most cost effective use of crowdsourced workers? We argue "No" --- often classifiers can achieve higher accuracies when trained with noisy "unilabeled" data. However, in some cases relabeling is extremely important. We discuss three factors that may make relabeling an effective strategy: classifier expressiveness, worker accuracy, and budget.
64 citations
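The budget trade-off the authors discuss can be illustrated with a small simulation on synthetic data: under a fixed labelling budget and an assumed worker accuracy, compare a classifier trained on many examples with one noisy label each ("unilabeled") against one trained on a third as many examples with three majority-voted labels each. Everything below, including the worker-accuracy figure, is an assumption for illustration, not the paper's experiment.

```python
# Minimal sketch of the unilabel-vs-relabel budget trade-off on synthetic data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
BUDGET = 3000          # total worker labels we can buy (assumed)
WORKER_ACC = 0.7       # probability a worker labels correctly (assumed)

X, y = make_classification(n_samples=20000, n_features=20, random_state=0)
X_pool, y_pool = X[:10000], y[:10000]
X_test, y_test = X[10000:], y[10000:]


def noisy_labels(true_labels, k):
    """Simulate k workers per example and return the majority-vote label."""
    votes = np.stack([np.where(rng.random(true_labels.shape) < WORKER_ACC,
                               true_labels, 1 - true_labels)
                      for _ in range(k)])
    return (votes.mean(axis=0) > 0.5).astype(int)


def test_accuracy(n_examples, labels_per_example):
    Xs, ys = X_pool[:n_examples], y_pool[:n_examples]
    model = LogisticRegression(max_iter=1000).fit(
        Xs, noisy_labels(ys, labels_per_example))
    return model.score(X_test, y_test)

print("unilabeled:", test_accuracy(BUDGET, 1))        # 3000 examples, 1 label each
print("relabeled :", test_accuracy(BUDGET // 3, 3))   # 1000 examples, 3 labels each
```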
27 Apr 2013
TL;DR: This paper explores whether and how online crowds can support student learning in the classroom, and how scalable, diverse, immediate, and often ambiguous and conflicting input from online crowds affects student learning and motivation in project-based innovation work.
Abstract: Industry relies on higher education to prepare students for careers in innovation. Fulfilling this obligation is especially difficult in classroom settings, which often lack authentic interaction with the outside world. Online crowdsourcing has the potential to change this. Our research explores if and how online crowds can support student learning in the classroom. We explore how scalable, diverse, immediate (and often ambiguous and conflicting) input from online crowds affects student learning and motivation for project-based innovation work. In a pilot study with three classrooms, we explore interactions with the crowd at four key stages of the innovation process: needfinding, ideating, testing, and pitching. Students reported that online crowds helped them quickly and inexpensively identify needs and uncover issues with early-stage prototypes, although they favored face-to-face interactions for more contextual feedback. We share early evidence and discuss implications for creating a socio-technical infrastructure to more effectively use crowdsourcing in education.
64 citations