Topic

Crowdsourcing

About: Crowdsourcing is a research topic. Over the lifetime, 12,889 publications have been published within this topic, receiving 230,638 citations.


Papers
Proceedings ArticleDOI
21 Mar 2011
TL;DR: This work studies the effectiveness of employing location-based services (such as Foursquare) for finding appropriate people to answer a given location-based query, and investigates the feasibility of answering location-based queries by crowdsourcing over Twitter.
Abstract: Location-based queries are quickly becoming ubiquitous. However, traditional search engines perform poorly for a significant fraction of location-based queries, which are non-factual (i.e., subjective, relative, or multi-dimensional). As an alternative, we investigate the feasibility of answering location-based queries by crowdsourcing over Twitter. More specifically, we study the effectiveness of employing location-based services (such as Foursquare) for finding appropriate people to answer a given location-based query. Our findings give insights for the feasibility of this approach and highlight some research challenges in social search engines.
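
As a rough illustration of the routing idea described above (not the authors' implementation), the sketch below ranks candidate answerers by how often they have checked in near the query location, as a location-based service such as Foursquare could report; the data structures, distance threshold, and check-in data are assumptions made for this example.

```python
# Illustrative sketch: rank users as candidate answerers for a location-based
# question by counting their recent check-ins near the query point.
# All handles, coordinates, and the 1 km radius are made-up assumptions.
from dataclasses import dataclass
from math import radians, sin, cos, asin, sqrt
from collections import Counter

@dataclass
class Checkin:
    user: str      # Twitter handle of the user who checked in
    lat: float
    lon: float

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def candidate_answerers(checkins, q_lat, q_lon, radius_km=1.0, top_n=5):
    """Rank users by how often they checked in within radius_km of the query point."""
    nearby = [c.user for c in checkins
              if haversine_km(c.lat, c.lon, q_lat, q_lon) <= radius_km]
    return [user for user, _ in Counter(nearby).most_common(top_n)]

if __name__ == "__main__":
    checkins = [
        Checkin("@alice", 40.7306, -73.9866),   # near Union Square, NYC
        Checkin("@alice", 40.7312, -73.9895),
        Checkin("@bob",   40.7580, -73.9855),   # Times Square, outside the radius
    ]
    # Query: "Which cafe around Union Square is quiet enough to work in?"
    for handle in candidate_answerers(checkins, 40.7359, -73.9911):
        print(f"Ask {handle} the question via an @-mention tweet")
```

In a real deployment the selected handles would receive the question over Twitter, which is the crowdsourcing channel the paper studies.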

60 citations

Proceedings Article
28 Jun 2013
TL;DR: It is found that crowdsourced answers are similar in nature and quality to friendsourced answers, and that almost a third of all question askers provided unsolicited positive feedback upon receiving answers from this novel information agent.
Abstract: People have always asked questions of their friends, but now, with social media, they can broadcast their questions to their entire social network. In this paper we study the replies received via Twitter question asking, and use what we learn to create a system that augments naturally occurring “friendsourced” answers with crowdsourced answers. By analyzing thousands of public Twitter questions and answers, we build a picture of which questions receive answers and the content of their answers. Because many questions seek subjective responses but go unanswered, we use crowdsourcing to augment the Twitter question asking experience. We deploy a system that uses the crowd to identify question tweets, create candidate replies, and vote on the best reply from among different crowd- and friend-generated answers. We find that crowdsourced answers are similar in nature and quality to friendsourced answers, and that almost a third of all question askers provided unsolicited positive feedback upon receiving answers from this novel information agent.
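
A minimal sketch of the final voting step in the pipeline the abstract describes (identify question tweets, create candidate replies, vote on the best reply); the plurality-vote rule, names, and data here are assumptions, not the deployed system's design.

```python
# Illustrative sketch: pool friend-generated and crowd-generated candidate
# replies, collect crowd votes, and return the reply with the most votes.
# The candidates, votes, and tie-breaking behaviour are assumptions.
from collections import Counter

def pick_best_reply(candidates, votes):
    """candidates: list of reply strings; votes: list of indices into candidates."""
    if not votes:
        return None
    best_index, _ = Counter(votes).most_common(1)[0]
    return candidates[best_index]

if __name__ == "__main__":
    candidates = [
        "Try the ramen place on 5th, it's open late.",      # friend-sourced reply
        "The food trucks by the park are your best bet.",   # crowd-sourced reply
    ]
    crowd_votes = [1, 0, 1, 1, 0]   # each vote is an index of the preferred reply
    print(pick_best_reply(candidates, crowd_votes))
```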

60 citations

Journal ArticleDOI
TL;DR: Evaluating the quality and quantity of data generated by citizens in a remote Kenyan basin and assessing whether crowdsourcing is a suitable method to overcome data scarcity indicates that citizens can provide water level data of sufficient quality and with high temporal resolution.

60 citations

Journal ArticleDOI
TL;DR: It is proved that the TopkTR problem is NP-hard and a two-level-based framework is designed, which includes an approximation algorithm with provable approximation ratio and an exact algorithm with pruning techniques to address it.
Abstract: With the rapid development of the mobile internet and the online-to-offline marketing model, various spatial crowdsourcing platforms, such as Gigwalk and Gmission, are getting popular. Most existing studies assume that spatial crowdsourced tasks are simple and trivial. However, many real crowdsourced tasks are complex and need to be collaboratively finished by a team of crowd workers with different skills. Therefore, an important issue for spatial crowdsourcing platforms is to recommend suitable teams of crowd workers that satisfy the skill requirements of a task. In this paper, to address this issue, we first propose a more practical problem, called the Top-k team recommendation in spatial crowdsourcing (TopkTR) problem. We prove that the TopkTR problem is NP-hard and design a two-level-based framework, which includes an approximation algorithm with a provable approximation ratio and an exact algorithm with pruning techniques to address it. In addition, we study a variant of the TopkTR problem, called TopkTRL, where a team leader is appointed within each recommended team of crowd workers in order to coordinate the different crowd workers conveniently; the aforementioned framework can be extended to address this variant. Finally, we verify the effectiveness and efficiency of the proposed methods through extensive experiments on real and synthetic datasets.
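
The paper's two-level framework is not reproduced here; as a hedged illustration of the underlying team-formation idea, the sketch below uses a greedy, set-cover-style baseline that repeatedly picks the worker covering the most missing skills per unit of travel distance. The worker and task fields, the cost ratio, and the data are assumptions for this example only.

```python
# Illustrative baseline only (not the paper's TopkTR algorithm): greedily build
# one team covering a task's required skills, preferring workers who add many
# missing skills per unit of distance to the task location.
from dataclasses import dataclass
from math import hypot

@dataclass
class Worker:
    name: str
    skills: frozenset
    x: float
    y: float

def greedy_team(workers, required_skills, task_x, task_y):
    """Pick workers until every required skill is covered, or report failure."""
    remaining = set(required_skills)
    team, pool = [], list(workers)
    while remaining:
        def gain_per_km(w):
            gain = len(w.skills & remaining)
            dist = hypot(w.x - task_x, w.y - task_y) + 1e-9  # avoid division by zero
            return gain / dist
        best = max(pool, key=gain_per_km, default=None)
        if best is None or not (best.skills & remaining):
            return None  # no remaining worker can cover the missing skills
        team.append(best)
        remaining -= best.skills
        pool.remove(best)
    return team

if __name__ == "__main__":
    workers = [
        Worker("w1", frozenset({"photography", "driving"}), 1.0, 1.0),
        Worker("w2", frozenset({"translation"}), 0.5, 0.2),
        Worker("w3", frozenset({"photography", "translation"}), 5.0, 5.0),
    ]
    team = greedy_team(workers, {"photography", "translation"}, 0.0, 0.0)
    print([w.name for w in team] if team else "no feasible team")
```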

60 citations

Proceedings ArticleDOI
01 Mar 2014
TL;DR: CrowdCleaner is presented, a smart data cleaning system for cleaning multi-version data on the Web, which utilizes crowdsourcing-based approaches for detecting and repairing errors that usually cannot be solved by traditional data integration and cleaning techniques.
Abstract: Multi-version data is among the most closely watched information on the Web, since this type of data is updated frequently. Even though some Web information integration systems try to maintain the latest version, the maintained multi-version data often includes inaccurate and invalid information due to data integration or update-delay errors. In this demo, we present CrowdCleaner, a smart data cleaning system for cleaning multi-version data on the Web, which uses crowdsourcing-based approaches to detect and repair errors that usually cannot be solved by traditional data integration and cleaning techniques. In particular, CrowdCleaner blends active and passive crowdsourcing methods to rectify errors in multi-version data. We demonstrate the following four facilities provided by CrowdCleaner: (1) an error-monitor that finds out which items (e.g., submission date, price of real estate, etc.) are wrong versions according to reports from the crowd, following a passive crowdsourcing strategy; (2) a task-manager that allocates tasks to human workers intelligently; (3) a smart-decision-maker that identifies which answer from the crowd is correct using active crowdsourcing methods; and (4) a whom-to-ask-finder that discovers which users (or human workers) are the most credible according to their answer records.
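
A minimal sketch, assuming a simple credibility-weighted vote, of the kind of decision the smart-decision-maker and whom-to-ask-finder facilities make together: each worker's proposed value is weighted by an accuracy estimate from that worker's past answer record. The smoothing constants, field names, and data are assumptions, not CrowdCleaner's actual implementation.

```python
# Illustrative sketch: resolve the correct version of an item by weighting each
# worker's vote with a Laplace-smoothed accuracy estimate from past answers.
# Worker histories, priors, and the example values are assumptions.
from collections import defaultdict

def worker_credibility(history, prior_correct=1, prior_total=2):
    """Smoothed accuracy estimate from a list of past was-correct booleans."""
    return (sum(history) + prior_correct) / (len(history) + prior_total)

def resolve_value(votes, histories):
    """votes: {worker: proposed_value}; histories: {worker: [bool, ...]}."""
    scores = defaultdict(float)
    for worker, value in votes.items():
        scores[value] += worker_credibility(histories.get(worker, []))
    return max(scores, key=scores.get) if scores else None

if __name__ == "__main__":
    votes = {"ann": "2014-02-28", "ben": "2014-03-01", "cho": "2014-03-01"}
    histories = {"ann": [True] * 9 + [False],    # reliable so far
                 "ben": [True, False, False],    # unreliable
                 "cho": [True, False, True]}
    print(resolve_value(votes, histories))       # credibility-weighted winner
```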

60 citations


Network Information
Related Topics (5)
Social network: 42.9K papers, 1.5M citations (87% related)
User interface: 85.4K papers, 1.7M citations (86% related)
Deep learning: 79.8K papers, 2.1M citations (85% related)
Cluster analysis: 146.5K papers, 2.9M citations (85% related)
The Internet: 213.2K papers, 3.8M citations (85% related)
Performance Metrics
No. of papers in the topic in previous years:
Year  Papers
2023  637
2022  1,420
2021  996
2020  1,250
2019  1,341
2018  1,396