Open Access · Book Chapter · DOI

txteagle: Mobile Crowdsourcing

Nathan Eagle
pp. 447–456
Abstract
We present txteagle, a system that enables people to earn small amounts of money by completing simple tasks on their mobile phones for corporations that pay them in either airtime or MPESA (mobile money). The system is currently being launched in Kenya and Rwanda in collaboration with the mobile phone service providers Safaricom and MTN Rwanda. Tasks include translation, transcription, and surveys. User studies in Nairobi involving high school students, taxi drivers, and local security guards have been completed, and the service has recently launched nationwide in Kenya.
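To make the task flow concrete, here is a minimal sketch of how an SMS-based microtask loop of this kind might be structured. All class and function names (TaskServer, dispatch, submit) and the payout logic are illustrative assumptions, not the actual txteagle implementation.

```python
import random
from dataclasses import dataclass, field

# Illustrative sketch of an SMS microtask loop; all names are
# hypothetical, not the actual txteagle implementation.

@dataclass
class Task:
    task_id: str
    prompt: str            # e.g. a phrase to translate or a survey question
    reward_airtime: float  # payout in airtime units (or MPESA equivalent)
    answers: dict = field(default_factory=dict)  # worker_id -> answer

class TaskServer:
    def __init__(self, tasks, redundancy=3):
        self.tasks = {t.task_id: t for t in tasks}
        self.redundancy = redundancy  # ask several workers per task

    def dispatch(self, worker_id):
        """Pick a task that still needs answers and 'send' it by SMS."""
        open_tasks = [t for t in self.tasks.values()
                      if len(t.answers) < self.redundancy]
        if not open_tasks:
            return None
        task = random.choice(open_tasks)
        print(f"SMS to {worker_id}: [{task.task_id}] {task.prompt}")
        return task.task_id

    def submit(self, worker_id, task_id, answer):
        """Record a worker's reply and credit the reward once accepted."""
        task = self.tasks[task_id]
        task.answers[worker_id] = answer
        print(f"Credit {task.reward_airtime} airtime to {worker_id}")

server = TaskServer([Task("t1", "Translate 'good morning' to Swahili", 0.5)])
tid = server.dispatch("worker+254700000001")
server.submit("worker+254700000001", tid, "habari za asubuhi")
```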



Citations
Proceedings Article · DOI

Location-based crowdsourcing: extending crowdsourcing to the real world

TL;DR: Overall, the findings show that integrating tasks into the physical world is useful and feasible, and the paper discusses issues to consider when designing mobile crowdsourcing applications.
Proceedings Article · DOI

Crowd-sourced sensing and collaboration using twitter

TL;DR: It is proposed that microblogging services like Twitter can provide an "open" publish-subscribe infrastructure for sensors and smartphones, and pave the way for ubiquitous crowd-sourced sensing and collaboration applications.
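As a rough illustration of that publish-subscribe idea, the sketch below "publishes" sensor readings as hashtag-tagged posts and lets consumers subscribe by filtering on the tag. The message format and matching logic are assumptions for illustration, not the paper's actual encoding.

```python
# Minimal sketch of sensor pub-sub over a Twitter-like feed.
# The message format (#hashtag key=value) is an assumption for
# illustration, not the paper's exact encoding.

posts = []  # stands in for a public microblog timeline

def publish(sensor_id, channel, reading):
    """A sensor 'tweets' a reading tagged with a channel hashtag."""
    posts.append(f"#{channel} sensor={sensor_id} value={reading}")

def subscribe(channel):
    """A consumer filters the public timeline by hashtag."""
    tag = f"#{channel}"
    return [p for p in posts if p.startswith(tag)]

publish("phone-42", "airquality", 17.3)
publish("phone-07", "airquality", 21.9)
for msg in subscribe("airquality"):
    print(msg)
```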
Journal Article · DOI

LocateMe: Magnetic-fields-based indoor localization using smartphones

TL;DR: The paper follows a dynamic-time-warping-based approach, known to work on similar signals irrespective of their variations along the time axis, achieving localization distances of approximately 2–6 m with accuracies between 80–100%, implying that walking short distances across hallways is sufficient for the smartphone to be located.
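The alignment idea behind matching magnetic signatures can be shown with a standard dynamic-time-warping distance. The sketch below is a textbook DTW implementation run against toy hallway fingerprints, not the paper's code; the data and names are hypothetical.

```python
import math

def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) dynamic time warping distance.

    Aligns two 1-D signals (e.g. magnetic-field magnitudes sampled
    while walking a hallway) while tolerating stretching or
    compression along the time axis.
    """
    n, m = len(a), len(b)
    cost = [[math.inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]

# Match a fresh walk against stored hallway fingerprints (toy data).
fingerprints = {"hall-A": [48, 50, 55, 52, 49], "hall-B": [60, 62, 61, 59, 58]}
walk = [49, 51, 54, 53, 50, 49]
print(min(fingerprints, key=lambda k: dtw_distance(fingerprints[k], walk)))
```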
Journal Article · DOI

Crowdsourcing Applications for Public Health

TL;DR: Four discrete crowdsourcing approach types are described (knowledge discovery and management; distributed human intelligence tasking; broadcast search; and peer-vetted creative production), and a number of potential applications of crowdsourcing for public health science and practice are enumerated.
Journal Article · DOI

Repeated labeling using multiple noisy labelers

TL;DR: The results show clearly that when labeling is not perfect, selective acquisition of multiple labels is a strategy that data miners should have in their repertoire; for certain label-quality/cost regimes, the benefit is substantial.
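The core intuition, that redundant noisy labels can be aggregated into a higher-quality label, can be checked with a small simulation. The sketch below majority-votes k simulated labels from labelers of fixed accuracy; the accuracy value and trial count are assumptions for illustration, not figures from the paper.

```python
import random

def majority_label(true_label, labeler_accuracy, k):
    """Simulate k noisy binary labels for one example and majority-vote them."""
    correct_votes = sum(random.random() < labeler_accuracy for _ in range(k))
    # correct_votes counts how many labelers agreed with the truth
    return true_label if correct_votes > k / 2 else 1 - true_label

def label_quality(labeler_accuracy, k, trials=100_000):
    """Fraction of examples whose majority vote matches the truth."""
    correct = sum(majority_label(1, labeler_accuracy, k) == 1 for _ in range(trials))
    return correct / trials

# With 70%-accurate labelers, buying 5 labels per example beats 1:
for k in (1, 3, 5):
    print(k, round(label_quality(0.7, k), 3))
```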
References
Proceedings Article · DOI

Cheap and Fast -- But is it Good? Evaluating Non-Expert Annotations for Natural Language Tasks

TL;DR: This work explores the use of Amazon's Mechanical Turk system, a significantly cheaper and faster method for collecting annotations from a broad base of paid non-expert contributors over the Web, and proposes a technique for bias correction that significantly improves annotation quality on two tasks.
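One simple form of such bias correction is to calibrate each annotator against a small gold set and weight their votes accordingly. The sketch below is a deliberate simplification of the paper's approach (which estimates per-annotator response distributions rather than a single accuracy weight); all names and data are hypothetical.

```python
from collections import defaultdict

# Simplified sketch of gold-calibrated vote weighting; a reduction
# of the paper's bias-correction idea to one accuracy weight per
# annotator. All data below is toy data.

def annotator_weights(gold_answers, annotations):
    """gold_answers: {item: label}; annotations: {annotator: {item: label}}."""
    weights = {}
    for annotator, labels in annotations.items():
        scored = [item for item in labels if item in gold_answers]
        if scored:
            acc = sum(labels[i] == gold_answers[i] for i in scored) / len(scored)
        else:
            acc = 0.5  # no gold overlap: treat as uninformative
        weights[annotator] = acc
    return weights

def weighted_vote(item, annotations, weights):
    """Aggregate one item's labels, weighting each vote by annotator accuracy."""
    score = defaultdict(float)
    for annotator, labels in annotations.items():
        if item in labels:
            score[labels[item]] += weights[annotator]
    return max(score, key=score.get)

gold = {"q1": "pos", "q2": "neg"}
anns = {"a1": {"q1": "pos", "q2": "neg", "q3": "pos"},
        "a2": {"q1": "neg", "q2": "neg", "q3": "neg"}}
w = annotator_weights(gold, anns)
print(weighted_vote("q3", anns, w))
```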
Journal Article · DOI

Maximum Likelihood Estimation of Observer Error-Rates Using the EM Algorithm

TL;DR: The EM algorithm is shown to provide a slow but sure way of obtaining maximum likelihood estimates of the parameters of interest in compiling a patient record.
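The underlying method is the Dawid-Skene EM procedure: alternately estimate each item's true label and each observer's error rate until the two are mutually consistent. The sketch below is a minimal binary-label version with symmetric error rates and a uniform prior, a simplification of the paper's general multi-class formulation.

```python
import numpy as np

# Minimal EM sketch in the spirit of Dawid & Skene (1979): jointly
# estimate true labels and per-observer error rates from redundant
# noisy labels. Binary labels only, for brevity; toy data below.

def dawid_skene(labels, n_iter=50):
    """labels: array of shape (items, observers) with values 0/1."""
    labels = np.asarray(labels)
    n_items, n_obs = labels.shape
    # Initialize beliefs with the majority vote (mean of binary labels)
    p = labels.mean(axis=1)  # P(true label of item i is 1)
    for _ in range(n_iter):
        # M-step: each observer's accuracy, weighted by current beliefs
        acc = np.empty(n_obs)
        for j in range(n_obs):
            agree = np.where(labels[:, j] == 1, p, 1 - p)
            acc[j] = agree.mean()
        # E-step: posterior that each item's true label is 1
        for i in range(n_items):
            like1 = np.prod(np.where(labels[i] == 1, acc, 1 - acc))
            like0 = np.prod(np.where(labels[i] == 0, acc, 1 - acc))
            p[i] = like1 / (like1 + like0)
    return p, acc

votes = [[1, 1, 0], [1, 1, 1], [0, 0, 1], [0, 0, 0]]
posteriors, accuracies = dawid_skene(votes)
print(np.round(posteriors, 2), np.round(accuracies, 2))
```

The third observer, who disagrees with the majority on every item, ends up with a low estimated accuracy, so its votes contribute little to the inferred labels.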
Proceedings Article · DOI

Get another label? improving data quality and data mining using multiple, noisy labelers

TL;DR: The results show clearly that when labeling is not perfect, selective acquisition of multiple labels is a strategy that data miners should have in their repertoire; for certain label-quality/cost regimes, the benefit is substantial.
Proceedings Article · DOI

Development and Use of a Gold-Standard Data Set for Subjectivity Classifications

TL;DR: Bias-corrected tags are formulated and successfully used to guide a revision of the coding manual and develop an automatic classifier.
Proceedings Article

Inferring Ground Truth from Subjective Labelling of Venus Images

TL;DR: Empirical results suggest that accounting for subjective noise can be quite significant when quantifying both human and algorithmic detection performance.