Limitations of Majority Agreement in Crowdsourced Image Interpretation
Citations
The tasks of the crowd: a typology of tasks in geographic information crowdsourcing and a case study in humanitarian mapping
A taxonomy of quality assessment methods for volunteered and crowdsourced geographic information
Mapping Human Settlements with Higher Accuracy and Less Volunteer Efforts by Combining Crowdsourcing and Deep Learning
Citizen Science for Observing and Understanding the Earth
Recent Advances in Forest Observation with Visual Interpretation of Very High-Resolution Imagery
Frequently Asked Questions (5)
Q2. What is the common reward structure for crowdsourcing campaigns?
The simplest reward structure for crowdsourcing campaigns is to award points uniformly for each task completed, or for each task completed successfully.
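A minimal sketch, in Python, of how those two point schemes might be computed for a batch of task results; the per-task point value and the record fields are illustrative assumptions, not details from the paper:

```python
# Hypothetical illustration of the two simplest point schemes:
# (a) a fixed number of points for every task completed, and
# (b) points only for tasks completed successfully.

POINTS_PER_TASK = 10  # assumed value, purely for illustration

def score_uniform(task_results):
    """Award points for every task completed, successful or not."""
    return POINTS_PER_TASK * len(task_results)

def score_success_only(task_results):
    """Award points only for tasks judged successful (e.g. matching a control answer)."""
    return POINTS_PER_TASK * sum(1 for r in task_results if r["successful"])

if __name__ == "__main__":
    results = [{"successful": True}, {"successful": False}, {"successful": True}]
    print(score_uniform(results))       # 30
    print(score_success_only(results))  # 20
```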
Q3. How many images were left unchanged after the consensus-building exercise?
After the consensus-building exercise, the ratings of 97.4% of the images on which the experts agreed were left unchanged, if ratings of 'maybe' are taken to indicate that an image is impossible.
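A small sketch, under assumed label values and data layout, of how the share of unchanged ratings could be computed when 'maybe' is collapsed into 'impossible':

```python
# Hypothetical sketch: compute the share of expert-agreed images whose rating is
# unchanged after consensus building, treating 'maybe' as 'impossible'.
# The label values and record layout are assumptions made for illustration only.

def collapse(label):
    """Map the three-way rating onto a binary one: 'maybe' counts as 'impossible'."""
    return "impossible" if label in ("maybe", "impossible") else "possible"

def share_unchanged(before, after):
    """Fraction of images whose collapsed rating is the same before and after consensus."""
    unchanged = sum(1 for img in before if collapse(before[img]) == collapse(after[img]))
    return unchanged / len(before)

if __name__ == "__main__":
    before = {"img1": "possible", "img2": "maybe", "img3": "impossible"}
    after  = {"img1": "possible", "img2": "impossible", "img3": "maybe"}
    print(f"{share_unchanged(before, after):.1%}")  # 100.0% under the 'maybe' = 'impossible' mapping
```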
Q4. How many images did the volunteers see more than once?
Typically, a volunteer saw about 2% of the images more than once, although a few volunteers contributed more ratings than there were images and so necessarily had a substantial number of repeat views.
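A brief sketch of how repeat views could be counted from a rating log; the (volunteer, image) log format is an assumption made for illustration:

```python
# Hypothetical sketch: from a log of (volunteer, image) rating events, estimate
# the share of distinct images each volunteer saw more than once.
from collections import Counter, defaultdict

def repeat_view_share(rating_log):
    """Return, per volunteer, the fraction of distinct images viewed more than once."""
    views = defaultdict(Counter)
    for volunteer, image in rating_log:
        views[volunteer][image] += 1
    return {
        volunteer: sum(1 for n in counts.values() if n > 1) / len(counts)
        for volunteer, counts in views.items()
    }

if __name__ == "__main__":
    log = [("a", 1), ("a", 2), ("a", 1), ("b", 1), ("b", 2), ("b", 3)]
    print(repeat_view_share(log))  # {'a': 0.5, 'b': 0.0}
```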
Q5. What is the first method to estimate the correct response and the error rate of each rater?
Dawid and Skene (1979) proposed what may have been the first algorithm attempting to do this, using an iterative maximum likelihood method to simultaneously estimate the correct response and the error rate of each rater.
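The Dawid and Skene approach is commonly implemented as an expectation-maximization loop that alternates between estimating the true labels and re-estimating each rater's reliability. The sketch below is a simplified binary-label version under an assumed dict-of-dicts input format; it is not the paper's full categorical formulation:

```python
# Simplified sketch of the Dawid & Skene (1979) idea: iteratively estimate each
# image's true label and each rater's error rate. Stripped down to binary labels;
# the input format is an assumption made purely for illustration.
from collections import defaultdict

def dawid_skene_binary(ratings, n_iter=20):
    """ratings: {image_id: {rater_id: 0 or 1}} -> (label probabilities, rater accuracies)."""
    # Initialise each image's probability of label 1 from the mean rating (soft majority vote).
    prob = {img: sum(r.values()) / len(r) for img, r in ratings.items()}
    accuracy = {}

    for _ in range(n_iter):
        # M-step: re-estimate each rater's accuracy against the current soft labels.
        correct, total = defaultdict(float), defaultdict(float)
        for img, raters in ratings.items():
            for rater, label in raters.items():
                correct[rater] += prob[img] if label == 1 else (1.0 - prob[img])
                total[rater] += 1.0
        accuracy = {rater: correct[rater] / total[rater] for rater in total}

        # E-step: re-estimate each image's probability of truly being label 1,
        # weighting each rater by how accurate they currently appear to be.
        for img, raters in ratings.items():
            p1, p0 = 1.0, 1.0
            for rater, label in raters.items():
                a = accuracy[rater]
                p1 *= a if label == 1 else (1.0 - a)
                p0 *= (1.0 - a) if label == 1 else a
            prob[img] = p1 / (p1 + p0) if (p1 + p0) > 0 else 0.5

    return prob, accuracy

if __name__ == "__main__":
    ratings = {
        "img1": {"r1": 1, "r2": 1, "r3": 0},
        "img2": {"r1": 0, "r2": 0, "r3": 0},
        "img3": {"r1": 1, "r2": 0, "r3": 1},
    }
    labels, rater_accuracy = dawid_skene_binary(ratings)
    print(labels)          # estimated probability that each image's true label is 1
    print(rater_accuracy)  # estimated accuracy of each rater
```

Unlike a plain majority vote, this loop lets ratings from apparently reliable raters count for more, which is the core contrast the paper draws with simple majority agreement.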