Cheap and Fast -- But is it Good? Evaluating Non-Expert Annotations for Natural Language Tasks
Citations
Cites methods from "Cheap and Fast -- But is it Good? Evaluating Non-Expert Annotations for Natural Language Tasks"
...Such approaches have been used to produce gold-standard quality training sets (Snow et al., 2008) and also to evaluate learning algorithms on data for which no gold-standard labelings exist (Mintz et al., 2009; Carlson et al., 2010)....
Cites background from "Cheap and Fast -- But is it Good? Evaluating Non-Expert Annotations for Natural Language Tasks"
...For example, Snow et al. (2008) assessed the quality of MTurkers’ responses to several classic human language problems, finding that the quality was no worse than the expert data that most researchers use....
Cites methods from "Cheap and Fast -- But is it Good? Evaluating Non-Expert Annotations for Natural Language Tasks"
...Human evaluation was performed by evaluators on Amazon’s Mechanical Turk service, shown to be effective for natural language annotation in Snow et al. (2008)....
References
"Cheap and Fast -- But is it Good? E..." refers background or methods in this paper
...In this work we explore the use of Amazon Mechanical Turk (AMT) to determine whether non-expert labelers can provide reliable natural language annotations....
...Another method is to use Amazon's compensation mechanisms to give monetary bonuses to highly-performing workers and deny payments to unreliable ones; this is useful, but beyond the scope of this paper....
...We employ the Amazon Mechanical Turk system in order to elicit annotations from non-expert labelers....
...In this section we describe Amazon Mechanical Turk and the general design of our experiments....
...We demonstrate the effectiveness of using Amazon Mechanical Turk for a variety of natural language annotation tasks....
"Cheap and Fast -- But is it Good? E..." refers background in this paper
...Large scale annotation projects such as TreeBank (Marcus et al., 1993), PropBank (Palmer et al., 2005), TimeBank (Pustejovsky et al., 2003), FrameNet (Baker et al., 1998), SemCor (Miller et al., 1993), and others play an important role in natural language processing research, encouraging the development of novel ideas, tasks, and algorithms....
"Cheap and Fast -- But is it Good? E..." refers methods in this paper
...Luis von Ahn pioneered the collection of data via online annotation tasks in the form of games, including the ESPGame for labeling images (von Ahn and Dabbish, 2004) and Verbosity for annotating word relations (von Ahn et al., 2006)....