
Showing papers on "Crowdsourcing" published in 2010


Proceedings ArticleDOI
10 Apr 2010
TL;DR: How the worker population has changed over time is described, shifting from a primarily moderate-income, U.S.-based workforce towards an increasingly international group with a significant population of young, well-educated Indian workers.
Abstract: Amazon Mechanical Turk (MTurk) is a crowdsourcing system in which tasks are distributed to a population of thousands of anonymous workers for completion. This system is increasingly popular with researchers and developers. Here we extend previous studies of the demographics and usage behaviors of MTurk workers. We describe how the worker population has changed over time, shifting from a primarily moderate-income, U.S.-based workforce towards an increasingly international group with a significant population of young, well-educated Indian workers. This change in population points to how workers may treat Turking as a full-time job, which they rely on to make ends meet.

1,168 citations


Proceedings ArticleDOI
25 Jul 2010
TL;DR: This work presents algorithms that improve the existing state-of-the-art techniques, enabling the separation of bias and error, and illustrates how to incorporate cost-sensitive classification errors in the overall framework and how to seamlessly integrate unsupervised and supervised techniques for inferring the quality of the workers.
Abstract: Crowdsourcing services, such as Amazon Mechanical Turk, allow for easy distribution of small tasks to a large number of workers. Unfortunately, since manually verifying the quality of the submitted results is hard, malicious workers often take advantage of the verification difficulty and submit answers of low quality. Currently, most requesters rely on redundancy to identify the correct answers. However, redundancy is not a panacea. Massive redundancy is expensive, significantly increasing the cost of crowdsourced solutions. Therefore, we need techniques that will accurately estimate the quality of the workers, allowing for the rejection and blocking of the low-performing workers and spammers. However, existing techniques cannot separate the true (unrecoverable) error rate from the (recoverable) biases that some workers exhibit. This lack of separation leads to incorrect assessments of a worker's quality. We present algorithms that improve the existing state-of-the-art techniques, enabling the separation of bias and error. Our algorithm generates a scalar score representing the inherent quality of each worker. We illustrate how to incorporate cost-sensitive classification errors in the overall framework and how to seamlessly integrate unsupervised and supervised techniques for inferring the quality of the workers. We present experimental results demonstrating the performance of the proposed algorithm under a variety of settings.
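The core idea of separating recoverable bias from unrecoverable error can be illustrated with a generic EM-style sketch in the spirit of Dawid-Skene confusion matrices. This is an illustration under assumed names and data shapes, not the authors' algorithm.

```python
# A generic EM-style sketch (Dawid-Skene-style confusion matrices), not the
# authors' algorithm: redundant labels are used to estimate, for each worker,
# a confusion matrix that separates systematic bias from random error.
import numpy as np

def estimate_worker_quality(labels, n_items, n_classes, iters=50):
    """labels: list of (worker_id, item_id, given_label) with integer ids."""
    workers = sorted({w for w, _, _ in labels})

    # Initialize item posteriors from per-item vote counts (soft majority vote).
    posteriors = np.full((n_items, n_classes), 1e-9)
    for _, i, l in labels:
        posteriors[i, l] += 1.0
    posteriors /= posteriors.sum(axis=1, keepdims=True)

    confusion = {w: np.full((n_classes, n_classes), 1.0 / n_classes) for w in workers}
    for _ in range(iters):
        # M-step: each worker's confusion matrix, i.e. P(given label | true label).
        for w in workers:
            c = np.full((n_classes, n_classes), 1e-6)
            for ww, i, l in labels:
                if ww == w:
                    c[:, l] += posteriors[i]
            confusion[w] = c / c.sum(axis=1, keepdims=True)
        # E-step: item label posteriors given the current confusion matrices.
        post = np.ones((n_items, n_classes))
        for w, i, l in labels:
            post[i] *= confusion[w][:, l]
        posteriors = post / post.sum(axis=1, keepdims=True)

    # A worker whose matrix is a permutation is biased but recoverable;
    # a matrix close to uniform indicates genuine (unrecoverable) error.
    return posteriors, confusion
```

A scalar quality score of the kind the abstract mentions could then be derived from how far each estimated confusion matrix departs from an uninformative uniform matrix.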

957 citations


Journal ArticleDOI
TL;DR: A new concept has emerged that is changing the way the business world operates and the way many research and development (R&D) problems are being solved.
Abstract: By Jeff Howe, Published by the Crown Publishing Group, a division of Random House, Inc., 1745 Broadway, New York, NY 10019, 2008. vii + 311 p. Price $27. A new concept has emerged that is changing the way the business world operates. Many research and development (R&D) problems in a particular

873 citations


Journal ArticleDOI
TL;DR: Geographic information created by amateur citizens, often known as volunteered geographic information, has recently provided an interesting alternative to traditional authoritative information from mapping agencies and corporations, and several recent papers have provided the beginnings of a literature on the more fundamental issues raised by this new source.
Abstract: Geographic data and tools are essential in all aspects of emergency management: preparedness, response, recovery, and mitigation. Geographic information created by amateur citizens, often known as volunteered geographic information, has recently provided an interesting alternative to traditional authoritative information from mapping agencies and corporations, and several recent papers have provided the beginnings of a literature on the more fundamental issues raised by this new source. Data quality is a major concern, since volunteered information is asserted and carries none of the assurances that lead to trust in officially created data. During emergencies time is of the essence, and the risks associated with volunteered information are often outweighed by the benefits of its use. An example is discussed using the four wildfires that impacted the Santa Barbara area in 2007–2009, and lessons are drawn.

824 citations


Proceedings ArticleDOI
03 Oct 2010
TL;DR: Soylent, a word processing interface that enables writers to call on Mechanical Turk workers to shorten, proofread, and otherwise edit parts of their documents on demand, is presented, along with the Find-Fix-Verify crowd programming pattern, which splits tasks into a series of generation and review stages.
Abstract: This paper introduces architectural and interaction patterns for integrating crowdsourced human contributions directly into user interfaces. We focus on writing and editing, complex endeavors that span many levels of conceptual and pragmatic activity. Authoring tools offer help with pragmatics, but for higher-level help, writers commonly turn to other people. We thus present Soylent, a word processing interface that enables writers to call on Mechanical Turk workers to shorten, proofread, and otherwise edit parts of their documents on demand. To improve worker quality, we introduce the Find-Fix-Verify crowd programming pattern, which splits tasks into a series of generation and review stages. Evaluation studies demonstrate the feasibility of crowdsourced editing and investigate questions of reliability, cost, wait time, and work time for edits.
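The Find-Fix-Verify decomposition described above can be sketched as three staged crowd calls. The post_tasks callable, the prompts, and the vote thresholds below are assumptions standing in for a real crowd platform such as MTurk, not the Soylent implementation itself.

```python
# Schematic Find-Fix-Verify sketch; `post_tasks(prompt, items, n_workers)` is a
# hypothetical caller-supplied callable that returns one answer per worker.
from collections import Counter

def find_fix_verify(sentences, post_tasks):
    # Find: independent workers flag sentences that could be shortened;
    # each Find answer is assumed to be the set of sentences that worker flagged,
    # and requiring agreement from at least 2 of 10 workers filters lazy answers.
    flags = post_tasks("Mark sentences that can be shortened", sentences, 10)
    flagged = [s for s in sentences if sum(s in answer for answer in flags) >= 2]

    edited = {}
    for span in flagged:
        # Fix: a separate pool of workers proposes candidate rewrites.
        rewrites = post_tasks("Rewrite this sentence more concisely", [span], 5)
        # Verify: a third pool votes on the candidates; the majority wins.
        votes = post_tasks("Pick the best rewrite (or 'keep original')", rewrites, 5)
        best, _ = Counter(votes).most_common(1)[0]
        edited[span] = span if best == "keep original" else best
    return edited
```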

814 citations


Proceedings ArticleDOI
10 Apr 2010
TL;DR: The viability of Amazon's Mechanical Turk as a platform for graphical perception experiments is assessed; cost and performance data are reported, and recommendations for the design of crowdsourced studies are distilled.
Abstract: Understanding perception is critical to effective visualization design. With its low cost and scalability, crowdsourcing presents an attractive option for evaluating the large design space of visualizations; however, it first requires validation. In this paper, we assess the viability of Amazon's Mechanical Turk as a platform for graphical perception experiments. We replicate previous studies of spatial encoding and luminance contrast and compare our results. We also conduct new experiments on rectangular area perception (as in treemaps or cartograms) and on chart size and gridline spacing. Our results demonstrate that crowdsourced perception experiments are viable and contribute new insights for visualization design. Lastly, we report cost and performance data from our experiments and distill recommendations for the design of crowdsourced studies.

758 citations


Journal ArticleDOI
TL;DR: In this paper, an associate professor at New York University's Stern School of Business uncovers answers about who the employers in paid crowdsourcing are, what tasks they post, and how much they pay.
Abstract: An associate professor at New York University's Stern School of Business uncovers answers about who the employers in paid crowdsourcing are, what tasks they post, and how much they pay.

750 citations


Journal ArticleDOI
TL;DR: In this article, the authors outline the ways in which information technologies (ITs) were used in the Haiti relief effort, especially with respect to web-based mapping services, focusing on four in particular: CrisisCamp Haiti, OpenStreetMap, Ushahidi and GeoCommons.
Abstract: This paper outlines the ways in which information technologies (ITs) were used in the Haiti relief effort, especially with respect to web-based mapping services. Although there were numerous ways in which this took place, this paper focuses on four in particular: CrisisCamp Haiti, OpenStreetMap, Ushahidi, and GeoCommons. This analysis demonstrates that ITs were a key means through which individuals could make a tangible difference in the work of relief and aid agencies without actually being physically present in Haiti. While not without problems, this effort nevertheless represents a remarkable example of the power of crowdsourced online mapping and the potential for new avenues of interaction between physically distant places that vary tremendously.

697 citations


Posted Content
TL;DR: The authors presented a model of workers supplying labor to paid crowdsourcing projects and introduced a method for estimating a worker's reservation wage, the smallest wage a worker is willing to accept for a task and the key parameter in their labor supply model.
Abstract: Crowdsourcing is a form of "peer production" in which work traditionally performed by an employee is outsourced to an "undefined, generally large group of people in the form of an open call." We present a model of workers supplying labor to paid crowdsourcing projects. We also introduce a novel method for estimating a worker's reservation wage--the smallest wage a worker is willing to accept for a task and the key parameter in our labor supply model. We show that the reservation wages of a sample of workers from Amazon's Mechanical Turk (AMT) are approximately log-normally distributed, with a median wage of $1.38/hour. At the median wage, the point elasticity of extensive labor supply is 0.43. We discuss how to use our calibrated model to make predictions in applied work. Two experimental tests of the model show that many workers respond rationally to offered incentives. However, a non-trivial fraction of subjects appear to set earnings targets. These "target earners" consider not just the offered wage--which is what the rational model predicts--but also their proximity to earnings goals. Interestingly, a number of workers clearly prefer earning total amounts evenly divisible by 5, presumably because these amounts make good targets.
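To make the reservation-wage concept concrete, the following sketch fits a log-normal distribution to simulated wage data and evaluates the extensive-margin elasticity numerically. The simulated data and parameter values are assumptions for illustration; the output is not the paper's estimate.

```python
# Illustrative sketch only: simulated reservation wages, a log-normal fit,
# and a numerical extensive-margin elasticity; not the paper's estimates.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
reservation_wages = rng.lognormal(mean=np.log(1.38), sigma=0.9, size=2000)

shape, loc, scale = stats.lognorm.fit(reservation_wages, floc=0)
median_wage = scale  # for a log-normal with loc=0, the median equals exp(mu) = scale

def participation_rate(wage):
    # Fraction of workers whose reservation wage is below the offered wage.
    return stats.lognorm.cdf(wage, shape, loc=0, scale=scale)

def point_elasticity(wage, eps=1e-4):
    # d log(participation) / d log(wage), evaluated numerically at `wage`.
    low, high = participation_rate(wage * (1 - eps)), participation_rate(wage * (1 + eps))
    return (np.log(high) - np.log(low)) / (2 * eps)

print(f"median reservation wage ~ ${median_wage:.2f}/hour")
print(f"extensive-margin elasticity at the median ~ {point_elasticity(median_wage):.2f}")
```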

550 citations


Journal ArticleDOI
TL;DR: This study develops a more complete composite of what motivates the crowd to participate in crowdsourcing applications generally, information crucial to adapt the crowdsourcing model to new forms of problem-solving.
Abstract: Crowdsourcing is an online, distributed problem-solving and production model already in use by businesses such as Threadless.com, iStockphoto.com, and InnoCentive.com. This model, which harnesses the collective intelligence of a crowd of Web users through an open-call format, has the potential for government and non-profit applications. Yet, in order to explore new applications for the crowdsourcing model, there must be a better understanding of why crowds participate in crowdsourcing processes. Based on 17 interviews conducted via instant messenger with members of the crowd at Threadless, the present study adds qualitatively rich data on a new crowdsourcing case to an existing body of quantitative data on motivations for participation in crowdsourcing. Four primary motivators for participation at Threadless emerge from these interview data: the opportunity to make money, the opportunity to develop one's creative skills, the potential to take up freelance work, and the love of community at Threadless. A f...

Proceedings ArticleDOI
10 Apr 2010
TL;DR: A screening process used in conjunction with a survey administered via Amazon.com's Mechanical Turk identified 764 of 1,962 people who did not answer conscientiously; young men seemed to be the most likely to fail the qualification task.
Abstract: In this paper we discuss a screening process used in conjunction with a survey administered via Amazon.com's Mechanical Turk. We sought an easily implementable method to disqualify those people who participate but don't take the study tasks seriously. By using two previously pilot-tested screening questions, we identified 764 of 1,962 people who did not answer conscientiously. Young men seem to be most likely to fail the qualification task. Those who are professionals, students, and non-workers seem to be more likely to take the task seriously than financial workers, hourly workers, and other workers. Men over 30 and women were more likely to answer seriously.

Journal ArticleDOI
TL;DR: While traditional mapping is nearly exclusively coordinated and often also carried out by large organisations, crowdsourcing geospatial data refers to generating a map using informal social networks and web 2.0 technology.
Abstract: In this paper we review recent developments in crowdsourcing geospatial data. While traditional mapping is nearly exclusively coordinated, and often also carried out, by large organisations, crowdsourcing geospatial data refers to generating a map using informal social networks and web 2.0 technology. Key differences are that users lacking formal training in map making create the geospatial data themselves rather than relying on professional services; that potentially very large user groups collaborate voluntarily and often without financial compensation, with the result that open datasets become available at very low monetary cost; and that mapping and change detection occur in real time. This situation is similar to that found in the Open Source software environment. We briefly explain the basic technology needed for crowdsourcing geospatial data, discuss the underlying concepts including quality issues, and give some examples of this novel way of generating geospatial data. We also point to applications where alternatives do not exist, such as live traffic information systems. Finally we explore the future of crowdsourcing geospatial data and give some concluding remarks.

Proceedings ArticleDOI
13 Jun 2010
TL;DR: A model of the labeling process which includes label uncertainty, as well as a multi-dimensional measure of the annotators' ability, is proposed, from which an online algorithm is derived that estimates the most likely value of the labels and the annotator abilities.
Abstract: Labeling large datasets has become faster, cheaper, and easier with the advent of crowdsourcing services like Amazon Mechanical Turk. How can one trust the labels obtained from such services? We propose a model of the labeling process which includes label uncertainty, as well as a multi-dimensional measure of the annotators' ability. From the model we derive an online algorithm that estimates the most likely value of the labels and the annotator abilities. It finds and prioritizes experts when requesting labels, and actively excludes unreliable annotators. Based on labels already obtained, it dynamically chooses which images will be labeled next, and how many labels to request in order to achieve a desired level of confidence. Our algorithm is general and can handle binary, multi-valued, and continuous annotations (e.g. bounding boxes). Experiments on a dataset containing more than 50,000 labels show that our algorithm reduces the number of labels required, and thus the total cost of labeling, by a large factor while keeping error rates low on a variety of datasets.
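A much-simplified sketch of this adaptive-labeling idea, not the authors' model: weight each binary vote by the annotator's estimated accuracy and stop buying labels once the posterior is confident enough. The request_label callable, the default accuracy of 0.7, and the thresholds are assumptions for illustration.

```python
# Simplified sketch, not the paper's model: accuracy-weighted voting with an
# early stop once the label posterior is confident enough or the budget is spent.
import math

def posterior_positive(votes, accuracy):
    """votes: list of (annotator_id, label in {0, 1});
    accuracy: estimated probability each annotator answers correctly."""
    log_odds = 0.0  # uniform prior over the two labels
    for annotator, label in votes:
        p = min(max(accuracy.get(annotator, 0.7), 1e-3), 1 - 1e-3)
        log_odds += math.log(p / (1 - p)) if label == 1 else math.log((1 - p) / p)
    return 1.0 / (1.0 + math.exp(-log_odds))

def label_item(request_label, annotators, accuracy, confidence=0.95, budget=10):
    """request_label(annotator_id) is a caller-supplied stand-in for posting the
    item to that annotator and returning their 0/1 answer."""
    votes = []
    # Prioritize the annotators currently believed to be most reliable.
    for annotator in sorted(annotators, key=lambda a: -accuracy.get(a, 0.7))[:budget]:
        votes.append((annotator, request_label(annotator)))
        p = posterior_positive(votes, accuracy)
        if p >= confidence or p <= 1 - confidence:
            break  # confident enough; stop buying labels for this item
    return int(posterior_positive(votes, accuracy) >= 0.5), votes
```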

Proceedings ArticleDOI
29 Mar 2010
TL;DR: Majority voting, applied to generate one annotation set out of several opinions, is able to filter noisy judgments of non-experts to some extent, and the resulting annotation set is of comparable quality to the annotations of experts.
Abstract: The creation of golden standard datasets is a costly business. Optimally, more than one judgment per document is obtained to ensure a high quality of annotations. In this context, we explore how much annotations from experts differ from each other, how different sets of annotations influence the ranking of systems, and whether these annotations can be obtained with a crowdsourcing approach. This study is applied to annotations of images with multiple concepts. A subset of the images employed in the latest ImageCLEF Photo Annotation competition was manually annotated by expert annotators and by non-experts with Mechanical Turk. The inter-annotator agreement is computed at an image-based and concept-based level using majority vote, accuracy and kappa statistics. Further, the Kendall τ and Kolmogorov-Smirnov correlation tests are used to compare the ranking of systems regarding different ground-truths and different evaluation measures in a benchmark scenario. Results show that while the agreement between experts and non-experts varies depending on the measure used, its influence on the ranked lists of the systems is rather small. To sum up, the majority vote applied to generate one annotation set out of several opinions is able to filter noisy judgments of non-experts to some extent. The resulting annotation set is of comparable quality to the annotations of experts.
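For concreteness, the two basic agreement computations mentioned above, majority voting over redundant annotations and Cohen's kappa between two annotation sets, look roughly like the following sketch; the tiny example data are invented for illustration.

```python
# Majority vote over redundant annotations and Cohen's kappa between two
# binary label lists; the small example data are invented for illustration.
from collections import Counter

def majority_vote(annotations):
    """annotations: the binary labels given to one image/concept pair."""
    return Counter(annotations).most_common(1)[0][0]

def cohens_kappa(labels_a, labels_b):
    """Chance-corrected agreement between two equally long binary label lists."""
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    pa = sum(labels_a) / n
    pb = sum(labels_b) / n
    expected = pa * pb + (1 - pa) * (1 - pb)  # agreement expected by chance
    return (observed - expected) / (1 - expected) if expected < 1 else 1.0

experts = [1, 0, 1, 1, 0, 1]
turkers = [[1, 1, 0], [0, 0, 1], [1, 1, 1], [1, 0, 1], [0, 0, 0], [0, 0, 1]]
consensus = [majority_vote(votes) for votes in turkers]
print(cohens_kappa(experts, consensus))  # expert labels vs. crowd consensus
```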

Proceedings Article
06 Jun 2010
TL;DR: This paper describes the experience using both Amazon Mechanical Turk (MTurk) and CrowdFlower to collect simple named entity annotations for Twitter status updates, and describes how to use MTurk to collect judgements on the quality of "word clouds."
Abstract: We describe our experience using both Amazon Mechanical Turk (MTurk) and CrowdFlower to collect simple named entity annotations for Twitter status updates. Unlike most genres that have traditionally been the focus of named entity experiments, Twitter is far more informal and abbreviated. The collected annotations and annotation techniques will provide a first step towards the full study of named entity recognition in domains like Facebook and Twitter. We also briefly describe how to use MTurk to collect judgements on the quality of "word clouds."

Proceedings Article
06 Jun 2010
TL;DR: An introduction to using Amazon's Mechanical Turk crowdsourcing platform for the purpose of collecting data for human language technologies is given.
Abstract: In this paper we give an introduction to using Amazon's Mechanical Turk crowdsourcing platform for the purpose of collecting data for human language technologies. We survey the papers published in the NAACL-2010 Workshop. Twenty-four researchers participated in the workshop's shared task to create data for speech and language applications with $100.

Proceedings ArticleDOI
16 Oct 2010
TL;DR: Overall, the findings show that integrating tasks in the physical world is useful and feasible, and issues that should be considered when designing mobile crowdsourcing applications are discussed.
Abstract: The WWW and the mobile phone have become an essential means for sharing implicitly and explicitly generated information and a communication platform for many people. With the increasing ubiquity of location sensing included in mobile devices, we investigate the arising opportunities for mobile crowdsourcing making use of real-world context. In this paper we assess how the idea of user-generated content, web-based crowdsourcing, and mobile electronic coordination can be combined to extend crowdsourcing beyond the digital domain and link it to tasks in the real world. To explore our concept we implemented a crowdsourcing platform that integrates location as a parameter for distributing tasks to workers. In the paper we describe the concept and design of the platform and discuss the results of two user studies. Overall, the findings show that integrating tasks in the physical world is useful and feasible. We observed that (1) mobile workers prefer to pull tasks rather than getting them pushed, (2) requests for pictures were the most favored tasks, and (3) users tended to solve tasks mainly in close proximity to their homes. Based on this, we discuss issues that should be considered when designing mobile crowdsourcing applications.
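The location parameter mentioned above can be sketched as a simple pull-style query that returns open tasks within a radius of the worker's current position, closest first. The task fields, coordinates, and default radius below are assumptions for illustration, not the platform's actual implementation.

```python
# Illustrative pull-style task selection by proximity; field names and the
# default radius are assumptions, not the described platform's implementation.
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points in kilometres."""
    radius = 6371.0
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi, dlmb = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * radius * math.asin(math.sqrt(a))

def nearby_tasks(worker_pos, open_tasks, radius_km=2.0):
    """Return open tasks within radius_km of the worker, closest first (pull model)."""
    lat, lon = worker_pos
    scored = [(haversine_km(lat, lon, t["lat"], t["lon"]), t) for t in open_tasks]
    return [t for d, t in sorted(scored, key=lambda s: s[0]) if d <= radius_km]

tasks = [{"id": 1, "kind": "photo", "lat": 48.137, "lon": 11.575},
         {"id": 2, "kind": "text", "lat": 48.150, "lon": 11.600}]
print(nearby_tasks((48.139, 11.580), tasks))
```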

01 Jan 2010
TL;DR: It is concluded that in a relevance categorization task, a uniform distribution of labels across training data labels produces optimal peaks in 1) individual worker precision and 2) majority voting aggregate result accuracy.
Abstract: The use of crowdsourcing platforms like Amazon Mechanical Turk for evaluating the relevance of search results has become an effective strategy that yields results quickly and inexpensively. One approach to ensure quality of worker judgments is to include an initial training period and subsequent sporadic insertion of predefined gold standard data (training data). Workers are notified or rejected when they err on the training data, and trust and quality ratings are adjusted accordingly. In this paper, we assess how this type of dynamic learning environment can affect the workers’ results in a search relevance evaluation task completed on Amazon Mechanical Turk. Specifically, we show how the distribution of training set answers impacts training of workers and aggregate quality of worker results. We conclude that in a relevance categorization task, a uniform distribution of labels across training data labels produces optimal peaks in 1) individual worker precision and 2) majority voting aggregate result accuracy.
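The design choice evaluated above, keeping a uniform label distribution in the gold (training) questions rather than mirroring the skewed natural distribution, can be made concrete with a small sampling helper. The gold_pool structure of (item, label) pairs is an assumption for illustration.

```python
# Sketch of building a gold/training set with a uniform label distribution;
# `gold_pool` as a list of (item, label) pairs is an assumed structure.
import random
from collections import defaultdict

def uniform_gold_sample(gold_pool, n_per_label, seed=0):
    """Return a gold set containing an equal number of items per label."""
    rng = random.Random(seed)
    by_label = defaultdict(list)
    for item, label in gold_pool:
        by_label[label].append((item, label))
    sample = []
    for label, items in by_label.items():
        sample.extend(rng.sample(items, min(n_per_label, len(items))))
    rng.shuffle(sample)
    return sample
```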

Proceedings ArticleDOI
26 Apr 2010
TL;DR: Individual-specific traits, together with the project payment and the number of project requirements, are found to be significant predictors of final project quality, and significant evidence of strategic behavior by contestants is found.
Abstract: Crowdsourcing is a new Web phenomenon, in which a firm takes a function once performed in-house and outsources it to a crowd, usually in the form of an open contest. Designing efficient crowdsourcing mechanisms is not possible without a deep understanding of the incentives and strategic choices of all participants. This paper presents an empirical analysis of determinants of individual performance in multiple simultaneous crowdsourcing contests using a unique dataset for the world's largest competitive software development portal: TopCoder.com. Special attention is given to studying the effects of the reputation system currently used by TopCoder.com on the behavior of contestants. We find that individual-specific traits together with the project payment and the number of project requirements are significant predictors of the final project quality. Furthermore, we find significant evidence of strategic behavior of contestants. High-rated contestants face tougher competition from their opponents in the competition phase of the contest. In order to soften the competition, they move first in the registration phase of the contest, signing up early for particular projects. Although registration in TopCoder contests is non-binding, it deters entry of opponents in the same contest; our lower bound estimate shows that this strategy generates significant surplus gain to high-rated contestants. We conjecture that the reputation + cheap talk mechanism employed by TopCoder has a positive effect on the allocative efficiency of simultaneous all-pay contests and should be considered for adoption in other crowdsourcing platforms.


Proceedings Article
06 Jun 2010
TL;DR: While results are largely inconclusive, the authors identify important obstacles encountered, lessons learned, related work, and interesting ideas for future investigation.
Abstract: We investigate human factors involved in designing effective Human Intelligence Tasks (HITs) for Amazon's Mechanical Turk. In particular, we assess document relevance to search queries via MTurk in order to evaluate search engine accuracy. Our study varies four human factors and measures resulting experimental outcomes of cost, time, and accuracy of the assessments. While results are largely inconclusive, we identify important obstacles encountered, lessons learned, related work, and interesting ideas for future investigation. Experimental data is also made publicly available for further study by the community.

26 Sep 2010
TL;DR: This paper seeks to provide a low cost solution for citizens to measure their personal exposure to noise in their everyday environment and participate in the creation of collective noise maps by sharing their geo-localized and annotated measurements with the community.
Abstract: In this paper we present our research into participatory sensing based solutions for the collection of data on urban pollution and nuisance. In the past 2 years we have been involved in the NoiseTube project which explores a crowdsourcing approach to measuring and mapping urban noise pollution using smartphones. By involving the general public and using off-the-shelf smartphones as noise sensors, we seek to provide a low cost solution for citizens to measure their personal exposure to noise in their everyday environment and participate in the creation of collective noise maps by sharing their geo-localized and annotated measurements with the community. We believe our work represents an interesting example of the novel mobile crowdsourcing applications which are enabled by ubiquitous computing systems. Furthermore we believe the NoiseTube system, and the currently ongoing validation experiments, provide an illustrative context for some of the open challenges faced by creators of ubiquitous crowdsourcing applications and services in general. We will also take the opportunity to present the insights we gained into some of the challenges.

Proceedings Article
06 Jun 2010
TL;DR: A compendium of recent and current projects that utilize crowdsourcing technologies for language studies is presented, finding that the quality is comparable to controlled laboratory experiments, and in some cases superior.
Abstract: We present a compendium of recent and current projects that utilize crowdsourcing technologies for language studies, finding that the quality is comparable to controlled laboratory experiments, and in some cases superior. While crowdsourcing has primarily been used for annotation in recent language studies, the results here demonstrate that far richer data may be generated in a range of linguistic disciplines from semantics to psycholinguistics. For these, we report a number of successful methods for evaluating data quality in the absence of a 'correct' response for any given data point.

Journal ArticleDOI
TL;DR: While many organizations turn to human computation labor markets for jobs with black-or-white solutions, there is vast potential in asking these workers for original thought and innovation.
Abstract: While many organizations turn to human computation labor markets for jobs with black-or-white solutions, there is vast potential in asking these workers for original thought and innovation.

Anne C. Rouse
01 Jan 2010
TL;DR: In this paper the notion of crowdsourcing is decomposed to create a taxonomy that expands the understanding of what is meant by the term; the taxonomy focuses on the different capability levels of crowdsourcing suppliers, different motivations, and different allocations of benefits.
Abstract: Many firms are now asking how they can benefit from the new form of outsourcing labelled “crowdsourcing”. Like many other forms of outsourcing, crowdsourcing is now being “talked up” by a somewhat credulous trade press. However, the term crowdsourcing has been used to describe several related, but different phenomena, and what might be successful with one form of crowdsourcing may not be with another. In this paper the notion of crowdsourcing is decomposed to create a taxonomy that expands our understanding of what is meant by the term. This taxonomy focuses on the different capability levels of crowdsourcing suppliers; different motivations; and different allocation of benefits. The management implications of these distinctions are then considered in light of what we know about other forms of outsourcing.

Posted Content
TL;DR: TopCoder's crowdsourcing-based business model, in which software is developed through online tournaments, is presented, and the challenges of building a community and refining a web-based competition platform are illustrated.
Abstract: TopCoder's crowdsourcing-based business model, in which software is developed through online tournaments, is presented. The case highlights how TopCoder has created a unique two-sided innovation platform consisting of a global community of over 225,000 developers who compete to write software modules for its over 40 clients. Provides details of a unique innovation platform where complex software is developed through ongoing online competitions. By outlining the company's evolution, the challenges of building a community and refining a web-based competition platform are illustrated. Experiences and perspectives from TopCoder community members and clients help show what it means to work from within or in cooperation with an online community. In the case, the use of distributed innovation and its potential merits as a corporate problem solving mechanism is discussed. Issues related to TopCoder's scalability, profitability and growth are also explored. Learning Objective: To detail mechanisms of a platform focused on prize-based innovation for complex software projects involving over 225,000 members. Issues of managing clients, community and firm employees are presented.

Journal ArticleDOI
TL;DR: The definition and purpose of crowdsourcing and its relevance to libraries is discussed with particular reference to the Australian Newspapers service, FamilySearch, Wikipedia, Distributed Proofreaders, Galaxy Zoo and The Guardian MP's Expenses Scandal.
Abstract: The definition and purpose of crowdsourcing and its relevance to libraries is discussed with particular reference to the Australian Newspapers service, FamilySearch, Wikipedia, Distributed Proofreaders, Galaxy Zoo and The Guardian MP's Expenses Scandal. These services have harnessed thousands of digital volunteers who transcribe, create, enhance and correct text, images and archives. Known facts about crowdsourcing are presented and helpful tips and strategies for libraries beginning to crowdsource are given.

Journal ArticleDOI
01 Oct 2010
TL;DR: In this article, the authors used a collaborative research approach to investigate the main strategic difficulties encountered by firms whose business models rely on public web communities to create value, where the knowledge produced is the result of collective creativity carried out by communities of peers.
Abstract: Recent literature on open innovation suggests that firms can improve their performance by "opening" their business models; in other words, they can reduce their R&D costs by incorporating external knowledge. This implies that firms will be able to capture value through knowledge produced outside the organization. This, however, presents a number of difficulties, notably where the knowledge produced is the result of collective creativity carried out by communities of peers. Here, tension can arise when some of the business actors involved take, or attempt to obtain, financial benefit from part of the value created by the online communities. The purpose of this article is to address the following research question: what are the main strategic difficulties encountered by firms whose business models rely on public web communities to create value? Our study used a collaborative research approach, and our empirical data is based on the longitudinal strategic analysis of a web start-up, CrowdSpirit, a collaborative web-based platform which enables communities to imagine and design innovative products. Our research highlights three main points that need to be addressed in further research on open business models. First, we highlight the fact that the 'openness' of the business model to online communities leads to the development of a multi-level incentive model adapted to the different profiles of the various contributors. Second, we suggest that crowdsourcing platforms act as intermediaries in multi-sided markets and, as such, are at the core of a knowledge-sharing and IP transfer process between multiple actors. Finally, we suggest that the business model design and development can be considered as an ongoing learning process.

23 Dec 2010
TL;DR: The effects of rewards and motivation on the participation and performance of online community members are examined across three crowdsourcing initiatives, resulting in a refined model of the effects of rewards on voluntary behavior.
Abstract: Companies increasingly outsource activities to volunteers that they approach via an open call on the internet. The phenomenon is called 'crowdsourcing'. For an effective use of crowdsourcing it is important to understand what motivates these online volunteers and what the influence of rewards is. Therefore, this thesis examines the effects of motivation and rewards on the participation and performance of online community members. We studied motivation, rewards and contributions in three crowdsourcing initiatives that varied in reward systems. The findings of these three studies resulted in a refined model of the effects of rewards and motivation on voluntary behavior. With this model we provide a possible solution for contradictory findings in empirical studies of online communities and the ongoing debate between two schools of cognitive psychology. Our results also have important implications for organizers of online communities, amongst others regarding the effective application of reward systems. We also provide a crowdsourcing typology in which crowdsourcing initiatives are classified on the basis of their reward systems, and identify the motivation profiles of optimal performers per crowdsourcing type.