Open Access Journal Article (DOI)

The viability of crowdsourcing for survey research.

TLDR
It is concluded that the use of these labor portals is an efficient and appropriate alternative to a university participant pool, despite small differences in personality and socially desirable responding across the samples.
Abstract
Online contract labor portals (i.e., crowdsourcing) have recently emerged as attractive alternatives to university participant pools for the purposes of collecting survey data for behavioral research. However, prior research has not provided a thorough examination of crowdsourced data for organizational psychology research. We found that, as compared with a traditional university participant pool, crowdsourcing respondents were older, were more ethnically diverse, and had more work experience. Additionally, the reliability of the data from the crowdsourcing sample was as good as or better than the corresponding university sample. Moreover, measurement invariance generally held across these groups. We conclude that the use of these labor portals is an efficient and appropriate alternative to a university participant pool, despite small differences in personality and socially desirable responding across the samples. The risks and advantages of crowdsourcing are outlined, and an overview of practical and ethical guidelines is provided.



Citations
Journal ArticleDOI

Inside the Turk: Understanding Mechanical Turk as a Participant Pool

TL;DR: The authors examine the characteristics of Mechanical Turk as a participant pool for psychology and other social sciences, highlighting the traits of MTurk samples, why people become MTurk workers and research participants, and how data quality on MTurk compares with that from other pools and depends on controllable and uncontrollable factors.
Journal ArticleDOI

Nonnaïveté among Amazon Mechanical Turk workers: consequences and solutions for behavioral researchers

TL;DR: It is shown that crowdsourced workers are likely to participate across multiple related experiments and that researchers are overzealous in excluding research participants; both problems can be mitigated using advanced interface features that also allow prescreening and longitudinal data collection.
Journal ArticleDOI

Conducting Clinical Research Using Crowdsourced Convenience Samples

TL;DR: This article addresses methodological issues with using MTurk (many of which are common to other nonprobability samples but unfamiliar to clinical science researchers) and suggests concrete steps to avoid these issues or minimize their impact.
Journal ArticleDOI

Detecting and Deterring Insufficient Effort Responding to Surveys

TL;DR: In this article, the authors summarize existing approaches to detect insufficient effort responding (IER) to low-stakes surveys and comprehensively evaluate these approaches and provide convergent validity evidence regarding various indices for IER.
Journal ArticleDOI

An Inconvenient Truth: Arbitrary Distinctions Between Organizational, Mechanical Turk, and Other Convenience Samples

TL;DR: The authors argue that sampling is better understood in methodological terms of range restriction and omitted variables bias, which has far-reaching implications because in industrial-organizational (I-O) psychology, as in most social sciences, virtually all of the samples are convenience samples.
References
Book

Statistical Power Analysis for the Behavioral Sciences

TL;DR: The concepts of power analysis are discussed, covering chi-square tests for goodness of fit and contingency tables, the t-test for means, and the sign test.
Journal ArticleDOI

A new scale of social desirability independent of psychopathology.

TL;DR: It seems clear that the items in the Edwards Social Desirability Scale would, of necessity, have extreme social desirability scale positions or, in other words, be statistically deviant.
Book

The WEIRDest People in the World

TL;DR: A review of the comparative database from across the behavioral sciences suggests both that there is substantial variability in experimental results across populations and that WEIRD subjects are particularly unusual compared with the rest of the species, appearing as frequent outliers.
Journal ArticleDOI

Estimation of latent ability using a response pattern of graded scores

TL;DR: The authors consider the problem of estimating latent ability from the entire response pattern of free-response items, first in the general case and then in the case where items are scored in a graded way, especially when the thinking process required to solve each item is assumed to be homogeneous.