Open Access · Journal Article

False-Positive Psychology: Undisclosed Flexibility in Data Collection and Analysis Allows Presenting Anything as Significant

TLDR
It is shown that despite empirical psychologists' nominal endorsement of a low rate of false-positive findings, flexibility in data collection, analysis, and reporting dramatically increases actual false-positive rates, and a simple, low-cost, and straightforwardly effective disclosure-based solution is suggested.
Abstract
In this article, we accomplish two things. First, we show that despite empirical psychologists' nominal endorsement of a low rate of false-positive findings (≤ .05), flexibility in data collection, analysis, and reporting dramatically increases actual false-positive rates. In many cases, a researcher is more likely to falsely find evidence that an effect exists than to correctly find evidence that it does not. We present computer simulations and a pair of actual experiments that demonstrate how unacceptably easy it is to accumulate (and report) statistically significant evidence for a false hypothesis. Second, we suggest a simple, low-cost, and straightforwardly effective disclosure-based solution to this problem. The solution involves six concrete requirements for authors and four guidelines for reviewers, all of which impose a minimal burden on the publication process.
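The inflation the abstract describes can be illustrated with a small Monte Carlo sketch. This is my own stdlib-only reconstruction of the general idea, not the authors' simulation code: under a true null effect, "peeking" at the data after every batch of observations and stopping as soon as p < .05 pushes the false-positive rate well above the nominal 5%.

```python
import math
import random

def z_rejects(xs, ys, crit=1.96):
    """Two-sample z-test on equal-sized samples, assuming unit variance
    (valid here because both groups are drawn from a standard normal)."""
    n = len(xs)
    z = (sum(xs) / n - sum(ys) / n) / math.sqrt(2.0 / n)
    return abs(z) > crit

def false_positive_rate(optional_stopping, n_sims=2000,
                        peeks=(10, 20, 30, 40, 50), seed=0):
    """Simulate a null effect (both groups ~ N(0, 1)) and count how often
    'significance' is reached at the nominal two-sided 5% level."""
    rng = random.Random(seed)
    # With optional stopping we test at every peek; otherwise only once at the end.
    looks = peeks if optional_stopping else (peeks[-1],)
    hits = 0
    for _ in range(n_sims):
        xs = [rng.gauss(0, 1) for _ in range(peeks[-1])]
        ys = [rng.gauss(0, 1) for _ in range(peeks[-1])]
        if any(z_rejects(xs[:n], ys[:n]) for n in looks):
            hits += 1
    return hits / n_sims

print(false_positive_rate(optional_stopping=False))  # stays near the nominal .05
print(false_positive_rate(optional_stopping=True))   # inflated well above .05
```

This sketch varies only one researcher degree of freedom (when to stop collecting data); the article shows that combining several such choices (extra dependent variables, covariates, condition dropping) compounds the inflation further.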



Citations
Journal Article

The Impact of Word Prevalence on Lexical Decision Times: Evidence From the Dutch Lexicon Project 2

TL;DR: It is argued that word prevalence is likely to be the most important new variable protecting researchers against experimenter bias in selecting stimulus materials, and that the unique variance it contributes to lexical decision times is higher than that of the other variables.
Journal Article

We need to talk about reliability: making better use of test-retest studies for study design and interpretation.

TL;DR: A new method and tools are presented for using summary statistics from previously published test-retest studies to approximate the reliability of outcomes in new samples, allowing researchers to avoid performing costly studies that are, by virtue of their design, unlikely to yield informative conclusions.
Journal Article

Statistical conclusion validity: some common threats and simple remedies.

TL;DR: Evidence of three common threats to SCV arising from widespread recommendations or practices in data analysis is discussed: the use of repeated testing and optional stopping without control of Type I error rates, the recommendation to check the assumptions of statistical tests, and the use of regression whenever a bivariate relation or the equivalence between two variables is studied.
Journal Article

False Positives and Other Statistical Errors in Standard Analyses of Eye Movements in Reading.

TL;DR: A computational investigation, using Monte Carlo simulations, of the various types of statistical errors that can occur in studies of reading behavior shows that, contrary to conventional wisdom, false positives rise to unacceptable levels when no corrections are applied.
Journal Article

Median splits, Type II errors, and false-positive consumer psychology: Don't fight the power

TL;DR: It is shown that there are no real benefits to median splits, and that there are real costs: increases in Type II errors through loss of power and increases in Type I errors through false-positive consumer psychology.
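The power cost of median splits mentioned in the summary above comes from discarding information about the predictor. A short stdlib-only sketch (my illustration, not code from the cited paper) shows the observed correlation, and with it statistical power, shrinking when a continuous predictor is dichotomized at its median; for a normal predictor, theory predicts an attenuation factor of sqrt(2/pi), roughly 0.80.

```python
import math
import random

def pearson(xs, ys):
    """Sample Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def median_split(xs):
    """Dichotomize a continuous variable at its median."""
    m = sorted(xs)[len(xs) // 2]
    return [1.0 if x >= m else 0.0 for x in xs]

rng = random.Random(0)
n = 5000
xs = [rng.gauss(0, 1) for _ in range(n)]
ys = [0.5 * x + rng.gauss(0, 1) for x in xs]  # a genuine effect of x on y

r_full = pearson(xs, ys)                 # correlation with the full predictor
r_split = pearson(median_split(xs), ys)  # correlation after the median split
print(r_full, r_split, r_split / r_full)
```

Because the attenuated correlation needs a larger sample to reach the same test statistic, the median split effectively throws away a sizable fraction of the data, which is exactly the Type II error cost the cited paper quantifies.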
References
Journal Article

The case for motivated reasoning.

TL;DR: It is proposed that motivation may affect reasoning through reliance on a biased set of cognitive processes (that is, strategies for accessing, constructing, and evaluating beliefs) that are considered most likely to yield the desired conclusion.

Why Most Published Research Findings Are False

TL;DR: In this paper, the authors discuss the implications of these problems for the conduct and interpretation of research and suggest that claimed research findings may often be simply accurate measures of the prevailing bias.
Journal Article

Group sequential methods in the design and analysis of clinical trials

TL;DR: In this article, a group sequential design is proposed to divide patient entry into a number of equal-sized groups so that the decision to stop the trial or continue is based on repeated significance tests of the accumulated data after each group is evaluated.
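The group sequential idea summarized above can be sketched with the same kind of null simulation (an illustration under my own simplifying assumptions — unit-variance normal data and a constant per-look cutoff — not the paper's method): testing after each of five equal-sized groups at the naive 1.96 cutoff inflates the overall Type I error, while a stricter constant boundary (the Pocock value, about 2.413 for five looks at overall two-sided alpha = .05) keeps it near 5%.

```python
import math
import random

def overall_alpha(crit, n_sims=2000, looks=(10, 20, 30, 40, 50), seed=0):
    """Fraction of null simulations in which |z| ever exceeds `crit` at any
    interim look (two-sample z-test, unit-variance normal data, equal groups)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_sims):
        xs = [rng.gauss(0, 1) for _ in range(looks[-1])]
        ys = [rng.gauss(0, 1) for _ in range(looks[-1])]
        for n in looks:
            z = (sum(xs[:n]) / n - sum(ys[:n]) / n) / math.sqrt(2.0 / n)
            if abs(z) > crit:
                hits += 1  # trial stops early and declares significance
                break
    return hits / n_sims

print(overall_alpha(1.96))   # naive repeated testing: inflated overall error
print(overall_alpha(2.413))  # Pocock-style constant boundary: near .05
```

The design choice is the trade-off the paper formalizes: paying a stricter per-look threshold buys the ability to stop a trial early without exceeding the overall Type I error budget.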
Journal Article

Measuring the Prevalence of Questionable Research Practices With Incentives for Truth Telling

TL;DR: It is found that the percentage of respondents who have engaged in questionable practices was surprisingly high, which suggests that some questionable practices may constitute the prevailing research norm.
Journal Article

Attribution of success and failure revisited, or: The motivational bias is alive and well in attribution theory

TL;DR: The authors found that self-serving effects for both success and failure are obtained in most but not all experimental paradigms, and that these attributions are better understood in motivational than in information-processing terms.
Related Papers (5)

Estimating the reproducibility of psychological science

Alexander A. Aarts, +290 more · 28 Aug 2015