Open Access Journal Article (DOI)

False-Positive Psychology: Undisclosed Flexibility in Data Collection and Analysis Allows Presenting Anything as Significant

TLDR
It is shown that despite empirical psychologists' nominal endorsement of a low rate of false-positive findings, flexibility in data collection, analysis, and reporting dramatically increases actual false-positive rates, and a simple, low-cost, and straightforwardly effective disclosure-based solution is suggested.
Abstract
In this article, we accomplish two things. First, we show that despite empirical psychologists' nominal endorsement of a low rate of false-positive findings (≤ .05), flexibility in data collection, analysis, and reporting dramatically increases actual false-positive rates. In many cases, a researcher is more likely to falsely find evidence that an effect exists than to correctly find evidence that it does not. We present computer simulations and a pair of actual experiments that demonstrate how unacceptably easy it is to accumulate (and report) statistically significant evidence for a false hypothesis. Second, we suggest a simple, low-cost, and straightforwardly effective disclosure-based solution to this problem. The solution involves six concrete requirements for authors and four guidelines for reviewers, all of which impose a minimal burden on the publication process.
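The mechanics behind such simulations can be sketched in a few lines. The following is a minimal illustration, not the authors' actual code: the group sizes, the correlation between the two dependent variables, and the approximate t cutoff are all assumptions. It combines just two researcher degrees of freedom, a second correlated dependent variable and one round of optional stopping, and shows the false-positive rate climbing well past the nominal .05 even though both groups are drawn from the same null distribution.

```python
import math
import random

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    return (ma - mb) / math.sqrt(va / na + vb / nb)

def simulate(n_experiments=10_000, seed=1):
    rng = random.Random(seed)
    t_crit = 2.02  # approximate two-tailed .05 cutoff for df around 40
    hits = 0
    for _ in range(n_experiments):
        # Both groups come from the SAME null distribution, so any
        # "effect" detected below is a false positive by construction.
        def draw(n):
            dv1 = [rng.gauss(0, 1) for _ in range(n)]
            # Second, correlated dependent variable (r about .5).
            dv2 = [0.5 * x + math.sqrt(0.75) * rng.gauss(0, 1) for x in dv1]
            return dv1, dv2

        a1, a2 = draw(20)
        b1, b2 = draw(20)

        def any_significant():
            # Researcher degree of freedom 1: report whichever DV "works".
            return any(abs(welch_t(x, y)) > t_crit
                       for x, y in [(a1, b1), (a2, b2)])

        if not any_significant():
            # Researcher degree of freedom 2: optional stopping --
            # collect 10 more observations per cell and peek again.
            e1, e2 = draw(10)
            f1, f2 = draw(10)
            a1 += e1; a2 += e2
            b1 += f1; b2 += f2

        if any_significant():
            hits += 1
    return hits / n_experiments

rate = simulate()
print(f"false-positive rate: {rate:.3f}")  # well above the nominal .05
```

Each flexibility on its own inflates the rate only modestly; the point of the article is that they compound when exercised together and go unreported.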



Citations
Journal Article (DOI)

Power failure: why small sample size undermines the reliability of neuroscience

TL;DR: It is shown that the average statistical power of studies in the neurosciences is very low, and the consequences include overestimates of effect size and low reproducibility of results.
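The effect-size overestimation summarized above is easy to reproduce in a toy simulation. This is a hypothetical sketch, not code from the cited paper; the sample size, the true effect of d = 0.3, and the approximate t cutoff are illustrative assumptions. With small samples, only unusually large observed effects clear the significance threshold, so the significant results systematically overstate the true effect.

```python
import math
import random

def underpowered_studies(n=15, true_d=0.3, trials=20_000, seed=2):
    """Simulate underpowered two-group studies with a real effect of
    size true_d; return (power, mean |d| among significant results)."""
    rng = random.Random(seed)
    t_crit = 2.05  # approximate two-tailed .05 cutoff for df around 28
    sig_ds = []
    for _ in range(trials):
        a = [rng.gauss(0, 1) for _ in range(n)]
        b = [rng.gauss(true_d, 1) for _ in range(n)]
        ma, mb = sum(a) / n, sum(b) / n
        va = sum((x - ma) ** 2 for x in a) / (n - 1)
        vb = sum((x - mb) ** 2 for x in b) / (n - 1)
        sp = math.sqrt((va + vb) / 2)       # pooled SD
        d = (mb - ma) / sp                  # observed effect size
        t = d * math.sqrt(n / 2)            # t statistic for equal-n groups
        if abs(t) > t_crit:
            sig_ds.append(abs(d))
    power = len(sig_ds) / trials
    return power, sum(sig_ds) / len(sig_ds)

power, mean_sig_d = underpowered_studies()
print(f"power ~ {power:.2f}; mean significant |d| ~ {mean_sig_d:.2f} "
      f"(true d = 0.3)")
```

Because only |d| values large enough to reach significance are recorded, the mean significant effect lands far above the true d = 0.3, illustrating the winner's-curse pattern the TL;DR describes.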
Journal Article (DOI)

Estimating the reproducibility of psychological science

Alexander A. Aarts, +290 more
28 Aug 2015
TL;DR: A large-scale assessment suggests that experimental reproducibility in psychology leaves a lot to be desired, and correlational tests suggest that replication success was better predicted by the strength of original evidence than by characteristics of the original and replication teams.
Journal Article (DOI)

The ASA's Statement on p-Values: Context, Process, and Purpose

TL;DR: The American Statistical Association (ASA) released a policy statement on p-values and statistical significance, developed through discussion with the ASA Board of Trustees and motivated by concerns about the reproducibility and replicability of scientific conclusions.
Journal Article (DOI)

Detecting outliers: Do not use standard deviation around the mean, use absolute deviation around the median

TL;DR: In this article, the authors highlight the drawbacks of detecting outliers using the standard deviation around the mean and present the median absolute deviation, an alternative and more robust measure of dispersion that is easy to implement, explaining how to compute it in SPSS and R.
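The recommendation is straightforward to implement. Below is a minimal sketch, not the authors' SPSS/R code; the 2.5 cutoff is a commonly suggested default, and the 1.4826 consistency factor is the standard scaling for normally distributed data. The example shows why the classical SD rule fails: a gross outlier inflates the very mean and SD used to detect it, while the median-based rule is unaffected.

```python
import statistics

def mad_outliers(xs, threshold=2.5):
    """Flag outliers using the median absolute deviation (MAD).
    The 1.4826 factor makes the MAD consistent with the SD
    for normally distributed data."""
    med = statistics.median(xs)
    mad = 1.4826 * statistics.median(abs(x - med) for x in xs)
    return [x for x in xs if abs(x - med) / mad > threshold]

def sd_outliers(xs, threshold=2.5):
    """Classical rule: more than `threshold` SDs from the mean."""
    m, sd = statistics.mean(xs), statistics.stdev(xs)
    return [x for x in xs if abs(x - m) / sd > threshold]

data = [1, 2, 3, 3, 4, 4, 5, 100]  # one gross outlier
print(mad_outliers(data))  # -> [100]: the MAD rule flags it
print(sd_outliers(data))   # -> []: the outlier inflates the mean and SD,
                           #    masking itself from the classical rule
```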
Journal Article (DOI)

Science faculty’s subtle gender biases favor male students

TL;DR: In a randomized double-blind study, science faculty from research-intensive universities rated the application materials of a male student as significantly more competent and hireable than those of an otherwise identical female applicant, with preexisting subtle bias against women playing a moderating role.
References
Journal Article (DOI)

Explaining Bargaining Impasse: The Role of Self-Serving Biases

TL;DR: In this article, the authors review studies conducted by themselves and their coauthors that document a "self-serving" bias in judgments of fairness, and demonstrate that this bias is an important cause of impasse in negotiations.
Journal Article (DOI)

Why psychologists must change the way they analyze their data: the case of psi: comment on Bem (2011).

TL;DR: It is concluded that Bem's p values do not indicate evidence in favor of precognition; instead, they indicate that experimental psychologists need to change the way they conduct their experiments and analyze their data.
Journal Article

Why Most Published Research Findings Are False

TL;DR: In this paper, the author discusses the implications of these problems for the conduct and interpretation of research, arguing that many claimed research findings may simply be accurate measures of the prevailing bias.
Journal Article (DOI)

Biased evaluation and persistence in gambling.

TL;DR: This paper found that gamblers tend to accept wins at face value but explain away or discount losses, and that, in a recall test 3 weeks later, those who had lost remembered their outcomes better than those who had won.