Open Access Journal Article (DOI)

False-Positive Psychology: Undisclosed Flexibility in Data Collection and Analysis Allows Presenting Anything as Significant

TL;DR
It is shown that despite empirical psychologists’ nominal endorsement of a low rate of false-positive findings, flexibility in data collection, analysis, and reporting dramatically increases actual false-positive rates, and a simple, low-cost, and straightforwardly effective disclosure-based solution is suggested.
Abstract
In this article, we accomplish two things. First, we show that despite empirical psychologists' nominal endorsement of a low rate of false-positive findings (≤ .05), flexibility in data collection, analysis, and reporting dramatically increases actual false-positive rates. In many cases, a researcher is more likely to falsely find evidence that an effect exists than to correctly find evidence that it does not. We present computer simulations and a pair of actual experiments that demonstrate how unacceptably easy it is to accumulate (and report) statistically significant evidence for a false hypothesis. Second, we suggest a simple, low-cost, and straightforwardly effective disclosure-based solution to this problem. The solution involves six concrete requirements for authors and four guidelines for reviewers, all of which impose a minimal burden on the publication process.
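The article's own simulation code is not reproduced on this page, but the kind of demonstration the abstract describes can be sketched in a few lines. The following is a minimal illustration, assuming two correlated dependent variables and a single round of optional stopping; the function name, sample sizes, and correlation of 0.5 are illustrative choices, not the authors' exact parameters.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def flexible_false_positive(n_sims=5000, n_initial=20, n_extra=10, rho=0.5, alpha=0.05):
    """Fraction of null simulations yielding at least one p < alpha when the analyst
    may (a) test either of two correlated DVs or their average and (b) add n_extra
    observations per condition and re-test if nothing is significant yet."""
    cov = [[1.0, rho], [rho, 1.0]]

    def any_significant(a, b):
        # Researcher degree of freedom: report DV1, DV2, or their average.
        pairs = zip([a[:, 0], a[:, 1], a.mean(axis=1)],
                    [b[:, 0], b[:, 1], b.mean(axis=1)])
        return any(stats.ttest_ind(x, y).pvalue < alpha for x, y in pairs)

    hits = 0
    for _ in range(n_sims):
        # Two conditions, two correlated outcomes, no true effect anywhere.
        a = rng.multivariate_normal([0, 0], cov, size=n_initial)
        b = rng.multivariate_normal([0, 0], cov, size=n_initial)
        if any_significant(a, b):
            hits += 1
            continue
        # Researcher degree of freedom: collect more data and test again.
        a = np.vstack([a, rng.multivariate_normal([0, 0], cov, size=n_extra)])
        b = np.vstack([b, rng.multivariate_normal([0, 0], cov, size=n_extra)])
        if any_significant(a, b):
            hits += 1
    return hits / n_sims

print(f"nominal alpha = 0.05, simulated false-positive rate = {flexible_false_positive():.3f}")
```

Even with only these two degrees of freedom, the realized false-positive rate rises well above the nominal 5%, which is the pattern the article documents across combinations of flexible choices.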

Citations
Posted Content

On making the Right Choice: A Meta-Analysis and Large-Scale Replication Attempt of the Unconscious Thought Advantage

TL;DR: The authors conducted a meta-analysis and a large-scale replication study (N = 399) that met the conditions deemed optimal for replicating the unconscious thought advantage (UTA), and concluded that there is no reliable support for the claim that a momentary diversion of thought leads to better decision making than a period of deliberation.
Journal Article (DOI)

Nonreplicable publications are cited more than replicable ones

TL;DR: Publicly available data are used to show that papers published in top psychology, economics, and general-interest journals that fail to replicate are cited more than those that replicate, even after the failed replication is published.
Journal Article (DOI)

Common misconceptions about data analysis and statistics

TL;DR: Ideally, any experienced investigator with the right tools should be able to reproduce a finding published in a peer-reviewed biomedical science journal, but many investigators fool themselves due to a poor understanding of statistical concepts.
Journal Article (DOI)

Unifying morality’s influence on non-moral judgments: The relevance of alternative possibilities

TL;DR: It is proposed that moral judgment influences the degree to which people regard certain alternative possibilities as relevant, which in turn impacts intuitions about freedom, causation, doing/allowing, and intentional action.
Journal Article (DOI)

Addressing the "Replication Crisis": Using Original Studies to Design Replication Studies with Appropriate Statistical Power

TL;DR: Simulation results imply that even if original studies reflect actual phenomena and were conducted in the absence of questionable research practices, popular approaches to designing replication studies may result in a low success rate, especially if the original study is underpowered.
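The point in the entry above can be illustrated with a back-of-the-envelope calculation rather than the cited paper's own simulations: if a replication is powered for the original study's observed effect size, and that estimate is inflated because the original was underpowered, the replication's power against the true effect falls well short of the intended level. The true and observed effect sizes (0.3 and 0.5) and the normal approximation below are illustrative assumptions.

```python
import numpy as np
from scipy import stats

def approx_power(d, n_per_group, alpha=0.05):
    """Approximate power of a two-sided, two-sample test (normal approximation)."""
    ncp = d * np.sqrt(n_per_group / 2)        # approximate noncentrality of the test statistic
    z_crit = stats.norm.ppf(1 - alpha / 2)
    return stats.norm.sf(z_crit - ncp) + stats.norm.cdf(-z_crit - ncp)

def n_for_power(d, target=0.80, alpha=0.05):
    """Smallest per-group sample size whose approximate power reaches the target."""
    n = 2
    while approx_power(d, n, alpha) < target:
        n += 1
    return n

true_d, observed_d = 0.3, 0.5        # the original study overestimates a small true effect
n_rep = n_for_power(observed_d)      # replication sized for the observed (inflated) effect
print(f"replication n per group: {n_rep}")
print(f"intended power: 0.80, power against the true effect: {approx_power(true_d, n_rep):.2f}")
```

Sizing the replication on the inflated estimate gives roughly 60-65 participants per group, but power against the true effect of 0.3 comes out near 0.4 rather than the intended 0.8, which is the low-success-rate scenario the entry describes.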
References
Journal Article (DOI)

The case for motivated reasoning.

TL;DR: It is proposed that motivation may affect reasoning through reliance on a biased set of cognitive processes (that is, strategies for accessing, constructing, and evaluating beliefs) that are considered most likely to yield the desired conclusion.

Why Most Published Research Findings Are False

TL;DR: In this paper, the author discusses the factors that make published research findings less likely to be true and suggests that claimed research findings may often be simply accurate measures of the prevailing bias.
Journal Article (DOI)

Group sequential methods in the design and analysis of clinical trials

TL;DR: A group sequential design is proposed in which patient entry is divided into a number of equal-sized groups, and the decision to stop or continue the trial is based on repeated significance tests of the accumulated data after each group is evaluated (see the sketch after this reference list).
Journal Article (DOI)

Measuring the Prevalence of Questionable Research Practices With Incentives for Truth Telling

TL;DR: It is found that the percentage of respondents who had engaged in questionable practices was surprisingly high, suggesting that some questionable practices may constitute the prevailing research norm.
Journal Article (DOI)

Attribution of success and failure revisited, or: The motivational bias is alive and well in attribution theory

TL;DR: The authors found that self-serving effects for both success and failure are obtained in most but not all experimental paradigms, and that these attributions are better understood in motivational than in information-processing terms.
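The group sequential entry above describes repeated significance testing on accumulating data. A minimal sketch of the idea, assuming normal outcomes, equal group sizes, and five interim looks (illustrative choices, not the cited paper's design), shows why the per-look threshold must be stricter than the overall alpha if the trial may stop at any look.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def overall_type1_error(per_look_alpha, n_looks=5, group_size=20, n_sims=5000):
    """Probability of ever rejecting a true null when a two-sample t-test is run
    after each successive group of patients, stopping at the first rejection."""
    false_positives = 0
    for _ in range(n_sims):
        a = np.empty(0)
        b = np.empty(0)
        for _ in range(n_looks):
            a = np.append(a, rng.normal(size=group_size))
            b = np.append(b, rng.normal(size=group_size))
            if stats.ttest_ind(a, b).pvalue < per_look_alpha:
                false_positives += 1
                break
    return false_positives / n_sims

# Testing at 0.05 after every look inflates the overall error rate well above 0.05;
# a stricter per-look threshold (about 0.016 for five looks, in the spirit of
# Pocock's boundary) keeps it close to the nominal level.
print(f"per-look 0.05:  overall type I error ~ {overall_type1_error(0.05):.3f}")
print(f"per-look 0.016: overall type I error ~ {overall_type1_error(0.016):.3f}")
```

This is the same repeated-testing mechanism that the main article's simulations exploit as a researcher degree of freedom; group sequential designs control it by pre-specifying the looks and adjusting the per-look threshold.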
Related Papers (5)

Estimating the reproducibility of psychological science

Alexander A. Aarts, +290 more
28 Aug 2015