Open Access · Journal Article · DOI

Replication, Communication, and the Population Dynamics of Scientific Discovery

TLDR
A mathematical model of scientific discovery combining hypothesis formation, replication, publication bias, and variation in research quality is developed; it shows that communication of negative replications may aid true discovery even when replication attempts have diminished power.
Abstract
Many published research results are false (Ioannidis, 2005), and controversy continues over the roles of replication and publication policy in improving the reliability of research. Addressing these problems is frustrated by the lack of a formal framework that jointly represents hypothesis formation, replication, publication bias, and variation in research quality. We develop a mathematical model of scientific discovery that combines all of these elements. This model provides both a dynamic model of research and a formal framework for reasoning about the normative structure of science. We show that replication may serve as a ratchet that gradually separates true hypotheses from false, but the same factors that make initial findings unreliable also make replications unreliable. The most important factors in improving the reliability of research are the rate of false positives and the base rate of true hypotheses, and we offer suggestions for addressing each. Our results also bring clarity to verbal debates about the communication of research. Surprisingly, publication bias is not always an obstacle, but instead may have positive impacts—suppression of negative novel findings is often beneficial. We also find that communication of negative replications may aid true discovery even when attempts to replicate have diminished power. The model speaks constructively to ongoing debates about the design and conduct of science, focusing analysis and discussion on precise, internally consistent models, as well as highlighting the importance of population dynamics.
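The ratchet dynamic described in the abstract can be illustrated with a minimal Bayesian sketch. This is not the authors' actual population model; the parameter values (base rate `b`, false-positive rate `alpha`, power) are assumptions chosen only for illustration. A hypothesis is true with probability `b`; each study returns a positive result with probability `power` if the hypothesis is true and `alpha` if it is false. Conditioning on a run of positive results shows how successive replications gradually separate true hypotheses from false ones.

```python
# Minimal sketch (not the paper's model): replication as a ratchet that
# raises the probability a hypothesis is true after repeated positives.
b = 0.1        # base rate of true hypotheses (assumed for illustration)
alpha = 0.05   # false-positive rate of a single study (assumed)
power = 0.8    # probability a study detects a true effect (assumed)

def posterior_after_positives(k, b=b, alpha=alpha, power=power):
    """P(hypothesis is true | k consecutive positive results), by Bayes' rule."""
    true_path = b * power ** k          # true hypothesis yields k positives
    false_path = (1 - b) * alpha ** k   # false hypothesis yields k positives
    return true_path / (true_path + false_path)

for k in range(4):
    print(k, round(posterior_after_positives(k), 3))
```

With these assumed numbers, a single positive result leaves only a 0.64 probability that the hypothesis is true, while two successful replications push it above 0.96, consistent with the abstract's claim that replication can gradually separate true hypotheses from false while a single unreliable finding cannot.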



Citations
Journal Article DOI

The Natural Selection of Bad Science

TL;DR: A 60-year meta-analysis of statistical power in the behavioural sciences shows that power has not improved despite repeated demonstrations of the necessity of increasing it, and that replication slows but does not stop the process of methodological deterioration.
Journal Article DOI

“Fake News” Is Not Simply False Information: A Concept Explication and Taxonomy of Online Content

TL;DR: The authors conduct an explication of “fake news,” a concept that has ballooned to include more than simply false information, with partisans weaponizing it to cast aspersions on the veracity of claims made by their political opponents.
Journal Article DOI

Publication bias and the canonization of false facts.

TL;DR: It is found that unless a sufficient fraction of negative results is published, false claims frequently become canonized as fact; publishing more negative results would allow true and false claims to be more readily distinguished.
Book Chapter DOI

Models Are Stupid, and We Need More of Them

TL;DR: The authors argue that the problem with most verbal models is that many different specifications of a system's parts and relationships are consistent with the same verbal model; formal models remove this ambiguity, even at the risk of appearing stupid.
References
Journal Article DOI

The file drawer problem and tolerance for null results

TL;DR: Quantitative procedures for computing the tolerance for filed and future null results are reported and illustrated, and the implications are discussed.
Journal Article DOI

Power failure: why small sample size undermines the reliability of neuroscience

TL;DR: It is shown that the average statistical power of studies in the neurosciences is very low, and the consequences include overestimates of effect size and low reproducibility of results.

Why Most Published Research Findings Are False

TL;DR: In this paper, the authors discuss the implications of these problems for the conduct and interpretation of research and suggest that claimed research findings may often be simply accurate measures of the prevailing bias.
Journal Article DOI

False-Positive Psychology: Undisclosed Flexibility in Data Collection and Analysis Allows Presenting Anything as Significant

TL;DR: It is shown that despite empirical psychologists’ nominal endorsement of a low rate of false-positive findings, flexibility in data collection, analysis, and reporting dramatically increases actual false-positive rates, and a simple, low-cost, and straightforwardly effective disclosure-based solution is suggested.
Book

Conjectures and Refutations: The Growth of Scientific Knowledge

Karl Popper
TL;DR: A collection of classic essays written throughout Popper's illustrious career, expounding and defending his 'fallibilist' theory of knowledge and scientific discovery.
Related Papers (5)

Estimating the reproducibility of psychological science

Alexander A. Aarts, +290 more
28 Aug 2015