Journal ArticleDOI

The Power of Bias in Economics Research

TLDR
The authors survey 159 empirical economics literatures that draw upon 64,076 estimates of economic parameters reported in more than 6,700 empirical studies to investigate two critical dimensions of the credibility of empirical economics research: statistical power and bias.
Abstract
We investigate two critical dimensions of the credibility of empirical economics research: statistical power and bias. We survey 159 empirical economics literatures that draw upon 64,076 estimates of economic parameters reported in more than 6,700 empirical studies. Half of the research areas have nearly 90% of their results under-powered. The median statistical power is 18%, or less. A simple weighted average of those reported results that are adequately powered (power ≥ 80%) reveals that nearly 80% of the reported effects in these empirical economics literatures are exaggerated; typically, by a factor of two and with one-third inflated by a factor of four or more.
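
To make the abstract's quantities concrete, here is a minimal Python sketch (my reconstruction, not the authors' code) of how statistical power and the exaggeration factor can be computed, assuming the "true" effect is proxied by a benchmark such as the weighted average of adequately powered estimates (WAAP):

```python
import numpy as np
from scipy.stats import norm

Z_CRIT = norm.ppf(0.975)  # two-sided 5% critical value, ~1.96

def power(true_effect, se):
    """Power of a two-sided z-test to detect `true_effect` at alpha = .05."""
    ratio = np.abs(true_effect) / np.asarray(se, dtype=float)
    return norm.cdf(ratio - Z_CRIT) + norm.cdf(-ratio - Z_CRIT)

def waap(estimates, ses, benchmark):
    """Inverse-variance weighted average of adequately powered estimates."""
    estimates, ses = np.asarray(estimates, float), np.asarray(ses, float)
    adequate = power(benchmark, ses) >= 0.80      # "adequate" = power >= 80%
    w = 1.0 / ses[adequate] ** 2
    return np.sum(w * estimates[adequate]) / np.sum(w)

# Exaggeration of one reported estimate relative to the WAAP benchmark:
# exaggeration = reported_estimate / waap(estimates, ses, benchmark)
```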


Citations
Posted Content

Meta-Regression Methods for Detecting and Estimating Empirical Effects in the Presence of Publication Selection

TL;DR: This study investigates the small-sample performance of meta-regression methods for detecting and estimating genuine empirical effects in research literatures tainted by publication selection, and finds these methods to be robust to such selection.
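
One common reading of these meta-regression methods is the FAT-PET specification, which regresses reported effects on their standard errors; a hedged sketch (standard formulation, not the paper's own code):

```python
import numpy as np
import statsmodels.api as sm

def fat_pet(effects, ses):
    """FAT-PET meta-regression: effect_i = b0 + b1 * SE_i + error (WLS)."""
    effects, ses = np.asarray(effects, float), np.asarray(ses, float)
    X = sm.add_constant(ses)
    fit = sm.WLS(effects, X, weights=1.0 / ses**2).fit()
    return {
        "pet_effect": fit.params[0],   # intercept: effect as SE -> 0 (selection-corrected)
        "fat_slope": fit.params[1],    # slope != 0 signals funnel asymmetry / selection
        "p_values": fit.pvalues,
    }
```
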
Journal ArticleDOI

Correcting for Bias in Psychology: A Comparison of Meta-Analytic Methods

TL;DR: In this article, the authors compare a variety of statistical approaches proposed for meta-analysis, and show that publication bias and questionable research practices in primary research can lead to badly overestimated effects in meta-analysis.
Journal ArticleDOI

What meta-analyses reveal about the replicability of psychological research.

TL;DR: The low power and high heterogeneity that the survey finds fully explain recent difficulties in replicating highly regarded psychological studies, and reveal challenges for scientific progress in psychology.
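
The heterogeneity the summary refers to is conventionally measured with Cochran's Q and the I² index; a small illustrative sketch using the standard formulas:

```python
import numpy as np

def heterogeneity(effects, ses):
    """Cochran's Q and the I^2 heterogeneity index (standard formulas)."""
    effects, ses = np.asarray(effects, float), np.asarray(ses, float)
    w = 1.0 / ses**2
    pooled = np.sum(w * effects) / np.sum(w)        # fixed-effect pooled mean
    q = np.sum(w * (effects - pooled) ** 2)         # Cochran's Q
    df = len(effects) - 1
    i2 = 0.0 if q == 0 else max(0.0, (q - df) / q)  # share of variation beyond chance
    return q, i2
```
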
Journal ArticleDOI

Neither fixed nor random: weighted least squares meta-regression.

TL;DR: It is shown how and why an unrestricted weighted least squares MRA (WLS-MRA) estimator is superior to conventional random-effects (or mixed-effects) meta-regression when there is publication (or small-sample) bias, is as good as fixed-effects MRA (FE-MRA) in all cases, and is better than fixed effects in most practical applications.
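
A minimal sketch of an unrestricted WLS meta-average, as I read the method described here (an intercept-only weighted regression whose error variance is estimated freely rather than fixed at 1; not the authors' code):

```python
import numpy as np
import statsmodels.api as sm

def wls_average(effects, ses):
    """Unrestricted WLS: fixed-effect point estimate, multiplicative variance."""
    effects, ses = np.asarray(effects, float), np.asarray(ses, float)
    X = np.ones_like(effects)                       # intercept-only meta-regression
    fit = sm.WLS(effects, X, weights=1.0 / ses**2).fit()
    # statsmodels estimates the error scale from the data, so the standard
    # error already carries the multiplicative dispersion adjustment.
    return fit.params[0], fit.bse[0]
```
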
Journal ArticleDOI

Aid, China, and Growth: Evidence from a New Global Development Finance Dataset

TL;DR: In this article, the authors introduce a new dataset of official financing from China to 138 countries between 2000 and 2014, and investigate whether and to what extent Chinese aid affects economic growth in recipient countries.
References
Journal ArticleDOI

Bias in meta-analysis detected by a simple, graphical test

TL;DR: Funnel plots, plots of the trials' effect estimates against sample size, are skewed and asymmetrical in the presence of publication bias and other biases. Funnel plot asymmetry, measured by regression analysis, predicts discordance of results when meta-analyses are compared with single large trials.
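
The graphical test itself is easy to reproduce; a sketch of a standard funnel plot (precision against effect size, per Egger's formulation; the regression version of the test corresponds to the FAT slope in the earlier sketch), assuming matplotlib:

```python
import numpy as np
import matplotlib.pyplot as plt

def funnel_plot(effects, ses):
    effects, ses = np.asarray(effects, float), np.asarray(ses, float)
    w = 1.0 / ses**2
    pooled = np.sum(w * effects) / np.sum(w)   # pooled effect as reference line
    plt.scatter(effects, 1.0 / ses, s=12)
    plt.axvline(pooled, linestyle="--")
    plt.xlabel("effect estimate")
    plt.ylabel("precision (1 / SE)")
    plt.title("Funnel plot: asymmetry suggests selection")
    plt.show()
```
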
Book

Data Mining

Ian Witten
TL;DR: In this paper, generalized estimating equations (GEE), computed using PROC GENMOD in SAS, and multilevel analysis of clustered binary data using generalized linear mixed-effects models with PROC LOGISTIC are discussed.
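
The GEE analysis this summary names has a close Python analogue; a hedged sketch using statsmodels in place of PROC GENMOD, with synthetic data and hypothetical column names:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Synthetic clustered binary data (hypothetical columns y, x, cluster).
rng = np.random.default_rng(0)
df = pd.DataFrame({"cluster": np.repeat(np.arange(20), 5),  # 20 clusters of 5
                   "x": rng.normal(size=100)})
df["y"] = (rng.random(100) < 1.0 / (1.0 + np.exp(-df["x"]))).astype(int)

# Logistic GEE with exchangeable within-cluster correlation, the usual
# statsmodels counterpart to a PROC GENMOD REPEATED analysis in SAS.
model = smf.gee("y ~ x", groups="cluster", data=df,
                family=sm.families.Binomial(),
                cov_struct=sm.cov_struct.Exchangeable())
print(model.fit().summary())
```
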
Book

Statistical Methods for Meta-Analysis

TL;DR: In this article, the authors present methods for estimating the effect size from a series of experiments, combining a fixed-effect model with a general linear model to estimate the effect magnitude.
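
The fixed-effect combination referred to here is conventionally written with inverse-variance weights (standard formulas, not the book's own notation):

```latex
\hat{\theta}_{\mathrm{FE}}
  = \frac{\sum_{i=1}^{k} w_i\, y_i}{\sum_{i=1}^{k} w_i},
\qquad
w_i = \frac{1}{\sigma_i^2},
\qquad
\mathrm{SE}\bigl(\hat{\theta}_{\mathrm{FE}}\bigr)
  = \frac{1}{\sqrt{\sum_{i=1}^{k} w_i}}
```

where y_i is the effect estimate from study i and σ_i its standard error: precise studies get large weights, and the pooled standard error shrinks as studies accumulate.
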
Journal ArticleDOI

The file drawer problem and tolerance for null results

TL;DR: Quantitative procedures for computing the tolerance for filed and future null results are reported and illustrated, and the implications are discussed.
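
The file-drawer tolerance is usually quantified with Rosenthal's fail-safe N, the number of hidden null results that would overturn a combined significance test; a sketch of the standard formula (my own implementation):

```python
import numpy as np
from scipy.stats import norm

def fail_safe_n(z_scores, alpha=0.05):
    """How many filed-away null results (mean z = 0) would push the
    Stouffer-combined p-value above `alpha`? (Rosenthal, 1979)"""
    z = np.asarray(z_scores, dtype=float)
    z_crit = norm.ppf(1 - alpha)                  # one-tailed critical z
    n = (z.sum() ** 2) / z_crit**2 - len(z)
    return max(0, int(np.floor(n)))

# e.g. fail_safe_n([2.1, 2.5, 1.9, 3.0]) -> tolerated number of hidden nulls
```
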
Journal ArticleDOI

Power failure: why small sample size undermines the reliability of neuroscience

TL;DR: It is shown that the average statistical power of studies in the neurosciences is very low, and the consequences include overestimates of effect size and low reproducibility of results.
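
The link between low power and overestimated effects can be shown with a short simulation (my own illustration, with arbitrary parameter values): estimates that clear the significance bar systematically overshoot a small true effect:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
true_effect, se = 0.2, 0.15            # small true effect, noisy design
z_crit = norm.ppf(0.975)

estimates = rng.normal(true_effect, se, size=100_000)
significant = np.abs(estimates / se) > z_crit
print("power ~", significant.mean())                         # well under 80%
print("mean significant estimate:", estimates[significant].mean())
# The conditional mean lands near 0.4, roughly double the true 0.2:
# with low power, only inflated estimates reach significance.
```
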
Related Papers (5)

Estimating the reproducibility of psychological science

Alexander A. Aarts, +290 more
28 Aug 2015