Journal Article

Sample-Size Planning for More Accurate Statistical Power: A Method Adjusting Sample Effect Sizes for Publication Bias and Uncertainty.

Samantha F. Anderson, +2 more
13 Sep 2017, Vol. 28, Iss. 11, pp. 1547-1562
TLDR
This work presents an alternative approach that adjusts sample effect sizes for bias and uncertainty and demonstrates its effectiveness for several experimental designs.
Abstract
The sample size necessary to obtain a desired level of statistical power depends in part on the population value of the effect size, which is, by definition, unknown. A common approach to sample-size planning uses the sample effect size from a prior study as an estimate of the population value of the effect to be detected in the future study. Although this strategy is intuitively appealing, effect-size estimates, taken at face value, are typically not accurate estimates of the population effect size because of publication bias and uncertainty. We show that the use of this approach often results in underpowered studies, sometimes to an alarming degree. We present an alternative approach that adjusts sample effect sizes for bias and uncertainty, and we demonstrate its effectiveness for several experimental designs. Furthermore, we discuss an open-source R package, BUCSS, and user-friendly Web applications that we have made available to researchers so that they can easily implement our suggested methods.
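The article's own implementation is the BUCSS R package and its companion Web applications; as a language-agnostic illustration of the problem it addresses, the following Python sketch (all parameter values are illustrative assumptions, not values from the article) simulates how planning a follow-up study from a published, statistically significant sample effect size tends to deliver less than the nominal power.

```python
# Minimal sketch, not the authors' BUCSS implementation: a Monte Carlo
# illustration of why planning n from a published sample effect size
# tends to yield underpowered studies. All numbers are assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_d = 0.30        # assumed population standardized mean difference
n_prior = 40         # per-group n of the hypothetical prior study
alpha, target_power = 0.05, 0.80
n_sims = 2000

def required_n_per_group(d, alpha=0.05, power=0.80):
    """Smallest per-group n giving at least `power` for a two-sided two-sample t test."""
    n = 2
    while True:
        df = 2 * n - 2
        nc = d * np.sqrt(n / 2)                      # noncentrality parameter
        t_crit = stats.t.ppf(1 - alpha / 2, df)
        achieved = (1 - stats.nct.cdf(t_crit, df, nc)
                    + stats.nct.cdf(-t_crit, df, nc))
        if achieved >= power:
            return n
        n += 1

planned_ns = []
for _ in range(n_sims):
    # Simulate a "prior study"; keep it only if it is significant in the
    # expected direction, mimicking publication bias.
    x = rng.normal(true_d, 1.0, n_prior)
    y = rng.normal(0.0, 1.0, n_prior)
    t, p = stats.ttest_ind(x, y)
    if p < alpha and t > 0:
        d_obs = (x.mean() - y.mean()) / np.sqrt((x.var(ddof=1) + y.var(ddof=1)) / 2)
        planned_ns.append(required_n_per_group(d_obs, alpha, target_power))

print("n per group actually needed:", required_n_per_group(true_d, alpha, target_power))
print("median n planned from published prior studies:", int(np.median(planned_ns)))
# The planned n is typically far below the n actually needed, so the
# follow-up studies run at well under the nominal 80% power.
```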


Citations
Journal Article

How Many Participants Do We Have to Include in Properly Powered Experiments? A Tutorial of Power Analysis with Reference Tables.

TL;DR: In this article, the authors describe reference numbers of participants needed for the designs most often used by psychologists, including single-variable between-groups and repeated-measures designs with two and three levels, two-factor designs involving two repeated-measures variables, and split-plot designs combining one between-groups and one repeated-measures variable.
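As a rough companion to such reference numbers, a minimal Python sketch of solving for the per-group sample size of a two-group between-subjects design; the effect size d = 0.4 is an assumed illustrative value, not one taken from the tutorial.

```python
# Minimal sketch: per-group n for a two-group between-subjects design.
# The effect size below is an assumption for illustration.
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(effect_size=0.4, alpha=0.05,
                                          power=0.80, alternative='two-sided')
print(f"required n per group: {n_per_group:.1f}")   # roughly 99
```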
Posted Content

Sample Size Justification

Daniel Lakens
TL;DR: In this paper, six approaches are discussed to justify the sample size in a quantitative empirical study: collecting data from (almost) the entire population, choosing a sample size based on resource constraints, performing an a-priori power analysis, planning for a desired accuracy, using heuristics, or explicitly acknowledging the absence of a justification.
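A minimal sketch of one of these approaches, planning for a desired accuracy: pick n so that the expected half-width of a 95% confidence interval for a mean stays at or below a target margin. The sd and margin values are illustrative assumptions, not figures from the paper.

```python
# Minimal sketch of accuracy-based planning: find the smallest n whose
# t-based confidence-interval half-width is within the target margin.
from scipy import stats

def n_for_margin(sd, margin, conf=0.95):
    n = 2
    while True:
        t_crit = stats.t.ppf(1 - (1 - conf) / 2, df=n - 1)
        if t_crit * sd / n ** 0.5 <= margin:
            return n
        n += 1

print(n_for_margin(sd=1.0, margin=0.2))   # about 99 participants
```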
Journal Article

A practical primer to power analysis for simple experimental designs

TL;DR: In this paper, the focus is on applications of power analysis for experimental designs often encountered in psychology, starting from simple independent-groups and paired-samples designs and moving to one-way analysis of variance, factorial designs, contrast analysis, trend analysis, regression analysis, analysis of covariance, and mediation analysis.
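A minimal sketch for one of the designs the primer covers, a one-way analysis of variance; Cohen's f = 0.25 and k = 3 groups are assumed illustrative values, not figures from the paper.

```python
# Minimal sketch: total N for a one-way ANOVA at 80% power.
from statsmodels.stats.power import FTestAnovaPower

n_total = FTestAnovaPower().solve_power(effect_size=0.25, alpha=0.05,
                                        power=0.80, k_groups=3)
print(f"total N required: {n_total:.1f}")   # about 158 across the 3 groups
```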
Journal Article

Resting-state functional brain connectivity best predicts the personality dimension of openness to experience.

TL;DR: Openness to experience emerged as the only reliably predicted personality factor; the personality factors were derived from a principal components analysis of the Neuroticism/Extraversion/Openness Five-Factor Inventory scores, which reduced noise and enhanced the precision of these measures of personality.
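A minimal sketch of the kind of preprocessing the summary mentions, a principal components analysis of questionnaire scores to obtain less noisy composites; the data below are random placeholders, not NEO-FFI responses.

```python
# Minimal sketch: PCA of questionnaire factor scores (placeholder data).
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
scores = rng.normal(size=(100, 5))        # 100 participants x 5 factor scores
pca = PCA(n_components=5)
components = pca.fit_transform(scores)    # orthogonal component scores
print(pca.explained_variance_ratio_)
```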
Journal Article

Publication bias examined in meta-analyses from psychology and medicine: A meta-meta-analysis.

TL;DR: Examination of a large-scale data set of primary studies included in 83 meta-analyses published in Psychological Bulletin and 499 systematic reviews from the Cochrane Database of Systematic Reviews found evidence of publication bias that appeared to be similar in psychology and medicine.
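A minimal sketch of one widely used publication-bias check, Egger's regression test for funnel-plot asymmetry; this is not necessarily the method used in the meta-meta-analysis, and the effects and standard errors below are simulated placeholders.

```python
# Minimal sketch: Egger's regression test on simulated meta-analytic data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
se = rng.uniform(0.05, 0.5, size=50)       # standard errors of 50 studies
effects = rng.normal(0.2, se)              # observed study effects

# Regress standardized effects on precision; an intercept far from zero
# suggests small-study (funnel-plot) asymmetry.
res = stats.linregress(1 / se, effects / se)
t_int = res.intercept / res.intercept_stderr
p_int = 2 * stats.t.sf(abs(t_int), df=len(se) - 2)
print(f"Egger intercept = {res.intercept:.2f}, p = {p_int:.3f}")
```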
References
Book

Statistical Power Analysis for the Behavioral Sciences

TL;DR: This book presents the concepts of statistical power analysis and works through power calculations for procedures such as the t test for means, the sign test, and chi-square tests for goodness of fit and contingency tables.
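A minimal sketch of one calculation of this kind, the power of a chi-square goodness-of-fit test using Cohen's effect size w; the values of w, N, and the number of cells are assumptions for illustration.

```python
# Minimal sketch: power of a chi-square goodness-of-fit test.
from scipy import stats

w, N, cells, alpha = 0.3, 100, 4, 0.05       # assumed effect size w, sample size, cells
df = cells - 1
crit = stats.chi2.ppf(1 - alpha, df)         # central chi-square critical value
power = stats.ncx2.sf(crit, df, N * w ** 2)  # noncentral chi-square tail probability
print(f"power = {power:.3f}")
```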
Journal Article

G*Power 3: A flexible statistical power analysis program for the social, behavioral, and biomedical sciences

TL;DR: G*Power 3 provides improved effect size calculators and graphic options, supports both distribution-based and design-based input modes, and offers all types of power analyses in which users might be interested.
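A minimal sketch of the kind of calculation G*Power automates, not G*Power's own code: converting group means and standard deviations into Cohen's d and then computing achieved power; all input values are illustrative assumptions.

```python
# Minimal sketch: effect-size calculator plus achieved power for a two-sample t test.
from math import sqrt
from statsmodels.stats.power import TTestIndPower

m1, m2, sd1, sd2, n1, n2 = 10.0, 10.5, 1.0, 1.2, 50, 50   # assumed inputs
pooled_sd = sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
d = (m2 - m1) / pooled_sd
power = TTestIndPower().power(effect_size=d, nobs1=n1, ratio=n2 / n1, alpha=0.05)
print(f"d = {d:.2f}, power = {power:.2f}")
```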
Journal Article

Power failure: why small sample size undermines the reliability of neuroscience

TL;DR: It is shown that the average statistical power of studies in the neurosciences is very low, and the consequences include overestimates of effect size and low reproducibility of results.
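One way the article connects low power to unreliable findings is through the positive predictive value, PPV = (1 - beta)R / ((1 - beta)R + alpha), where R is the pre-study odds that a probed effect is real; a minimal sketch with an assumed value of R:

```python
# Minimal sketch: positive predictive value as a function of power.
def ppv(power, alpha=0.05, prior_odds=0.25):
    """PPV = (1 - beta) * R / ((1 - beta) * R + alpha); prior_odds R is assumed."""
    return power * prior_odds / (power * prior_odds + alpha)

for p in (0.8, 0.5, 0.2):
    print(f"power = {p:.1f} -> PPV = {ppv(p):.2f}")
```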
Journal Article

Estimating the reproducibility of psychological science

Alexander A. Aarts, +290 more
28 Aug 2015
TL;DR: A large-scale assessment suggests that experimental reproducibility in psychology leaves a lot to be desired, and correlational tests suggest that replication success was better predicted by the strength of original evidence than by characteristics of the original and replication teams.