Open Access Journal Article (DOI)

Using Bayes to get the most out of non-significant results

Zoltan Dienes
29 Jul 2014 · Vol. 5, p. 781
TL;DR: It is argued that Bayes factors allow theory to be linked to data in a way that overcomes the weaknesses of the other approaches, and that they provide a coherent way to determine whether non-significant results support a null hypothesis over a theory, or whether the data are just insensitive.
Abstract
No scientific conclusion follows automatically from a statistically non-significant result, yet people routinely use non-significant results to guide conclusions about the status of theories (or the effectiveness of practices). To know whether a non-significant result counts against a theory, or if it just indicates data insensitivity, researchers must use one of: power, intervals (such as confidence or credibility intervals), or else an indicator of the relative evidence for one theory over another, such as a Bayes factor. I argue Bayes factors allow theory to be linked to data in a way that overcomes the weaknesses of the other approaches. Specifically, Bayes factors use the data themselves to determine their sensitivity in distinguishing theories (unlike power), and they make use of those aspects of a theory’s predictions that are often easiest to specify (unlike power and intervals, which require specifying the minimal interesting value in order to address theory). Bayes factors provide a coherent approach to determining whether non-significant results support a null hypothesis over a theory, or whether the data are just insensitive. They allow accepting and rejecting the null hypothesis to be put on an equal footing. Concrete examples are provided to indicate the range of application of a simple online Bayes calculator, which reveal both the strengths and weaknesses of Bayes factors.
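As a rough sketch of the kind of calculation the abstract describes, the following computes a Bayes factor comparing a point null (effect = 0) against a theory represented by a half-normal prior on the effect size, in the spirit of Dienes' approach. This is an illustrative assumption, not the online calculator's actual code: the function names, the half-normal prior, and the example numbers are all chosen here for demonstration.

```python
from math import exp, pi, sqrt

def normal_pdf(x, mu, sd):
    """Density of a normal distribution at x."""
    return exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * sqrt(2 * pi))

def bayes_factor(mean_diff, se, prior_sd, n_steps=10_000):
    """Bayes factor B for H1 over H0, by midpoint-rule integration.

    H0: the true effect is 0.
    H1: the true effect is half-normal(0, prior_sd), i.e. the theory
        predicts a positive effect of roughly prior_sd in magnitude.
    mean_diff and se are the observed effect and its standard error.
    """
    # Likelihood of the observed mean difference under the null
    p_h0 = normal_pdf(mean_diff, 0.0, se)

    # Marginal likelihood under H1: average the likelihood over the prior
    upper = 5 * prior_sd            # covers essentially all prior mass
    step = upper / n_steps
    p_h1 = 0.0
    norm = 0.0
    for i in range(n_steps):
        delta = (i + 0.5) * step                  # midpoint of the slice
        w = normal_pdf(delta, 0.0, prior_sd)      # half-normal prior weight
        p_h1 += w * normal_pdf(mean_diff, delta, se) * step
        norm += w * step
    p_h1 /= norm                    # renormalise the truncated prior

    return p_h1 / p_h0

# A non-significant result: observed difference 2, SE 4 (t = 0.5),
# with the theory predicting an effect of about 10 units.
b = bayes_factor(2.0, 4.0, 10.0)
```

With conventional thresholds (B > 3 as substantial evidence for the theory, B < 1/3 as substantial evidence for the null), a value between 1/3 and 3, as in this example, indicates that the data are insensitive: the non-significant result neither supports nor counts against the theory.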


Citations
Journal Article (DOI)

Caveats in science-based news stories communicate caution without lowering interest.

TL;DR: It is suggested that science writers should include caveats in news reporting and that they can do so without fear of disengaging their readers or losing news uptake.
Journal Article (DOI)

Judgment of Learning Accuracy in High-functioning Adolescents and Adults with Autism Spectrum Disorder.

TL;DR: It is suggested that JOL accuracy is unimpaired in ASD, which has important implications for both theories of metacognition in ASD and educational practice.
Journal Article (DOI)

Null hypothesis significance testing: a guide to commonly misunderstood concepts and recommendations for good practice

TL;DR: The concepts behind the method are summarized, distinguishing tests of significance (Fisher) from tests of acceptance (Neyman-Pearson); common interpretation errors regarding the p-value are pointed out, and simple reporting practices are proposed.
Journal Article (DOI)

The effect of experience and olfactory cue in an inhibitory control task in guppies, Poecilia reticulata

TL;DR: The results seem to exclude methodological explanations for the high inhibitory control score of guppies, and they indicate that even teleost fish can display efficient inhibitory control.
Journal Article (DOI)

Factors affecting the measure of inhibitory control in a fish (Poecilia reticulata).

TL;DR: The study revealed that some of the factors affecting inhibitory control in warm-blooded vertebrates also modulate the performance of fish, which should be taken into account when comparing this function across species.
References
Book

Statistical Power Analysis for the Behavioral Sciences

TL;DR: The concepts of power analysis are discussed, covering chi-square tests for goodness of fit and contingency tables, the t-test for means, and the sign test.
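The abstract's criticism of power, that it requires specifying an assumed effect size before it can address theory, can be made concrete with a small calculation. The sketch below uses a normal approximation to the power of a two-sided two-sample t-test; it is an illustrative assumption, not G*Power's exact noncentral-t computation, and the function name and example numbers are invented here.

```python
from math import sqrt
from statistics import NormalDist

def power_two_sample(d, n_per_group, alpha=0.05):
    """Approximate power of a two-sided two-sample t-test
    (normal approximation) for standardized effect size d.
    Note: the answer depends entirely on the assumed d."""
    z = NormalDist()
    z_crit = z.inv_cdf(1 - alpha / 2)   # critical value, e.g. 1.96
    ncp = d * sqrt(n_per_group / 2)     # noncentrality parameter
    return z.cdf(ncp - z_crit)

# Cohen's classic benchmark: d = 0.5 with 64 per group
# gives roughly 80% power.
p = power_two_sample(0.5, 64)
```

Because power must be computed against a single assumed effect size, a non-significant result from a "high-powered" study still says nothing about effects smaller than the assumed d, which is exactly the gap the Bayes factor approach is meant to close.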
Book

Model Selection and Multimodel Inference: A Practical Information-Theoretic Approach

TL;DR: The second edition of this book is unique in that it focuses on methods for making formal statistical inference from all the models in an a priori set (multimodel inference).
Journal Article (DOI)

Statistical power analyses using G*Power 3.1: tests for correlation and regression analyses.

TL;DR: In the new version, procedures to analyze the power of tests based on single-sample tetrachoric correlations, comparisons of dependent correlations, bivariate linear regression, multiple linear regression based on the random predictor model, logistic regression, and Poisson regression are added.
Journal Article (DOI)

Bayesian data analysis.

TL;DR: A fatal flaw of NHST is reviewed and some benefits of Bayesian data analysis are introduced and illustrative examples of multiple comparisons in Bayesian analysis of variance and Bayesian approaches to statistical power are presented.
Journal Article (DOI)

Power failure: why small sample size undermines the reliability of neuroscience

TL;DR: It is shown that the average statistical power of studies in the neurosciences is very low, and the consequences include overestimates of effect size and low reproducibility of results.