Open Access · Journal Article · DOI

Using Bayes to get the most out of non-significant results

Zoltan Dienes
29 Jul 2014 · Vol. 5, p. 781
TLDR
It is argued that Bayes factors link theory to data in a way that overcomes the weaknesses of the other approaches, providing a coherent method for determining whether a non-significant result supports a null hypothesis over a theory, or whether the data are simply insensitive.
Abstract
No scientific conclusion follows automatically from a statistically non-significant result, yet people routinely use non-significant results to guide conclusions about the status of theories (or the effectiveness of practices). To know whether a non-significant result counts against a theory, or if it just indicates data insensitivity, researchers must use one of: power, intervals (such as confidence or credibility intervals), or else an indicator of the relative evidence for one theory over another, such as a Bayes factor. I argue Bayes factors allow theory to be linked to data in a way that overcomes the weaknesses of the other approaches. Specifically, Bayes factors use the data themselves to determine their sensitivity in distinguishing theories (unlike power), and they make use of those aspects of a theory’s predictions that are often easiest to specify (unlike power and intervals, which require specifying the minimal interesting value in order to address theory). Bayes factors provide a coherent approach to determining whether non-significant results support a null hypothesis over a theory, or whether the data are just insensitive. They allow accepting and rejecting the null hypothesis to be put on an equal footing. Concrete examples are provided to indicate the range of application of a simple online Bayes calculator, which reveal both the strengths and weaknesses of Bayes factors.
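The abstract refers to a simple online Bayes calculator. As a rough illustration of the kind of computation involved, the sketch below computes a Bayes factor for a sample mean difference, comparing a point null against a directional theory modeled as a half-normal distribution over plausible effect sizes (one of the ways H1 can be specified in such calculators). This is a minimal pure-Python sketch under those stated assumptions, not the calculator's actual implementation; the function names are illustrative.

```python
import math

def normal_pdf(x, mu, sd):
    """Density of a normal distribution at x."""
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def bayes_factor_halfnormal(mean_obs, se_obs, h1_sd, steps=10000):
    """Bayes factor B (H1 vs H0) for an observed mean difference.

    H0: true effect = 0.
    H1: true effect ~ half-normal(0, h1_sd), i.e. the theory predicts a
        direction, with smaller effects more plausible than larger ones
        (an illustrative assumption about how H1 is modeled).
    Likelihood of the data given an effect: normal(mean_obs; effect, se_obs).
    """
    # Marginal likelihood of the data under the null hypothesis
    like_h0 = normal_pdf(mean_obs, 0.0, se_obs)

    # Marginal likelihood under H1: integrate likelihood * prior over
    # candidate effects, using the midpoint rule on [0, 5 * h1_sd]
    upper = 5.0 * h1_sd
    step = upper / steps
    like_h1 = 0.0
    for i in range(steps):
        effect = (i + 0.5) * step
        prior = 2.0 * normal_pdf(effect, 0.0, h1_sd)  # half-normal density
        like_h1 += normal_pdf(mean_obs, effect, se_obs) * prior * step

    return like_h1 / like_h0
```

With an observed difference of 0 and a small standard error relative to the predicted effect (e.g. `bayes_factor_halfnormal(0.0, 1.0, 5.0)`), the factor falls below 1/3, which on conventional thresholds counts as substantial evidence for the null over the theory; a large observed difference yields a factor well above 3, evidence for the theory. A factor near 1 would indicate that the data are simply insensitive, which is the distinction the paper argues power analysis cannot make from the data themselves.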

Citations
Journal ArticleDOI

Fisher, Neyman-Pearson or NHST? A tutorial for teaching data testing

TL;DR: A tutorial for the teaching of data-testing procedures, often referred to as hypothesis-testing theories. It introduces Fisher's approach to data testing (tests of significance), then Neyman-Pearson's approach (tests of acceptance), and finally the incongruent combination of the two earlier theories into the current approach, NHST.
Book ChapterDOI

How can we measure awareness? An overview of current methods

TL;DR: This chapter discusses how the contents of awareness might be measured directly, as if with a consciousness-meter, making it possible to establish clear relationships between an external state of affairs, people's subjective experience of that state, and their overt behavior.
Journal ArticleDOI

How to quantify the evidence for the absence of a correlation

TL;DR: A suite of Bayes factor hypothesis tests that allow researchers to grade the decisiveness of the evidence that the data provide for the presence versus the absence of a correlation between two variables are presented.
Journal ArticleDOI

Using Bayes factors for testing hypotheses about intervention effectiveness in addictions research

TL;DR: Using Bayes factors when analysing data from randomized trials of interventions in addiction research can provide important information, leading to more precise conclusions than are typically obtained with currently prevailing methods.
Journal ArticleDOI

Increased prefrontal activity with aging reflects nonspecific neural responses rather than compensation

TL;DR: In this paper, the authors used a model-based multivariate analysis technique applied to two independent fMRI datasets from an adult-lifespan human sample (N = 123 and N = 115; approximately half female).
References
Book

Statistical Power Analysis for the Behavioral Sciences

TL;DR: The concepts of power analysis are discussed, covering chi-square tests for goodness of fit and contingency tables, t-tests for means, and the sign test.
Book

Model Selection and Multimodel Inference: A Practical Information-Theoretic Approach

TL;DR: The second edition of this book is unique in that it focuses on methods for making formal statistical inference from all the models in an a priori set (Multi-Model Inference).
Journal ArticleDOI

Statistical power analyses using G*Power 3.1: tests for correlation and regression analyses.

TL;DR: In the new version, procedures to analyze the power of tests based on single-sample tetrachoric correlations, comparisons of dependent correlations, bivariate linear regression, multiple linear regression based on the random predictor model, logistic regression, and Poisson regression are added.
Journal ArticleDOI

Bayesian data analysis.

TL;DR: A fatal flaw of NHST is reviewed and some benefits of Bayesian data analysis are introduced and illustrative examples of multiple comparisons in Bayesian analysis of variance and Bayesian approaches to statistical power are presented.
Journal ArticleDOI

Power failure: why small sample size undermines the reliability of neuroscience

TL;DR: It is shown that the average statistical power of studies in the neurosciences is very low, and the consequences include overestimates of effect size and low reproducibility of results.