Open Access · Journal Article · DOI

Using Bayes to get the most out of non-significant results

Zoltan Dienes
29 Jul 2014 · Vol. 5, pp. 781-781
TLDR
It is argued that Bayes factors allow theory to be linked to data in a way that overcomes the weaknesses of the other approaches, and that they provide a coherent approach to determining whether non-significant results support a null hypothesis over a theory, or whether the data are just insensitive.
Abstract
No scientific conclusion follows automatically from a statistically non-significant result, yet people routinely use non-significant results to guide conclusions about the status of theories (or the effectiveness of practices). To know whether a non-significant result counts against a theory, or if it just indicates data insensitivity, researchers must use one of: power, intervals (such as confidence or credibility intervals), or else an indicator of the relative evidence for one theory over another, such as a Bayes factor. I argue Bayes factors allow theory to be linked to data in a way that overcomes the weaknesses of the other approaches. Specifically, Bayes factors use the data themselves to determine their sensitivity in distinguishing theories (unlike power), and they make use of those aspects of a theory’s predictions that are often easiest to specify (unlike power and intervals, which require specifying the minimal interesting value in order to address theory). Bayes factors provide a coherent approach to determining whether non-significant results support a null hypothesis over a theory, or whether the data are just insensitive. They allow accepting and rejecting the null hypothesis to be put on an equal footing. Concrete examples are provided to indicate the range of application of a simple online Bayes calculator, which reveal both the strengths and weaknesses of Bayes factors.
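A minimal sketch of the kind of calculation such a Bayes calculator performs, assuming summary statistics (an observed effect and its standard error, with an approximately normal sampling distribution) and a half-normal model of the theory's prediction. The function name, grid settings, and half-normal choice are illustrative assumptions, not the online calculator's actual implementation:

```python
import numpy as np

def normal_pdf(x, mu, sd):
    return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

def bayes_factor_halfnormal(mean_obs, se_obs, h1_scale, grid_points=20_000):
    """Bayes factor B(H1 vs H0) from summary statistics.

    mean_obs, se_obs: observed effect and its standard error
    (sampling distribution assumed approximately normal).
    h1_scale: scale of a half-normal model of H1 -- a rough effect
    size the theory predicts (supplied by the user, an assumption).
    """
    # Likelihood of the data under the point null H0: delta = 0
    like_h0 = normal_pdf(mean_obs, 0.0, se_obs)

    # Marginal likelihood under H1: average the likelihood over a
    # half-normal prior on delta (theory predicts a positive effect)
    delta = np.linspace(0.0, 6 * h1_scale, grid_points)
    prior = 2 * normal_pdf(delta, 0.0, h1_scale)  # half-normal density
    integrand = normal_pdf(mean_obs, delta, se_obs) * prior
    # Trapezoidal integration over the uniform grid
    like_h1 = np.sum((integrand[1:] + integrand[:-1]) / 2) * (delta[1] - delta[0])

    return like_h1 / like_h0
```

By the conventional thresholds, B greater than 3 counts as substantial evidence for the theory over the null, B less than 1/3 as substantial evidence for the null over the theory, and values in between indicate the data are insensitive: the three verdicts the abstract distinguishes.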



Citations
Journal Article · DOI

Demand Characteristics Confound the Rubber Hand Illusion

TL;DR: In this article, a quasi-experimental design was employed to test for demand characteristics in rubber hand illusion reports, recording expectancies for standard ‘illusion’ and ‘control’ statements in synchronous and asynchronous conditions.
Journal Article · DOI

A Bayesian bird's eye view of 'Replications of important results in social psychology'.

TL;DR: Three Bayesian methods were applied to reanalyse the preregistered contributions to the Social Psychology special issue ‘Replications of Important Results in Social Psychology’, finding evidence of weak support for the null hypothesis over a default one-sided alternative.
Journal Article · DOI

Credit Assignment in a Motor Decision Making Task Is Influenced by Agency and Not Sensory Prediction Errors.

TL;DR: Tests the specific hypothesis that execution errors are implicitly signaled by cerebellar-based sensory prediction errors, compares it with a more “top-down” hypothesis in which the modulation of choice behavior by execution errors reflects participants’ sense of agency, and finds that sensory prediction errors have no significant effect on reinforcement learning.
Journal Article · DOI

Using Bayes factors to evaluate evidence for no effect: examples from the SIPS project

TL;DR: It is shown how Bayes factors can disambiguate the non-significant findings from the SIPS project and thus determine whether the findings represent evidence of absence or absence of evidence.
Journal Article · DOI

Objective Facebook behaviour

TL;DR: Objective Facebook behaviours should be considered when tackling problematic Facebook use (PFU); the analysis goes beyond self-reported information about such activities and helps clarify the role of the platform’s potentially addictive activities in predicting PFU.
References
Book

Statistical Power Analysis for the Behavioral Sciences

TL;DR: Discusses the concepts of power analysis and their application to common procedures, including chi-square tests for goodness of fit and contingency tables, the t-test for means, and the sign test.
Book

Model Selection and Multimodel Inference: A Practical Information-Theoretic Approach

TL;DR: The second edition of this book is unique in that it focuses on methods for making formal statistical inference from all the models in an a priori set (Multi-Model Inference).
Journal Article · DOI

Statistical power analyses using G*Power 3.1: tests for correlation and regression analyses.

TL;DR: In the new version, procedures to analyze the power of tests based on single-sample tetrachoric correlations, comparisons of dependent correlations, bivariate linear regression, multiple linear regression based on the random predictor model, logistic regression, and Poisson regression are added.
Journal Article · DOI

Bayesian data analysis.

TL;DR: Reviews a fatal flaw of null hypothesis significance testing (NHST), introduces some benefits of Bayesian data analysis, and presents illustrative examples of multiple comparisons in Bayesian analysis of variance and Bayesian approaches to statistical power.
Journal Article · DOI

Power failure: why small sample size undermines the reliability of neuroscience

TL;DR: It is shown that the average statistical power of studies in the neurosciences is very low, and the consequences include overestimates of effect size and low reproducibility of results.