Open Access - Journal Article - DOI

Using Bayes to get the most out of non-significant results

Zoltan Dienes
- 29 Jul 2014 - Vol. 5, p. 781
TLDR
It is argued that Bayes factors allow theory to be linked to data in a way that overcomes the weaknesses of the other approaches, and that they provide a coherent approach to determining whether non-significant results support a null hypothesis over a theory or whether the data are just insensitive.
Abstract
No scientific conclusion follows automatically from a statistically non-significant result, yet people routinely use non-significant results to guide conclusions about the status of theories (or the effectiveness of practices). To know whether a non-significant result counts against a theory, or if it just indicates data insensitivity, researchers must use one of: power, intervals (such as confidence or credibility intervals), or else an indicator of the relative evidence for one theory over another, such as a Bayes factor. I argue Bayes factors allow theory to be linked to data in a way that overcomes the weaknesses of the other approaches. Specifically, Bayes factors use the data themselves to determine their sensitivity in distinguishing theories (unlike power), and they make use of those aspects of a theory’s predictions that are often easiest to specify (unlike power and intervals, which require specifying the minimal interesting value in order to address theory). Bayes factors provide a coherent approach to determining whether non-significant results support a null hypothesis over a theory, or whether the data are just insensitive. They allow accepting and rejecting the null hypothesis to be put on an equal footing. Concrete examples are provided to indicate the range of application of a simple online Bayes calculator, which reveal both the strengths and weaknesses of Bayes factors.
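
As a hedged illustration only (this is a sketch, not the online calculator described in the paper; the function name, the half-normal model of the theory's predictions, and the example numbers are assumptions), a Bayes factor for a single mean can be computed by comparing how well the null hypothesis and the theory predict the observed data:

```python
# Minimal sketch of a Dienes-style Bayes factor for a single mean.
# Data are summarised by a sample mean and its standard error; H0 predicts an
# effect of exactly 0; H1 is modelled here as a half-normal centred on 0 with
# scale set to a rough predicted effect size (an illustrative assumption).
import numpy as np
from scipy import stats, integrate

def bayes_factor_half_normal(sample_mean, se, predicted_effect):
    """Return B, the evidence for H1 (half-normal model) over H0 (point null)."""
    def likelihood(delta):
        # Probability of the observed mean given a true effect delta,
        # approximated as normal with standard deviation equal to the SE.
        return stats.norm.pdf(sample_mean, loc=delta, scale=se)

    def integrand(delta):
        # Half-normal prior density on positive effects only.
        prior = 2 * stats.norm.pdf(delta, loc=0, scale=predicted_effect)
        return likelihood(delta) * prior

    p_data_h1, _ = integrate.quad(integrand, 0, np.inf)  # average over H1's predictions
    p_data_h0 = likelihood(0.0)                          # H0: effect is exactly zero
    return p_data_h1 / p_data_h0

# Example: a non-significant raw difference of 2 units with SE = 4, where the
# theory predicts effects of roughly 5 units. The result is close to 1,
# indicating the data are insensitive rather than supporting the null.
print(bayes_factor_half_normal(sample_mean=2.0, se=4.0, predicted_effect=5.0))
```

By the conventional thresholds used in the paper, B greater than 3 counts as substantial evidence for the theory over the null, B less than 1/3 as substantial evidence for the null over the theory, and values in between indicate the data are insensitive.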



Citations
Journal Article - DOI

Unconscious sources of familiarity can be strategically excluded in support of conscious task demands.

TL;DR: This paper investigated the circumstances under which judgments of familiarity are sensitive to task-irrelevant sources using the artificial grammar learning paradigm, a task known to be heavily reliant on familiarity-based responding.
Posted Content - DOI

Sensitivity to changes in rate of heartbeats as a measure of interoceptive ability.

TL;DR: Results indicate an overall tendency to report fewer heartbeats during accelerations in heart rate, which may be driven in part by respiration, with a reduction in heartbeat salience during inspiratory periods when heart rate typically increases.
Journal Article - DOI

Perceiving Time Differences When You Should Not: Applying the El Greco Fallacy to Hypnotic Time Distortions

TL;DR: The findings conform to an El Greco fallacy effect and challenge theories of hypnotic time distortion arguing that “trance” itself changes subjective time.
Journal Article - DOI

A closer look at children's metacognitive skills: The case of the distinctiveness heuristic.

TL;DR: Overall, Experiments 1 and 2 provide evidence that children as young as 4 years rely on the distinctiveness heuristic to guide their memory decisions, resulting in a reduction in the false recognition rate when items are presented using a pure-list design but not when they are presented using a mixed-list design.
Journal Article - DOI

A model statement does not enhance the verifiability approach

TL;DR: The authors investigated the effect of providing participants with a model statement on the ability of the verifiability approach to detect deception and found that the model statement encouraged participants to give a longer and more detailed statement.
References
Book

Statistical Power Analysis for the Behavioral Sciences

TL;DR: The concepts of power analysis are discussed, covering chi-square tests for goodness of fit and contingency tables, the t-test for means, and the sign test.
Book

Model Selection and Multimodel Inference: A Practical Information-Theoretic Approach

TL;DR: The second edition of this book is unique in that it focuses on methods for making formal statistical inference from all the models in an a priori set (Multi-Model Inference).
Journal Article - DOI

Statistical power analyses using G*Power 3.1: tests for correlation and regression analyses.

TL;DR: In the new version, procedures to analyze the power of tests based on single-sample tetrachoric correlations, comparisons of dependent correlations, bivariate linear regression, multiple linear regression based on the random predictor model, logistic regression, and Poisson regression are added.
Journal Article - DOI

Bayesian data analysis.

TL;DR: A fatal flaw of NHST is reviewed, some benefits of Bayesian data analysis are introduced, and illustrative examples are presented of multiple comparisons in Bayesian analysis of variance and of Bayesian approaches to statistical power.
Journal Article - DOI

Power failure: why small sample size undermines the reliability of neuroscience

TL;DR: It is shown that the average statistical power of studies in the neurosciences is very low, and the consequences include overestimates of effect size and low reproducibility of results.
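
The power-analysis references above (Cohen's book and G*Power) concern a priori power calculations of the kind the abstract contrasts with Bayes factors. As a rough, non-authoritative sketch of such a calculation, using Python's statsmodels rather than G*Power itself (the effect size, alpha, and power values are illustrative assumptions):

```python
# A priori power sketch for an independent-samples t-test (statsmodels, not G*Power).
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Power analysis requires committing in advance to a minimal interesting effect.
# Assumed inputs (illustrative only): Cohen's d = 0.5, two-sided alpha = .05,
# desired power = .80.
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.8,
                                   alternative='two-sided')
print(f"Required n per group: {n_per_group:.1f}")        # roughly 64 per group

# Achieved power of a completed study with only 20 participants per group.
achieved = analysis.power(effect_size=0.5, nobs1=20, alpha=0.05)
print(f"Power with n = 20 per group: {achieved:.2f}")    # roughly 0.33
```

This makes concrete the weakness the abstract notes: the conclusion drawn from a non-significant result via power depends entirely on the minimal interesting effect size one is willing to specify in advance.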