Open Access · Journal Article

Using Bayes to get the most out of non-significant results

Zoltan Dienes
29 Jul 2014 · Vol. 5, p. 781
TL;DR
It is argued that Bayes factors allow theory to be linked to data in a way that overcomes the weaknesses of the other approaches, and that they provide a coherent approach to determining whether non-significant results support a null hypothesis over a theory, or whether the data are just insensitive.
Abstract
No scientific conclusion follows automatically from a statistically non-significant result, yet people routinely use non-significant results to guide conclusions about the status of theories (or the effectiveness of practices). To know whether a non-significant result counts against a theory, or if it just indicates data insensitivity, researchers must use one of: power, intervals (such as confidence or credibility intervals), or else an indicator of the relative evidence for one theory over another, such as a Bayes factor. I argue Bayes factors allow theory to be linked to data in a way that overcomes the weaknesses of the other approaches. Specifically, Bayes factors use the data themselves to determine their sensitivity in distinguishing theories (unlike power), and they make use of those aspects of a theory’s predictions that are often easiest to specify (unlike power and intervals, which require specifying the minimal interesting value in order to address theory). Bayes factors provide a coherent approach to determining whether non-significant results support a null hypothesis over a theory, or whether the data are just insensitive. They allow accepting and rejecting the null hypothesis to be put on an equal footing. Concrete examples are provided to indicate the range of application of a simple online Bayes calculator, which reveal both the strengths and weaknesses of Bayes factors.
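The online calculator mentioned in the abstract is not reproduced here, but the following Python sketch illustrates the kind of calculation a Dienes-style Bayes factor involves: H1 is modelled as a half-normal distribution scaled by a roughly predicted effect size, H0 as a point at zero, and the observed mean difference (with its standard error) is compared under the two models. The function name, the half-normal choice for H1, and all numerical values are illustrative assumptions, not the article's own code.

```python
# Illustrative sketch (not the online calculator itself): a Bayes factor for a mean
# difference, modelling H1 as a half-normal centred on zero with its scale set to a
# roughly predicted effect size, and H0 as the effect being exactly zero. The data
# are summarised by the observed mean difference and its standard error. All
# numbers below are hypothetical.
import numpy as np
from scipy import stats
from scipy.integrate import quad

def bayes_factor_half_normal(obs_mean, se, predicted_effect):
    """Evidence for H1 (half-normal prior on the effect, scale = predicted_effect)
    relative to H0 (effect exactly zero)."""
    # Likelihood of the observed mean difference for a given true effect theta,
    # approximated as normal with the observed standard error.
    def likelihood(theta):
        return stats.norm.pdf(obs_mean, loc=theta, scale=se)

    # Marginal likelihood under H1: average the likelihood over the half-normal
    # prior on theta (only effects in the predicted direction are allowed).
    def integrand(theta):
        half_normal_density = 2 * stats.norm.pdf(theta, loc=0, scale=predicted_effect)
        return likelihood(theta) * half_normal_density

    marginal_h1, _ = quad(integrand, 0, np.inf)
    marginal_h0 = likelihood(0.0)
    return marginal_h1 / marginal_h0

# Hypothetical non-significant result: observed difference 2.0 with standard error
# 3.0 (t of about 0.67), under a theory that roughly predicts an effect of 10.
print(bayes_factor_half_normal(obs_mean=2.0, se=3.0, predicted_effect=10.0))
```

On the conventional reading used in this literature, a Bayes factor greater than about 3 counts as substantial evidence for the theory over the null, less than about 1/3 as substantial evidence for the null over the theory, and values in between indicate that the data are insensitive.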



Citations
Journal Article

Looking Forward: The Effect of the Best-Possible-Self Intervention on Thriving Through Relative Intrinsic Goal Pursuits

TL;DR: This article found that an increase in the importance individuals place on intrinsic rather than extrinsic goal pursuits (relative intrinsic goal pursuits; RIGP) explains the effectiveness of the best-possible-self (BPS) intervention.
Journal Article

How to use and report Bayesian hypothesis tests

Zoltan Dienes
Abstract: This article provides guidance on interpreting and reporting Bayesian hypothesis tests, to aid their understanding. To use and report a Bayesian hypothesis test, predicted effect sizes must be specified. The article will provide guidance in specifying effect sizes of interest (which also will be of relevance to those using frequentist statistics). First, if a minimally interesting effect size can be specified, a null interval is defined as the effects smaller in magnitude than the minimally interesting effect. Then the proportion of the posterior distribution that falls in the null interval indicates the plausibility of the null interval hypothesis. Second, if a rough scale of effect can be determined, a Bayes factor can indicate evidence for a model representing that scale of effect versus a model of the null hypothesis. Both methods allow data to count against a theory that predicts a difference. By contrast, nonsignificance does not count against such a theory. Various examples are provided including the suitability of Bayesian analyses for demonstrating the absence of conscious perception under putative subliminal conditions, and its presence in supraliminal conditions.
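As a concrete illustration of the first method described in this abstract (the proportion of the posterior falling in a null interval), the sketch below assumes a normal likelihood for the observed effect and a normal prior on the true effect, so the posterior is also normal; the prior, the function name, and all numbers are illustrative assumptions rather than the article's own code.

```python
# A minimal sketch of the null-interval method above, assuming a normal likelihood
# for the observed effect and a normal prior on the true effect; the prior and all
# numbers are hypothetical.
from scipy import stats

def null_interval_mass(obs_mean, se, prior_mean, prior_sd, min_interesting):
    # Conjugate normal-normal update: precisions (1/variance) add, and the
    # posterior mean is the precision-weighted average of data and prior.
    post_precision = 1 / se**2 + 1 / prior_sd**2
    post_sd = post_precision ** -0.5
    post_mean = (obs_mean / se**2 + prior_mean / prior_sd**2) / post_precision
    posterior = stats.norm(post_mean, post_sd)
    # Plausibility of the null interval hypothesis: posterior probability that the
    # effect lies between -min_interesting and +min_interesting.
    return posterior.cdf(min_interesting) - posterior.cdf(-min_interesting)

# Hypothetical numbers: observed difference 1.0 (SE 2.0), a weakly informative
# prior centred on zero, and effects smaller than 3.0 in magnitude treated as too
# small to be of interest.
print(null_interval_mass(obs_mean=1.0, se=2.0, prior_mean=0.0, prior_sd=10.0,
                         min_interesting=3.0))
```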
Journal Article

A randomized controlled study of power posing before public speaking exposure for social anxiety disorder: No evidence for augmentative effects.

TL;DR: Though the intervention resulted in decreased SAD symptom severity one week later, analyses revealed no significant between-group differences on any tested variables, and this study provides no evidence to suggest that power posing impacts hormone levels or exposure therapy outcomes.
Journal Article

Skin Conductance Responses to Masked Emotional Faces Are Modulated by Hit Rate but Not Signal Detection Theory Adjustments for Subjective Differences in the Detection Threshold

TL;DR: The findings reveal that hit-rate (HR) adjustments for subjective differences in the detection threshold are associated with higher skin conductance responses to masked happy, fearful, and angry faces, but that this effect was not observed in the same participants when the adjustments were made using signal detection measures.
Journal Article

Slow touch targeting CT-fibres does not increase prosocial behaviour in economic laboratory tasks.

TL;DR: Under the controlled laboratory conditions employed, CT-targeted touch did not play a particular role in prosocial behaviour, indicating that touch does not increase prosocial behaviour in the absence of meaningful social and psychological connotations.
References
Book

Statistical Power Analysis for the Behavioral Sciences

TL;DR: The concepts of power analysis are discussed, with applications to chi-square tests for goodness of fit and contingency tables, the t-test for means, and the sign test.
Book

Model Selection and Multimodel Inference: A Practical Information-Theoretic Approach

TL;DR: The second edition of this book is unique in that it focuses on methods for making formal statistical inference from all the models in an a priori set (Multi-Model Inference).
Journal Article

Statistical power analyses using G*Power 3.1: tests for correlation and regression analyses.

TL;DR: In the new version, procedures to analyze the power of tests based on single-sample tetrachoric correlations, comparisons of dependent correlations, bivariate linear regression, multiple linear regression based on the random predictor model, logistic regression, and Poisson regression are added.
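For comparison with the Bayesian approaches above, the kind of power calculation these tools support can be sketched in a few lines; the example below uses statsmodels rather than G*Power itself, and the effect size, alpha, and target power are illustrative assumptions.

```python
# Illustrative power analysis (using statsmodels, not G*Power itself): how many
# participants per group are needed to detect a medium standardised effect
# (Cohen's d = 0.5) with 80% power at alpha = .05 in an independent-samples t-test?
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.80,
                                   ratio=1.0, alternative='two-sided')
print(f"n per group: {n_per_group:.1f}")

# Conversely, the achieved power of an existing design can be checked by fixing
# nobs1 and leaving power as the unknown.
achieved = analysis.solve_power(effect_size=0.5, nobs1=30, alpha=0.05,
                                ratio=1.0, alternative='two-sided')
print(f"power with n = 30 per group: {achieved:.2f}")
```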
Journal Article

Bayesian data analysis.

TL;DR: A fatal flaw of null hypothesis significance testing (NHST) is reviewed and some benefits of Bayesian data analysis are introduced, with illustrative examples of multiple comparisons in Bayesian analysis of variance and Bayesian approaches to statistical power.
Journal Article

Power failure: why small sample size undermines the reliability of neuroscience

TL;DR: It is shown that the average statistical power of studies in the neurosciences is very low, and the consequences include overestimates of effect size and low reproducibility of results.