Open Access · Posted Content

Workflow Techniques for the Robust Use of Bayes Factors

TLDR
In this article, the authors provide a workflow to test the strengths and limitations of Bayes factors as a way to quantify evidence in support of scientific hypotheses, and illustrate this workflow using an example from the cognitive sciences.
Abstract
Inferences about hypotheses are ubiquitous in the cognitive sciences. Bayes factors provide one general way to compare different hypotheses by their compatibility with the observed data, and these quantifications can then be used to choose between hypotheses. While Bayes factors provide an immediate approach to hypothesis testing, they are highly sensitive to details of the data and model assumptions. Moreover, it is not clear how straightforwardly this approach can be implemented in practice, and in particular how sensitive it is to the details of the computational implementation. Here, we investigate these questions for Bayes factor analyses in the cognitive sciences. We explain the statistics underlying Bayes factors as a tool for Bayesian inference and argue that utility functions are needed for principled decisions about hypotheses. Next, we study how Bayes factors misbehave under different conditions, including errors in the estimation of Bayes factors. Importantly, it is unknown whether Bayes factor estimates based on bridge sampling are unbiased for complex analyses; we are the first to use simulation-based calibration as a tool to test the accuracy of Bayes factor estimates. We further study how stable Bayes factors are against different MCMC draws, how they depend on variation in the data, how variable decisions based on them are, and how to optimize such decisions using a utility function. We outline a Bayes factor workflow that researchers can use to study whether Bayes factors are robust for their individual analysis, and we illustrate this workflow with an example from the cognitive sciences. We hope that this study will provide a workflow to test the strengths and limitations of Bayes factors as a way to quantify evidence in support of scientific hypotheses. Reproducible code is available from this https URL.
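Since the abstract turns on two quantitative points, prior sensitivity of Bayes factors and instability of their estimates, a minimal sketch helps fix ideas. The Python snippet below is illustrative only and is not the paper's code (which applies bridge sampling to far richer models): it computes BF10 for a point null against a Beta-prior alternative in a made-up binomial example, shows how the value shifts with the prior, and mimics estimator noise with a crude Monte Carlo estimate of the marginal likelihood. All data and prior choices are invented.

```python
# Minimal illustration (made-up binomial example; not the paper's models or
# its bridge-sampling estimator). H0: theta = 0.5 versus H1: theta ~ Beta(a, b).
import numpy as np
from scipy.stats import binom
from scipy.special import betaln, logsumexp

def log_marginal_h0(k, n, theta0=0.5):
    # p(data | H0): binomial likelihood at the fixed point null.
    return binom.logpmf(k, n, theta0)

def log_marginal_h1(k, n, a, b):
    # p(data | H1) = C(n, k) * B(k + a, n - k + b) / B(a, b),
    # i.e. the likelihood integrated over the Beta(a, b) prior on theta.
    log_choose = binom.logpmf(k, n, 0.5) + n * np.log(2.0)  # log C(n, k)
    return log_choose + betaln(k + a, n - k + b) - betaln(a, b)

k, n = 62, 100  # invented data: 62 successes in 100 trials

# Prior sensitivity: the same data can yield very different Bayes factors.
for a, b in [(1, 1), (0.5, 0.5), (10, 10)]:
    bf10 = np.exp(log_marginal_h1(k, n, a, b) - log_marginal_h0(k, n))
    print(f"Beta({a}, {b}) prior under H1: BF10 = {bf10:.2f}")

# Estimation noise: a sampling-based estimate of p(data | H1) fluctuates
# across seeds, so the estimated Bayes factor does too. Bridge sampling is
# far more efficient, but the paper's stability concern is the same in kind.
def bf10_monte_carlo(k, n, a, b, n_draws=2000, seed=None):
    theta = np.random.default_rng(seed).beta(a, b, n_draws)
    log_ml_h1 = logsumexp(binom.logpmf(k, n, theta)) - np.log(n_draws)
    return np.exp(log_ml_h1 - log_marginal_h0(k, n))

estimates = [bf10_monte_carlo(k, n, 1, 1, seed=s) for s in range(20)]
print(f"MC estimates of BF10 range from {min(estimates):.2f} to {max(estimates):.2f}")
```

In a real analysis, the workflow the paper proposes replaces these analytic and crude Monte Carlo quantities with bridge-sampling estimates and checks their accuracy with simulation-based calibration.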


Citations
Journal Article · DOI

The Importance of Random Slopes in Mixed Models for Bayesian Hypothesis Testing

TL;DR: The authors showed that omitting random slopes in mixed models can lead to a substantial increase in false-positive conclusions in null-hypothesis tests, and that the same is true for Bayesian hypothesis testing: mixed models without random slopes often yield Bayes factors reflecting very strong evidence for a mean effect on the population level.
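To make the contrast concrete, here is a hypothetical Python sketch of the two model specifications at issue: a random-intercept-only mixed model versus one that also includes by-subject random slopes. Using the bambi library is an assumption for illustration; the cited paper's own analyses may use different software. The simulated data have subject-varying condition effects but no population-level effect, the scenario in which the intercept-only model is prone to overstating evidence.

```python
# Hypothetical sketch, not the cited paper's code. Assumes the bambi library.
# Simulated data: condition effects vary across subjects, but the
# population-level (mean) effect is zero.
import numpy as np
import pandas as pd
import bambi as bmb

rng = np.random.default_rng(1)
n_subj, n_trials = 30, 40
subj_idx = np.repeat(np.arange(n_subj), n_trials)
condition = np.tile(np.repeat([0.0, 1.0], n_trials // 2), n_subj)
subj_slope = rng.normal(0.0, 0.5, n_subj)  # by-subject condition effects
rt = 1.0 + subj_slope[subj_idx] * condition + rng.normal(0.0, 0.3, subj_idx.size)
df = pd.DataFrame({"rt": rt, "condition": condition,
                   "subject": [f"s{i}" for i in subj_idx]})

# Intercept-only model: subject-level variation in the effect has nowhere to
# go except into the population-level term, inflating apparent evidence.
m_intercepts = bmb.Model("rt ~ condition + (1 | subject)", df)
# Maximal model: by-subject random slopes absorb that variation.
m_maximal = bmb.Model("rt ~ condition + (condition | subject)", df)

idata_intercepts = m_intercepts.fit(draws=1000, chains=2)
idata_maximal = m_maximal.fit(draws=1000, chains=2)
```

Comparing the posterior for the condition effect (or Bayes factors computed from each fit) across the two specifications illustrates the cited result.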
Journal Article · DOI

Standardised empirical dispersal kernels emphasise the pervasiveness of long‐distance dispersal in European birds

TL;DR: This article introduces a statistical framework for estimating standardised dispersal kernels from biased data and compares age- and sex-specific kernels for European breeding birds, distinguishing average dispersal, natal dispersal (before first breeding), and breeding dispersal (between subsequent breeding attempts).
Journal Article · DOI

Hidden Markov Models of Evidence Accumulation in Speeded Decision Tasks

TL;DR: The model is offered as a proof of principle that evidence accumulation models can be combined with Markov switching models; an extensive simulation study validated the model's implementation according to principles of a robust Bayesian workflow.
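For readers unfamiliar with the switching half of that combination, the following generic numpy sketch (not the cited paper's model; all parameters invented) implements the scaled forward algorithm for a two-state hidden Markov model with Gaussian emissions, the kind of machinery being coupled to evidence accumulation models.

```python
# Generic sketch (not the cited paper's model; parameters are invented):
# forward algorithm for a two-state HMM with Gaussian emissions.
import numpy as np
from scipy.stats import norm

A = np.array([[0.95, 0.05],   # row-stochastic state transition matrix
              [0.10, 0.90]])
pi = np.array([0.5, 0.5])     # initial state distribution
means, sd = np.array([0.4, 0.8]), 0.1  # Gaussian emission parameters

def log_likelihood(obs):
    """Scaled forward algorithm: log p(obs), summing over all state paths."""
    alpha = pi * norm.pdf(obs[0], means, sd)
    log_z = np.log(alpha.sum())
    alpha = alpha / alpha.sum()
    for y in obs[1:]:
        alpha = (alpha @ A) * norm.pdf(y, means, sd)
        log_z += np.log(alpha.sum())  # accumulate scaling constants
        alpha = alpha / alpha.sum()
    return log_z

# Simulate a state path and observations, then score them.
rng = np.random.default_rng(0)
states = [0]
for _ in range(199):
    states.append(rng.choice(2, p=A[states[-1]]))
obs = rng.normal(means[np.array(states)], sd)
print(f"log p(obs) = {log_likelihood(obs):.1f}")
```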
References
Book

Pattern Recognition and Machine Learning

TL;DR: Covers Probability Distributions, Linear Models for Regression, Linear Models for Classification, Neural Networks, Graphical Models, Mixture Models and EM, Sampling Methods, Continuous Latent Variables, and Sequential Data.
Book

Bayesian Data Analysis

TL;DR: Provides detailed notes on Bayesian computation, the basics of Markov chain simulation, regression models, and asymptotic theorems.
Book

Theory of probability

TL;DR: In this book, the author introduces direct probabilities, approximate methods and simplifications, significance tests for one new parameter and for various complications, and frequency definitions and direct methods.
Journal Article · DOI

Random effects structure for confirmatory hypothesis testing: Keep it maximal

TL;DR: It is argued that researchers using LMEMs for confirmatory hypothesis testing should minimally adhere to the standards that have been in place for many decades, and it is shown that LMEMs generalize best when they include the maximal random effects structure justified by the design.
Journal Article · DOI

WinBUGS – A Bayesian modelling framework: Concepts, structure, and extensibility

TL;DR: Discusses how and why various modern computing concepts, such as object orientation and run-time linking, feature in the software's design, and how the framework may be extended.