Open Access · Posted Content

Prior distributions for variance parameters in hierarchical models

TL;DR
In this paper, a folded-noncentral-t family of conditionally conjugate priors for hierarchical standard deviation parameters is proposed, and weakly informative priors in this family are considered.
Abstract
Various noninformative prior distributions have been suggested for scale parameters in hierarchical models. We construct a new folded-noncentral-t family of conditionally conjugate priors for hierarchical standard deviation parameters, and then consider noninformative and weakly informative priors in this family. We use an example to illustrate serious problems with the inverse-gamma family of "noninformative" prior distributions. We suggest instead to use a uniform prior on the hierarchical standard deviation, using the half-t family when the number of groups is small and in other settings where a weakly informative prior is desired. We also illustrate the use of the half-t family for hierarchical modeling of multiple variance parameters such as arise in the analysis of variance.
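This recommendation is easy to apply in standard fitting software. Below is a minimal sketch, not taken from the paper, of a varying-intercept model with a weakly informative half-t prior on the hierarchical standard deviation, written for the brms package (one of the citing works listed below); the data frame d and its columns y and group are hypothetical, and the prior scale 2.5 is an illustrative choice.

# Minimal sketch (assumptions: hypothetical data frame `d` with response `y`
# and grouping factor `group`; illustrative prior scale).
library(brms)

fit <- brm(
  y ~ 1 + (1 | group),
  data = d,
  # Student-t(3, 0, 2.5) on the "sd" class; brms restricts it to positive
  # values, which yields the half-t family recommended in the abstract.
  prior = set_prior("student_t(3, 0, 2.5)", class = "sd"),
  chains = 4, iter = 2000, seed = 1
)
summary(fit)  # sd(Intercept) is the group-level standard deviation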


Citations
Journal Article

brms: An R Package for Bayesian Multilevel Models Using Stan

TL;DR: The brms package implements Bayesian multilevel models in R using the probabilistic programming language Stan, allowing users to fit linear, robust linear, binomial, Poisson, survival, ordinal, zero-inflated, hurdle, and even non-linear models, all in a multilevel context.
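A hedged usage sketch of the interface described in this TL;DR, assuming a hypothetical data frame counts with columns n_events, x, and site; the zero-inflated Poisson family is just one of the families listed above.

# Sketch only: a non-Gaussian multilevel model in brms.
library(brms)

zi_fit <- brm(
  n_events ~ x + (1 | site),        # varying intercept by site
  data = counts,
  family = zero_inflated_poisson(),
  chains = 4
)
summary(zi_fit)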
Journal Article

Approximate Bayesian inference for latent Gaussian models by using integrated nested Laplace approximations

TL;DR: This work considers approximate Bayesian inference in a popular subset of structured additive regression models, latent Gaussian models, in which the latent field is Gaussian and controlled by a few hyperparameters while the response variables may be non-Gaussian; the integrated nested Laplace approximations directly yield very accurate approximations to the posterior marginals.
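A minimal sketch of the corresponding R interface, assuming the R-INLA package is installed (it is distributed outside CRAN) and a hypothetical data frame d with a count response y, covariate x, and grouping factor group.

# Sketch only: a latent Gaussian model with an iid group effect, fitted by
# integrated nested Laplace approximation rather than by simulation.
library(INLA)

res <- inla(
  y ~ x + f(group, model = "iid"),  # f() declares the latent random effect
  family = "poisson",
  data = d
)
summary(res)  # approximate posterior marginals for effects and hyperparameters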
Journal Article

MCMC Methods for Multi-Response Generalized Linear Mixed Models: The MCMCglmm R Package

TL;DR: The R package MCMCglmm implements Markov chain Monte Carlo methods for generalized linear mixed models, which provide a flexible framework for modeling a range of data, although with non-Gaussian response variables the likelihood cannot be obtained in closed form.
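MCMCglmm is relevant here because its default variance priors are inverse-Wishart (inverse-gamma in the univariate case), the family the abstract above warns about; its parameter-expanded priors give heavy-tailed alternatives in the spirit of the half-t recommendation. A hedged sketch, assuming a hypothetical data frame d with response y, covariate x, and grouping factor group, and an illustrative prior scale of 25.

# Sketch only: univariate mixed model in MCMCglmm with a parameter-expanded
# prior (alpha.mu, alpha.V) on the group variance, giving a heavy-tailed
# prior on the group standard deviation instead of the inverse-gamma default.
library(MCMCglmm)

prior <- list(
  G = list(G1 = list(V = 1, nu = 1, alpha.mu = 0, alpha.V = 25^2)),
  R = list(V = 1, nu = 0.002)
)

m <- MCMCglmm(
  y ~ x,
  random = ~ group,
  data = d,
  prior = prior,
  nitt = 13000, burnin = 3000, thin = 10
)
summary(m)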
Journal Article

The BUGS project: Evolution, critique and future directions

TL;DR: A balanced critical appraisal of the BUGS software is provided, highlighting how various ideas have led to unprecedented flexibility while at the same time producing negative side effects.
Journal Article

A re-evaluation of random-effects meta-analysis

TL;DR: It is suggested that random-effects meta-analyses as currently conducted often fail to provide the key results, and the extent to which distribution-free, classical and Bayesian approaches can provide satisfactory methods is investigated.
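A random-effects meta-analysis is itself a hierarchical model whose key variance parameter is the between-study standard deviation, so the half-t recommendation in the abstract applies directly. A minimal sketch in brms, assuming a hypothetical data frame studies with effect estimates yi, standard errors sei, and a study identifier study; the prior scale is illustrative.

# Sketch only: Bayesian random-effects meta-analysis with a half-t prior on
# the between-study standard deviation (tau).
library(brms)

ma_fit <- brm(
  yi | se(sei) ~ 1 + (1 | study),  # known sampling error, varying true effects
  data = studies,
  prior = set_prior("student_t(3, 0, 1)", class = "sd"),
  chains = 4
)
summary(ma_fit)  # sd(Intercept) is the heterogeneity parameter tau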
References
Journal Article

Sampling-Based Approaches to Calculating Marginal Densities

TL;DR: In this paper, three sampling-based approaches, namely stochastic substitution, the Gibbs sampler, and the sampling-importance-resampling algorithm, are compared and contrasted in relation to various joint probability structures frequently encountered in applications.
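To make the Gibbs sampler named here concrete, below is a self-contained toy example in base R for a normal model with unknown mean and variance, alternating draws from the two full conditionals; the priors and simulated data are purely illustrative.

# Toy Gibbs sampler: y_i ~ N(mu, sigma2), mu ~ N(0, 100), sigma2 ~ Inv-Gamma(1, 1).
set.seed(1)
y <- rnorm(50, mean = 2, sd = 1.5)  # simulated data
n <- length(y)

n_iter <- 5000
mu <- numeric(n_iter); sigma2 <- numeric(n_iter)
mu[1] <- 0; sigma2[1] <- 1

for (t in 2:n_iter) {
  # Full conditional for mu given sigma2: normal.
  v <- 1 / (n / sigma2[t - 1] + 1 / 100)
  m <- v * sum(y) / sigma2[t - 1]
  mu[t] <- rnorm(1, m, sqrt(v))

  # Full conditional for sigma2 given mu: inverse-gamma.
  shape <- 1 + n / 2
  rate  <- 1 + sum((y - mu[t])^2) / 2
  sigma2[t] <- 1 / rgamma(1, shape = shape, rate = rate)
}

mean(mu[-(1:1000)])            # posterior mean of mu after burn-in
mean(sqrt(sigma2[-(1:1000)]))  # posterior mean of sigma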
Book

Bayesian inference in statistical analysis

TL;DR: In this book, the authors investigate the effect of non-normality on inference about a population mean, with generalizations, and then consider inference about means using information from more than one source.
Book

Introducing multilevel modeling

TL;DR: Contents cover an introduction, an overview of contextual models, varying and random coefficient models, analyses, and frequently asked questions.
Book Chapter

Estimation with Quadratic Loss

TL;DR: In this paper, the authors consider the problem of finding the best unbiased estimator of a linear function of the mean of a set of observed random variables, and show that for large samples the maximum likelihood estimator approximately minimizes the mean squared error when compared with other reasonable estimators.
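The shrinkage phenomenon studied in this reference is the classical counterpart of the partial pooling that hierarchical variance parameters control. Below is a toy sketch of a positive-part James-Stein-type shrinkage estimator for several normal means with known unit variance; the dimension, true means, and data are illustrative, and the estimator is stated from standard results rather than taken from this document.

# Toy sketch: shrinkage estimation of p normal means with known variance 1.
set.seed(2)
p <- 10
theta <- rnorm(p, 0, 1)              # true means (illustrative)
y <- rnorm(p, mean = theta, sd = 1)  # one observation per mean (the MLE)

shrink <- max(0, 1 - (p - 2) / sum(y^2))  # positive-part shrinkage toward zero
js <- shrink * y

sum((y - theta)^2)   # squared-error loss of the MLE
sum((js - theta)^2)  # typically smaller when p >= 3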