Journal ArticleDOI
Markov Chain Monte Carlo Convergence Diagnostics: A Comparative Review
TLDR
All of the methods in this work can fail to detect the sorts of convergence failure that they were designed to identify, so a combination of strategies aimed at evaluating and accelerating MCMC sampler convergence is recommended.
Abstract
A critical issue for users of Markov chain Monte Carlo (MCMC) methods in applications is how to determine when it is safe to stop sampling and use the samples to estimate characteristics of the distribution of interest. Research into methods of computing theoretical convergence bounds holds promise for the future but to date has yielded relatively little of practical use in applied work. Consequently, most MCMC users address the convergence problem by applying diagnostic tools to the output produced by running their samplers. After giving a brief overview of the area, we provide an expository review of 13 convergence diagnostics, describing the theoretical basis and practical implementation of each. We then compare their performance in two simple models and conclude that all of the methods can fail to detect the sorts of convergence failure that they were designed to identify. We thus recommend a combination of strategies aimed at evaluating and accelerating MCMC sampler convergence, including ap...
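Among the output-based diagnostics of the kind the review surveys, one of the simplest compares the mean of an early segment of a single chain with the mean of a late segment (Geweke's z-score). The sketch below is illustrative only, not the paper's implementation: it uses plain sample variances where Geweke's diagnostic uses spectral-density estimates, and all names and the example chains are invented for the demonstration.

```python
import random

def geweke_z(chain, first=0.1, last=0.5):
    """Z-score comparing the mean of an early chain segment with a late one.
    NOTE: simplified sketch -- Geweke's diagnostic uses spectral-density
    variance estimates; plain sample variances are used here, which
    understate the uncertainty for strongly autocorrelated draws."""
    a = chain[: int(first * len(chain))]
    b = chain[-int(last * len(chain)):]

    def stats(s):
        m = sum(s) / len(s)
        v = sum((x - m) ** 2 for x in s) / (len(s) - 1)
        return m, v

    ma, va = stats(a)
    mb, vb = stats(b)
    return (ma - mb) / (va / len(a) + vb / len(b)) ** 0.5

rng = random.Random(0)
stationary = [rng.gauss(0, 1) for _ in range(5000)]       # stationary chain
drifting = [rng.gauss(i / 1000, 1) for i in range(5000)]  # mean still drifting
```

A large |z| for the drifting chain flags non-convergence, while the stationary chain stays within ordinary normal-deviate range; like all the diagnostics reviewed, this can fail when both segments sit in the same wrong mode.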
Citations
Estimating convergence of Markov chain Monte Carlo simulations
TL;DR: A diagnostic for estimating the burn-in period of a Markov chain Monte Carlo sampler is constructed, and an algorithmic procedure for verifying convergence and sufficient run length of an MCMC simulation is proposed.
BINARY REGRESSION WITH A CLASS OF SKEWED t LINK MODELS
TL;DR: A class of skewed t link models is proposed to improve the overall fit when commonly used symmetric links, such as the logit and probit, do not provide the best available fit for a given binary response dataset.
Bayesian inference for finite mixtures in confirmatory factor analysis
TL;DR: In this paper, the Gibbs sampler was used to estimate multivariate normal mixtures whose means and covariance matrices are structured as confirmatory factor analysis models, without relying on asymptotic theory or on other sophisticated MCMC methods.
A random effects modelling approach to the crossing-fibre problem in tractography.
TL;DR: The results indicate that random effects modelling of the crossing-fibre problem provides a useful alternative to current methods documented in the MR tractography literature.
References
Equation of state calculations by fast computing machines
TL;DR: In this article, a modified Monte Carlo integration over configuration space is used to investigate the properties of a two-dimensional rigid-sphere system of interacting molecules, and the results are compared with free-volume equations of state and a four-term virial coefficient expansion.
Stochastic Relaxation, Gibbs Distributions, and the Bayesian Restoration of Images
Stuart Geman, Donald Geman, et al.
TL;DR: An analogy between images and statistical mechanics systems is drawn; the analogous operation under the posterior distribution yields the maximum a posteriori (MAP) estimate of the image given the degraded observations, leading to a highly parallel "relaxation" algorithm for MAP estimation.
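The "stochastic relaxation" scheme of this paper is what is now called the Gibbs sampler: each variable is redrawn from its full conditional given the rest. A minimal sketch on a toy target (a bivariate normal with correlation rho, not the image model of the paper; all names are illustrative):

```python
import random

def gibbs_bivariate_normal(n, rho=0.8, seed=0):
    """Gibbs sampler for a standard bivariate normal with correlation rho,
    alternating exact draws from the two full conditionals."""
    rng = random.Random(seed)
    s = (1.0 - rho ** 2) ** 0.5    # conditional standard deviation
    x = y = 0.0
    draws = []
    for _ in range(n):
        x = rng.gauss(rho * y, s)  # x | y ~ N(rho*y, 1 - rho^2)
        y = rng.gauss(rho * x, s)  # y | x ~ N(rho*x, 1 - rho^2)
        draws.append((x, y))
    return draws

draws = gibbs_bivariate_normal(20000)
```

The empirical correlation of the draws recovers rho; the stronger the correlation between coordinates, the slower the sampler mixes, which is exactly the failure mode convergence diagnostics try to detect.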
Monte Carlo Sampling Methods Using Markov Chains and Their Applications
TL;DR: A generalization of the sampling method introduced by Metropolis et al. is presented, along with an exposition of the relevant theory, techniques of application, and methods and difficulties of assessing the error in Monte Carlo estimates.
Inference from Iterative Simulation Using Multiple Sequences
Andrew Gelman, Donald B. Rubin, et al.
TL;DR: The focus is on applied inference for Bayesian posterior distributions in real problems, which often tend toward normality after transformations and marginalization; the results are derived as normal-theory approximations to exact Bayesian inference, conditional on the observed simulations.
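The multiple-sequence idea reduces in practice to the potential scale reduction factor, which compares between-chain and within-chain variance. The sketch below computes the classical R-hat, omitting the degrees-of-freedom correction of the original paper; names and the example chains are illustrative.

```python
import random

def gelman_rubin(chains):
    """Classical potential scale reduction factor (R-hat) for m chains of
    equal length n; omits the degrees-of-freedom correction of the paper."""
    m, n = len(chains), len(chains[0])
    means = [sum(c) / n for c in chains]
    grand = sum(means) / m
    B = n / (m - 1) * sum((mu - grand) ** 2 for mu in means)  # between-chain
    W = sum(sum((x - mu) ** 2 for x in c) / (n - 1)
            for c, mu in zip(chains, means)) / m              # within-chain
    var_plus = (n - 1) / n * W + B / n   # pooled variance estimate
    return (var_plus / W) ** 0.5

rng = random.Random(1)
mixed = [[rng.gauss(0, 1) for _ in range(1000)] for _ in range(4)]
stuck = [[rng.gauss(mu, 1) for _ in range(1000)] for mu in (0, 5, -5, 10)]
```

Values near 1 suggest the chains agree; chains stuck in different regions produce R-hat well above 1.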
Robust Locally Weighted Regression and Smoothing Scatterplots
TL;DR: Robust locally weighted regression is a method for smoothing a scatterplot in which the fitted value at x_k is the value of a polynomial fitted to the data by weighted least squares, where the weight for (x_i, y_i) is large if x_i is close to x_k and small if it is not.
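The locally weighted fit described above can be sketched as a single-pass LOWESS with tricube weights and a local linear fit; Cleveland's robustness iterations are omitted for brevity, and the function name and bandwidth choice are illustrative.

```python
def lowess(xs, ys, frac=0.5):
    """Single-pass locally weighted linear regression (LOWESS sketch).
    For each x0, fit a line by weighted least squares with tricube weights
    over the nearest frac*n points; robustness iterations are omitted."""
    n = len(xs)
    k = max(2, int(frac * n))
    fitted = []
    for x0 in xs:
        h = sorted(abs(x - x0) for x in xs)[k - 1] or 1e-12  # local bandwidth
        w = [max(0.0, 1 - (abs(x - x0) / h) ** 3) ** 3 for x in xs]  # tricube
        # weighted least squares for a local line a + b*x
        sw = sum(w)
        swx = sum(wi * x for wi, x in zip(w, xs))
        swy = sum(wi * y for wi, y in zip(w, ys))
        swxx = sum(wi * x * x for wi, x in zip(w, xs))
        swxy = sum(wi * x * y for wi, x, y in zip(w, xs, ys))
        denom = sw * swxx - swx ** 2
        b = (sw * swxy - swx * swy) / denom if denom else 0.0
        a = (swy - b * swx) / sw
        fitted.append(a + b * x0)
    return fitted
```

On noiseless linear data the local fits reproduce the line exactly; on noisy data each fitted value is a smooth local compromise, which is how the method smooths a scatterplot.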