Open Access · Posted Content

Suppressing Random Walks in Markov Chain Monte Carlo Using Ordered Overrelaxation

TLDR
This paper proposes an overrelaxed Markov chain Monte Carlo (MCMC) algorithm based on order statistics, which can be applied whenever the full conditional distributions are such that their cumulative distribution functions and inverse cumulative distribution functions can be efficiently computed.
Abstract
Markov chain Monte Carlo methods such as Gibbs sampling and simple forms of the Metropolis algorithm typically move about the distribution being sampled via a random walk. For the complex, high-dimensional distributions commonly encountered in Bayesian inference and statistical physics, the distance moved in each iteration of these algorithms will usually be small, because it is difficult or impossible to transform the problem to eliminate dependencies between variables. The inefficiency inherent in taking such small steps is greatly exacerbated when the algorithm operates via a random walk, as in such a case moving to a point n steps away will typically take around n^2 iterations. Such random walks can sometimes be suppressed using "overrelaxed" variants of Gibbs sampling (a.k.a. the heatbath algorithm), but such methods have hitherto been largely restricted to problems where all the full conditional distributions are Gaussian. I present an overrelaxed Markov chain Monte Carlo algorithm based on order statistics that is more widely applicable. In particular, the algorithm can be applied whenever the full conditional distributions are such that their cumulative distribution functions and inverse cumulative distribution functions can be efficiently computed. The method is demonstrated on an inference problem for a simple hierarchical Bayesian model.
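The order-statistics update described in the abstract can be sketched directly: draw some extra values from the full conditional, sort them together with the current value, and return the value at the mirrored rank. This is a minimal illustration, not the paper's efficient CDF-based implementation; the function name `ordered_overrelax` and the direct K-draw variant shown here are assumptions for illustration only.

```python
import random

def ordered_overrelax(x, inv_cdf, K=10):
    """One ordered-overrelaxation update for a scalar variable.

    Draws K extra values from the full conditional via its inverse CDF,
    sorts them together with the current value x, and returns the value
    whose rank mirrors that of x. With K = 1 the update returns the single
    fresh draw, i.e. ordinary Gibbs sampling; larger K produces stronger
    negative correlation with x, which is what suppresses random walks.
    Ties (e.g. discrete conditionals) would need the tie-breaking the
    paper handles via the CDF; continuous conditionals are assumed here.
    """
    draws = [inv_cdf(random.random()) for _ in range(K)]
    values = sorted(draws + [x])
    r = values.index(x)      # rank of the current value among the K+1 points
    return values[K - r]     # mirrored rank: low x -> high return, and vice versa
```

For example, with a Uniform(0,1) full conditional (`inv_cdf = lambda u: u`), a current value near 0 is almost surely returned a value near 1. The paper's efficient implementation avoids generating all K draws by using the forward CDF together with order-statistic sampling, which is why both the CDF and its inverse must be computable.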


Citations

Pattern Recognition and Machine Learning

TL;DR: Probability distributions and linear models for regression and classification are presented, along with a discussion of combining models in the context of machine learning.
Book

Information Theory, Inference and Learning Algorithms

TL;DR: A fun and exciting textbook on the mathematics underpinning the most dynamic areas of modern science and engineering.
Journal ArticleDOI

Monte Carlo methods

TL;DR: The basic principles and the most common Monte Carlo algorithms are reviewed, including rejection sampling, importance sampling, and Markov chain Monte Carlo (MCMC) methods.
Journal ArticleDOI

Markov Chain Monte Carlo in Practice: A Roundtable Discussion

TL;DR: The authors present advice and guidance to novice and not-so-novice users of MCMC, including building confidence in simulation results, methods for speeding and assessing convergence, and estimating standard errors.
Book ChapterDOI

Introduction to Monte Carlo methods

TL;DR: A sequence of Monte Carlo methods, namely importance sampling, rejection sampling, the Metropolis method, and Gibbs sampling, is described, followed by a discussion of advanced methods, including methods for reducing random-walk behaviour.
References
Journal ArticleDOI

Equation of state calculations by fast computing machines

TL;DR: A modified Monte Carlo integration over configuration space is used to investigate the properties of a two-dimensional rigid-sphere system of interacting individual molecules, and the results are compared with the free volume equation of state and a four-term virial coefficient expansion.
Journal ArticleDOI

Monte Carlo Sampling Methods Using Markov Chains and Their Applications

TL;DR: A generalization of the sampling method introduced by Metropolis et al. is presented, along with an exposition of the relevant theory, techniques of application, and methods and difficulties of assessing the error in Monte Carlo estimates.
Journal Article

Sampling-based approaches to calculating marginal densities

TL;DR: Stochastic substitution, the Gibbs sampler, and the sampling-importance-resampling algorithm can be viewed as three alternative sampling- (or Monte Carlo-) based approaches to the calculation of numerical estimates of marginal probability distributions.
Book

Bayesian learning for neural networks

TL;DR: Bayesian Learning for Neural Networks shows that Bayesian methods allow complex neural network models to be used without fear of the "overfitting" that can occur with traditional neural network learning methods.
Journal ArticleDOI

Hybrid Monte Carlo

TL;DR: A hybrid (molecular dynamics/Langevin) algorithm is used to guide a Monte Carlo simulation of lattice field theory, which is especially efficient for theories such as quantum chromodynamics that contain fermionic degrees of freedom.