Journal ArticleDOI

Complexity bounds for Markov chain Monte Carlo algorithms via diffusion limits

TLDR
It is proved that under appropriate assumptions, the random-walk Metropolis algorithm in d dimensions takes O(d) iterations to converge to stationarity, while the Metropolis-adjusted Langevin algorithm takes O(d^{1/3}) iterations.
Abstract
We connect known results about diffusion limits of Markov chain Monte Carlo (MCMC) algorithms to the computer science notion of algorithm complexity. Our main result states that any weak limit of a Markov process implies a corresponding complexity bound (in an appropriate metric). We then combine this result with previously-known MCMC diffusion limit results to prove that under appropriate assumptions, the random-walk Metropolis algorithm in d dimensions takes O(d) iterations to converge to stationarity, while the Metropolis-adjusted Langevin algorithm takes O(d^{1/3}) iterations to converge to stationarity.
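The d^{-1/2} proposal scaling underlying the O(d) bound for random-walk Metropolis can be illustrated with a minimal sketch. This is not code from the paper: the function name `rwm_acceptance` and the standard-Gaussian product target are illustrative assumptions; the constant 2.38 is the classical optimal-scaling choice from the diffusion-limit literature.

```python
import numpy as np

def rwm_acceptance(d, n_iters=20000, ell=2.38, seed=0):
    """Random-walk Metropolis on a standard Gaussian target in d dimensions,
    with proposal standard deviation ell / sqrt(d)."""
    rng = np.random.default_rng(seed)
    x = np.zeros(d)
    log_pi = lambda z: -0.5 * z @ z  # log-density of N(0, I_d), up to a constant
    accepted = 0
    for _ in range(n_iters):
        y = x + (ell / np.sqrt(d)) * rng.standard_normal(d)
        # Metropolis accept/reject step
        if np.log(rng.random()) < log_pi(y) - log_pi(x):
            x = y
            accepted += 1
    return accepted / n_iters
```

With the proposal scale shrinking like d^{-1/2}, the acceptance rate stays bounded away from 0 and 1 as d grows (approaching roughly 0.234 in the diffusion limit), which is the behavior behind the O(d) complexity bound stated in the abstract.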


Citations
Posted Content

Rapid Mixing of Hamiltonian Monte Carlo on Strongly Log-Concave Distributions

Oren Mangoubi, +1 more
- 23 Aug 2017 - 
TL;DR: In this paper, the mixing properties of the Hamiltonian Monte Carlo (HMC) algorithm for a strongly log-concave target distribution were studied and it was shown that HMC mixes quickly in this setting.
Journal ArticleDOI

Sampling can be faster than optimization.

TL;DR: In this paper, the authors examine a class of nonconvex objective functions that arise in mixture modeling and multistable systems and find that the computational complexity of sampling algorithms scales linearly with the model dimension while that of optimization algorithms scales exponentially.
Posted Content

Sampling Can Be Faster Than Optimization

TL;DR: This work examines a class of nonconvex objective functions that arise in mixture modeling and multistable systems and finds that the computational complexity of sampling algorithms scales linearly with the model dimension while that of optimization algorithms scales exponentially.
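The contrast drawn above between sampling and optimization on nonconvex, multistable objectives can be sketched on a toy tilted double-well potential. The functions below are illustrative assumptions, not the authors' experiments: gradient descent started in the shallow well stays there, while an unadjusted Langevin sampler crosses the barrier and concentrates in the deeper well.

```python
import numpy as np

# Tilted double-well: local minimum near x = +0.96, global minimum near x = -1.04.
U = lambda x: (x**2 - 1.0)**2 + 0.3 * x
grad_U = lambda x: 4.0 * x * (x**2 - 1.0) + 0.3

def gradient_descent(x0, step=0.01, n_iters=5000):
    """Plain gradient descent; converges to the nearest local minimum."""
    x = x0
    for _ in range(n_iters):
        x -= step * grad_U(x)
    return x

def ula(x0, step=0.01, n_iters=100000, seed=0):
    """Unadjusted Langevin algorithm: noisy gradient steps targeting exp(-U)."""
    rng = np.random.default_rng(seed)
    x, samples = x0, []
    for _ in range(n_iters):
        x += -step * grad_U(x) + np.sqrt(2.0 * step) * rng.standard_normal()
        samples.append(x)
    return np.array(samples)

# Started in the shallow right-hand well, descent stays near x = +0.96,
# while the Langevin samples put most of their mass in the deeper left-hand
# well, so their mean is negative.
x_gd = gradient_descent(1.0)
x_mean = ula(1.0).mean()
```

This is only a one-dimensional caricature; the papers' claim concerns how the two costs scale with the model dimension, not a single fixed instance.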
Posted Content

Randomized Hamiltonian Monte Carlo as Scaling Limit of the Bouncy Particle Sampler and Dimension-Free Convergence Rates

TL;DR: The Bouncy Particle Sampler (BPS) as mentioned in this paper is a Markov chain Monte Carlo method based on a nonreversible piecewise-deterministic Markov process: a particle explores the state space of interest by following linear dynamics, bouncing off the hyperplane tangent to the gradient of the negative log-target density at the arrival times of an inhomogeneous Poisson process, and having its velocity randomly refreshed at the times of a homogeneous Poisson process.
Posted Content

The Barker proposal: combining robustness and efficiency in gradient-based MCMC

TL;DR: This work proposes a novel and simple gradient-based MCMC algorithm, inspired by the classical Barker accept-reject rule, with improved robustness properties, and shows numerically that this type of robustness is particularly beneficial in the context of adaptive MCMC.
References
Journal ArticleDOI

Inference from Iterative Simulation Using Multiple Sequences

TL;DR: The focus is on applied inference for Bayesian posterior distributions in real problems, which often tend toward normality after transformations and marginalization, and the results are derived as normal-theory approximations to exact Bayesian inference, conditional on the observed simulations.
Proceedings ArticleDOI

The complexity of theorem-proving procedures

TL;DR: It is shown that any recognition problem solved by a polynomial time-bounded nondeterministic Turing machine can be “reduced” to the problem of determining whether a given propositional formula is a tautology.
Book

Markov Processes: Characterization and Convergence

TL;DR: In this book, the authors characterize Markov processes via their generators and associated martingale problems, and develop general theorems on the weak convergence of sequences of Markov processes.
BookDOI

MCMC using Hamiltonian dynamics

Radford M. Neal
- 09 Jun 2012 - 
TL;DR: In this paper, the authors discuss theoretical and practical aspects of Hamiltonian Monte Carlo, and present some of its variations, including using windows of states for deciding on acceptance or rejection, computing trajectories using fast approximations, tempering during the course of a trajectory to handle isolated modes, and short-cut methods that prevent useless trajectories from taking much computation time.
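The basic HMC scheme discussed by Neal, leapfrog integration of Hamiltonian dynamics followed by a Metropolis accept/reject step on the total energy, can be sketched as follows. This is a minimal illustration assuming a standard Gaussian target (so grad U(q) = q); the function name `hmc_sample` and all parameter values are illustrative choices, not Neal's.

```python
import numpy as np

def hmc_sample(n_samples=2000, dim=2, step=0.1, n_leapfrog=20, seed=0):
    """Hamiltonian Monte Carlo for a standard Gaussian target:
    U(q) = |q|^2 / 2, so grad U(q) = q."""
    rng = np.random.default_rng(seed)
    U = lambda q: 0.5 * q @ q
    grad_U = lambda q: q
    q = np.zeros(dim)
    samples = []
    for _ in range(n_samples):
        p = rng.standard_normal(dim)  # resample the momentum
        q_new, p_new = q, p
        # Leapfrog integration: half step in p, alternating full steps, half step in p
        p_new = p_new - 0.5 * step * grad_U(q_new)
        for _ in range(n_leapfrog - 1):
            q_new = q_new + step * p_new
            p_new = p_new - step * grad_U(q_new)
        q_new = q_new + step * p_new
        p_new = p_new - 0.5 * step * grad_U(q_new)
        # Metropolis accept/reject on the change in total energy H = U + |p|^2/2
        dH = (U(q) + 0.5 * p @ p) - (U(q_new) + 0.5 * p_new @ p_new)
        if np.log(rng.random()) < dH:
            q = q_new
        samples.append(q.copy())
    return np.array(samples)
```

Because the leapfrog integrator nearly conserves the Hamiltonian, the acceptance step corrects only a small discretization error, which is what lets HMC make distant proposals with high acceptance probability.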
BookDOI

Handbook of Markov Chain Monte Carlo

TL;DR: Case studies include a Markov chain Monte Carlo based analysis of a multilevel model for functional MRI data, along with applications in environmental epidemiology, educational research, and fisheries science.