Open Access Journal Article

Irreversible Langevin samplers and variance reduction: a large deviations approach

TLDR
In this article, the authors show that the addition of an irreversible drift leads to a larger rate function and strictly improves the speed of convergence of ergodic averages for (generic smooth) observables.
Abstract
In order to sample from a given target distribution (often of Gibbs type), the Markov chain Monte Carlo method consists of constructing an ergodic Markov process whose invariant measure is the target distribution. By sampling the Markov process one can then compute, approximately, expectations of observables with respect to the target distribution. Often the Markov processes used in practice are time-reversible (i.e. they satisfy detailed balance), but our main goal here is to assess and quantify how the addition of a non-reversible part to the process can be used to improve the sampling properties. We focus on the diffusion setting (overdamped Langevin equations) where the drift consists of a gradient vector field as well as another drift which breaks the reversibility of the process but is chosen to preserve the Gibbs measure. In this paper we use the large deviation rate function for the empirical measure as a tool to analyze the speed of convergence to the invariant measure. We show that the addition of an irreversible drift leads to a larger rate function and strictly improves the speed of convergence of ergodic averages for (generic smooth) observables. We also deduce from this result that the asymptotic variance decreases under the addition of the irreversible drift, and we give an explicit characterization of the observables whose variance is not reduced, in terms of a nonlinear Poisson equation. Our theoretical results are illustrated and supplemented by numerical simulations.
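The construction described in the abstract can be illustrated with a short simulation: integrate the overdamped Langevin SDE $dX_t = \big(-\nabla U(X_t) + \delta\, J \nabla U(X_t)\big)\,dt + \sqrt{2}\,dW_t$, where $J$ is a constant antisymmetric matrix, so that the added drift $\delta J \nabla U$ leaves the Gibbs measure $e^{-U}$ invariant. The sketch below is a minimal illustration written for this summary (the Gaussian potential, the observable and all names such as delta and n_steps are choices made here, not taken from the paper); it compares the spread of ergodic averages across independent replicas with and without the irreversible part.

```python
import numpy as np

def ergodic_average(delta, n_steps=100_000, dt=1e-2, seed=0):
    """Euler-Maruyama discretization of
        dX = (-grad U(X) + delta * J grad U(X)) dt + sqrt(2) dW
    with U(x) = 0.5 * x^T Sigma^{-1} x (2D Gaussian target) and J antisymmetric.
    Returns the ergodic average of the observable f(x) = x[0] + x[1]**2."""
    rng = np.random.default_rng(seed)
    Sigma_inv = np.linalg.inv(np.array([[1.0, 0.8], [0.8, 1.0]]))
    J = np.array([[0.0, 1.0], [-1.0, 0.0]])          # antisymmetric: preserves exp(-U)
    x = np.zeros(2)
    acc = 0.0
    for _ in range(n_steps):
        grad_U = Sigma_inv @ x
        drift = -grad_U + delta * (J @ grad_U)       # reversible + irreversible parts
        x = x + drift * dt + np.sqrt(2.0 * dt) * rng.standard_normal(2)
        acc += x[0] + x[1] ** 2
    return acc / n_steps

# Spread of the ergodic average over independent replicas; the true value is 1.
# A smaller spread for delta != 0 is the variance reduction discussed in the abstract.
for delta in (0.0, 5.0):
    avgs = [ergodic_average(delta, seed=s) for s in range(10)]
    print(f"delta={delta}: mean={np.mean(avgs):.3f}, std across replicas={np.std(avgs):.3f}")
```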


Citations
Journal Article

Partial differential equations and stochastic methods in molecular dynamics

TL;DR: This review describes how techniques from the analysis of partial differential equations can be used to devise good algorithms and to quantify their efficiency and accuracy.
Journal Article

Variance reduction using nonreversible Langevin samplers

TL;DR: In this paper, a detailed study of the dependence of the asymptotic variance on the deviation from reversibility is presented, and the theoretical findings are supported by numerical simulations.
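A simple way to make the dependence on the degree of irreversibility quantitative in such experiments is to estimate the asymptotic variance of the time average by batch means for several values of the irreversibility strength. The sketch below is a generic illustration on the same 2D Gaussian example as above (the batch sizes, the observable f(x) = x[0] and the grid of delta values are arbitrary choices made here, not the cited paper's setup).

```python
import numpy as np

def asymptotic_variance(delta, n_steps=200_000, dt=1e-2, n_batches=50, seed=1):
    """Batch-means estimate of the asymptotic variance of the time average of
    f(x) = x[0] for the 2D Gaussian example, as a function of the
    irreversibility strength delta (illustrative choices, not the paper's setup)."""
    rng = np.random.default_rng(seed)
    Sigma_inv = np.linalg.inv(np.array([[1.0, 0.8], [0.8, 1.0]]))
    J = np.array([[0.0, 1.0], [-1.0, 0.0]])
    x = np.zeros(2)
    f = np.empty(n_steps)
    for k in range(n_steps):
        g = Sigma_inv @ x
        x = x + (-g + delta * (J @ g)) * dt + np.sqrt(2.0 * dt) * rng.standard_normal(2)
        f[k] = x[0]
    batch_means = f.reshape(n_batches, -1).mean(axis=1)
    batch_time = (n_steps // n_batches) * dt        # batch length in continuous time
    return batch_time * batch_means.var(ddof=1)     # Var(average over T) ~ sigma^2 / T

for delta in (0.0, 1.0, 2.0, 5.0):
    print(f"delta={delta}: estimated asymptotic variance ~ {asymptotic_variance(delta):.3f}")
```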
Journal Article

The Zig-Zag Process and Super-Efficient Sampling for Bayesian Analysis of Big Data

TL;DR: In this article, a new family of Monte Carlo methods is introduced, based upon a multi-dimensional version of the Zig-Zag process of Bierkens and Roberts (2017), a continuous-time piecewise deterministic Markov process.
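For readers unfamiliar with piecewise deterministic samplers, the following minimal sketch simulates the one-dimensional Zig-Zag process for a standard Gaussian target, for which the event times can be sampled exactly. It is written for this summary (the target, event count and function names are choices made here) and is not the multi-dimensional algorithm of the cited paper.

```python
import numpy as np

def zigzag_gaussian(n_events=100_000, seed=0):
    """1D Zig-Zag process for a standard Gaussian target, U(x) = x^2 / 2.
    The switching rate is lambda(x, v) = max(0, v * x); along the deterministic
    flow it is piecewise linear, so the next event time can be drawn exactly
    by inverting the integrated rate against an Exp(1) variable."""
    rng = np.random.default_rng(seed)
    x, v = 0.0, 1.0
    total_time = 0.0
    int_x2 = 0.0                                    # integral of x(t)^2 along the path
    for _ in range(n_events):
        a = v * x
        e = rng.exponential()
        tau = np.sqrt(max(a, 0.0) ** 2 + 2.0 * e) - a   # exact next switching time
        # integrate x(t)^2 = (x + v t)^2 over [0, tau] (using v^2 = 1)
        int_x2 += x * x * tau + x * v * tau ** 2 + tau ** 3 / 3.0
        total_time += tau
        x += v * tau                                # deterministic linear motion
        v = -v                                      # switch direction at the event
    return int_x2 / total_time                      # time-average estimate of E[X^2] = 1

print("estimated E[X^2] ~", zigzag_gaussian())
```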
Journal Article

Measuring sample quality with diffusions

TL;DR: In this article, a new class of characterizing operators based on Itô diffusions was introduced, and explicit multivariate Stein factor bounds were developed for any target with a fast-coupling Itô diffusion.
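These diffusion-based quality measures build on Stein operators. As a simplified, one-dimensional illustration of the underlying Stein identity (not the paper's discrepancy computation or its Stein factor bounds), the Langevin Stein operator $(\mathcal{A}f)(x) = f'(x)\,(\log p)'(x) + f''(x)$ has mean zero under the target $p$, so its empirical mean under a candidate sample gives a crude signal of sample quality. The test function and sample sizes below are arbitrary choices made here.

```python
import numpy as np

rng = np.random.default_rng(0)

def stein_term(x):
    """Langevin Stein operator (A f)(x) = f'(x) * d/dx log p(x) + f''(x)
    for a standard Gaussian target p (so d/dx log p(x) = -x), with f = tanh:
    f'(x) = 1 - tanh(x)^2, f''(x) = -2 tanh(x) (1 - tanh(x)^2)."""
    t = np.tanh(x)
    fp = 1.0 - t ** 2
    fpp = -2.0 * t * fp
    return fp * (-x) + fpp

good = rng.standard_normal(100_000)            # sample from the target N(0, 1)
bad = 0.5 + rng.standard_normal(100_000)       # shifted sample with the wrong mean
print("E[Af] under target sample :", stein_term(good).mean())   # ~ 0
print("E[Af] under shifted sample:", stein_term(bad).mean())    # clearly nonzero
```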
Journal Article

A piecewise deterministic scaling limit of Lifted Metropolis-Hastings in the Curie-Weiss model

TL;DR: The scaling limit of the magnetization process in the Curie-Weiss model is derived for Lifted Metropolis-Hastings (LMH), as introduced by Turitsyn, Chertkov and Vucelja.
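For context, the lifting idea behind Lifted Metropolis-Hastings can be shown on a generic one-dimensional lattice chain: the state is augmented with a direction variable, proposals always move in that direction, and the direction is flipped only on rejection. The sketch below is a generic illustration of that construction under assumptions made here (the double-well target, lattice size and names are chosen for this summary; it is not the Curie-Weiss magnetization chain analyzed in the paper).

```python
import numpy as np

def lifted_mh(log_pi, n_states, n_steps=500_000, seed=0):
    """Lifted Metropolis-Hastings on the lattice {0, ..., n_states-1}.
    The state is (x, eps) with eps in {-1, +1}; propose x + eps, accept with the
    usual Metropolis ratio, and flip eps only when the proposal is rejected.
    This satisfies skew detailed balance, so pi(x) x uniform(eps) is invariant."""
    rng = np.random.default_rng(seed)
    x, eps = n_states // 2, 1
    visits = np.zeros(n_states)
    for _ in range(n_steps):
        y = x + eps
        if 0 <= y < n_states and np.log(rng.random()) < log_pi(y) - log_pi(x):
            x = y              # accepted: keep moving in the same direction
        else:
            eps = -eps         # rejected (or off the lattice): reverse direction
        visits[x] += 1
    return visits / n_steps

# Illustrative target: a discretized double-well density on 51 lattice sites.
n = 51
grid = np.linspace(-2.0, 2.0, n)
log_pi = lambda i: -(grid[i] ** 2 - 1.0) ** 2 / 0.5
est = lifted_mh(log_pi, n)
exact = np.exp([log_pi(i) for i in range(n)])
exact /= exact.sum()
print("max |error| in stationary probabilities:", np.abs(est - exact).max())
```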
References
Book

Stochastic Simulation

TL;DR: Brian D. Ripley's Stochastic Simulation is a short, yet ambitious, survey of modern simulation techniques, and three themes run throughout the book.
Journal Article

General state space Markov chains and MCMC algorithms

TL;DR: In this paper, a survey of results about Markov chains on non-countable state spaces is presented, including necessary and sufficient conditions for geometric and uniform ergodicity together with quantitative bounds on the rate of convergence to stationarity.
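The geometric ergodicity criteria surveyed there are typically phrased through a Foster-Lyapunov drift condition together with a minorization on a small set $C$; a standard formulation (paraphrased from the general Markov chain literature, not quoted from this survey) is
\[
  \exists\, V \ge 1,\ \lambda \in (0,1),\ b < \infty:\qquad
  PV(x) \;\le\; \lambda\, V(x) + b\,\mathbf{1}_C(x) \quad \text{for all } x,
\]
which, combined with a minorization $P(x,\cdot) \ge \epsilon\,\nu(\cdot)$ for $x \in C$, yields a quantitative geometric bound
\[
  \big\| P^n(x,\cdot) - \pi \big\|_{\mathrm{TV}} \;\le\; R\, V(x)\, \rho^n
  \qquad \text{for some } R < \infty,\ \rho < 1 .
\]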
Journal Article

Rates of convergence of the Hastings and Metropolis algorithms

TL;DR: Recent results in Markov chain theory are applied to Hastings and Metropolis algorithms with either independent or symmetric candidate distributions, and it is shown that geometric convergence essentially occurs if and only if $\pi$ has geometric tails.
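For the one-dimensional random-walk case, the tail condition alluded to here is usually stated as log-concavity of $\pi$ in the tails (paraphrased from the standard Markov chain literature, not quoted from the paper):
\[
  \exists\, \alpha > 0,\ x_1 \ge 0:\qquad
  \frac{\pi(y)}{\pi(x)} \;\le\; e^{-\alpha (y - x)}
  \quad \text{for all } y \ge x \ge x_1,
  \quad \text{and symmetrically in the left tail.}
\]
Under this condition $\pi$ has exponentially decaying ("geometric") tails and the random-walk sampler is geometrically ergodic, whereas heavier (e.g. polynomial) tails rule geometric ergodicity out; that is the "only if" direction in the summary above.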
Journal Article

Optimum Monte-Carlo sampling using Markov chains

P. H. Peskun
01 Dec 1973
TL;DR: In this paper, the relative merits of the two simple choices for $s_{ij}$ suggested by Hastings (1970) are discussed, and the optimum choice for $s_{ij}$ is shown to be one of these.
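In Hastings' framework the acceptance probability has the form $\alpha_{ij} = s_{ij} / (1 + t_{ij})$ with $t_{ij} = \pi_i q_{ij} / (\pi_j q_{ji})$, and the two simple choices compared are the Metropolis and Barker forms; written out in standard notation for reference (not quoted from the paper):
\[
  \alpha^{\mathrm{M}}_{ij} = \min\!\left(1,\ \frac{\pi_j q_{ji}}{\pi_i q_{ij}}\right),
  \qquad
  \alpha^{\mathrm{B}}_{ij} = \frac{\pi_j q_{ji}}{\pi_i q_{ij} + \pi_j q_{ji}},
  \qquad
  \alpha^{\mathrm{M}}_{ij} \;\ge\; \alpha^{\mathrm{B}}_{ij} \ \ \text{for all } i \ne j .
\]
Peskun's ordering then says that, among chains reversible with respect to $\pi$ and sharing the same proposal, pointwise larger off-diagonal transition probabilities give an asymptotic variance that is no larger for every observable, which singles out the Metropolis choice as optimal.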