On particle Gibbs sampling
TL;DR
This paper presents a coupling construction between two particle Gibbs updates from different starting points, shows that the coupling probability can be made arbitrarily close to one by increasing the number of particles, and extends particle Gibbs to work with lower-variance resampling schemes.

Abstract
The particle Gibbs sampler is a Markov chain Monte Carlo (MCMC) algorithm for sampling from the full posterior distribution of a state-space model. It does so by executing Gibbs sampling steps on an extended target distribution defined on the space of the auxiliary variables generated by an interacting particle system. This paper makes the following contributions to the theoretical study of this algorithm. Firstly, we present a coupling construction between two particle Gibbs updates from different starting points, and we show that the coupling probability may be made arbitrarily close to one by increasing the number of particles. As a direct corollary, we obtain that the particle Gibbs kernel is uniformly ergodic. Secondly, we show that the inclusion of an additional Gibbs sampling step that reselects the ancestors of the particle Gibbs' extended target distribution, a popular approach in practice to improve mixing, does indeed yield a theoretically more efficient algorithm as measured by the asymptotic variance. Thirdly, we extend particle Gibbs to work with lower-variance resampling schemes. A detailed numerical study is provided to demonstrate the efficiency of particle Gibbs and the proposed variants.
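The core of the particle Gibbs update described above, a conditional SMC step in which one particle is clamped to a reference trajectory, can be sketched as follows. This is a minimal bootstrap-filter sketch under an illustrative AR(1)-plus-Gaussian-noise model; the function name, model, and parameter defaults are assumptions for illustration, not taken from the paper.

```python
import numpy as np

def conditional_smc(y, x_ref, N, rng, phi=0.9, sigma_x=1.0, sigma_y=1.0):
    """One conditional SMC (particle Gibbs) update: a bootstrap particle
    filter in which particle N-1 is clamped to the reference trajectory
    x_ref.  Illustrative model (not from the paper):
        x_t = phi * x_{t-1} + N(0, sigma_x^2),   y_t = x_t + N(0, sigma_y^2).
    Returns a new trajectory sampled from the extended target."""
    T = len(y)
    x = np.zeros((T, N))
    anc = np.zeros((T, N), dtype=int)

    # t = 0: sample from the prior, clamp the last particle to the reference
    x[0] = rng.normal(0.0, sigma_x, size=N)
    x[0, N - 1] = x_ref[0]
    logw = -0.5 * (y[0] - x[0]) ** 2 / sigma_y**2

    for t in range(1, T):
        w = np.exp(logw - logw.max())
        w /= w.sum()
        # Multinomial resampling for particles 0..N-2; the reference
        # particle keeps itself as its own ancestor.
        anc[t, : N - 1] = rng.choice(N, size=N - 1, p=w)
        anc[t, N - 1] = N - 1
        x[t] = phi * x[t - 1, anc[t]] + rng.normal(0.0, sigma_x, size=N)
        x[t, N - 1] = x_ref[t]
        logw = -0.5 * (y[t] - x[t]) ** 2 / sigma_y**2

    # Draw one trajectory index proportionally to the final weights,
    # then trace its ancestry backwards to recover the full path.
    w = np.exp(logw - logw.max())
    w /= w.sum()
    k = rng.choice(N, p=w)
    traj = np.zeros(T)
    for t in range(T - 1, -1, -1):
        traj[t] = x[t, k]
        k = anc[t, k]
    return traj
```

Iterating this update, feeding the returned trajectory back in as the next reference, yields a particle Gibbs chain; the coupling and uniform-ergodicity results in the paper concern exactly this kernel as N grows.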
Citations
Posted Content
Particle Gibbs with Ancestor Sampling
TL;DR: Particle Markov chain Monte Carlo (PMCMC) is a systematic way of combining the two main tools used for Monte Carlo statistical inference: sequential Monte Carlo (SMC) and Markov chain Monte Carlo (MCMC).
Journal Article
Unbiased Markov chain Monte Carlo methods with couplings
TL;DR: The theoretical validity of the proposed estimators and their efficiency relative to the underlying MCMC algorithms are established, and the performance and limitations of the method are illustrated.
Journal Article
Particle Filters and Data Assimilation
Paul Fearnhead, Hans R. Künsch +1 more
TL;DR: The challenges posed by models with high-dimensional states, joint estimation of parameters and the state, and inference for the history of the state process are discussed, along with methods based on the particle filter and the ensemble Kalman filter.
Journal Article
Smoothing with Couplings of Conditional Particle Filters
TL;DR: An unbiased estimator of smoothing expectations is proposed for state-space models with noisy measurements related to the latent process, based on couplings of conditional particle filters.
Posted Content
Uniform Ergodicity of the Iterated Conditional SMC and Geometric Ergodicity of Particle Gibbs samplers
TL;DR: The essential boundedness of the potential functions associated with the i-cSMC algorithm provides necessary and sufficient conditions for its uniform ergodicity and quantitative bounds on its geometric rate of convergence, which imply convergence of properties of the particle Gibbs Markov chain to those of its corresponding Gibbs sampler.
References
Book Chapter
Probability Inequalities for Sums of Bounded Random Variables
TL;DR: Upper bounds are derived for the probability that the sum S of n independent, bounded random variables exceeds its mean ES by a positive number nt; similar bounds are obtained for certain sums of dependent random variables such as U-statistics.
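For independent random variables $X_i$ with $a_i \le X_i \le b_i$ and $S = \sum_{i=1}^n X_i$, the bound summarized above (Hoeffding's inequality) reads:

```latex
\Pr\left(S - \mathbb{E}S \ge nt\right)
  \le \exp\!\left(-\frac{2 n^2 t^2}{\sum_{i=1}^{n}(b_i - a_i)^2}\right),
  \qquad t > 0.
```

When all variables share the same bounds $a \le X_i \le b$, the exponent simplifies to $-2nt^2/(b-a)^2$.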
Book
Sequential Monte Carlo methods in practice
TL;DR: This book presents the first comprehensive treatment of sequential Monte Carlo techniques, including convergence results and applications to tracking, guidance, automated target recognition, aircraft navigation, robot navigation, econometrics, financial modeling, neural networks, optimal control, optimal filtering, communications, reinforcement learning, signal enhancement, and model averaging and selection.
Book
Sequential Monte Carlo methods for dynamic systems
TL;DR: A general framework for using Monte Carlo methods in dynamic systems is presented, and the general use of Rao-Blackwellization is proposed to improve performance and to compare different Monte Carlo procedures.
Journal Article
Particle Markov chain Monte Carlo methods
TL;DR: It is shown how efficient high-dimensional proposal distributions can be built using sequential Monte Carlo methods, which makes it possible not only to improve over standard Markov chain Monte Carlo schemes but also to make Bayesian inference feasible for a large class of statistical models where it was not previously so.
Book
Inference in Hidden Markov Models
TL;DR: This book is a comprehensive treatment of inference for hidden Markov models, including both algorithms and statistical theory, and builds on recent developments to present a self-contained view.