scispace - formally typeset
Open Access · Posted Content

Dimension-free Mixing for High-dimensional Bayesian Variable Selection

TLDR
Building on Yang et al. (2016), this paper proposes an informed-proposal MCMC sampler for Bayesian variable selection and proves that, under the same assumptions, it achieves a much faster mixing time that is independent of the number of covariates; this is the first high-dimensional result which rigorously shows that the mixing rate of informed MCMC methods can be fast enough to offset the computational cost of local posterior evaluation.
Abstract
Yang et al. (2016) proved that the symmetric random walk Metropolis--Hastings algorithm for Bayesian variable selection is rapidly mixing under mild high-dimensional assumptions. We propose a novel MCMC sampler using an informed proposal scheme, which we prove achieves a much faster mixing time that is independent of the number of covariates, under the same assumptions. To the best of our knowledge, this is the first high-dimensional result which rigorously shows that the mixing rate of informed MCMC methods can be fast enough to offset the computational cost of local posterior evaluation. Motivated by the theoretical analysis of our sampler, we further propose a new approach called "two-stage drift condition" to studying convergence rates of Markov chains on general state spaces, which can be useful for obtaining tight complexity bounds in high-dimensional settings. The practical advantages of our algorithm are illustrated by both simulation studies and real data analysis.
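To make the idea of an informed proposal concrete, here is a minimal sketch of a Metropolis-Hastings sampler on binary inclusion vectors whose proposal flips a coordinate with probability proportional to the (clipped) posterior ratio of the flipped state. The g-prior marginal likelihood, the Bernoulli inclusion prior, and all parameter values are illustrative assumptions, not the exact prior or the exact proposal weighting analysed in the paper.

```python
import numpy as np

def log_posterior(gamma, X, y, g=100.0, q=0.05):
    """Illustrative log posterior of an inclusion vector gamma under a
    Zellner g-prior linear model with independent Bernoulli(q) inclusion
    priors (an assumption; not the paper's exact prior)."""
    n, p = X.shape
    k = int(gamma.sum())
    if k == 0:
        rss = float(y @ y)
    else:
        Xg = X[:, gamma.astype(bool)]
        beta_hat, *_ = np.linalg.lstsq(Xg, y, rcond=None)
        fitted = Xg @ beta_hat
        # equals y'y - g/(1+g) * ||fitted||^2, written to stay positive
        rss = float((y - fitted) @ (y - fitted) + (fitted @ fitted) / (1.0 + g))
    log_ml = -0.5 * k * np.log(1.0 + g) - 0.5 * n * np.log(rss)
    log_prior = k * np.log(q) + (p - k) * np.log(1.0 - q)
    return log_ml + log_prior

def _flip_weights(gamma, X, y, lp_current):
    """Normalised proposal weights over the p single-coordinate flips."""
    p = len(gamma)
    lps = np.empty(p)
    for j in range(p):
        gamma[j] = 1.0 - gamma[j]
        lps[j] = log_posterior(gamma, X, y)
        gamma[j] = 1.0 - gamma[j]
    # clip the posterior ratios so no single flip dominates: a bounded
    # weight function, in the spirit of informed proposals
    w = np.exp(np.clip(lps - lp_current, -8.0, 8.0))
    return lps, w / w.sum()

def informed_flip_sampler(X, y, n_iter=100, seed=0):
    """MH on {0,1}^p: propose flipping coordinate j with probability
    proportional to the flipped state's posterior, then accept or reject
    with the usual correction involving the reverse-proposal weight."""
    rng = np.random.default_rng(seed)
    p = X.shape[1]
    gamma = np.zeros(p)
    lp = log_posterior(gamma, X, y)
    freq = np.zeros(p)
    for _ in range(n_iter):
        lps, w = _flip_weights(gamma, X, y, lp)
        j = rng.choice(p, p=w)
        gamma[j] = 1.0 - gamma[j]          # propose the flip
        _, w_rev = _flip_weights(gamma, X, y, lps[j])
        log_acc = (lps[j] - lp) + np.log(w_rev[j]) - np.log(w[j])
        if np.log(rng.uniform()) < log_acc:
            lp = lps[j]                    # accept
        else:
            gamma[j] = 1.0 - gamma[j]      # reject: undo the flip
        freq += gamma
    return gamma, freq / n_iter
```

Each iteration evaluates the posterior at all 2p single-flip neighbours (the "local posterior evaluation" the abstract refers to); the paper's point is that the resulting drop in mixing time can more than pay for this per-step cost.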


Citations
Posted Content

Adaptive Random Neighbourhood Informed Markov Chain Monte Carlo for High-dimensional Bayesian Variable Selection

TL;DR: The authors introduce a framework for efficient Markov chain Monte Carlo (MCMC) algorithms targeting discrete-valued high-dimensional distributions, such as posterior distributions in Bayesian variable selection (BVS) problems.
Posted Content

On the Scalability of Informed Importance Tempering

Quan Zhou
- 22 Jul 2021 - 
TL;DR: This paper proposes a class of MCMC schemes called informed importance tempering (IIT), which combine importance sampling with informed local proposals; spectral gap bounds for IIT estimators demonstrate the remarkable scalability of IIT samplers for unimodal target distributions.
References
Journal ArticleDOI

Variable selection via Gibbs sampling

TL;DR: The Gibbs sampler is used to indirectly sample from the multinomial posterior distribution on the set of possible subset choices, identifying promising subsets by their more frequent appearance in the Gibbs sample.
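The indicator-sampling idea can be sketched in a few lines: each coordinate of the inclusion vector is redrawn from its Bernoulli full conditional, whose odds equal the posterior ratio of the two models differing only in that coordinate. The `log_ml` callable is a hypothetical stand-in for a generic log posterior over inclusion vectors, not George and McCulloch's hierarchical mixture prior.

```python
import numpy as np

def gibbs_variable_selection(p, log_ml, n_sweeps=50, seed=0):
    """Systematic-scan Gibbs over an inclusion vector gamma in {0,1}^p.
    Each gamma_j is redrawn from its full conditional: a Bernoulli whose
    odds equal exp(log_ml(gamma with j in) - log_ml(gamma with j out)).
    `log_ml` is a hypothetical user-supplied log posterior over gamma."""
    rng = np.random.default_rng(seed)
    gamma = np.zeros(p)
    freq = np.zeros(p)
    for _ in range(n_sweeps):
        for j in range(p):
            gamma[j] = 1.0
            lp1 = log_ml(gamma)
            gamma[j] = 0.0
            lp0 = log_ml(gamma)
            # Bernoulli full conditional; exp overflow just drives the
            # inclusion probability to 0, which is the correct limit
            prob_incl = 1.0 / (1.0 + np.exp(lp0 - lp1))
            gamma[j] = float(rng.uniform() < prob_incl)
        freq += gamma  # promising subsets appear more often in the sample
    return freq / n_sweeps
```

Averaging the sampled indicators gives the estimated posterior inclusion frequencies that the TL;DR alludes to: subsets with higher posterior probability appear more often in the Gibbs sample.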
Journal ArticleDOI

Sure independence screening for ultrahigh dimensional feature space

TL;DR: In this article, the authors introduce the concept of sure screening and propose a sure screening method that is based on correlation learning, called sure independence screening, to reduce dimensionality from high to a moderate scale that is below the sample size.
Book

Lectures on the Coupling Method

TL;DR: Lecture notes on the coupling method, covering discrete theory, continuous theory, inequalities, intensity-governed processes, and diffusions, with an appendix of frequently used notation, references, and an index.
Journal Article

Approaches for Bayesian variable selection

TL;DR: The authors compare various hierarchical mixture prior formulations of variable selection uncertainty in normal linear regression models, including the nonconjugate SSVS formulation of George and McCulloch (1993), as well as conjugate formulations which allow for analytical simplification.
Journal ArticleDOI

Geometric Bounds for Eigenvalues of Markov Chains

TL;DR: This paper derives bounds on the second-largest eigenvalue and spectral gap of a reversible Markov chain in terms of geometric quantities such as the maximum degree, diameter, and covering number of associated graphs, with an application to the random walk used in approximate computation of the permanent.