Open Access Journal Article

Approaches for Bayesian Variable Selection

TL;DR
The authors compare various hierarchical mixture prior formulations of variable selection uncertainty in normal linear regression models, including the nonconjugate SSVS formulation of George and McCulloch (1993), as well as conjugate formulations which allow for analytical simplification.
Abstract
This paper describes and compares various hierarchical mixture prior formulations of variable selection uncertainty in normal linear regression models. These include the nonconjugate SSVS formulation of George and McCulloch (1993), as well as conjugate formulations which allow for analytical simplification. Hyperparameter settings which base selection on practical significance, and the implications of using mixtures with point priors, are discussed. Computational methods for posterior evaluation and exploration are considered. Rapid updating methods are seen to provide feasible methods for exhaustive evaluation using Gray Code sequencing in moderately sized problems, and fast Markov Chain Monte Carlo exploration in large problems. Estimation of normalization constants is seen to provide improved posterior estimates of individual model probabilities and the total visited probability. Various procedures are illustrated on simulated sample problems and on a real problem concerning the construction of financial index tracking portfolios.
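The Gray Code device mentioned above is concrete enough to sketch: enumerating all 2^p candidate models in binary-reflected Gray Code order makes successive models differ by exactly one variable, so each model's quantities can be obtained by a cheap update of its predecessor's. The sketch below is a minimal illustration, not the authors' implementation; it recomputes each least-squares fit from scratch where the paper's rapid updating would apply a rank-one change.

```python
import numpy as np

def gray_code_gammas(p):
    """Yield all 2**p inclusion vectors gamma so that consecutive
    vectors differ in exactly one coordinate (binary-reflected
    Gray code), enabling cheap one-variable model updates."""
    gamma = np.zeros(p, dtype=bool)
    yield gamma.copy()
    for i in range(1, 2 ** p):
        # g(i) = i ^ (i >> 1); g(i) and g(i - 1) differ in one bit.
        diff = (i ^ (i >> 1)) ^ ((i - 1) ^ ((i - 1) >> 1))
        j = diff.bit_length() - 1     # index of the flipped variable
        gamma[j] = not gamma[j]
        yield gamma.copy()

# Exhaustively score all 2**p submodels of a toy regression.
rng = np.random.default_rng(0)
n, p = 50, 4
X = rng.standard_normal((n, p))
y = X[:, 0] - 2.0 * X[:, 2] + rng.standard_normal(n)

for gamma in gray_code_gammas(p):
    Xg = X[:, gamma]
    if Xg.shape[1] == 0:
        rss = float(y @ y)            # null model
    else:
        beta, *_ = np.linalg.lstsq(Xg, y, rcond=None)
        rss = float(np.sum((y - Xg @ beta) ** 2))
    # Recomputed from scratch here; the paper's rapid updating
    # exploits the single-variable change between successive models.
    print(gamma.astype(int), round(rss, 1))
```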



Citations
Journal Article

Bayesian Model Averaging: A Tutorial

TL;DR: Bayesian model averaging (BMA) provides a coherent mechanism for accounting for model uncertainty and yields improved out-of-sample predictive performance.
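As a hedged illustration of how model averaging works in practice, the sketch below uses the common BIC approximation to posterior model probabilities, w_k proportional to exp(-BIC_k / 2); the numbers are toy values, not results from the tutorial.

```python
import numpy as np

def bma_weights(bics):
    """Approximate posterior model probabilities from BIC values
    via w_k ~ exp(-BIC_k / 2), shifted by the minimum BIC for
    numerical stability."""
    b = np.asarray(bics, dtype=float)
    w = np.exp(-(b - b.min()) / 2.0)
    return w / w.sum()

# A BMA prediction averages per-model predictions by these weights.
bics = [102.3, 103.1, 108.7]          # toy per-model BICs
preds = np.array([1.10, 1.25, 0.90])  # toy per-model predictions
w = bma_weights(bics)
print(np.round(w, 3), round(float(w @ preds), 3))
```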
Journal Article

The Bayesian Lasso

TL;DR: The Lasso estimate for linear regression parameters can be interpreted as a Bayesian posterior mode estimate when the regression parameters have independent Laplace (i.e., double-exponential) priors.
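That correspondence is easy to check numerically: with the noise variance fixed, the negative log posterior under independent Laplace priors is exactly the lasso objective, so its minimizer is a lasso estimate. A minimal sketch, where the data, lambda, and the derivative-free optimizer are all illustrative choices:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n, p = 100, 5
X = rng.standard_normal((n, p))
y = X @ np.array([2.0, 0.0, -1.0, 0.0, 0.0]) + rng.standard_normal(n)

sigma2, lam = 1.0, 20.0  # assumed noise variance and Laplace rate

def neg_log_posterior(beta):
    # Gaussian likelihood plus independent Laplace priors on beta:
    # minimizing this is the lasso problem, so the posterior mode
    # is a lasso estimate.
    r = y - X @ beta
    return (r @ r) / (2.0 * sigma2) + lam * np.abs(beta).sum()

# Powell is derivative-free, so the nondifferentiable |beta| terms
# are unproblematic; small coefficients shrink toward zero.
map_est = minimize(neg_log_posterior, np.zeros(p), method="Powell").x
print(np.round(map_est, 2))
```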
Journal Article

Bayesian Compressive Sensing

TL;DR: The underlying theory, an associated algorithm, example results, and comparisons to other compressive-sensing inversion algorithms in the literature are presented.
Journal Article

Sure independence screening for ultrahigh dimensional feature space

TL;DR: In this article, the authors introduce the concept of sure screening and propose a correlation-learning-based method, called sure independence screening, to reduce dimensionality from an ultrahigh scale to a moderate scale below the sample size.
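A minimal sketch of that screening step, assuming the d on the order of n / log n cutoff discussed in the paper; the data and dimensions are toy choices:

```python
import numpy as np

def sis(X, y, d):
    """Sure independence screening: rank features by the magnitude
    of their marginal correlation with y and keep the top d."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    corr = (Xc.T @ yc) / (np.linalg.norm(Xc, axis=0) * np.linalg.norm(yc))
    return np.argsort(-np.abs(corr))[:d]

rng = np.random.default_rng(2)
n, p = 80, 1000                   # "ultrahigh": p far exceeds n
X = rng.standard_normal((n, p))
y = 3.0 * X[:, 7] - 2.0 * X[:, 42] + rng.standard_normal(n)
d = int(n / np.log(n))            # a cutoff rate suggested in the paper
kept = sis(X, y, d)
print(7 in kept, 42 in kept)      # the true predictors should survive
```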
References
Journal Article

Reversible jump Markov chain Monte Carlo computation and Bayesian model determination

Peter J. Green
01 Dec 1995
TL;DR: In this paper, the author proposes a new framework for the construction of reversible Markov chain samplers that jump between parameter subspaces of differing dimensionality; the framework is flexible and entirely constructive.
Journal Article

Understanding the Metropolis-Hastings Algorithm

TL;DR: A detailed, introductory exposition of the Metropolis-Hastings algorithm, a powerful Markov chain method for simulating multivariate distributions, is given, along with a simple, intuitive derivation of the method and guidance on implementation.
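For the mechanics, here is a minimal random-walk Metropolis sketch, the special case of Metropolis-Hastings with a symmetric proposal; the target and step size are toy choices:

```python
import numpy as np

def random_walk_mh(log_target, x0, n_iter=5000, step=1.0, seed=0):
    """Random-walk Metropolis: propose x' = x + step * N(0, 1) and
    accept with probability min(1, pi(x') / pi(x))."""
    rng = np.random.default_rng(seed)
    x, lp = x0, log_target(x0)
    out = np.empty(n_iter)
    for i in range(n_iter):
        prop = x + step * rng.standard_normal()
        lp_prop = log_target(prop)
        # The proposal is symmetric, so the Hastings ratio reduces
        # to the ratio of target densities.
        if np.log(rng.uniform()) < lp_prop - lp:
            x, lp = prop, lp_prop
        out[i] = x
    return out

# Toy target: standard normal, so mean ~ 0 and variance ~ 1.
draws = random_walk_mh(lambda x: -0.5 * x * x, x0=0.0)
print(round(draws.mean(), 2), round(draws.var(), 2))
```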
Journal Article

Markov Chains for Exploring Posterior Distributions

Luke Tierney
01 Dec 1994
TL;DR: Several Markov chain methods for sampling from a posterior distribution, including the Gibbs sampler and the Metropolis algorithm, are reviewed, together with strategies for constructing hybrid algorithms that can guide the development of more efficient samplers.
Journal Article

Variable selection via Gibbs sampling

TL;DR: In this paper, the Gibbs sampler is used to sample indirectly from the multinomial posterior distribution on the set of possible subset choices, identifying promising subsets by their more frequent appearance in the Gibbs sample.
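The indicator step of that sampler is short enough to sketch. Assuming the two-component normal mixture prior of SSVS with illustrative settings for tau, c, and the prior inclusion probability (the beta and sigma^2 updates are omitted), each gamma_j is drawn from its full conditional given beta_j:

```python
import numpy as np

def sample_gamma(beta, tau=0.1, c=10.0, prior_incl=0.5, rng=None):
    """One Gibbs pass over the SSVS inclusion indicators.
    gamma_j = 0 -> beta_j ~ N(0, tau^2)       (spike: near zero)
    gamma_j = 1 -> beta_j ~ N(0, (c * tau)^2) (slab: nonzero)
    P(gamma_j = 1 | beta_j) follows from Bayes' rule on the two
    normal densities weighted by the prior inclusion probability."""
    rng = rng or np.random.default_rng()

    def norm_pdf(x, sd):
        return np.exp(-0.5 * (x / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

    a = prior_incl * norm_pdf(beta, c * tau)
    b = (1.0 - prior_incl) * norm_pdf(beta, tau)
    return rng.uniform(size=beta.shape) < a / (a + b)

beta_draw = np.array([0.03, 1.2, -0.8, 0.01])
print(sample_gamma(beta_draw).astype(int))  # likely [0, 1, 1, 0]
```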

Evaluating the accuracy of sampling-based approaches to the calculation of posterior moments

TL;DR: In this paper, methods from spectral analysis are used to evaluate numerical accuracy formally and to construct diagnostics for convergence in the normal linear model with informative priors and in the Tobit censored regression model.
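A simplified sketch of a Geweke-style check: compare the mean over an early window of the chain with the mean over a late window, standardized by estimates of the asymptotic variances. Batch means stand in below for the paper's spectral-density estimator, an assumption made for brevity:

```python
import numpy as np

def geweke_z(chain, first=0.1, last=0.5, n_batches=20):
    """Z-score comparing early vs. late window means; under
    convergence (and enough samples) it is roughly N(0, 1)."""
    def window_stats(x):
        batch_means = np.array([b.mean() for b in np.array_split(x, n_batches)])
        # Variance of the window mean, estimated from batch means.
        return x.mean(), batch_means.var(ddof=1) / n_batches
    a = chain[: int(first * len(chain))]
    b = chain[int((1.0 - last) * len(chain)):]
    ma, va = window_stats(a)
    mb, vb = window_stats(b)
    return (ma - mb) / np.sqrt(va + vb)

rng = np.random.default_rng(3)
print(round(geweke_z(rng.standard_normal(10_000)), 2))  # |z| should be small
```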