Open Access Journal Article (DOI)

Empirical Bayes vs. Fully Bayes Variable Selection

TLDR
In this article, the potential of alternative fully Bayes methods, which instead marginalize out the hyperparameters with respect to prior distributions, is studied, and several structured prior formulations are considered for which fully Bayes selection and estimation methods are obtained.
About
This article was published in the Journal of Statistical Planning and Inference on 2008-04-01 and is currently open access. It has received 146 citations to date. The article focuses on the topics: g-prior & Bayes factor.


Citations
Journal Article (DOI)

Mixtures of g Priors for Bayesian Variable Selection

TL;DR: In this paper, a mixture of g priors is proposed as an alternative to the default g prior, which resolves many of the problems with the original formulation while maintaining the computational tractability of the g prior.
Journal Article (DOI)

Bayes and empirical-Bayes multiplicity adjustment in the variable-selection problem

TL;DR: In this article, the multiplicity-correction effect of standard Bayesian variable-selection priors in linear regression is investigated, and empirical-Bayes and fully Bayes approaches to variable selection are compared through examples, theoretical results and simulations.
Journal Article (DOI)

EMVS: The EM Approach to Bayesian Variable Selection

TL;DR: EMVS is proposed, a deterministic alternative to stochastic search based on an EM algorithm which exploits a conjugate mixture prior formulation to quickly find posterior modes in high-dimensional linear regression contexts.
Journal Article (DOI)

Criteria for Bayesian model choice with application to variable selection

TL;DR: In this paper, the authors formalize the most general and compelling of the various criteria that have been suggested, together with a new criterion, and illustrate the potential of these criteria in determining objective model selection priors by considering their application to the problem of variable selection.
Posted Content

Benchmark Priors Revisited: On Adaptive Shrinkage and the Supermodel Effect in Bayesian Model Averaging

TL;DR: The authors propose a hyper-g prior, whose data-dependent shrinkage adapts posterior model distributions to data quality, and apply it to the determinants of economic growth, identifying several covariates whose robustness differs from previous results.
References
Journal Article (DOI)

Estimating the Dimension of a Model

TL;DR: In this paper, the problem of selecting one of a number of models of different dimensions is treated by finding its Bayes solution, and evaluating the leading terms of its asymptotic expansion.

Proceedings Article

Information Theory and an Extension of the Maximum Likelihood Principle

H. Akaike
TL;DR: The classical maximum likelihood principle can be considered to be a method of asymptotic realization of an optimum estimate with respect to a very general information theoretic criterion, which provides answers to many practical problems of statistical model fitting.
Book Chapter (DOI)

Information Theory and an Extension of the Maximum Likelihood Principle

TL;DR: In this paper, it is shown that the classical maximum likelihood principle can be considered to be a method of asymptotic realization of an optimum estimate with respect to a very general information theoretic criterion.
Book

Theory of probability

TL;DR: In this book, the author introduces direct probabilities, approximate methods and simplifications, significance tests for one new parameter and for various complications, and frequency definitions and direct methods.
Frequently Asked Questions (9)
Q1. What are the contributions mentioned in the paper "Empirical Bayes vs. Fully Bayes Variable Selection"?

In this paper, the authors study the potential of alternative fully Bayes methods, which instead marginalize out the hyperparameters with respect to prior distributions. Analytical and simulation comparisons with empirical Bayes counterparts are provided.

Letting γ = 1, ..., 2^p index the subsets of X_1, ..., X_p, and letting X_γ be the n × q_γ design matrix corresponding to the γth subset, this corresponds to uncertainty about which is the appropriate subset model Y = X_γ β_γ + ε (2). A common strategy under such variable selection uncertainty has been to select the γth model which maximizes a penalized regression sum of squares criterion of the form SS_γ/σ̂² − F q_γ.
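
As a concrete illustration of this criterion, here is a minimal Python sketch (not code from the paper: the function name, the exhaustive enumeration, the assumption that y is centered, and the convention of estimating σ̂² from the full model are my own) that scores every subset by SS_γ/σ̂² − F·q_γ and returns the maximizer:

```python
import numpy as np
from itertools import combinations

def penalized_ss_selection(X, y, F):
    """Return the subset maximizing SS_gamma / sigma_hat^2 - F * q_gamma.

    SS_gamma = ||X_gamma beta_hat_gamma||^2 is the regression sum of squares
    of the subset model (y assumed centered); sigma_hat^2 is estimated from
    the full-model residuals, one common convention (assumes n > p).
    """
    n, p = X.shape
    beta_full, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta_full
    sigma2_hat = resid @ resid / (n - p)

    best_score, best_subset = -np.inf, ()
    for q in range(p + 1):                      # model sizes 0, 1, ..., p
        for subset in combinations(range(p), q):
            if q == 0:
                ss = 0.0
            else:
                Xg = X[:, list(subset)]
                beta_g, *_ = np.linalg.lstsq(Xg, y, rcond=None)
                fit = Xg @ beta_g
                ss = fit @ fit                  # SS_gamma
            score = ss / sigma2_hat - F * q
            if score > best_score:
                best_score, best_subset = score, subset
    return best_subset, best_score

# The standard fixed penalties F = 2, log(n) and 2*log(p) correspond to
# AIC/Cp, BIC and RIC, respectively (see the calibration discussion below).
```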

From a decision-theoretic point of view, the appropriate loss for such selection rules is the 0–1 loss function, which is 0 if and only if the correct model is selected.
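
Written out in standard decision-theoretic notation (my notation, not quoted from the paper), the loss and the resulting Bayes rule are:

```latex
L(\gamma, \hat{\gamma}) \;=\; \mathbf{1}\{\hat{\gamma} \neq \gamma\} \;=\;
\begin{cases}
  0, & \hat{\gamma} = \gamma \quad (\text{correct model selected}),\\
  1, & \hat{\gamma} \neq \gamma,
\end{cases}
\qquad\text{so the Bayes rule is the posterior modal model } \hat{\gamma} = \arg\max_{\gamma} p(\gamma \mid y).
```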

The variable selection problem arises when there is uncertainty about which, if any, of the explanatory variables should be dropped from the model. 

By suitable choices of c and w, the posterior mode can be calibrated to correspond to traditional fixed-penalty criteria such as AIC/Cp, BIC or RIC.
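
The calibration works through the penalized form of the log posterior. The following reconstruction uses standard g-prior algebra with σ² treated as known (so it may differ from the paper's exact notation) to show how c and w map onto a fixed penalty F:

```latex
% Log posterior over subsets under the g-prior on \beta_\gamma and
% p(\gamma \mid w) = w^{q_\gamma} (1-w)^{p - q_\gamma}:
\log p(\gamma \mid y) = \mathrm{const}
  + \frac{c}{2(1+c)}\,\frac{SS_\gamma}{\sigma^2}
  - \frac{q_\gamma}{2}\Bigl[\log(1+c) + 2\log\tfrac{1-w}{w}\Bigr],
% so maximizing the posterior is equivalent to maximizing
% SS_\gamma/\sigma^2 - F(c, w)\, q_\gamma  with
F(c, w) = \frac{1+c}{c}\Bigl[\log(1+c) + 2\log\tfrac{1-w}{w}\Bigr];
% choosing (c, w) so that F(c, w) = 2, \log n, or 2\log p recovers
% AIC/C_p, BIC, or RIC.
```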

Note that U is the limiting distribution of w_{a,b} as w_a = w_b → ∞. Turning to c, the authors note that (1 + c) serves as a scale parameter in the marginal distribution f(y | c, γ).
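
The scale-parameter role of (1 + c) can be seen from the marginal covariance of y under the g-prior; this is a standard computation sketched here under the assumptions of a centered response and zero prior mean, not a quotation from the paper:

```latex
% With \beta_\gamma \mid c, \gamma \sim N\bigl(0,\; c\,\sigma^2 (X_\gamma' X_\gamma)^{-1}\bigr),
y \mid c, \gamma \;\sim\; N\!\bigl(0,\; \sigma^2 (I_n + c\,P_\gamma)\bigr),
\qquad P_\gamma = X_\gamma (X_\gamma' X_\gamma)^{-1} X_\gamma',
% so the covariance equals \sigma^2 (1+c) on the column space of X_\gamma and
% \sigma^2 on its orthogonal complement: (1+c) rescales the fitted directions.
```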

Using such a loss function for simulation is problematic, because getting the model exactly right is a very low-probability event when so many models are being compared.

For this purpose, the authors recommend setting b = 0 and using π(c) = (1 + c)^−(1+a) for c ∈ (0, ∞) (19), with a = 1 as the default prior on c.
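
As a small usage sketch of this default choice: the density form and the hyperparameter name a are reconstructed from the garbled extract above, so treat them as assumptions; the sampler itself is plain inverse-CDF sampling of the normalized density a(1 + c)^−(1+a).

```python
import numpy as np

def sample_c_prior(size, a=1.0, seed=None):
    """Draw from pi(c) = a * (1 + c)^(-(1 + a)) on (0, inf) by inverting the
    CDF F(c) = 1 - (1 + c)^(-a); the default a = 1 gives pi(c) = (1 + c)^(-2)."""
    rng = np.random.default_rng(seed)
    u = rng.uniform(size=size)
    return (1.0 - u) ** (-1.0 / a) - 1.0

# e.g. c_draws = sample_c_prior(10_000)
# Note the heavy right tail: for a <= 1 this prior has no finite mean.
```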

In such cases, heuristics such as stepwise methods or stochastic search might be used to restrict attention to a manageable set of models.
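
As an illustration of the first kind of heuristic, here is a greedy forward-stepwise sketch built on the penalized criterion above; the function name and the stopping rule are my own choices, not the paper's algorithm.

```python
import numpy as np

def forward_stepwise(X, y, F):
    """Greedy forward selection under SS_gamma / sigma_hat^2 - F * q_gamma,
    visiting O(p^2) candidate models instead of all 2^p subsets."""
    n, p = X.shape
    beta_full, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta_full
    sigma2_hat = resid @ resid / (n - p)        # assumes n > p, y centered

    def score(subset):
        if not subset:
            return 0.0
        Xg = X[:, list(subset)]
        beta_g, *_ = np.linalg.lstsq(Xg, y, rcond=None)
        fit = Xg @ beta_g
        return fit @ fit / sigma2_hat - F * len(subset)

    current, current_score = [], 0.0
    while len(current) < p:
        best_score, best_j = max(
            (score(current + [j]), j) for j in range(p) if j not in current)
        if best_score <= current_score:
            break                               # no single addition improves the criterion
        current, current_score = current + [best_j], best_score
    return current, current_score
```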