Journal Article

Prior elicitation, variable selection and Bayesian computation for logistic regression models

TL;DR: This work proposes an informative prior distribution for variable selection and novel methods for computing the marginal distribution of the data for the logistic regression model.
Abstract
Summary. Bayesian selection of variables is often difficult to carry out because of the challenges of specifying prior distributions for the regression parameters of all possible models, specifying a prior distribution on the model space, and carrying out the computations. We address these three issues for the logistic regression model. For the first, we propose an informative prior distribution for variable selection. Several theoretical and computational properties of the prior are derived and illustrated with several examples. For the second, we propose a method for specifying an informative prior on the model space, and for the third we propose novel methods for computing the marginal distribution of the data. The new computational algorithms require only Gibbs samples from the full model to facilitate the computation of the prior and posterior model probabilities for all possible models. Several properties of the algorithms are also derived. The prior specification for the first challenge focuses on the observables, in that the elicitation is based on a prior prediction y0 for the response vector and a quantity a0 quantifying the uncertainty in y0. Then y0 and a0 are used to specify a prior for the regression coefficients semi-automatically. Examples using real data are given to demonstrate the methodology.
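To make the elicitation concrete, the sketch below shows one way the prior prediction y0 and the weight a0 could be turned into a prior on the regression coefficients: treat y0 as a prior data set and raise its logistic likelihood to the power a0, in the spirit of the power-prior construction cited among the references. This is an illustrative sketch only; the function names, the exact functional form, and the example numbers are assumptions, not the paper's code.

```python
# Sketch of an informative prior for logistic regression coefficients built
# from a prior prediction y0 and a scalar weight a0 in (0, 1]. The form below
# (prior likelihood of y0 raised to the power a0) is an assumption consistent
# with the abstract, not the paper's exact specification.
import numpy as np

def log_informative_prior(beta, X, y0, a0):
    """Unnormalized log prior density for the coefficient vector beta.

    beta : (p,) coefficient vector
    X    : (n, p) design matrix used in the elicitation
    y0   : (n,) prior prediction for the response (values in [0, 1])
    a0   : scalar in (0, 1] quantifying confidence in y0
    """
    eta = X @ beta                                # linear predictor
    # Bernoulli log-likelihood of the prior prediction under the logistic
    # model, written in a numerically stable form: y0*eta - log(1 + exp(eta))
    loglik_y0 = y0 * eta - np.logaddexp(0.0, eta)
    return a0 * np.sum(loglik_y0)

# Hypothetical example: intercept plus two covariates, mildly informative guess
X = np.column_stack([np.ones(4), [0.2, 1.0, -0.5, 1.5], [1.0, 0.0, 0.5, -1.0]])
y0 = np.array([0.7, 0.9, 0.3, 0.8])              # prior predicted success probabilities
print(log_informative_prior(np.zeros(3), X, y0, a0=0.1))
```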


Citations
Journal Article

Global land cover mapping from MODIS: algorithms and early results

TL;DR: This product provides maps of global land cover at 1-km spatial resolution using several classification systems, principally that of the IGBP; a supervised classification methodology exploits a global database of training sites interpreted from high-resolution imagery in association with ancillary data.
Journal Article

Statistical Methods for Eliciting Probability Distributions

TL;DR: Elicitation is a key task for subjectivist Bayesians; as discussed in this paper, it brings statisticians closer to their clients and subject-matter expert colleagues.
Journal Article

Power prior distributions for regression models

TL;DR: In this paper, the authors propose a general class of prior distributions for arbitrary regression models, called power prior distributions, which are based on the idea of raising the likelihood function of the historical data to the power a0, where 0 < a0 < 1.
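In the usual notation (an assumption of standard notation, not a quotation from the paper), with D0 the historical data, pi_0 an initial prior, and L the likelihood, the power prior takes the form

```latex
\pi(\theta \mid D_0, a_0) \;\propto\; L(\theta \mid D_0)^{a_0}\, \pi_0(\theta),
\qquad 0 < a_0 < 1 .
```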
Journal Article

Missing-data methods for generalized linear models: A comparative review

TL;DR: This work examines data that are missing at random and nonignorable missing data, and compares four common approaches for inference in generalized linear models with missing covariate data: maximum likelihood (ML), multiple imputation (MI), fully Bayesian (FB), and weighted estimating equations (WEEs).
Book Chapter

The Practical Implementation of Bayesian Model Selection

TL;DR: This article illustrates some of the fundamental practical issues that arise for two different model selection problems: the variable selection problem for the linear model and the CART model selection problem.
References
Journal Article

Estimating the Dimension of a Model

TL;DR: In this paper, the problem of selecting one of a number of models of different dimensions is treated by finding its Bayes solution, and evaluating the leading terms of its asymptotic expansion.
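The criterion that this asymptotic expansion leads to is the one now commonly written, for a model with k free parameters, maximized likelihood \hat{L}, and sample size n, as (standard form, stated here for reference):

```latex
\mathrm{BIC} = -2 \log \hat{L} + k \log n .
```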

Proceedings Article

Information Theory and an Extension of the Maximum Likelihood Principle

H. Akaike
TL;DR: The classical maximum likelihood principle can be considered a method of asymptotic realization of an optimum estimate with respect to a very general information-theoretic criterion, which provides answers to many practical problems of statistical model fitting.
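The criterion arising from this principle is usually stated, for k estimated parameters and maximized likelihood \hat{L}, as (standard form, for reference):

```latex
\mathrm{AIC} = -2 \log \hat{L} + 2k .
```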
Book

Density estimation for statistics and data analysis

TL;DR: Topics include the kernel method for multivariate data, three other important methods, and density estimation in action.
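For reference, the kernel density estimator at the heart of the book takes the standard form (kernel K, bandwidth h, observations x_1, ..., x_n):

```latex
\hat{f}_h(x) = \frac{1}{n h} \sum_{i=1}^{n} K\!\left( \frac{x - x_i}{h} \right).
```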
Book Chapter

Information Theory and an Extension of the Maximum Likelihood Principle

TL;DR: In this paper, it is shown that the classical maximum likelihood principle can be considered to be a method of asymptotic realization of an optimum estimate with respect to a very general information theoretic criterion.