Open Access · Journal Article · DOI

Bayesian group bridge composite quantile regression

TLDR
In this article, a Bayesian regularized composite quantile regression (CQR) method with a group bridge penalty is adopted to perform covariate selection and estimation in CQR.
Abstract
A Bayesian regularized composite quantile regression (CQR) method with a group bridge penalty is adopted to conduct covariate selection and estimation in CQR. An MCMC algorithm for posterior inference is developed by exploiting the scale mixture of normals representation of the asymmetric Laplace distribution (ALD). The algorithm places priors on the regression coefficients that are scale mixtures of multivariate uniform distributions with a particular Gamma distribution as the mixing distribution. Simulation results and real-data analyses show that the proposed MCMC sampler has excellent mixing properties and outperforms existing approaches in prediction accuracy and model selection.
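The scale mixture of normals representation of the ALD is what makes Gibbs-style sampling tractable in Bayesian quantile regression: an ALD variate can be written as a normal variable whose mean and variance both depend on an exponential mixing variable. The sketch below (an illustrative simulation, not the paper's actual sampler; quantile level, location, and scale are arbitrary choices) verifies the standard representation y = mu + sigma*(theta*z + sqrt(tau2*z)*u) with z ~ Exp(1) and u ~ N(0,1), whose p-th quantile equals mu.

```python
import numpy as np

rng = np.random.default_rng(0)
p = 0.25              # quantile level (assumed for illustration)
mu, sigma = 1.0, 1.0  # location and scale (assumed for illustration)

# Standard mixture constants for ALD(mu, sigma, p)
theta = (1 - 2 * p) / (p * (1 - p))
tau2 = 2.0 / (p * (1 - p))

# Draw ALD samples via the normal-exponential mixture
n = 200_000
z = rng.exponential(1.0, n)        # exponential mixing variable
u = rng.standard_normal(n)         # standard normal variable
y = mu + sigma * (theta * z + np.sqrt(tau2 * z) * u)

# The p-th empirical quantile of the draws should recover mu
print(np.quantile(y, p))
```

Conditioning on z turns the quantile-regression likelihood into a Gaussian one, which is the device that allows conjugate-style updates inside the MCMC sampler.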



References
Journal Article · DOI

Regression Shrinkage and Selection via the Lasso

TL;DR: A new method for estimation in linear models, called the lasso, is proposed: it minimizes the residual sum of squares subject to the sum of the absolute values of the coefficients being less than a constant.
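The constraint on the sum of absolute coefficient values shrinks some coefficients exactly to zero, which is why the lasso performs variable selection. A minimal sketch with scikit-learn (the data, noise level, and `alpha` value are arbitrary choices for illustration; scikit-learn solves the equivalent penalized form rather than the constrained form):

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
n, d = 200, 10
X = rng.standard_normal((n, d))

# Only the first three covariates are truly active
beta = np.zeros(d)
beta[:3] = [3.0, -2.0, 1.5]
y = X @ beta + 0.5 * rng.standard_normal(n)

# Penalized form: (1/2n)||y - Xb||^2 + alpha * ||b||_1
model = Lasso(alpha=0.1).fit(X, y)
print(model.coef_)  # many entries are driven exactly to zero
```

The l1 geometry (corners of the constraint region on the coordinate axes) is what produces exact zeros, in contrast to ridge regression, which only shrinks.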

Estimating the dimension of a model

TL;DR: In this paper, the problem of selecting one of a number of models of different dimensions is treated by finding its Bayes solution, and evaluating the leading terms of its asymptotic expansion.
Proceedings Article

Information Theory and an Extension of the Maximum Likelihood Principle

H. Akaike
TL;DR: The classical maximum likelihood principle can be considered to be a method of asymptotic realization of an optimum estimate with respect to a very general information theoretic criterion to provide answers to many practical problems of statistical model fitting.
Journal Article · DOI

Variable Selection via Nonconcave Penalized Likelihood and its Oracle Properties

TL;DR: In this article, penalized likelihood approaches are proposed to handle variable selection problems, and it is shown that the newly proposed estimators possess the oracle property in variable selection; namely, they perform as well as if the correct submodel were known.
Journal Article · DOI

Model selection and estimation in regression with grouped variables

TL;DR: In this paper, instead of selecting factors by stepwise backward elimination, the authors focus on the accuracy of estimation and consider extensions of the lasso, the LARS algorithm and the non-negative garrotte for factor selection.
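Grouped penalties such as the group lasso (and the group bridge penalty of the present paper) select or discard whole blocks of coefficients at once. The key operation is block soft-thresholding, the proximal operator of the group-lasso penalty; the sketch below (an illustrative standalone demo with made-up coefficients and groups, not the paper's Bayesian procedure) shows how a group whose joint norm falls below the threshold is zeroed out entirely.

```python
import numpy as np

def group_soft_threshold(beta, groups, lam):
    """Proximal operator of lam * sum_g ||beta_g||_2:
    shrink each group's norm by lam, zeroing the whole
    group when its norm is below lam."""
    out = np.zeros_like(beta)
    for g in groups:
        norm = np.linalg.norm(beta[g])
        if norm > lam:
            out[g] = (1.0 - lam / norm) * beta[g]
    return out

beta = np.array([3.0, 4.0, 0.1, -0.1])
groups = [np.array([0, 1]), np.array([2, 3])]

# Group (0,1) has norm 5 and is shrunk; group (2,3) has
# norm ~0.14 < lam and is removed as a block.
print(group_soft_threshold(beta, groups, 1.0))  # [2.4, 3.2, 0.0, 0.0]
```

This all-or-nothing behavior at the group level is what makes grouped penalties suitable for factors encoded by several dummy variables or other naturally grouped covariates.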