Open Access Journal Article

Tuning parameter selection in high dimensional penalized likelihood

TLDR
In this article, the authors propose to select the tuning parameter by optimizing the generalized information criterion with an appropriate model complexity penalty, which diverges at the rate of some power of log(p) depending on the tail probability behaviour of the response variables.
Abstract
Summary. Determining how to select the tuning parameter appropriately is essential in penalized likelihood methods for high dimensional data analysis. We examine this problem in the setting of penalized likelihood methods for generalized linear models, where the dimensionality of covariates p is allowed to increase exponentially with the sample size n. We propose to select the tuning parameter by optimizing the generalized information criterion with an appropriate model complexity penalty. To ensure that we consistently identify the true model, a range for the model complexity penalty is identified in the generalized information criterion. We find that this model complexity penalty should diverge at the rate of some power of log(p) depending on the tail probability behaviour of the response variables. This reveals that using the Akaike information criterion or Bayes information criterion to select the tuning parameter may not be adequate for consistently identifying the true model. On the basis of our theoretical study, we propose a uniform choice of the model complexity penalty and show that the approach proposed consistently identifies the true model among candidate models with asymptotic probability 1. We justify the performance of the proposed procedure by numerical simulations and a gene expression data analysis.
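To make the selection rule concrete, here is a minimal sketch (Python with scikit-learn; not the authors' implementation) of GIC-based tuning for an l1-penalized logistic regression. It scores each candidate tuning parameter by GIC(lambda) = {deviance(lambda) + a_n * df(lambda)} / n, using a uniform complexity penalty a_n = log(log n) * log(p) of the kind the paper advocates; the simulated data and the grid of penalty strengths are illustrative assumptions.

    # Minimal sketch of GIC-based tuning parameter selection for an
    # l1-penalized logistic regression (illustrative, not the authors' code).
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n, p = 200, 500                                   # p may exceed n
    X = rng.standard_normal((n, p))
    beta = np.zeros(p); beta[:3] = [2.0, -1.5, 1.0]   # sparse true model
    y = rng.binomial(1, 1 / (1 + np.exp(-X @ beta)))

    a_n = np.log(np.log(n)) * np.log(p)   # model complexity penalty
    best_gic, best_C = np.inf, None
    for C in np.logspace(-2, 1, 30):      # C is the inverse penalty strength
        fit = LogisticRegression(penalty="l1", solver="liblinear", C=C).fit(X, y)
        mu = fit.predict_proba(X)[:, 1].clip(1e-12, 1 - 1e-12)
        deviance = -2 * np.sum(y * np.log(mu) + (1 - y) * np.log(1 - mu))
        df = np.count_nonzero(fit.coef_)              # size of selected model
        gic = (deviance + a_n * df) / n               # GIC_{a_n}(lambda)
        if gic < best_gic:
            best_gic, best_C = gic, C
    print("selected C (inverse tuning parameter):", best_C)

Setting a_n = 2 or a_n = log(n) in this sketch recovers AIC- and BIC-type selectors, which is exactly the comparison the abstract cautions about when log(p) grows with n.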


Citations
Journal Article

From big data analysis to personalized medicine for all: challenges and opportunities

TL;DR: This review provides an update on important developments in the analysis of big data and outlines strategies to accelerate the global transition to personalized medicine.
Journal Article

Variable selection in regression with compositional covariates

TL;DR: An l1 regularization method for the linear log-contrast model that respects the unique features of compositional data is proposed and its usefulness is illustrated by an application to a microbiome study relating human body mass index to gut microbiome composition.
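As background on that model: the l1-penalized log-contrast regression is a lasso with a zero-sum constraint on the coefficients and can be handed to a generic convex solver. A minimal sketch (assuming cvxpy; the simulated data and penalty level lam are illustrative):

    # Minimal sketch of the l1-penalized log-contrast model for compositional
    # covariates: minimize ||y - Z b||^2 / (2n) + lam * ||b||_1  s.t.  sum(b) = 0,
    # where Z = log(X) and each row of X is a composition. Illustrative only.
    import numpy as np
    import cvxpy as cp

    rng = np.random.default_rng(1)
    n, p, lam = 100, 30, 0.1
    X = rng.dirichlet(np.ones(p), size=n)             # rows sum to 1
    Z = np.log(X)
    b_true = np.zeros(p); b_true[[0, 1]] = [1.0, -1.0]  # respects sum-to-zero
    y = Z @ b_true + 0.1 * rng.standard_normal(n)

    b = cp.Variable(p)
    obj = cp.Minimize(cp.sum_squares(y - Z @ b) / (2 * n) + lam * cp.norm1(b))
    cp.Problem(obj, [cp.sum(b) == 0]).solve()  # zero-sum constraint of the model
    print("selected components:", np.flatnonzero(np.abs(b.value) > 1e-6))

The zero-sum constraint is what makes the fit invariant to the arbitrary scale of compositional measurements, which is the "unique feature" the TL;DR refers to.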
Repository

Forecasting: theory and practice

Fotios Petropoulos, +84 more
04 Dec 2020
TL;DR: A non-systematic review of the theory and the practice of forecasting, offering a wide range of theoretical, state-of-the-art models, methods, principles, and approaches to prepare, produce, organise, and evaluate forecasts.
Journal Article

Forecasting: theory and practice

TL;DR: In this paper, the authors provide an overview of a wide range of theoretical, state-of-the-art models, methods, principles, and approaches to prepare, produce, organize, and evaluate forecasts.
Journal Article

Model Selection for High-Dimensional Quadratic Regression via Regularization

TL;DR: Wang et al. propose two-stage regularization methods for model selection in high-dimensional quadratic regression (QR) models that maintain the hierarchical model structure between main effects and interaction effects.
References
Journal Article

One-step Sparse Estimates in Nonconcave Penalized Likelihood Models.

TL;DR: A new unified algorithm based on the local linear approximation (LLA) is proposed for maximizing the penalized likelihood for a broad class of concave penalty functions, and it is shown that if the regularization parameter is appropriately chosen, the one-step LLA estimates enjoy the oracle properties given good initial estimators.
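The local linear approximation replaces the concave penalty by its tangent line at an initial estimate, so one LLA step reduces to a single weighted lasso with weights given by the penalty derivative. A minimal sketch for the linear model with a SCAD penalty (assuming cvxpy; the SCAD constant a = 3.7 and the simulated data are illustrative assumptions):

    # Minimal sketch of one-step LLA: fit an initial lasso, linearize the SCAD
    # penalty at it, and solve one weighted lasso. Illustrative only.
    import numpy as np
    import cvxpy as cp

    def scad_deriv(t, lam, a=3.7):
        # p'_lam(t) = lam * [ I(t <= lam) + (a*lam - t)_+ / ((a-1)*lam) * I(t > lam) ]
        return lam * np.where(t <= lam, 1.0,
                              np.maximum(a * lam - t, 0) / ((a - 1) * lam))

    rng = np.random.default_rng(2)
    n, p, lam = 100, 50, 0.2
    X = rng.standard_normal((n, p))
    beta = np.zeros(p); beta[:3] = [3.0, -2.0, 1.5]
    y = X @ beta + rng.standard_normal(n)

    def weighted_lasso(w):
        b = cp.Variable(p)
        obj = cp.Minimize(cp.sum_squares(y - X @ b) / (2 * n)
                          + cp.sum(cp.multiply(w, cp.abs(b))))
        cp.Problem(obj).solve()
        return b.value

    b0 = weighted_lasso(np.full(p, lam))              # plain lasso as initializer
    b1 = weighted_lasso(scad_deriv(np.abs(b0), lam))  # one LLA step
    print("selected:", np.flatnonzero(np.abs(b1) > 1e-6))

Note how large initial coefficients receive zero weight after one step, which is how the SCAD penalty removes the lasso's bias on strong signals.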
Journal Article

Risk Reduction in Large Portfolios: Why Imposing the Wrong Constraints Helps

TL;DR: In this paper, the authors explain why constraining portfolio weights to be nonnegative can reduce the risk in estimated optimal portfolios even when the constraints are wrong, and they reconcile this apparent contradiction.
Journal Article

A Selective Overview of Variable Selection in High Dimensional Feature Space.

TL;DR: In this paper, a brief account of the recent developments of theory, methods, and implementations for high-dimensional variable selection is presented, with emphasis on independence screening and two-scale methods.
Journal Article

Tuning parameter selectors for the smoothly clipped absolute deviation method.

TL;DR: This work shows that the commonly used generalised cross-validation cannot select the tuning parameter satisfactorily, with a nonignorable overfitting effect in the resulting model, and proposes a BIC tuning parameter selector, which is shown to identify the true model consistently.
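For the linear model, the BIC-type selector referred to here is typically written as below (a sketch of the standard form; the notation is assumed, with SSE_lambda the residual sum of squares of the fit at tuning parameter lambda and |S_lambda| the number of selected covariates):

    \mathrm{BIC}(\lambda) = \log\!\left(\frac{\mathrm{SSE}_\lambda}{n}\right)
        + |S_\lambda|\,\frac{\log n}{n},
    \qquad
    \hat{\lambda} = \arg\min_{\lambda} \mathrm{BIC}(\lambda).

In the GIC notation of the main paper this corresponds to a complexity penalty a_n = log n, which the abstract above argues can be inadequate when log(p) grows with the sample size.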
Journal Article

The sparsity and bias of the Lasso selection in high-dimensional linear regression

TL;DR: This article shows that the lasso selects a model of the correct order of dimensionality, controls the bias of the selected model at a level determined by the contributions of small regression coefficients and threshold bias, and selects all coefficients of greater order than the bias.