Open Access · Journal ArticleDOI

Penalized Composite Quasi-Likelihood for Ultrahigh-Dimensional Variable Selection

TLDR
A data-driven weighted linear combination of convex loss functions, together with a weighted L1-penalty, is proposed, and a strong oracle property of the method is established, showing both model selection consistency and estimation efficiency for the true non-zero coefficients.
Abstract
In high-dimensional model selection problems, penalized least-squares approaches have been extensively used. This paper addresses the question of both robustness and efficiency of penalized model selection methods, and proposes a data-driven weighted linear combination of convex loss functions, together with a weighted L1-penalty. It is completely data-adaptive and does not require prior knowledge of the error distribution. The weighted L1-penalty is used both to ensure the convexity of the penalty term and to ameliorate the bias caused by the L1-penalty. In the setting with dimensionality much larger than the sample size, we establish a strong oracle property of the proposed method that possesses both the model selection consistency and estimation efficiency for the true non-zero coefficients. As specific examples, we introduce a robust composite L1-L2 method and an optimal composite quantile method, and evaluate their performance in both simulated and real data examples.
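The composite objective described in the abstract can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the function names are invented here, and the loss weights and penalty weights are assumed to be given (in the paper they are chosen in a data-driven way).

```python
import numpy as np

def quantile_loss(r, tau):
    """Check loss rho_tau(r) = r * (tau - 1{r < 0}) for quantile level tau."""
    return r * (tau - (r < 0).astype(float))

def composite_objective(beta, X, y, taus, w_loss, w_pen, lam):
    """Weighted combination of quantile losses plus a weighted L1 penalty.

    taus   : quantile levels entering the composite loss
    w_loss : weights on each loss component (data-driven in the paper;
             assumed given here)
    w_pen  : per-coefficient weights in the weighted L1 penalty
    lam    : overall regularization strength
    """
    r = y - X @ beta
    loss = sum(w * quantile_loss(r, tau).mean() for w, tau in zip(w_loss, taus))
    return loss + lam * np.sum(w_pen * np.abs(beta))
```

With a single quantile level tau = 0.5 and unit weights, the loss term reduces to half the mean absolute deviation, recovering an L1-regression flavor as one component of the composite family.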


Citations
Journal ArticleDOI

The nonparametric Box–Cox model for high-dimensional regression analysis

H. Zhou, +1 more
TL;DR: In this article, the authors propose a new high-dimensional regression method based on a nonparametric Box–Cox model with an unspecified monotone transformation function, and a two-step method for estimating this model in high-dimensional settings.
Dissertation

Copula-based spatio-temporal modelling for count data

Pu Xue Qiao
TL;DR: In this article, a Gaussian copula regression model (copSTM) is proposed for the analysis of multivariate spatio-temporal data on a lattice, where temporal effects are modelled through the conditional marginal expectations of the response variables using an observation-driven time series model.

Estimating false discovery proportion under covariance dependence

Weijie Gu
TL;DR: This work derives the theoretical distribution of the false discovery proportion (FDP) in large-scale multiple testing when a common threshold is used, provides a consistent FDP estimate, and proposes a factor-adjusted procedure, shown in simulation studies to be more powerful than the fixed-threshold procedure.

Asymptotic Uncertainty of False Discovery Proportion (Jul 2022)

Meng Mei, +1 more
TL;DR: In this article, the authors derive the asymptotic expansion of the FDP under mild regularity conditions and examine, both theoretically and numerically, how the variance of the FDP varies under different dependence structures.
Journal ArticleDOI

Automatic bias correction for testing in high‐dimensional linear models

TL;DR: In this paper, a robust approximate message passing algorithm is used to cope with non-Gaussian regression errors; the proposed framework enjoys an automatically built-in bias correction and applies to general convex nondifferentiable loss functions, which also allows inference when the focus is a conditional quantile rather than the mean of the response.
References
Journal ArticleDOI

Regression Shrinkage and Selection via the Lasso

TL;DR: A new method for estimation in linear models called the lasso, which minimizes the residual sum of squares subject to the sum of the absolute value of the coefficients being less than a constant, is proposed.
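The lasso objective summarized above is usually solved by cyclic coordinate descent with soft-thresholding. The sketch below is an illustrative assumption about one standard formulation, (1/2n)·||y − Xb||² + λ·||b||₁, not code from any of the cited papers:

```python
import numpy as np

def soft_threshold(z, g):
    """Soft-thresholding operator: shrink z toward zero by g."""
    return np.sign(z) * np.maximum(np.abs(z) - g, 0.0)

def lasso_cd(X, y, lam, n_iter=200):
    """Cyclic coordinate descent for (1/2n)||y - X b||^2 + lam * ||b||_1."""
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n
    for _ in range(n_iter):
        for j in range(p):
            # Partial residual with coordinate j removed from the fit.
            r_j = y - X @ b + X[:, j] * b[j]
            z = X[:, j] @ r_j / n
            b[j] = soft_threshold(z, lam) / col_sq[j]
    return b
```

The soft-thresholding step is what sets small coefficients exactly to zero, which is the mechanism behind the lasso's variable selection.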
Journal ArticleDOI

Variable Selection via Nonconcave Penalized Likelihood and its Oracle Properties

TL;DR: In this article, penalized likelihood approaches are proposed to handle variable selection problems, and it is shown that the newly proposed estimators perform as well in variable selection as the oracle procedure; namely, they perform as well as if the correct submodel were known.
Journal ArticleDOI

The adaptive lasso and its oracle properties

TL;DR: A new version of the lasso is proposed, called the adaptive lasso, where adaptive weights are used for penalizing different coefficients in the ℓ1 penalty, and the nonnegative garotte is shown to be consistent for variable selection.
Journal ArticleDOI

Robust Estimation of a Location Parameter

TL;DR: In this article, a new approach toward a theory of robust estimation is presented, treating in detail the asymptotic theory of estimating a location parameter for contaminated normal distributions, and exhibiting estimators that are asymptotically most robust (in a sense to be specified) among all translation-invariant estimators.
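Huber's robust loss, central to the reference above and to composite L1-L2 style methods, is quadratic for small residuals and linear for large ones. A minimal sketch (the tuning constant c = 1.345, giving roughly 95% efficiency at the normal model, is a conventional default assumed here, not taken from the paper):

```python
import numpy as np

def huber_loss(r, c=1.345):
    """Huber's rho: 0.5*r^2 for |r| <= c, and c*|r| - 0.5*c^2 beyond,
    so outliers contribute linearly rather than quadratically."""
    a = np.abs(r)
    return np.where(a <= c, 0.5 * r ** 2, c * a - 0.5 * c ** 2)
```

The two pieces join with matching value and slope at |r| = c, which keeps the loss convex and differentiable.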
Related Papers (5)