Open Access • Journal Article • DOI

Penalized Composite Quasi-Likelihood for Ultrahigh-Dimensional Variable Selection

TLDR
A data-driven weighted linear combination of convex loss functions, together with a weighted L1-penalty, is proposed, and a strong oracle property of the proposed method is established: it possesses both model selection consistency and estimation efficiency for the true non-zero coefficients.
Abstract
In high-dimensional model selection problems, penalized least-squares approaches have been extensively used. This paper addresses the question of both robustness and efficiency of penalized model selection methods, and proposes a data-driven weighted linear combination of convex loss functions, together with a weighted L1-penalty. It is completely data-adaptive and does not require prior knowledge of the error distribution. The weighted L1-penalty is used both to ensure the convexity of the penalty term and to ameliorate the bias caused by the L1-penalty. In the setting with dimensionality much larger than the sample size, we establish a strong oracle property of the proposed method, which possesses both model selection consistency and estimation efficiency for the true non-zero coefficients. As specific examples, we introduce a robust composite L1-L2 method and an optimal composite quantile method, and evaluate their performance in both simulated and real data examples.
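For concreteness, the estimator described above can be sketched in generic notation (which may differ from the paper's exact formulation) as

$$\hat{\beta} = \arg\min_{\beta} \; \sum_{k=1}^{K} w_k \sum_{i=1}^{n} \rho_k\left(y_i - x_i^{\top}\beta\right) + \sum_{j=1}^{p} \lambda_j |\beta_j|,$$

where $\rho_1, \dots, \rho_K$ are convex loss functions (for example, check losses at several quantile levels, or L1 and L2 losses), $w_k$ are data-driven combination weights, and $\lambda_j$ are coefficient-specific penalty weights chosen to keep the penalty convex while reducing the bias of a plain L1-penalty.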



Citations
Journal Article

A Selective Overview of Variable Selection in High Dimensional Feature Space.

TL;DR: In this paper, a brief account of recent developments in the theory, methods, and implementation of high-dimensional variable selection is presented, with emphasis on independence screening and two-scale methods.
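As a rough illustration of the independence-screening step highlighted in this overview, the sketch below (hypothetical code, not taken from the paper) ranks features by their absolute marginal correlation with the response and keeps the top d before any refined penalized fit:

    import numpy as np

    def independence_screening(X, y, d):
        """Keep the indices of the d features most correlated with y.

        Generic sketch of (sure) independence screening: rank features by
        absolute marginal correlation, then pass the survivors to a
        lower-dimensional penalized regression.
        """
        # Standardize columns so correlations are comparable across features.
        Xc = (X - X.mean(axis=0)) / X.std(axis=0)
        yc = (y - y.mean()) / y.std()
        # Absolute marginal correlation of each feature with the response.
        corr = np.abs(Xc.T @ yc) / len(y)
        # Indices of the d features with the largest correlation.
        return np.sort(np.argsort(corr)[::-1][:d])

    # Example usage on synthetic data (n=200 samples, p=5000 features assumed).
    rng = np.random.default_rng(0)
    X = rng.standard_normal((200, 5000))
    beta = np.zeros(5000)
    beta[:3] = [3.0, -2.0, 1.5]
    y = X @ beta + rng.standard_normal(200)
    print(independence_screening(X, y, d=20))  # should contain 0, 1, 2

In a two-scale procedure, the screened set would then be refit with a penalized method such as the composite quasi-likelihood above.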
Journal Article • DOI

New efficient estimation and variable selection methods for semiparametric varying-coefficient partially linear models

TL;DR: This work proposes adaptive penalization methods for variable selection in the semiparametric varying-coefficient partially linear model and proves that the methods possess the oracle property.
Journal Article • DOI

Sparse High-Dimensional Models in Economics

TL;DR: This paper reviews the literature on sparse high-dimensional models and discusses applications in economics and finance, with emphasis on variable selection methods that have proved effective in high-dimensional sparse modeling.
Journal Article • DOI

Estimating False Discovery Proportion Under Arbitrary Covariance Dependence

TL;DR: In this article, a principal factor approximation (PFA)-based method is proposed for false discovery control in large-scale multiple hypothesis testing when a common threshold is used, and a consistent estimate of the realized FDP is provided.
Posted Content

Estimating False Discovery Proportion Under Arbitrary Covariance Dependence

TL;DR: An approximate expression for the false discovery proportion (FDP) in large-scale multiple testing with a common threshold is derived, together with a consistent estimate of the realized FDP, which has important applications in controlling the false discovery rate and the FDP.
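For context on both versions of this work, the false discovery proportion at a given rejection threshold is defined from the number of false rejections $V$ and the total number of rejections $R$ as

$$\mathrm{FDP} = \frac{V}{\max(R, 1)}, \qquad \mathrm{FDR} = \mathbb{E}[\mathrm{FDP}],$$

so an approximate expression for, and a consistent estimate of, the realized FDP under arbitrary covariance dependence directly supports control of both quantities.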
References
Journal Article • DOI

Asymptotic Behavior of $M$-Estimators of $p$ Regression Parameters when $p^2/n$ is Large. I. Consistency

TL;DR: In this article, M-estimators of the p regression parameters in a linear model are studied when p grows with the sample size n, and growth conditions on p relative to n are given under which the estimators remain consistent.
Journal Article • DOI

Asymptotic Behavior of $M$ Estimators of $p$ Regression Parameters when $p^2 / n$ is Large; II. Normal Approximation

TL;DR: In this article, a uniform normal approximation to the distribution of the M-estimator of β is given, under which arbitrary linear combinations of the estimator are asymptotically normal (when appropriately normalized).
Journal Article • DOI

On the non-negative garrotte estimator

TL;DR: In this article, it is shown that the non-negative garrotte estimator can be used with initial estimators other than ordinary least squares, such as the lasso, the elastic net, and ridge regression, in addition to its original least-squares form.
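In its original least-squares form (sketched here for context; the cited paper studies other initial estimators), the non-negative garrotte shrinks an initial estimate $\tilde{\beta}$ by component-wise non-negative factors:

$$\hat{c} = \arg\min_{c_j \ge 0} \; \sum_{i=1}^{n}\Big(y_i - \sum_{j=1}^{p} c_j \tilde{\beta}_j x_{ij}\Big)^2 + \lambda \sum_{j=1}^{p} c_j, \qquad \hat{\beta}_j = \hat{c}_j \tilde{\beta}_j,$$

so that coefficients whose factors are shrunk exactly to zero drop out of the model.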
Journal Article • DOI

L1-Norm Quantile Regression

TL;DR: In this article, the lasso-regularized quantile regression (L1-norm QR) model is proposed, which uses the sum of the absolute values of the coefficients as the penalty.
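Written in generic notation, the objective behind such L1-norm quantile regression couples the check loss at a quantile level $\tau$ with a lasso penalty:

$$\min_{\beta} \; \sum_{i=1}^{n} \rho_\tau\left(y_i - x_i^{\top}\beta\right) + \lambda \sum_{j=1}^{p} |\beta_j|, \qquad \rho_\tau(u) = u\left(\tau - \mathbf{1}\{u < 0\}\right),$$

which is also the building block combined across several quantile levels in the composite quantile method above.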
Journal Article

Nonlinear Models Using Dirichlet Process Mixtures

TL;DR: In this article, Dirichlet process mixtures are used to model the joint distribution of the response variable, y, and the covariates, x, nonparametrically.