Open Access Journal Article (DOI)

New efficient estimation and variable selection methods for semiparametric varying-coefficient partially linear models

TL;DR: This work proposes adaptive penalization methods for variable selection in the semiparametric varying-coefficient partially linear model and proves that the methods possess the oracle property.
Abstract
The complexity of semiparametric models poses new challenges to statistical inference and model selection that frequently arise from real applications. In this work, we propose new estimation and variable selection procedures for the semiparametric varying-coefficient partially linear model. We first study quantile regression estimates for the nonparametric varying-coefficient functions and the parametric regression coefficients. To achieve nice efficiency properties, we further develop a semiparametric composite quantile regression procedure. We establish the asymptotic normality of the proposed estimators for both the parametric and nonparametric parts and show that the estimators achieve the best convergence rate. Moreover, we show that the proposed method is much more efficient than the least-squares-based method for many non-normal errors and that it loses only a small amount of efficiency for normal errors. In addition, it is shown that the loss in efficiency is at most 11.1% for estimating varying coefficient functions and is no greater than 13.6% for estimating parametric components. To achieve sparsity with high-dimensional covariates, we propose adaptive penalization methods for variable selection in the semiparametric varying-coefficient partially linear model and prove that the methods possess the oracle property. Extensive Monte Carlo simulation studies are conducted to examine the finite-sample performance of the proposed procedures. Finally, we apply the new methods to analyze the plasma beta-carotene level data.
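The composite quantile regression (CQR) idea at the core of the abstract can be illustrated on a plain linear model: one slope vector is estimated by pooling check-loss objectives across several quantile levels. Below is a minimal Python sketch under that simplification; the function name `cqr_fit`, the derivative-free optimizer, and the equally spaced levels tau_k = k/(K+1) are illustrative assumptions, not the paper's implementation (which handles the varying-coefficient and partially linear structure via local polynomial smoothing).

```python
import numpy as np
from scipy.optimize import minimize

def check_loss(u, tau):
    # Quantile check loss: rho_tau(u) = u * (tau - I(u < 0)).
    return u * (tau - (u < 0))

def cqr_fit(X, y, K=9):
    """Composite quantile regression for a linear model (minimal sketch).

    Minimizes sum_k sum_i rho_{tau_k}(y_i - b_k - x_i' beta) over K
    level-specific intercepts b_k and one slope vector beta shared
    across the quantile levels tau_k = k / (K + 1).
    """
    n, p = X.shape
    taus = np.arange(1, K + 1) / (K + 1)

    def objective(theta):
        b, beta = theta[:K], theta[K:]
        resid = y[None, :] - b[:, None] - (X @ beta)[None, :]
        return sum(check_loss(resid[k], taus[k]).sum() for k in range(K))

    # Powell is derivative-free, so the non-smooth check loss poses no
    # problem for this small demo; serious implementations use linear
    # programming or specialized solvers instead.
    res = minimize(objective, np.zeros(K + p), method="Powell")
    return res.x[K:]  # the shared slope estimate

# Toy usage: heavy-tailed noise, where CQR beats least squares.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = X @ np.array([1.5, -2.0]) + rng.standard_t(df=2, size=200)
print(cqr_fit(X, y))
```

Pooling the K check losses is what buys the efficiency gain over any single-quantile fit when the errors are non-normal.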


Citations
Journal Article (DOI)

Quantile Regression for Analyzing Heterogeneity in Ultra-High Dimension

TL;DR: A novel sufficient optimality condition, relying on a convex differencing representation of the penalized loss function and subdifferential calculus, is introduced; it enables the oracle property for sparse quantile regression in the ultra-high dimension under relaxed conditions.
Journal Article (DOI)

Robust Variable Selection with Exponential Squared Loss.

TL;DR: This article proposes a class of penalized robust regression estimators based on exponential squared loss that can achieve the highest asymptotic breakdown point of 1/2, and shows that their influence functions are bounded with respect to outliers in either the response or the covariate domain.
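As a rough illustration of why this loss is robust, here is a small Python sketch; the tuning constant `gamma`, the BFGS fit, and the least-squares warm start are placeholder choices and not the article's estimator (which adds a penalty for variable selection and a data-driven choice of gamma).

```python
import numpy as np
from scipy.optimize import minimize

def exp_squared_loss(r, gamma=1.0):
    # Exponential squared loss: 1 - exp(-r^2 / gamma). Each residual
    # contributes at most 1, so gross outliers cannot dominate the fit;
    # gamma trades robustness (small) against efficiency (large).
    return 1.0 - np.exp(-r**2 / gamma)

def robust_linear_fit(X, y, gamma=1.0):
    """Unpenalized exponential-squared-loss regression (minimal sketch)."""
    def objective(beta):
        return exp_squared_loss(y - X @ beta, gamma).sum()
    # The objective is non-convex, so a reasonable warm start matters.
    beta0 = np.linalg.lstsq(X, y, rcond=None)[0]
    return minimize(objective, beta0, method="BFGS").x

# Toy usage: a few gross outliers barely move the robust fit.
rng = np.random.default_rng(1)
X = np.column_stack([np.ones(100), rng.normal(size=100)])
y = X @ np.array([0.5, 2.0]) + rng.normal(scale=0.3, size=100)
y[:5] += 10.0  # contaminate the response
print(robust_linear_fit(X, y, gamma=1.0))
```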
Journal Article (DOI)

Varying Coefficient Regression Models: A Review and New Developments

TL;DR: In this article, the authors give an overview of existing methodological and theoretical developments for varying coefficient regression models and discuss their extensions with some new developments, which allow different amounts of smoothing for different component functions and accommodate a flexible form of varying coefficient model that requires smoothing across different covariates' spaces.
Journal Article (DOI)

Quantile correlations and quantile autoregressive modeling

TL;DR: In this article, the quantile autocorrelation function (QACF) and the quantile partial autocorrelation function (QPACF) are proposed to identify the autoregressive order of a model.
Posted Content

Quantile correlations and quantile autoregressive modeling

TL;DR: In this paper, the quantile autocorrelation function (QACF) and the quantile partial autocorrelation function (QPACF) are proposed to estimate the autoregressive order of quantile models.
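For concreteness, a sample quantile autocorrelation can be computed along the following lines. The normalization below, using psi_tau(w) = tau - I(w < 0) with Var(psi_tau) = tau(1 - tau), is one common form and is stated here as an assumption rather than quoted from these two papers.

```python
import numpy as np

def qacf(y, tau=0.5, max_lag=10):
    """Sample quantile autocorrelation function (sketch of one common form).

    For lag k, correlates psi_tau(y_t - Q_tau) with the lagged level
    y_{t-k}, where psi_tau(w) = tau - I(w < 0) and Q_tau is the sample
    tau-quantile of y; the denominator uses Var(psi_tau) = tau(1 - tau).
    """
    y = np.asarray(y, float)
    q = np.quantile(y, tau)
    out = []
    for k in range(1, max_lag + 1):
        psi = tau - (y[k:] < q)   # psi_tau(y_t - Q_tau)
        x = y[:-k]                # lagged series y_{t-k}
        num = np.mean(psi * (x - x.mean()))
        den = np.sqrt(tau * (1 - tau) * x.var())
        out.append(num / den)
    return np.array(out)

# Toy usage: an AR(1) series shows geometric decay across lags.
rng = np.random.default_rng(4)
e = rng.normal(size=500)
y = np.zeros(500)
for t in range(1, 500):
    y[t] = 0.6 * y[t - 1] + e[t]
print(np.round(qacf(y, tau=0.25, max_lag=5), 2))
```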
References
Journal Article (DOI)

Variable Selection via Nonconcave Penalized Likelihood and its Oracle Properties

TL;DR: In this article, penalized likelihood approaches are proposed to handle variable selection problems, and it is shown that the newly proposed estimators perform as well as the oracle procedure in variable selection; namely, they work as well as if the correct submodel were known.
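The nonconcave penalty central to this reference is the SCAD function, which has a closed form. The sketch below follows the standard piecewise expression with the conventional choice a = 3.7; it is only the penalty itself, not the paper's full estimation procedure.

```python
import numpy as np

def scad_penalty(t, lam, a=3.7):
    """SCAD penalty p_lambda(|t|) (Fan & Li 2001), vectorized.

    Quadratic spline: lam * |t| near zero, then a quadratic blend on
    (lam, a*lam], then the constant (a + 1) * lam^2 / 2, so large
    coefficients are not shrunk (near-unbiasedness).
    """
    t = np.abs(t)
    p1 = lam * t
    p2 = (2 * a * lam * t - t**2 - lam**2) / (2 * (a - 1))
    p3 = (a + 1) * lam**2 / 2
    return np.where(t <= lam, p1, np.where(t <= a * lam, p2, p3))

# Toy usage: small, moderate, and large coefficients.
print(scad_penalty(np.array([0.1, 1.0, 5.0]), lam=0.5))
```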
Book

Local polynomial modelling and its applications

TL;DR: Covers applications of local polynomial modelling in nonlinear time series, automatic determination of model complexity, and a framework for local polynomial regression.
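Since local polynomial smoothing is also the tool used for the varying-coefficient functions in the main paper, a minimal local linear smoother is sketched below; the Gaussian kernel and the fixed bandwidth h are illustrative assumptions.

```python
import numpy as np

def local_linear(x, y, x0, h):
    """Local linear estimate of m(x0) = E[y | x = x0] (minimal sketch).

    Fits y on (1, x - x0) by weighted least squares with Gaussian
    kernel weights of bandwidth h; the fitted intercept estimates m(x0).
    """
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)
    X = np.column_stack([np.ones_like(x), x - x0])
    Xw = X * w[:, None]
    beta = np.linalg.solve(Xw.T @ X, Xw.T @ y)
    return beta[0]

# Toy usage: recover a sine curve from noisy data.
rng = np.random.default_rng(2)
x = rng.uniform(0, 2 * np.pi, 300)
y = np.sin(x) + rng.normal(scale=0.2, size=300)
print([round(local_linear(x, y, x0, h=0.3), 2)
       for x0 in np.linspace(0.5, 5.5, 5)])
```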
Journal Article (DOI)

Asymptotics for lasso-type estimators

TL;DR: In this paper, the authors consider the asymptotic behavior of regression estimators that minimize the residual sum of squares plus a penalty proportional to sum_j |beta_j|^gamma for some gamma > 0, and show that the limiting distributions can have positive probability mass at 0 under appropriate conditions.
Journal Article (DOI)

One-step Sparse Estimates in Nonconcave Penalized Likelihood Models.

TL;DR: A new unified algorithm based on local linear approximation (LLA) is proposed for maximizing the penalized likelihood for a broad class of concave penalty functions; it is shown that, if the regularization parameter is appropriately chosen, the one-step LLA estimates enjoy the oracle properties given good initial estimators.
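The one-step LLA idea reduces a concave-penalized fit to a single weighted-L1 problem built from an initial estimate. The sketch below specializes to least squares with the SCAD derivative; the reduction to sklearn's plain Lasso via column rescaling is an implementation convenience assumed here, not the paper's stated algorithm.

```python
import numpy as np
from sklearn.linear_model import Lasso

def scad_deriv(t, lam, a=3.7):
    # Derivative of the SCAD penalty (Fan & Li 2001):
    # p'(t) = lam for t <= lam; (a*lam - t)/(a - 1) for lam < t <= a*lam;
    # 0 beyond a*lam, so large coefficients are left unpenalized.
    t = np.abs(t)
    return lam * ((t <= lam)
                  + np.maximum(a * lam - t, 0) / ((a - 1) * lam) * (t > lam))

def one_step_lla(X, y, lam, a=3.7):
    """One-step LLA for SCAD-penalized least squares (minimal sketch).

    Starting from the OLS estimate beta0, solves the weighted lasso
        min_beta ||y - X beta||^2 / (2n) + sum_j w_j |beta_j|
    with w_j = p'_lam(|beta0_j|). Columns are rescaled by 1 / w_j so a
    plain Lasso solver (alpha = 1) absorbs the per-coefficient weights.
    """
    beta0 = np.linalg.lstsq(X, y, rcond=None)[0]
    w = np.maximum(scad_deriv(beta0, lam, a), 1e-8)  # avoid divide-by-zero
    Z = X / w  # broadcast over columns
    gamma = Lasso(alpha=1.0, fit_intercept=False, max_iter=10000).fit(Z, y).coef_
    return gamma / w  # map back to the original scale

# Toy usage: sparse truth, n > p.
rng = np.random.default_rng(3)
X = rng.normal(size=(120, 8))
beta_true = np.array([3.0, 0, 0, 1.5, 0, 0, 0, 2.0])
y = X @ beta_true + rng.normal(size=120)
print(np.round(one_step_lla(X, y, lam=0.2), 2))
```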
Journal Article (DOI)

Generalized Partially Linear Single-Index Models

TL;DR: The generalized partially linear single-index model (GPLSIM), as discussed by the authors, extends the generalized linear model to the regression of a response Y on predictors (X, Z) with conditional mean function based on eta_0(X^T alpha_0) + Z^T beta_0, where eta_0(·) is an unknown function.