Journal ArticleDOI

Variable Selection With the Strong Heredity Constraint and Its Oracle Property

Nam Hee Choi, +2 more
01 Mar 2010, Vol. 105, Iss. 489, pp. 354-364
TLDR
Numerical results indicate that the proposed method tends to remove irrelevant variables more effectively and to provide better prediction performance than previous work, while automatically enforcing the heredity constraint.
Abstract
In this paper, we extend the LASSO method (Tibshirani 1996) for simultaneously fitting a regression model and identifying important interaction terms. Unlike most existing variable selection methods, our method automatically enforces the heredity constraint; that is, an interaction term can be included in the model only if the corresponding main terms are also included. Furthermore, we extend our method to generalized linear models and show that it performs as well as if the true model were given in advance, that is, the oracle property as in Fan and Li (2001) and Fan and Peng (2004). The proof of the oracle property is given in online supplemental materials. Numerical results on both simulated and real data indicate that our method tends to remove irrelevant variables more effectively and provide better prediction performance than previous work (Yuan, Joseph, and Lin 2007; Zhao, Rocha, and Yu 2009), as well as the classical LASSO method.
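The strong heredity constraint described above can be made concrete with a small check. This is an illustrative sketch only: the paper enforces heredity through the structure of its penalty, whereas the function below (a hypothetical helper, not from the paper) merely tests whether a fitted coefficient set satisfies the constraint.

```python
def satisfies_strong_heredity(main_coefs, interaction_coefs):
    """Check the strong heredity constraint on fitted coefficients.

    main_coefs: dict {j: beta_j} of main-effect coefficients.
    interaction_coefs: dict {(j, k): beta_jk} of interaction coefficients.
    Strong heredity: beta_jk != 0 requires beta_j != 0 AND beta_k != 0.
    """
    for (j, k), b in interaction_coefs.items():
        if b != 0 and (main_coefs.get(j, 0.0) == 0 or main_coefs.get(k, 0.0) == 0):
            return False
    return True

# Interaction (1, 2) is active but main effect 2 is zero, so the
# constraint is violated.
mains = {1: 0.8, 2: 0.0, 3: 1.5}
inters = {(1, 2): 0.4, (1, 3): 0.0}
print(satisfies_strong_heredity(mains, inters))  # False
```

A method obeying strong heredity never produces a fit for which this check returns False, which is what distinguishes it from running an ordinary LASSO over all main and interaction columns.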



Citations
Journal ArticleDOI

A lasso for hierarchical interactions

TL;DR: A precise characterization of the effect of the hierarchy constraint is given; a bound on the estimate reveals the amount of fitting "saved" by the constraint; and it is proved that hierarchy holds with probability one.
Journal ArticleDOI

Variable Selection Using Adaptive Nonlinear Interaction Structures in High Dimensions

TL;DR: This work introduces a new approach, "Variable selection using Adaptive Nonlinear Interaction Structures in High dimensions" (VANISH), which is based on a penalized least squares criterion and designed for high-dimensional nonlinear problems, and suggests that VANISH should outperform certain natural competitors when the true interaction structure is sufficiently sparse.
Journal ArticleDOI

Structured variable selection and estimation

TL;DR: This paper proposes non-negative garrote methods that naturally incorporate relationships among variables defined through effect heredity or marginality principles, and shows that the methods are easy to compute and enjoy nice theoretical properties.
Journal ArticleDOI

Interaction Screening for Ultrahigh-Dimensional Data

TL;DR: Forward-selection-based procedures called iFOR are proposed, which identify interaction effects in a greedy forward fashion while maintaining the natural hierarchical model structure; theoretically, the iFOR algorithms are proved to possess the sure screening property in ultrahigh-dimensional settings.
References
Journal ArticleDOI

Regression Shrinkage and Selection via the Lasso

TL;DR: A new method for estimation in linear models called the lasso, which minimizes the residual sum of squares subject to the sum of the absolute value of the coefficients being less than a constant, is proposed.
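The lasso's constrained least-squares problem has a closed form in one classical special case: with an orthonormal design, the lasso estimate is the soft-thresholded least-squares estimate. The sketch below illustrates only that textbook special case (the penalty level `lam` is an arbitrary illustrative choice), not the general algorithm.

```python
import numpy as np

def soft_threshold(z, lam):
    """Soft-thresholding operator: shrink toward zero by lam, truncate at zero."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

# With an orthonormal design, applying soft_threshold to the OLS estimates
# gives the lasso solution: small coefficients are set exactly to zero.
ols = np.array([2.5, -0.3, 0.0, 1.1])
print(soft_threshold(ols, 0.5))
```

The exact zeros produced by the threshold are what make the lasso a variable selection method rather than a pure shrinkage method.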
Book

Generalized Linear Models

TL;DR: In this book, a generalization of the analysis of variance is given for these models using log-likelihoods, illustrated by examples relating to four distributions: the Normal, binomial (probit analysis, etc.), Poisson (contingency tables), and gamma (variance components).
Journal ArticleDOI

Variable Selection via Nonconcave Penalized Likelihood and its Oracle Properties

TL;DR: In this article, penalized likelihood approaches are proposed to handle variable selection problems, and it is shown that the newly proposed estimators perform as well as the oracle procedure in variable selection; namely, they work as well as if the correct submodel were known.
Journal ArticleDOI

The adaptive lasso and its oracle properties

TL;DR: A new version of the lasso is proposed, called the adaptive lasso, where adaptive weights are used for penalizing different coefficients in the ℓ1 penalty, and the nonnegative garrote is shown to be consistent for variable selection.
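The adaptive weighting idea can be sketched in the same orthonormal-design setting as the plain lasso: weights w_j = 1/|beta_init_j| built from an initial estimator (e.g. OLS) penalize coefficients with small initial estimates more heavily. This is an illustrative special case only, with an arbitrary penalty level, not the paper's general procedure.

```python
import numpy as np

def adaptive_soft_threshold(ols, lam, init):
    """Adaptive-lasso solution in the orthonormal-design case.

    ols: least-squares estimates; init: initial estimates used for the
    weights; lam: overall penalty level (illustrative choice).
    """
    w = 1.0 / np.abs(init)                       # adaptive weights
    return np.sign(ols) * np.maximum(np.abs(ols) - lam * w, 0.0)

# The large coefficient is barely shrunk (small weight), while the small
# coefficient faces a large weight and is set exactly to zero.
ols = np.array([2.0, 0.2])
init = np.array([2.0, 0.2])                      # e.g. OLS as initial estimator
print(adaptive_soft_threshold(ols, 0.3, init))
```

This coefficient-specific penalization is what lets the adaptive lasso shrink large signals less than the ordinary lasso does, which underlies its oracle property.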