Open Access Journal Article (DOI)

False Discoveries Occur Early on the Lasso Path

Weijie J. Su, Małgorzata Bogdan, Emmanuel J. Candès
01 Oct 2017
Vol. 45, Iss. 5, pp. 2133-2150
TLDR
It is demonstrated that true features and null features are always interspersed on the Lasso path, and that this phenomenon occurs no matter how strong the effect sizes are.
Abstract
In regression settings where explanatory variables have very low correlations and there are relatively few effects, each of large magnitude, we expect the Lasso to find the important variables with few errors, if any. This paper shows that in a regime of linear sparsity—meaning that the fraction of variables with a nonvanishing effect tends to a constant, however small—this cannot really be the case, even when the design variables are stochastically independent. We demonstrate that true features and null features are always interspersed on the Lasso path, and that this phenomenon occurs no matter how strong the effect sizes are. We derive a sharp asymptotic trade-off between false and true positive rates or, equivalently, between measures of type I and type II errors along the Lasso path. This trade-off states that if we ever want to achieve a type II error (false negative rate) under a critical value, then anywhere on the Lasso path the type I error (false positive rate) will need to exceed a given threshold so that we can never have both errors at a low level at the same time. Our analysis uses tools from approximate message passing (AMP) theory as well as novel elements to deal with a possibly adaptive selection of the Lasso regularizing parameter.
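As a rough illustration of the trade-off described above, the sketch below (not the authors' code) simulates an independent Gaussian design with linear sparsity and strong effects, then tracks the true positive proportion (TPP) and false discovery proportion (FDP) along the Lasso path using scikit-learn's lasso_path. The dimensions n = p = 1000, the 1/sqrt(n) column scaling, the sparsity k = 200, and the effect size 50 are illustrative assumptions, not the paper's exact setup.

# Minimal simulation of the FDP/TPP trade-off along the Lasso path.
# Assumed setup: i.i.d. N(0, 1/n) Gaussian design, linear sparsity,
# strong nonzero effects; parameter choices are illustrative.
import numpy as np
from sklearn.linear_model import lasso_path

rng = np.random.default_rng(0)
n, p, k = 1000, 1000, 200           # linear sparsity: k/p tends to a constant
X = rng.normal(0.0, 1.0 / np.sqrt(n), size=(n, p))
beta = np.zeros(p)
beta[:k] = 50.0                      # strong effect sizes
y = X @ beta + rng.normal(size=n)

alphas, coefs, _ = lasso_path(X, y, n_alphas=100)
true_support = beta != 0
for alpha, b in zip(alphas, coefs.T):
    selected = b != 0
    if selected.sum() == 0:
        continue
    tpp = (selected & true_support).sum() / k                 # true positive proportion
    fdp = (selected & ~true_support).sum() / selected.sum()   # false discovery proportion
    print(f"lambda={alpha:.4f}  selected={selected.sum():4d}  TPP={tpp:.3f}  FDP={fdp:.3f}")

Even with very strong effects, null variables typically enter the path before the last true variables do, so the printed FDP stays bounded away from zero once the TPP is high, consistent with the interspersion phenomenon the abstract describes.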



Citations
Journal Article (DOI)

Sparse regression for plasma physics

TL;DR: The authors illustrate some of the important ways in which sparse regression appears in plasma physics, point out recent contributions and remaining challenges in the field, and briefly review the optimization problem and state-of-the-art solvers, especially for constrained and high-dimensional sparse regression.
Posted Content

DebiNet: Debiasing Linear Models with Nonlinear Overparameterized Neural Networks

TL;DR: This paper incorporates overparameterized neural networks into semi-parametric models to bridge the gap between inference and prediction, especially for high-dimensional linear problems.
Journal Article (DOI)

Data-based autonomously discovering method for nonlinear aerodynamic force of quasi-flat plate

TL;DR: A group sparse regression method is used to discover, from data, the nonlinear aerodynamic mapping between motion and force; the aeroelastic force function found by this method balances modeling accuracy and simplicity.
Posted Content

The False Positive Control Lasso.

TL;DR: It is shown that an existing model (the SQRT-Lasso) can be recast as a method for controlling the expected number of false positives, that a similar estimator can be used for all other generalized linear model classes, and that this approach can be fit with existing fast Lasso optimization solvers.
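For reference, the square-root Lasso mentioned here solves, in its usual formulation (stated from general knowledge rather than quoted from this paper),

\[ \hat\beta = \arg\min_{\beta} \; \|y - X\beta\|_2 + \lambda \|\beta\|_1 , \]

where the unsquared residual norm makes an appropriate choice of \(\lambda\) independent of the noise level.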

A unified view of high-dimensional bridge regression

Haolei Weng
TL;DR: A unified view of high-dimensional bridge regression is presented.
References
Journal Article (DOI)

Regression Shrinkage and Selection via the Lasso

TL;DR: A new method for estimation in linear models called the lasso is proposed, which minimizes the residual sum of squares subject to the sum of the absolute values of the coefficients being less than a constant.
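In symbols, the lasso estimate solves the constrained problem

\[ \hat\beta = \arg\min_{\beta} \; \|y - X\beta\|_2^2 \quad \text{subject to} \quad \sum_{j} |\beta_j| \le t , \]

or equivalently, in the penalized (Lagrangian) form,

\[ \hat\beta(\lambda) = \arg\min_{\beta} \; \tfrac{1}{2}\|y - X\beta\|_2^2 + \lambda \|\beta\|_1 . \]

The Lasso path studied in the paper above is the family of solutions \(\hat\beta(\lambda)\) as \(\lambda\) decreases from a value large enough that every coefficient is zero.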
Journal Article (DOI)

Regularization and variable selection via the elastic net

TL;DR: It is shown that the elastic net often outperforms the lasso while enjoying a similar sparsity of representation, and an algorithm called LARS-EN is proposed for computing elastic net regularization paths efficiently, much like the LARS algorithm does for the lasso.
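In its naive form (stated from general knowledge), the elastic net adds a ridge term to the lasso penalty,

\[ \hat\beta = \arg\min_{\beta} \; \|y - X\beta\|_2^2 + \lambda_1 \|\beta\|_1 + \lambda_2 \|\beta\|_2^2 , \]

so that strongly correlated predictors tend to enter or leave the model together rather than arbitrarily one at a time.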
Journal Article (DOI)

Variable Selection via Nonconcave Penalized Likelihood and its Oracle Properties

TL;DR: In this article, penalized likelihood approaches are proposed to handle variable selection problems, and it is shown that the newly proposed estimators perform as well as the oracle procedure in variable selection; namely, they work as well as if the correct submodel were known.
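The SCAD penalty proposed in that paper is defined through its derivative (for \(t > 0\) and a constant \(a > 2\), with \(a = 3.7\) suggested):

\[ p_\lambda'(t) = \lambda \left\{ I(t \le \lambda) + \frac{(a\lambda - t)_+}{(a - 1)\lambda} \, I(t > \lambda) \right\} , \]

which applies full lasso-like shrinkage to small coefficients while leaving large coefficients nearly unpenalized, the feature behind the oracle property.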
Journal Article (DOI)

Model selection and estimation in regression with grouped variables

TL;DR: In this paper, instead of selecting factors by stepwise backward elimination, the authors focus on the accuracy of estimation and consider extensions of the lasso, the LARS algorithm and the non-negative garrotte for factor selection.
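With predictors partitioned into \(G\) groups of sizes \(p_1, \dots, p_G\), the group lasso introduced in that paper solves

\[ \hat\beta = \arg\min_{\beta} \; \Big\| y - \sum_{g=1}^{G} X_g \beta_g \Big\|_2^2 + \lambda \sum_{g=1}^{G} \sqrt{p_g} \, \|\beta_g\|_2 , \]

so that the coefficients of a factor's dummy variables are selected or dropped as a block.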