Open Access Journal Article (DOI)

False Discoveries Occur Early on the Lasso Path

Weijie J. Su, +2 more
01 Oct 2017 - Vol. 45, Iss. 5, pp. 2133-2150
TLDR
It is demonstrated that true features and null features are always interspersed on the Lasso path, and that this phenomenon occurs no matter how strong the effect sizes are.
Abstract
In regression settings where explanatory variables have very low correlations and there are relatively few effects, each of large magnitude, we expect the Lasso to find the important variables with few errors, if any. This paper shows that in a regime of linear sparsity—meaning that the fraction of variables with a nonvanishing effect tends to a constant, however small—this cannot really be the case, even when the design variables are stochastically independent. We demonstrate that true features and null features are always interspersed on the Lasso path, and that this phenomenon occurs no matter how strong the effect sizes are. We derive a sharp asymptotic trade-off between false and true positive rates or, equivalently, between measures of type I and type II errors along the Lasso path. This trade-off states that if we ever want to achieve a type II error (false negative rate) under a critical value, then anywhere on the Lasso path the type I error (false positive rate) will need to exceed a given threshold so that we can never have both errors at a low level at the same time. Our analysis uses tools from approximate message passing (AMP) theory as well as novel elements to deal with a possibly adaptive selection of the Lasso regularizing parameter.
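As a quick illustration of the phenomenon the abstract describes (this is a minimal sketch, not the paper's code), the snippet below simulates a stochastically independent Gaussian design with linear sparsity and tracks the true positive proportion (TPP) and false discovery proportion (FDP) along the Lasso path computed by scikit-learn; all parameter choices (n, p, k, effect size) are ours.

```python
# A minimal sketch (not the paper's code): simulate an independent Gaussian
# design with linear sparsity and watch false discoveries appear early on
# the Lasso path. All parameter choices (n, p, k, effect size) are ours.
import numpy as np
from sklearn.linear_model import lasso_path

rng = np.random.default_rng(0)
n, p, k = 1000, 1000, 200                       # k/p fixed: linear sparsity
X = rng.standard_normal((n, p)) / np.sqrt(n)    # independent design columns
beta = np.zeros(p)
beta[:k] = 50.0                                 # strong, equal-magnitude effects
y = X @ beta + rng.standard_normal(n)

# Coefficients along a decreasing grid of regularization strengths.
alphas, coefs, _ = lasso_path(X, y, n_alphas=50)

for alpha, b in zip(alphas, coefs.T):
    selected = np.flatnonzero(b)
    if selected.size == 0:
        continue
    tpp = np.count_nonzero(selected < k) / k                # true positive proportion
    fdp = np.count_nonzero(selected >= k) / selected.size   # false discovery proportion
    print(f"alpha={alpha:8.4f}  |model|={selected.size:4d}  TPP={tpp:.3f}  FDP={fdp:.3f}")
```

On typical runs, the FDP becomes positive well before the TPP reaches 1, consistent with the trade-off stated in the abstract.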



Citations
Posted Content

Subset Selection with Shrinkage: Sparse Linear Modeling when the SNR is low

TL;DR: This work proposes a close cousin of best-subset selection, namely its $\ell_{q}$-regularized version, which mitigates, to a large extent, the poor predictive performance of best-subset selection in low-SNR regimes and generally delivers a substantially sparser model than the best predictive models available via ridge regression and the Lasso.
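Roughly, the proposal solves best-subset selection with an added shrinkage penalty, $\min_{\beta} \lVert y - X\beta \rVert_2^2 + \lambda \lVert \beta \rVert_q^q$ subject to $\lVert \beta \rVert_0 \le k$; the extra $\ell_{q}$ term is what stabilizes predictions when the noise is large.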
Journal Article (DOI)

Discovery of Physics From Data: Universal Laws and Discrepancies

TL;DR: It is shown that measurement noise and complex secondary physical mechanisms, like unsteady fluid drag forces, can obscure the underlying law of gravitation, leading to an erroneous model.
Posted Content

Approximate Message Passing algorithms for rotationally invariant matrices

TL;DR: It is shown that a Bayes-AMP algorithm for principal components analysis, when there is prior structure for the principal components (PCs) and possibly non-white noise, provably achieves higher estimation accuracy than the sample PCs.
Journal Article (DOI)

Familywise error rate control via knockoffs

TL;DR: In this article, the authors present a novel method for controlling the $k$-familywise error rate in the linear regression setting, using the knockoffs framework first introduced by Barber and Candès; the method can be applied with any design matrix having at least as many observations as variables and does not require knowledge of the noise variance.
Journal Article (DOI)

Model selection for hybrid dynamical systems via sparse regression

TL;DR: A new methodology, Hybrid-Sparse Identification of Nonlinear Dynamics, is developed that identifies separate nonlinear dynamical regimes, employs information theory to manage uncertainty, and characterizes switching behaviour.
References
Journal Article (DOI)

Regression Shrinkage and Selection via the Lasso

TL;DR: A new method for estimation in linear models, the lasso, is proposed; it minimizes the residual sum of squares subject to the sum of the absolute values of the coefficients being less than a constant.
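In symbols, this is the constrained problem $\min_{\beta} \sum_{i=1}^{n} (y_i - x_i^{\top}\beta)^2$ subject to $\sum_{j=1}^{p} |\beta_j| \le t$, or equivalently the Lagrangian form $\min_{\beta} \tfrac{1}{2} \lVert y - X\beta \rVert_2^2 + \lambda \lVert \beta \rVert_1$.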
Journal Article (DOI)

Regularization and variable selection via the elastic net

TL;DR: It is shown that the elastic net often outperforms the lasso while enjoying a similar sparsity of representation, and an algorithm called LARS-EN is proposed for computing elastic net regularization paths efficiently, much as the LARS algorithm does for the lasso.
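Concretely, the (naive) elastic net augments the lasso's $\ell_{1}$ penalty with a ridge term: $\min_{\beta} \lVert y - X\beta \rVert_2^2 + \lambda_1 \lVert \beta \rVert_1 + \lambda_2 \lVert \beta \rVert_2^2$.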
Journal Article (DOI)

Variable Selection via Nonconcave Penalized Likelihood and its Oracle Properties

TL;DR: In this article, penalized likelihood approaches are proposed to handle variable selection problems, and it is shown that the newly proposed estimators perform as well as the oracle procedure in variable selection; namely, they work as well as if the correct submodel were known.
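In the least-squares case, the generic criterion behind this approach is $\min_{\beta} \tfrac{1}{2} \lVert y - X\beta \rVert_2^2 + n \sum_{j=1}^{p} p_{\lambda}(|\beta_j|)$, where $p_{\lambda}$ is a nonconcave penalty such as SCAD.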
Journal Article (DOI)

Model selection and estimation in regression with grouped variables

TL;DR: In this paper, instead of selecting factors by stepwise backward elimination, the authors focus on the accuracy of estimation and consider extensions of the lasso, the LARS algorithm and the non-negative garrotte for factor selection.
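In its most common form, the resulting group lasso penalizes coefficients groupwise, $\min_{\beta} \lVert y - \sum_{g} X_g \beta_g \rVert_2^2 + \lambda \sum_{g} \lVert \beta_g \rVert_2$, so that an entire factor enters or leaves the model together.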