Least squares after model selection in high-dimensional sparse models
TL;DR: This paper studies post-l1-penalized estimators in high-dimensional sparse linear regression models: ordinary least squares applied to the model selected by l1-penalized (lasso) estimation.
Note: new title. The former title was "Post-l1-Penalized Estimators in High-Dimensional Linear Regression Models". Original date January 4, 2009; first version submitted March 29, 2010; this revision June 14, 2011.
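The post-l1-penalized idea summarized above can be sketched in a few lines: run the lasso to select a support, then refit by ordinary least squares on the selected columns. The following is a minimal illustrative sketch on synthetic data (a plain coordinate-descent lasso; function names and tuning values are mine, not the authors'):

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=200):
    """Cyclic coordinate descent for (1/2n)||y - Xb||^2 + lam * ||b||_1."""
    n, p = X.shape
    b = np.zeros(p)
    col_ms = (X ** 2).sum(axis=0) / n            # per-column mean squares
    for _ in range(n_iter):
        for j in range(p):
            r_j = y - X @ b + X[:, j] * b[j]      # partial residual without x_j
            rho = X[:, j] @ r_j / n
            b[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_ms[j]
    return b

def post_lasso(X, y, lam):
    """Post-l1 estimator: OLS refit on the lasso-selected support."""
    b = lasso_cd(X, y, lam)
    support = np.flatnonzero(np.abs(b) > 1e-10)
    b_post = np.zeros(X.shape[1])
    if support.size:
        b_post[support] = np.linalg.lstsq(X[:, support], y, rcond=None)[0]
    return b_post, support

# Synthetic sparse design: only the first three coefficients are nonzero.
rng = np.random.default_rng(0)
n, p = 200, 50
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:3] = [2.0, -3.0, 1.5]
y = X @ beta + 0.5 * rng.standard_normal(n)
b_post, support = post_lasso(X, y, lam=0.1)
```

The refit removes the lasso's shrinkage bias on the selected coefficients, which is the motivation for the post-selection least-squares step.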
Citations
Report
Double/debiased machine learning for treatment and structural parameters
Victor Chernozhukov, Denis Chetverikov, Mert Demirer, Esther Duflo, Christian Hansen, Whitney K. Newey, James M. Robins
TL;DR: In this article, the authors show that the impact of regularization bias and overfitting on estimation of the parameter of interest θ0 can be removed by using two simple, yet critical, ingredients: (1) using Neyman-orthogonal moments/scores that have reduced sensitivity with respect to nuisance parameters, and (2) making use of cross-fitting, which provides an efficient form of data-splitting.
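As a toy illustration of the two ingredients named in this summary (a Neyman-orthogonal score plus cross-fitting), here is a minimal sketch for the partially linear model y = θ0·d + g(X) + ε. The nuisance "learners" are plain least-squares fits, a simple stand-in for the generic ML methods the paper allows; the function name and data are mine:

```python
import numpy as np

def dml_plm(y, d, X, n_folds=2, seed=0):
    """Cross-fitted estimate of theta in y = theta*d + g(X) + eps.
    Nuisances are fit on one fold and residuals formed on the other,
    then theta is estimated from the orthogonal (residual-on-residual) score."""
    n = len(y)
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(n), n_folds)
    res_y, res_d = np.empty(n), np.empty(n)
    for k in range(n_folds):
        test = folds[k]
        train = np.concatenate([folds[j] for j in range(n_folds) if j != k])
        by = np.linalg.lstsq(X[train], y[train], rcond=None)[0]  # E[y|X] fit
        bd = np.linalg.lstsq(X[train], d[train], rcond=None)[0]  # E[d|X] fit
        res_y[test] = y[test] - X[test] @ by
        res_d[test] = d[test] - X[test] @ bd
    return (res_d @ res_y) / (res_d @ res_d)

# Synthetic data with theta0 = 1.5 and linear nuisance functions.
rng = np.random.default_rng(1)
n, p = 500, 5
X = rng.standard_normal((n, p))
d = X @ np.array([1.0, -1.0, 0.5, 0.0, 0.0]) + rng.standard_normal(n)
y = 1.5 * d + X @ np.array([0.5, 0.5, -1.0, 2.0, 0.0]) + rng.standard_normal(n)
theta_hat = dml_plm(y, d, X)
```

Partialling X out of both y and d before regressing makes the final estimate insensitive to first-order errors in the nuisance fits, and cross-fitting prevents overfitting bias from reusing the same data.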
Journal Article
Covariate balancing propensity score.
Kosuke Imai, Marc Ratkovic
TL;DR: Covariate balancing propensity score (CBPS) as mentioned in this paper was proposed to improve the empirical performance of propensity score matching and weighting methods by exploiting the dual characteristics of the propensity score as a covariate balancing score and the conditional probability of treatment assignment.
Journal Article
Inference on Treatment Effects after Selection among High-Dimensional Controls
TL;DR: The authors proposed robust methods for inference about the effect of a treatment variable on a scalar outcome in the presence of very many regressors in a model with possibly non-Gaussian and heteroscedastic disturbances.
Journal Article
Confidence intervals and hypothesis testing for high-dimensional regression
Adel Javanmard, Andrea Montanari
TL;DR: In this paper, a de-biased version of regularized M-estimators is proposed to construct confidence intervals and p-values for high-dimensional linear regression models, and the resulting confidence intervals have nearly optimal size.
Journal Article
Sparse models and methods for optimal instruments with an application to eminent domain
TL;DR: Preliminary results of this paper were presented in Chernozhukov's invited Cowles Foundation lecture at the North American meetings of the Econometric Society in June 2009.
References
Journal Article
Regression Shrinkage and Selection via the Lasso
TL;DR: A new method for estimation in linear models called the lasso, which minimizes the residual sum of squares subject to the sum of the absolute value of the coefficients being less than a constant, is proposed.
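For intuition about the lasso's l1 penalty, the orthonormal-design special case has a closed form: the lasso simply soft-thresholds the OLS coefficients. A small sketch of this well-known special case (the setup and names are mine, not Tibshirani's):

```python
import numpy as np

def lasso_orthonormal(X, y, lam):
    """Lasso solution when X.T @ X / n = I: soft-threshold the OLS coefficients.
    Minimizes (1/2n)||y - Xb||^2 + lam * ||b||_1 in this special case."""
    b_ols = X.T @ y / X.shape[0]
    return np.sign(b_ols) * np.maximum(np.abs(b_ols) - lam, 0.0)

# Build an orthonormal design: Q from a QR factorization, rescaled so X'X/n = I.
rng = np.random.default_rng(2)
n, p = 100, 8
Q, _ = np.linalg.qr(rng.standard_normal((n, p)))
X = Q * np.sqrt(n)
beta = np.array([3.0, -2.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0])
y = X @ beta + 0.1 * rng.standard_normal(n)
b = lasso_orthonormal(X, y, lam=0.5)
```

Large coefficients are shrunk toward zero by exactly lam, and coefficients below lam in magnitude are set to exactly zero, which is how the lasso performs selection and shrinkage simultaneously.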
Journal Article
Ideal spatial adaptation by wavelet shrinkage
TL;DR: In this article, the authors developed a spatially adaptive method, RiskShrink, which works by shrinkage of empirical wavelet coefficients and comes within a factor of roughly 2 log n of the ideal performance of piecewise-polynomial and variable-knot spline methods.
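The coefficient-wise shrinkage this summary refers to is the soft-threshold nonlinearity. A minimal sketch (the σ√(2 log n) threshold used here is the related universal/VisuShrink choice; RiskShrink's minimax threshold is different, and the data are synthetic):

```python
import numpy as np

def soft_threshold(w, t):
    """Shrink each empirical coefficient toward zero by t; zero out small ones."""
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

# Noisy coefficients: two large 'signal' entries, the rest pure noise (sigma = 0.1).
rng = np.random.default_rng(3)
n = 64
w = 0.1 * rng.standard_normal(n)
w[0], w[1] = 4.0, -3.0
t = 0.1 * np.sqrt(2 * np.log(n))   # universal threshold for sigma = 0.1
w_hat = soft_threshold(w, t)
```

Thresholding kills almost all pure-noise coefficients while retaining the large signal coefficients (shrunk by t), which is the mechanism behind wavelet denoising.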
Journal Article
The Dantzig selector: Statistical estimation when p is much larger than n
Emmanuel J. Candès, Terence Tao
TL;DR: In many important statistical applications the number of variables or parameters p is much larger than the number of observations n; the Dantzig selector nevertheless makes it possible to estimate β reliably from the noisy data y.
Book
Introduction to Nonparametric Estimation
TL;DR: The main aim is to introduce the fundamental concepts of the theory while keeping the exposition suitable for a first approach to the field; many important and useful results on optimal and adaptive estimation are provided.
Journal Article
On Model Selection Consistency of Lasso
Peng Zhao, Bin Yu
TL;DR: It is proved that a single condition, which is called the Irrepresentable Condition, is almost necessary and sufficient for Lasso to select the true model both in the classical fixed p setting and in the large p setting as the sample size n gets large.
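The condition can be checked numerically for a given design: with C = X'X/n partitioned over the true support S and its complement, the strong form requires ||C_{S^c,S} C_{S,S}^{-1} sign(β_S)||_∞ < 1. A small sketch, with one design built to satisfy it and one built to violate it (construction and names are mine):

```python
import numpy as np

def irrepresentable(X, support, signs):
    """Zhao & Yu's quantity ||C21 @ inv(C11) @ sign(beta_S)||_inf with C = X'X/n.
    Values below 1 indicate the (strong) irrepresentable condition holds."""
    C = X.T @ X / X.shape[0]
    S = np.asarray(support)
    Sc = np.setdiff1d(np.arange(X.shape[1]), S)
    v = C[np.ix_(Sc, S)] @ np.linalg.solve(C[np.ix_(S, S)], np.asarray(signs, float))
    return np.max(np.abs(v))

# Two relevant regressors (support {0, 1}) plus one irrelevant regressor whose
# correlation with them is either mild (condition holds) or strong (it fails).
rng = np.random.default_rng(4)
n = 2000
x0, x1 = rng.standard_normal(n), rng.standard_normal(n)
noise = rng.standard_normal(n)
X_good = np.column_stack([x0, x1, 0.3 * (x0 + x1) + noise])
X_bad = np.column_stack([x0, x1, 0.6 * (x0 + x1) + noise])
good = irrepresentable(X_good, [0, 1], [1, 1])
bad = irrepresentable(X_bad, [0, 1], [1, 1])
```

In the second design the irrelevant column is too well represented by the relevant ones (the quantity exceeds 1), so the lasso cannot be model-selection consistent there no matter how the penalty is tuned.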