Open Access Proceedings Article

Lower bounds on the performance of polynomial-time algorithms for sparse linear regression

TLDR
This work shows that when the design matrix is ill-conditioned, the minimax prediction loss achievable by polynomial-time algorithms can be substantially greater than that of an optimal algorithm.
Abstract
Under a standard assumption in complexity theory (NP ⊄ P/poly), we demonstrate a gap between the minimax prediction risk for sparse linear regression that can be achieved by polynomial-time algorithms, and that achieved by optimal algorithms. In particular, when the design matrix is ill-conditioned, the minimax prediction loss achievable by polynomial-time algorithms can be substantially greater than that of an optimal algorithm. This result is the first known gap between polynomial and optimal algorithms for sparse linear regression, and does not depend on conjectures in average-case complexity.
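For context, the statement above refers to the standard sparse linear regression model; in the usual notation (the symbols below are the conventional ones, not taken from this page):

\[
  y = X\beta^* + w, \qquad X \in \mathbb{R}^{n \times d}, \quad \|\beta^*\|_0 \le k, \quad w \sim \mathcal{N}(0, \sigma^2 I_n),
\]
\[
  \mathcal{L}(\hat\beta) = \frac{1}{n}\,\|X(\hat\beta - \beta^*)\|_2^2,
\]

and the gap is between \(\inf_{\hat\beta\ \text{poly-time}} \sup_{\beta^*} \mathbb{E}\,\mathcal{L}(\hat\beta)\) and the same quantity with the infimum taken over all estimators.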

Citations
Book

High-Dimensional Statistics: A Non-Asymptotic Viewpoint

TL;DR: This book provides a self-contained introduction to the area of high-dimensional statistics, aimed at the first-year graduate level, and includes chapters focused on core methodology and theory, including tail bounds, concentration inequalities, uniform laws and empirical processes, and random matrices.
Book

Sparse Modeling for Image and Vision Processing

TL;DR: The goal of this monograph is to offer a self-contained view of sparse modeling for visual recognition and image processing, focusing on applications where the dictionary is learned and adapted to data, yielding a compact representation that has been successful in various contexts.
Posted Content

On Iterative Hard Thresholding Methods for High-dimensional M-Estimation

TL;DR: This work provides the first analysis for IHT-style methods in the high-dimensional statistical setting with bounds that match known minimax lower bounds, and extends the analysis to the problem of low-rank matrix recovery.
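A minimal NumPy sketch of the basic IHT update the summary refers to (the step size and the sparsity level k here are illustrative assumptions, not values from the paper):

import numpy as np

def hard_threshold(beta, k):
    # Keep the k largest-magnitude entries, zero out the rest.
    out = np.zeros_like(beta)
    idx = np.argsort(np.abs(beta))[-k:]
    out[idx] = beta[idx]
    return out

def iht(X, y, k, step=None, n_iters=200):
    # Iterative hard thresholding for min ||y - X b||^2 s.t. ||b||_0 <= k:
    # a gradient step on the least-squares loss followed by projection
    # onto the set of k-sparse vectors.
    n, d = X.shape
    if step is None:
        # Conservative step size: 1 / largest eigenvalue of X^T X.
        step = 1.0 / np.linalg.norm(X, 2) ** 2
    beta = np.zeros(d)
    for _ in range(n_iters):
        grad = X.T @ (X @ beta - y)
        beta = hard_threshold(beta - step * grad, k)
    return beta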
Posted Content

Provable Meta-Learning of Linear Representations

TL;DR: This paper provides provably fast, sample-efficient algorithms to address the dual challenges of learning a common set of features from multiple, related tasks and transferring this knowledge to new, unseen tasks, which are central to the general problem of meta-learning.
Journal Article (DOI)

OR Forum—An Algorithmic Approach to Linear Regression

TL;DR: This work presents a mixed-integer quadratic optimization (MIQO) based approach for designing high-quality linear regression models that explicitly addresses various competing objectives, and demonstrates the effectiveness of the approach on both real and synthetic data sets.
References
Journal Article (DOI)

Regression Shrinkage and Selection via the Lasso

TL;DR: A new method for estimation in linear models called the lasso is proposed, which minimizes the residual sum of squares subject to the sum of the absolute values of the coefficients being less than a constant.
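The constrained form described above is usually solved in its equivalent penalized form; a minimal sketch using scikit-learn (the synthetic data and the penalty value alpha are illustrative):

import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, d, k = 100, 50, 5
X = rng.standard_normal((n, d))
beta_true = np.zeros(d)
beta_true[:k] = 1.0
y = X @ beta_true + 0.1 * rng.standard_normal(n)

# Penalized form: min (1/(2n)) ||y - X b||^2 + alpha * ||b||_1,
# equivalent to the constrained formulation for a suitable alpha.
model = Lasso(alpha=0.05).fit(X, y)
print(np.nonzero(model.coef_)[0])  # indices of the selected coefficients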
Journal Article (DOI)

Atomic Decomposition by Basis Pursuit

TL;DR: Basis Pursuit (BP) is a principle for decomposing a signal into an "optimal" superposition of dictionary elements, where optimal means having the smallest l1 norm of coefficients among all such decompositions.
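Basis pursuit can be cast as a linear program via the standard split x = u − v with u, v ≥ 0; a minimal sketch using scipy.optimize.linprog (the helper name basis_pursuit is ours):

import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, b):
    # min ||x||_1  s.t.  A x = b, written as an LP with x = u - v:
    # min 1^T u + 1^T v  s.t.  [A, -A] [u; v] = b,  u, v >= 0.
    n, d = A.shape
    c = np.ones(2 * d)
    A_eq = np.hstack([A, -A])
    res = linprog(c, A_eq=A_eq, b_eq=b)  # default bounds are (0, None)
    u, v = res.x[:d], res.x[d:]
    return u - v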
Journal Article (DOI)

The Dantzig selector: Statistical estimation when p is much larger than n

TL;DR: In many important statistical applications, the number of variables or parameters p is much larger than the total number of observations n; this work shows that when β is sufficiently sparse, the Dantzig selector estimates it reliably from the noisy data y by minimizing the ℓ1 norm of the coefficients subject to a bound on the correlation between the residual and the predictors.
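Concretely, the Dantzig selector is the solution of the following convex program (standard formulation; λ is a tuning parameter):

\[
  \hat\beta = \arg\min_{\beta \in \mathbb{R}^p} \|\beta\|_1
  \quad \text{subject to} \quad \|X^\top (y - X\beta)\|_\infty \le \lambda .
\]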
Monograph (DOI)

Computational Complexity: A Modern Approach

TL;DR: This beginning graduate textbook describes both recent achievements and classical results of computational complexity theory and can be used as a reference for self-study for anyone interested in complexity.
Journal Article (DOI)

Sparse Approximate Solutions to Linear Systems

TL;DR: It is shown that the problem is NP-hard, but that the well-known greedy heuristic is good in that it computes a solution with at most $\left\lceil 18\, \mathrm{Opt}(\epsilon/2)\, \|{\bf A}^+\|^2_2 \ln(\|{\bf b}\|_2/\epsilon) \right\rceil$ nonzero entries, where $\mathrm{Opt}(\epsilon/2)$ is the sparsity of the best solution achieving error $\epsilon/2$.
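A minimal NumPy sketch of this greedy heuristic in the orthogonal-matching-pursuit style (the function name and the tolerance eps are illustrative):

import numpy as np

def greedy_sparse(A, b, eps=1e-6, max_atoms=None):
    # Greedily add the column most correlated with the residual, then
    # re-fit by least squares on the selected support.
    n, d = A.shape
    max_atoms = max_atoms if max_atoms is not None else d
    support = []
    x = np.zeros(d)
    residual = b.copy()
    while np.linalg.norm(residual) > eps and len(support) < max_atoms:
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j in support:
            break  # no new column improves the fit
        support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], b, rcond=None)
        residual = b - A[:, support] @ coef
        x = np.zeros(d)
        x[support] = coef
    return x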