Journal Article
For most large underdetermined systems of equations, the minimal ℓ1-norm near-solution approximates the sparsest near-solution
TL;DR
It is shown that for most Φ, if the optimally sparse approximation x0,ε is sufficiently sparse, then the solution x1,ε of the ℓ1-minimization problem is a good approximation to x0,ε.
Abstract:
We consider inexact linear equations y ≈ Φx, where y is a given vector in R^n, Φ is a given n × m matrix, and we wish to find x0,ε as sparse as possible while obeying ‖y − Φx0,ε‖2 ≤ ε. In general this requires combinatorial optimization and so is considered intractable. On the other hand, the ℓ1-minimization problem min ‖x‖1 subject to ‖y − Φx‖2 ≤ ε is convex and is considered tractable. We show that for most Φ, if the optimally sparse approximation x0,ε is sufficiently sparse, then the solution x1,ε of the ℓ1-minimization problem is a good approximation to x0,ε. We suppose that the columns of Φ are normalized to unit ℓ2-norm, and we place uniform measure on such Φ. We study the underdetermined case where m ∼ τn with τ > 1, and prove the existence of ρ = ρ(τ) > 0 and C = C(ρ, τ) so that for large n, and for all Φ's except a negligible fraction, the following approximate sparse solution property of Φ holds: for every y having an approximation ‖y − Φx0‖2 ≤ ε by a coefficient vector x0 ∈ R^m with fewer than ρ·n nonzeros, ‖x1,ε − x0‖2 ≤ C·ε. This has two implications. First, for most Φ, whenever the combinatorial optimization result x0,ε would be very sparse, x1,ε is a good approximation to x0,ε. Second, suppose we are given noisy data obeying y = Φx0 + z, where the unknown x0 is known to be sparse and the noise satisfies ‖z‖2 ≤ ε. For most Φ, noise-tolerant ℓ1-minimization will stably recover x0 from y in the presence of noise z. We also study the barely determined case m = n and reach parallel conclusions by slightly different arguments. Proof techniques include the use of almost-spherical sections in Banach space theory and concentration of measure for eigenvalues of random matrices.
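The convex program above, often called Basis Pursuit Denoising, can be handed to an off-the-shelf conic solver. Here is a minimal sketch using cvxpy; the dimensions, sparsity level, and noise level are illustrative choices rather than values from the paper, with only the unit-norm column normalization following the setup above.

```python
# Sketch: noise-tolerant l1-minimization, min ||x||_1 s.t. ||y - Phi x||_2 <= eps.
# Illustrative sizes; only the column normalization mirrors the paper's setup.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, m, k = 64, 128, 5                  # m ~ tau*n with tau = 2; k-sparse ground truth
Phi = rng.standard_normal((n, m))
Phi /= np.linalg.norm(Phi, axis=0)    # normalize columns to unit l2-norm

x0 = np.zeros(m)
x0[rng.choice(m, k, replace=False)] = rng.standard_normal(k)

eps = 0.05
z = rng.standard_normal(n)
z *= eps / np.linalg.norm(z)          # noise with ||z||_2 = eps
y = Phi @ x0 + z

x = cp.Variable(m)
cp.Problem(cp.Minimize(cp.norm(x, 1)),
           [cp.norm(y - Phi @ x, 2) <= eps]).solve()
print("recovery error:", np.linalg.norm(x.value - x0))   # expect O(eps)
```

The paper's result says that, for most such Φ, the printed error is at most C·ε whenever the ground truth has fewer than ρ·n nonzeros.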
Citations
Book
Compressed sensing
TL;DR: It is possible to design n = O(N log(m)) nonadaptive measurements allowing reconstruction with accuracy comparable to that attainable with direct knowledge of the N most important coefficients; a good approximation to those N important coefficients is extracted from the n measurements by solving a linear program, Basis Pursuit, from signal processing.
Journal Article
Robust Face Recognition via Sparse Representation
TL;DR: This work considers the problem of automatically recognizing human faces from frontal views with varying expression and illumination, as well as occlusion and disguise, and proposes a general classification algorithm for (image-based) object recognition based on a sparse representation computed by ℓ1-minimization.
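The classification rule summarized above codes a test sample sparsely over the pool of training samples and assigns the class whose samples best explain it. Below is a minimal sketch of that residual-based rule, assuming unit-norm training columns; the function name and tolerance are placeholders, and the feature extraction described in the paper is omitted.

```python
# Sketch of sparse-representation-based classification (SRC):
# l1-code the test sample over all training samples, then pick the class
# whose coefficients give the smallest reconstruction residual.
import numpy as np
import cvxpy as cp

def src_classify(A, labels, y, eps=0.1):
    """A: d x N matrix of unit-norm training samples; labels: length-N class ids."""
    labels = np.asarray(labels)
    x = cp.Variable(A.shape[1])
    cp.Problem(cp.Minimize(cp.norm(x, 1)),
               [cp.norm(y - A @ x, 2) <= eps]).solve()
    best_class, best_res = None, np.inf
    for c in np.unique(labels):
        xc = np.where(labels == c, x.value, 0.0)   # keep only class-c coefficients
        res = np.linalg.norm(y - A @ xc)
        if res < best_res:
            best_class, best_res = c, res
    return best_class
```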
Journal Article
K-SVD: An Algorithm for Designing Overcomplete Dictionaries for Sparse Representation
TL;DR: The K-SVD algorithm is a novel method for adapting dictionaries to achieve sparse signal representations: an iterative method that alternates between sparse coding of the examples based on the current dictionary and updating the dictionary atoms to better fit the data.
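A compact sketch of the alternation just described, assuming scikit-learn's orthogonal_mp for the sparse-coding step; the random initialization and fixed iteration count are arbitrary simplifications of the published algorithm.

```python
# Minimal K-SVD sketch: alternate sparse coding (via OMP) with
# rank-1 SVD updates of one dictionary atom at a time.
import numpy as np
from sklearn.linear_model import orthogonal_mp

def ksvd(Y, n_atoms, sparsity, n_iter=10, seed=0):
    """Y: d x n_signals data matrix. Returns dictionary D and sparse codes X."""
    rng = np.random.default_rng(seed)
    D = rng.standard_normal((Y.shape[0], n_atoms))
    D /= np.linalg.norm(D, axis=0)                          # unit-norm atoms
    for _ in range(n_iter):
        X = orthogonal_mp(D, Y, n_nonzero_coefs=sparsity)   # sparse coding step
        for j in range(n_atoms):                            # dictionary update step
            used = np.flatnonzero(X[j])
            if used.size == 0:
                continue
            # Residual of signals using atom j, with atom j's contribution removed
            E = Y[:, used] - D @ X[:, used] + np.outer(D[:, j], X[j, used])
            U, s, Vt = np.linalg.svd(E, full_matrices=False)
            D[:, j] = U[:, 0]                               # best rank-1 fit: new atom
            X[j, used] = s[0] * Vt[0]                       # and its coefficients
    return D, X
```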
Journal Article
Image Super-Resolution Via Sparse Representation
TL;DR: This paper presents a new approach to single-image super-resolution, based upon sparse signal representation, which generates high-resolution images that are competitive or even superior in quality to images produced by other similar SR methods.
Journal Article
The Dantzig selector: Statistical estimation when p is much larger than n
Emmanuel J. Candès, Terence Tao +1 more
TL;DR: In many important statistical applications, the number of variables or parameters p is much larger than the number of observations n; the Dantzig selector nevertheless makes it possible to estimate β reliably from the noisy data y.
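The Dantzig selector is itself a convex program, min ‖β‖1 subject to ‖Xᵀ(y − Xβ)‖∞ ≤ λ, so it can be sketched in a few lines. cvxpy is used here for brevity; choosing λ, which should scale with the noise level, is left to the caller.

```python
# Sketch of the Dantzig selector:
#   minimize ||b||_1  subject to  ||X^T (y - X b)||_inf <= lam
import numpy as np
import cvxpy as cp

def dantzig_selector(X, y, lam):
    b = cp.Variable(X.shape[1])
    residual_corr = X.T @ (y - X @ b)      # correlation of residual with covariates
    cp.Problem(cp.Minimize(cp.norm(b, 1)),
               [cp.norm(residual_corr, "inf") <= lam]).solve()
    return b.value
```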
References
Journal Article
Atomic Decomposition by Basis Pursuit
TL;DR: Basis Pursuit (BP) is a principle for decomposing a signal into an "optimal" superposition of dictionary elements, where optimal means having the smallest ℓ1 norm of coefficients among all such decompositions.
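In the noiseless case, Basis Pursuit reduces to a linear program via the standard split x = u − v with u, v ≥ 0. A minimal sketch with scipy's LP solver; this is the textbook reformulation, not the original authors' solver.

```python
# Basis Pursuit as a linear program: min ||x||_1 s.t. Phi x = y,
# rewritten with x = u - v, u, v >= 0, minimizing sum(u) + sum(v).
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(Phi, y):
    n, m = Phi.shape
    c = np.ones(2 * m)                  # objective: sum(u) + sum(v) = ||x||_1
    A_eq = np.hstack([Phi, -Phi])       # Phi @ u - Phi @ v = y
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None))
    u, v = res.x[:m], res.x[m:]
    return u - v
```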
Journal Article
Matching pursuits with time-frequency dictionaries
Stéphane Mallat, Zhifeng Zhang +1 more
TL;DR: The authors introduce an algorithm, called matching pursuit, that decomposes any signal into a linear expansion of waveforms that are selected from a redundant dictionary of functions, chosen in order to best match the signal structures.
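The greedy loop described above fits in a few lines of numpy. A minimal sketch, assuming the dictionary columns are unit-norm; practical implementations add a stopping rule based on the residual norm.

```python
# Minimal matching pursuit: repeatedly pick the atom most correlated with
# the residual and subtract its contribution. Columns of D are unit-norm.
import numpy as np

def matching_pursuit(D, y, n_steps):
    residual = y.astype(float).copy()
    coeffs = np.zeros(D.shape[1])
    for _ in range(n_steps):
        correlations = D.T @ residual
        j = np.argmax(np.abs(correlations))    # best-matching atom
        coeffs[j] += correlations[j]
        residual -= correlations[j] * D[:, j]
    return coeffs
```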
Journal Article
Least angle regression
Bradley Efron, Trevor Hastie, Iain M. Johnstone, Robert Tibshirani, Hemant Ishwaran, Keith Knight, Jean-Michel Loubes, Pascal Massart, David Madigan, Greg Ridgeway, Saharon Rosset, Ji Zhu, Robert A. Stine, Berwin A. Turlach, Sanford Weisberg +19 more
TL;DR: A publicly available algorithm that requires only the same order of magnitude of computational effort as ordinary least squares applied to the full set of covariates is described.
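The algorithm is available off the shelf; here is a brief usage sketch with scikit-learn's lars_path on synthetic data (the data and dimensions are illustrative only).

```python
# Sketch: least angle regression via scikit-learn.
# lars_path returns the full piecewise-linear coefficient path.
import numpy as np
from sklearn.linear_model import lars_path

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 20))
beta = np.zeros(20)
beta[:3] = [2.0, -1.5, 1.0]                   # three active covariates
y = X @ beta + 0.1 * rng.standard_normal(100)

alphas, active, coefs = lars_path(X, y, method="lar")
print("order in which variables entered:", active)   # expect 0, 1, 2 first
```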
Journal Article
Greed is good: algorithmic results for sparse approximation
TL;DR: This article presents new results on using a greedy algorithm, orthogonal matching pursuit (OMP), to solve the sparse approximation problem over redundant dictionaries and develops a sufficient condition under which OMP can identify atoms from an optimal approximation of a nonsparse signal.
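Orthogonal matching pursuit differs from plain matching pursuit in that, at each step, the coefficients of all atoms selected so far are re-fit by least squares. A short usage sketch with scikit-learn's implementation on an illustrative random dictionary:

```python
# Sketch: OMP on a redundant dictionary via scikit-learn.
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
D = rng.standard_normal((64, 256))
D /= np.linalg.norm(D, axis=0)               # unit-norm atoms
x0 = np.zeros(256)
x0[[3, 70, 200]] = [1.0, -2.0, 0.5]          # 3-sparse ground truth
y = D @ x0

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=3, fit_intercept=False)
omp.fit(D, y)
print("recovered support:", np.flatnonzero(omp.coef_))   # expect [3, 70, 200]
```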