
Signal Recovery from Random Measurements Via Orthogonal Matching Pursuit: The Gaussian Case

TL;DR: In this paper, a greedy algorithm called Orthogonal Matching Pursuit (OMP) was proposed to recover a signal with m nonzero entries in dimension d given O(m ln d) random linear measurements of that signal.
Abstract: This report demonstrates theoretically and empirically that a greedy algorithm called Orthogonal Matching Pursuit (OMP) can reliably recover a signal with m nonzero entries in dimension d given O(m ln d) random linear measurements of that signal. This is a massive improvement over previous results, which require O(m^2) measurements. The new results for OMP are comparable with recent results for another approach called Basis Pursuit (BP). In some settings, the OMP algorithm is faster and easier to implement, so it is an attractive alternative to BP for signal recovery problems.
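A minimal sketch of the OMP iteration described above, assuming Gaussian measurements y = Φx; the variable names and test sizes are illustrative, not the paper's experimental setup:

```python
import numpy as np

def omp(Phi, y, m):
    """Greedy recovery of an m-sparse x from y = Phi @ x."""
    residual, support = y.copy(), []
    for _ in range(m):
        # Select the column most correlated with the current residual.
        support.append(int(np.argmax(np.abs(Phi.T @ residual))))
        # Re-fit by least squares: project y onto all chosen columns.
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[support] = coef
    return x_hat

# m = 5 nonzeros in dimension d = 256 from n = 80 Gaussian measurements.
rng = np.random.default_rng(0)
d, n, m = 256, 80, 5
Phi = rng.standard_normal((n, d)) / np.sqrt(n)
x = np.zeros(d)
x[rng.choice(d, m, replace=False)] = rng.standard_normal(m)
print(np.max(np.abs(omp(Phi, Phi @ x, m) - x)))  # ~1e-15 when recovery succeeds
```

Each iteration adds the column most correlated with the residual and then re-solves a least-squares problem over every chosen column; this orthogonal re-fit is what distinguishes OMP from plain matching pursuit.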


Citations
Journal ArticleDOI
Rachel Ward
TL;DR: In this paper, it is shown that although the sparsity of the signal x is unknown, so that the quality of a CS estimate x̂ from m measurements is not assured a priori, sharp bounds on the errors ||x - x̂_j||_2 can be obtained with almost no effort.
Abstract: Compressed sensing (CS) decoding algorithms can efficiently recover an N-dimensional real-valued vector x to within a factor of its best k-term approximation by taking m = O(k log(N/k)) measurements y = Φx. If the sparsity or approximate sparsity level of x were known, then this theoretical guarantee would imply quality assurance of the resulting CS estimate. However, because the underlying sparsity of the signal x is unknown, the quality of a CS estimate x̂ using m measurements is not assured. It is nevertheless shown in this paper that sharp bounds on the error ||x - x̂||_2 can be achieved with almost no effort. More precisely, suppose that a maximum number of measurements m is preimposed. One can reserve 10 log p of these m measurements and compute a sequence of possible estimates (x̂_j), j = 1, ..., p, to x from the m - 10 log p remaining measurements; the errors ||x - x̂_j||_2 for j = 1, ..., p can then be bounded with high probability. As a consequence, numerical upper and lower bounds on the error between x and the best k-term approximation to x can be estimated for p values of k with almost no cost. This observation has applications outside CS as well.
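A minimal sketch of the reservation idea, with a synthetic stand-in for x̂ in place of an actual CS decoder (any decoder could be plugged in); the sizes and the 10-row holdout are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
d, n, n_cv = 256, 90, 10                 # reserve n_cv rows for validation

x = np.zeros(d)
x[rng.choice(d, 6, replace=False)] = rng.standard_normal(6)

Phi = rng.standard_normal((n, d))
A = Phi[:-n_cv] / np.sqrt(n - n_cv)      # rows used by the decoder
A_cv = Phi[-n_cv:] / np.sqrt(n_cv)       # held-out rows
y_cv = A_cv @ x

# Stand-in for a CS estimate computed from (A, A @ x) by any decoder.
x_hat = x + 0.05 * rng.standard_normal(d)

# For Gaussian A_cv normalized this way, E||A_cv v||^2 = ||v||^2, so the
# held-out residual concentrates around the true (unknown) error.
print(np.linalg.norm(x - x_hat))            # true error
print(np.linalg.norm(y_cv - A_cv @ x_hat))  # estimate from 10 held-out rows
```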

164 citations

Proceedings Article
01 Aug 2008
TL;DR: It is shown that the proposed approximation framework can successfully determine multiple target locations using linear dimensionality-reducing projections of sensor measurements, substantially reducing the communication load.
Abstract: We propose an approximation framework for distributed target localization in sensor networks. We represent the unknown target positions on a location grid as a sparse vector, whose support encodes the multiple target locations. The location vector is linearly related to multiple sensor measurements through a sensing matrix, which can be locally estimated at each sensor. We show that we can successfully determine multiple target locations by using linear dimensionality-reducing projections of sensor measurements. The overall communication bandwidth requirement per sensor is logarithmic in the number of grid points and linear in the number of targets, substantially reducing the communication load. Simulation results demonstrate the performance of the proposed framework.
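A minimal sketch of the measurement model under stated assumptions: the sensing matrix is a toy random stand-in for the locally estimated propagation model, and recovery uses scikit-learn's OMP rather than any decoder prescribed by the paper:

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(2)
G, K, M = 400, 2, 40                  # grid points, targets, projections

Psi = rng.standard_normal((G, G))     # toy stand-in for the sensing matrix
theta = np.zeros(G)
theta[rng.choice(G, K, replace=False)] = 1.0   # K-sparse location vector
z = Psi @ theta                       # full (uncompressed) measurements

Phi = rng.standard_normal((M, G)) / np.sqrt(M) # dimensionality reduction
y = Phi @ z                           # only M numbers are communicated

dec = OrthogonalMatchingPursuit(n_nonzero_coefs=K, fit_intercept=False)
dec.fit(Phi @ Psi, y)
print(np.flatnonzero(dec.coef_), np.flatnonzero(theta))  # recovered vs true
```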

163 citations

Journal ArticleDOI
TL;DR: A discriminative dictionary learning algorithm, called the locality-constrained and label embedding dictionary learning (LCLE-DL) algorithm, is proposed for image classification; experiments show that it outperforms several state-of-the-art algorithms.
Abstract: Locality and label information of training samples play an important role in image classification. However, previous dictionary learning algorithms do not take the locality and label information of atoms into account together in the learning process, and thus their performance is limited. In this paper, a discriminative dictionary learning algorithm, called the locality-constrained and label embedding dictionary learning (LCLE-DL) algorithm, was proposed for image classification. First, the locality information was preserved using the graph Laplacian matrix of the learned dictionary instead of the conventional one derived from the training samples. Then, the label embedding term was constructed using the label information of atoms instead of the classification error term, which contained discriminating information of the learned dictionary. The optimal coding coefficients derived by the locality-based and label-based reconstruction were effective for image classification. Experimental results demonstrated that the LCLE-DL algorithm can achieve better performance than some state-of-the-art algorithms.
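A minimal sketch of just the locality-constrained coding step, assuming (as in LCLE-DL) a graph Laplacian built over dictionary atoms; the dictionary here is random rather than learned, and the label-embedding term is omitted:

```python
import numpy as np

rng = np.random.default_rng(3)
p, k, n = 20, 15, 50                      # feature dim, atoms, samples
D = rng.standard_normal((p, k))
D /= np.linalg.norm(D, axis=0)            # unit-norm atoms
X = rng.standard_normal((p, n))

# Graph Laplacian over atoms from a Gaussian similarity kernel;
# for unit-norm atoms, ||d_i - d_j||^2 = 2 - 2 d_i . d_j.
W = np.exp(-(2.0 - 2.0 * D.T @ D))
np.fill_diagonal(W, 0.0)
L = np.diag(W.sum(axis=1)) - W

# Locality-regularized coding: min_V ||X - D V||_F^2 + alpha tr(V^T L V)
# has the closed-form solution V = (D^T D + alpha L)^{-1} D^T X.
alpha = 0.1
V = np.linalg.solve(D.T @ D + alpha * L, D.T @ X)
print(V.shape)                            # (k, n) coding coefficients
```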

163 citations

Journal ArticleDOI
TL;DR: In this paper, the problem of finding the candidate that minimizes the residual is modeled as a combinatorial tree search problem, for which a greedy search strategy is a good fit.
Abstract: In this paper, we propose an algorithm referred to as multipath matching pursuit (MMP) that investigates multiple promising candidates to recover sparse signals from compressed measurements. Our method is inspired by the fact that the problem of finding the candidate that minimizes the residual is readily modeled as a combinatorial tree search problem, and a greedy search strategy is a good fit for solving it. Through empirical results as well as a performance guarantee based on the restricted isometry property, we show that the proposed MMP algorithm is effective in reconstructing the original sparse signals in both noiseless and noisy scenarios.
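A minimal breadth-first sketch of the multipath idea; the paper's pruning and tree-search refinements are omitted, so this toy version can visit up to n_paths^sparsity candidates:

```python
import numpy as np

def mmp(Phi, y, sparsity, n_paths=2):
    """Expand every candidate support with its n_paths best atoms per depth,
    then return the estimate whose support gives the smallest residual."""
    candidates = {(): y.copy()}               # support tuple -> residual
    for _ in range(sparsity):
        expanded = {}
        for support, residual in candidates.items():
            corr = np.abs(Phi.T @ residual)
            for idx in np.argsort(corr)[-n_paths:]:   # best-matching atoms
                new = tuple(sorted(set(support) | {int(idx)}))
                if new in expanded:                   # merge duplicate paths
                    continue
                cols = Phi[:, list(new)]
                coef, *_ = np.linalg.lstsq(cols, y, rcond=None)
                expanded[new] = y - cols @ coef
        candidates = expanded
    best = min(candidates, key=lambda s: np.linalg.norm(candidates[s]))
    coef, *_ = np.linalg.lstsq(Phi[:, list(best)], y, rcond=None)
    x_hat = np.zeros(Phi.shape[1])
    x_hat[list(best)] = coef
    return x_hat
```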

162 citations

Proceedings ArticleDOI
12 Nov 2007
TL;DR: Overall, the conventional parametric modeling used in CS is replaced by a nonparametric one, and it is shown that the algorithm achieves exact reconstruction of synthetic phantom data even from a very small number of projections.
Abstract: We introduce a new approach to image reconstruction from highly incomplete data. The available data are assumed to be a small collection of spectral coefficients of an arbitrary linear transform. This reconstruction problem is the subject of intensive study in the recent field of "compressed sensing" (also known as "compressive sampling"). Our approach is based on a quite specific recursive filtering procedure. At every iteration the algorithm is excited by injecting random noise into the unobserved portion of the spectrum, and a spatially adaptive image denoising filter, working in the image domain, is exploited to attenuate the noise and reveal new features and details out of the incomplete and degraded observations. This recursive algorithm can be interpreted as a special type of the Robbins-Monro stochastic approximation procedure with regularization enabled by a spatially adaptive filter. Overall, we replace the conventional parametric modeling used in CS by a nonparametric one. We illustrate the effectiveness of the proposed approach for two important inverse problems from computerized tomography: Radon inversion from sparse projections and limited-angle tomography. In particular, we show that the algorithm achieves exact reconstruction of synthetic phantom data even from a very small number of projections. The accuracy of our reconstruction is in line with the best results in the compressed sensing field.
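A minimal sketch of the recursion under stated assumptions: observations are a random subset of Fourier coefficients, a plain Gaussian filter stands in for the spatially adaptive denoiser used in the paper, and the decreasing noise schedule is an illustrative choice:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(4)
img = np.zeros((64, 64))
img[20:44, 20:44] = 1.0                        # toy phantom
F = np.fft.fft2(img)
mask = rng.random(img.shape) < 0.15            # observed spectral coefficients

x = np.zeros_like(img)
for t in range(1, 201):
    X = np.fft.fft2(x)
    noise = rng.standard_normal(img.shape) + 1j * rng.standard_normal(img.shape)
    # Keep observed coefficients; excite the unobserved part with noise
    # whose level decays over iterations (Robbins-Monro flavor).
    X = np.where(mask, F, X + (50.0 / t) * noise)
    # Image-domain denoising reveals structure from the degraded estimate.
    x = gaussian_filter(np.fft.ifft2(X).real, sigma=1.0)
print(np.linalg.norm(x - img) / np.linalg.norm(img))   # relative error
```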

161 citations

References

Book
D.L. Donoho
01 Jan 2004
TL;DR: It is possible to design n = O(N log(m)) nonadaptive measurements allowing reconstruction with accuracy comparable to that attainable with direct knowledge of the N most important coefficients; a good approximation to those N important coefficients is extracted from the n measurements by solving a linear program, known in signal processing as Basis Pursuit.
Abstract: Suppose x is an unknown vector in R^m (a digital image or signal); we plan to measure n general linear functionals of x and then reconstruct. If x is known to be compressible by transform coding with a known transform, and we reconstruct via the nonlinear procedure defined here, the number of measurements n can be dramatically smaller than the size m. Thus, certain natural classes of images with m pixels need only n = O(m^(1/4) log^(5/2)(m)) nonadaptive nonpixel samples for faithful recovery, as opposed to the usual m pixel samples. More specifically, suppose x has a sparse representation in some orthonormal basis (e.g., wavelet, Fourier) or tight frame (e.g., curvelet, Gabor), so the coefficients belong to an l_p ball for 0 < p ≤ 1.
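For scale (up to the unspecified constant, and taking natural logarithms): an image with m = 10^6 pixels gives m^(1/4) ≈ 31.6 and log^(5/2)(m) ≈ 710, so n is on the order of 2 × 10^4 samples rather than 10^6 pixel samples.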

18,609 citations

Journal ArticleDOI
TL;DR: Basis Pursuit (BP) is a principle for decomposing a signal into an "optimal" superposition of dictionary elements, where optimal means having the smallest l1 norm of coefficients among all such decompositions.
Abstract: The time-frequency and time-scale communities have recently developed a large number of overcomplete waveform dictionaries: stationary wavelets, wavelet packets, cosine packets, chirplets, and warplets, to name a few. Decomposition into overcomplete systems is not unique, and several methods for decomposition have been proposed, including the method of frames (MOF), matching pursuit (MP), and, for special dictionaries, the best orthogonal basis (BOB). Basis Pursuit (BP) is a principle for decomposing a signal into an "optimal" superposition of dictionary elements, where optimal means having the smallest l1 norm of coefficients among all such decompositions. We give examples exhibiting several advantages over MOF, MP, and BOB, including better sparsity and superresolution. BP has interesting relations to ideas in areas as diverse as ill-posed problems, abstract harmonic analysis, total variation denoising, and multiscale edge denoising. BP in highly overcomplete dictionaries leads to large-scale optimization problems. With signals of length 8192 and a wavelet packet dictionary, one gets an equivalent linear program of size 8192 by 212,992. Such problems can be attacked successfully only because of recent advances in linear programming by interior-point methods. We obtain reasonable success with a primal-dual logarithmic barrier method and conjugate-gradient solver.
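A minimal sketch of BP as a linear program: splitting x = u - v with u, v ≥ 0 turns min ||x||_1 subject to Φx = y into a standard-form LP, here solved with SciPy's HiGHS backend on synthetic data:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(5)
n, d = 40, 100
Phi = rng.standard_normal((n, d))
x_true = np.zeros(d)
x_true[rng.choice(d, 4, replace=False)] = rng.standard_normal(4)
y = Phi @ x_true

c = np.ones(2 * d)                     # objective: sum(u) + sum(v) = ||x||_1
A_eq = np.hstack([Phi, -Phi])          # constraint: Phi @ (u - v) = y
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
x_hat = res.x[:d] - res.x[d:]
print(np.max(np.abs(x_hat - x_true)))  # ~0 when BP recovers exactly
```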

9,950 citations

Journal ArticleDOI
TL;DR: The authors introduce an algorithm, called matching pursuit, that decomposes any signal into a linear expansion of waveforms that are selected from a redundant dictionary of functions, chosen in order to best match the signal structures.
Abstract: The authors introduce an algorithm, called matching pursuit, that decomposes any signal into a linear expansion of waveforms selected from a redundant dictionary of functions. These waveforms are chosen in order to best match the signal structures. Matching pursuits are general procedures for computing adaptive signal representations. With a dictionary of Gabor functions, a matching pursuit defines an adaptive time-frequency transform. The authors derive a signal energy distribution in the time-frequency plane which, unlike Wigner and Cohen class distributions, does not include interference terms. A matching pursuit isolates the signal structures that are coherent with respect to a given dictionary. An application to pattern extraction from noisy signals is described, and a matching pursuit decomposition is compared with a signal expansion over an optimized wavelet packet orthonormal basis selected with the algorithm of Coifman and Wickerhauser (IEEE Trans. Inform. Theory, vol. 38, Mar. 1992).
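A minimal sketch of the matching pursuit loop for a finite dictionary with unit-norm atoms (names illustrative); unlike OMP, the selected coefficient is updated directly and no re-projection onto previously chosen atoms is performed:

```python
import numpy as np

def matching_pursuit(D, signal, n_iter):
    """Greedy decomposition over a dictionary D with unit-norm columns."""
    residual = signal.copy()
    coefs = np.zeros(D.shape[1])
    for _ in range(n_iter):
        corr = D.T @ residual                 # inner products with all atoms
        idx = int(np.argmax(np.abs(corr)))    # best-matching atom
        coefs[idx] += corr[idx]               # direct coefficient update
        residual -= corr[idx] * D[:, idx]     # peel off the matched structure
    return coefs, residual

rng = np.random.default_rng(7)
D = rng.standard_normal((64, 256))
D /= np.linalg.norm(D, axis=0)                # unit-norm atoms
signal = D[:, 3] + 0.5 * D[:, 17]
coefs, residual = matching_pursuit(D, signal, n_iter=10)
print(np.linalg.norm(residual))               # residual energy decays per pass
```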

9,380 citations

Journal ArticleDOI
TL;DR: The paper describes a publicly available algorithm that requires only the same order of magnitude of computational effort as ordinary least squares applied to the full set of covariates.
Abstract: The purpose of model selection algorithms such as All Subsets, Forward Selection and Backward Elimination is to choose a linear model on the basis of the same set of data to which the model will be applied. Typically we have available a large collection of possible covariates from which we hope to select a parsimonious set for the efficient prediction of a response variable. Least Angle Regression (LARS), a new model selection algorithm, is a useful and less greedy version of traditional forward selection methods. Three main properties are derived: (1) A simple modification of the LARS algorithm implements the Lasso, an attractive version of ordinary least squares that constrains the sum of the absolute regression coefficients; the LARS modification calculates all possible Lasso estimates for a given problem, using an order of magnitude less computer time than previous methods. (2) A different LARS modification efficiently implements Forward Stagewise linear regression, another promising new model selection method; this connection explains the similar numerical results previously observed for the Lasso and Stagewise, and helps us understand the properties of both methods, which are seen as constrained versions of the simpler LARS algorithm. (3) A simple approximation for the degrees of freedom of a LARS estimate is available, from which we derive a Cp estimate of prediction error; this allows a principled choice among the range of possible LARS estimates. LARS and its variants are computationally efficient: the paper describes a publicly available algorithm that requires only the same order of magnitude of computational effort as ordinary least squares applied to the full set of covariates.
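Scikit-learn ships implementations of both plain LARS and the Lasso-via-LARS modification described in the paper; a minimal usage sketch on synthetic data:

```python
import numpy as np
from sklearn.linear_model import Lars, LassoLars

rng = np.random.default_rng(6)
X = rng.standard_normal((100, 25))
beta = np.zeros(25)
beta[:3] = [2.0, -1.5, 1.0]                    # sparse ground truth
y = X @ beta + 0.1 * rng.standard_normal(100)

lars = Lars(n_nonzero_coefs=3).fit(X, y)       # forward-selection-style path
lasso = LassoLars(alpha=0.01).fit(X, y)        # Lasso estimates via LARS
print(np.flatnonzero(lars.coef_))              # selected covariates
print(np.flatnonzero(lasso.coef_))
```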

7,828 citations