
Signal Recovery from Random Measurements Via Orthogonal Matching Pursuit: The Gaussian Case

TL;DR: In this paper, a greedy algorithm called Orthogonal Matching Pursuit (OMP) was proposed to recover a signal with m nonzero entries in dimension d given O(m ln d) random linear measurements of that signal.
Abstract: This report demonstrates theoretically and empirically that a greedy algorithm called Orthogonal Matching Pursuit (OMP) can reliably recover a signal with m nonzero entries in dimension d given O(m ln d) random linear measurements of that signal. This is a massive improvement over previous results, which require O(m²) measurements. The new results for OMP are comparable with recent results for another approach called Basis Pursuit (BP). In some settings, the OMP algorithm is faster and easier to implement, so it is an attractive alternative to BP for signal recovery problems.
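To make the greedy procedure concrete, here is a minimal NumPy sketch of the OMP loop the abstract describes — not the paper's reference implementation; the names Phi, y, and m are ours:

```python
import numpy as np

def omp(Phi, y, m):
    """Minimal OMP: greedily pick the column of Phi most correlated with the
    residual, then re-fit by least squares on all selected columns."""
    residual = y.copy()
    support = []
    coef = np.zeros(0)
    for _ in range(m):
        j = int(np.argmax(np.abs(Phi.T @ residual)))   # greedy selection step
        support.append(j)
        # Least-squares refit on the chosen columns (the "orthogonal" step).
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[support] = coef
    return x_hat
```

With Phi drawn with i.i.d. Gaussian entries and on the order of m ln d rows, the result above says this loop recovers an m-sparse signal with high probability.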


Citations
Journal ArticleDOI
TL;DR: The aim is to provide a detailed analysis of current trends in CS, focusing on its advantages and disadvantages in compressing different biosignals and on its suitability for deployment in embedded hardware.
Abstract: This paper provides a comprehensive review of compressed sensing, or compressive sampling (CS), in bioelectric signal compression applications. The aim is to provide a detailed analysis of current trends in CS, focusing on its advantages and disadvantages in compressing different biosignals and on its suitability for deployment in embedded hardware. Performance metrics such as percent root-mean-squared difference (PRD), signal-to-noise ratio (SNR), and power consumption are used to objectively quantify the capabilities of CS. Furthermore, CS is compared to state-of-the-art compression algorithms in compressing electrocardiogram (ECG) and electroencephalography (EEG) signals, as examples of typical biosignals. The main technical challenges associated with CS are discussed along with predicted future trends.

153 citations
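The PRD metric this review relies on has a short closed form; a minimal sketch, assuming the common convention without mean subtraction (some variants normalize by the mean-removed signal instead):

```python
import numpy as np

def prd(x, x_hat):
    """Percent root-mean-squared difference between a signal x and its CS
    reconstruction x_hat; lower is better."""
    return 100.0 * np.linalg.norm(x - x_hat) / np.linalg.norm(x)
```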

Journal ArticleDOI
Yin Zhang
TL;DR: The purpose is to introduce an elementary and RIP-free treatment of the basic CS theory, to extend the current recoverability and stability results, and to substantiate a property called uniform recoverability of ℓ1-minimization; that is, recoverability is asymptotically identical for almost all random measurement matrices.
Abstract: Compressive sensing (CS) is an emerging methodology in computational signal processing that has recently attracted intensive research activities. At present, the basic CS theory includes recoverability and stability: the former quantifies the central fact that a sparse signal of length n can be exactly recovered from far fewer than n measurements via ℓ1-minimization or other recovery techniques, while the latter specifies the stability of a recovery technique in the presence of measurement errors and inexact sparsity. So far, most analyses in CS rely heavily on the Restricted Isometry Property (RIP) for matrices. In this paper, we present an alternative, non-RIP analysis for CS via ℓ1-minimization. Our purpose is three-fold: (a) to introduce an elementary and RIP-free treatment of the basic CS theory; (b) to extend the current recoverability and stability results so that prior knowledge can be utilized to enhance recovery via ℓ1-minimization; and (c) to substantiate a property called uniform recoverability of ℓ1-minimization; that is, for almost all random measurement matrices recoverability is asymptotically identical. With the aid of two classic results, the non-RIP approach enables us to quickly derive from scratch all basic results for the extended theory.

153 citations
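The two pillars the abstract names can be written compactly. In our notation (A the measurement matrix, b the measurements, ε a noise bound), recoverability concerns the equality-constrained program and stability its noise-aware relaxation:

$$ \min_{x} \|x\|_{1} \ \text{subject to} \ Ax = b, \qquad \min_{x} \|x\|_{1} \ \text{subject to} \ \|Ax - b\|_{2} \le \varepsilon . $$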

ReportDOI
01 Jun 2008
TL;DR: A framework for learning optimal dictionaries for simultaneous sparse signal representation and robust classification is introduced, addressing for the first time the explicit incorporation of both reconstruction and discrimination terms in the non-parametric dictionary learning and sparse coding energy.
Abstract: A framework for learning optimal dictionaries for simultaneous sparse signal representation and robust classification is introduced in this paper. The dictionary learning problem is solved by a class-dependent supervised simultaneous orthogonal matching pursuit, which learns the intra-class structure while increasing the inter-class discrimination, interleaved with an efficient dictionary update obtained via singular value decomposition. This framework addresses for the first time the explicit incorporation of both reconstruction and discrimination terms in the non-parametric dictionary learning and sparse coding energy. The work contributes to the understanding of the importance of learned sparse representations for signal classification, showing the relevance of learning dictionaries that are discriminative and at the same time reconstructive in order to achieve accurate and robust classification. The presentation of the underlying theory is complemented with examples on the standard MNIST and Caltech datasets, and with results on the use of the sparse representations obtained from the learned dictionaries as local patch descriptors, replacing commonly used experimental ones.

153 citations
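For orientation, here is a reconstruction-only skeleton of the alternation the abstract describes — OMP-based sparse coding interleaved with per-atom SVD updates — assuming scikit-learn for the coding step. The class-dependent discriminative term, which is the paper's actual contribution, is omitted:

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

def ksvd_style_update(D, Y, n_iter=10, sparsity=5):
    """Alternate OMP sparse coding of the signal matrix Y (columns = signals)
    with a rank-one SVD update of each dictionary atom (reconstruction only)."""
    for _ in range(n_iter):
        coder = OrthogonalMatchingPursuit(n_nonzero_coefs=sparsity,
                                          fit_intercept=False)
        X = coder.fit(D, Y).coef_.T              # codes: (n_atoms, n_signals)
        for k in range(D.shape[1]):
            users = np.flatnonzero(X[k])         # signals that use atom k
            if users.size == 0:
                continue
            # Residual of those signals with atom k's contribution removed.
            E = Y[:, users] - D @ X[:, users] + np.outer(D[:, k], X[k, users])
            U, s, Vt = np.linalg.svd(E, full_matrices=False)
            D[:, k], X[k, users] = U[:, 0], s[0] * Vt[0]
    return D, X
```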

Journal ArticleDOI
TL;DR: This work proposes a scheme that brings the required number of measurements for OMP closer to that of BP, and extends the idea of OMPα to another recovery scheme, OMP∞, which runs OMP until the signal residue vanishes.
Abstract: Orthogonal Matching Pursuit (OMP) and Basis Pursuit (BP) are two well-known recovery algorithms in compressed sensing. To recover a $d$-dimensional $m$-sparse signal with high probability, OMP needs $O(m\ln{d})$ measurements, whereas BP needs only $O\left(m\ln{{d\over m}}\right)$ measurements. In contrast, OMP is the practically more appealing algorithm due to its superior execution speed. In this work, we propose a scheme that brings the required number of measurements for OMP closer to that of BP. We term this scheme ${\rm OMP}_{\alpha}$; it runs OMP for $(m+\lfloor\alpha{m}\rfloor)$ iterations instead of $m$ iterations, for a chosen value of $\alpha\in[0,1]$. It is shown that ${\rm OMP}_{\alpha}$ guarantees high-probability signal recovery with $O\left(m\ln{{d\over\lfloor\alpha{m}\rfloor+1}}\right)$ measurements. Another limitation of OMP, unlike BP, is that it requires knowledge of $m$. To overcome this limitation, we extend the idea of ${\rm OMP}_{\alpha}$ to another recovery scheme, ${\rm OMP}_{\infty}$, which runs OMP until the signal residue vanishes. It is shown that ${\rm OMP}_{\infty}$ can achieve recovery close to that of $\ell_{0}$-norm minimization without any knowledge of $m$, like BP.

152 citations
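A sketch of the two variants, building on the plain OMP loop shown earlier (names and tolerance are ours): ${\rm OMP}_{\alpha}$ simply runs more iterations, and ${\rm OMP}_{\infty}$ replaces the iteration budget with a residual test:

```python
import numpy as np

def omp_alpha(Phi, y, m, alpha=0.5, tol=1e-9):
    """Run OMP for m + floor(alpha*m) iterations (OMP_alpha). Setting the
    iteration budget to Phi.shape[1] and relying on tol alone behaves like
    OMP_infinity: iterate until the residue vanishes, with m unknown."""
    iters = m + int(np.floor(alpha * m))
    support, residual = [], y.copy()
    coef = np.zeros(0)
    for _ in range(iters):
        if np.linalg.norm(residual) <= tol:      # OMP_infinity stopping rule
            break
        j = int(np.argmax(np.abs(Phi.T @ residual)))
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[support] = coef
    return x_hat
```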

Journal ArticleDOI
TL;DR: A new framework for image compressive sensing recovery via collaborative sparsity is proposed, which enforces local 2-D sparsity and nonlocal 3-Dsparsity simultaneously in an adaptive hybrid space-transform domain, thus substantially utilizing intrinsic sparsity of natural images and greatly confining the CS solution space.
Abstract: Compressive sensing (CS) has drawn considerable attention as a joint sampling and compression approach. Its theory shows that when a signal is sparse enough in some domain, it can be decoded from many fewer measurements than the Nyquist sampling theory suggests. One of the most challenging research problems in CS is therefore to find a domain in which a signal exhibits a high degree of sparsity and hence can be recovered faithfully. Most conventional CS recovery approaches, however, exploit a set of fixed bases (e.g., DCT, wavelet, and gradient domains) for the entirety of a signal; these ignore the nonstationarity of natural signals, cannot achieve a sufficiently high degree of sparsity, and thus yield poor rate-distortion performance. In this paper, we propose a new framework for image compressive sensing recovery via collaborative sparsity, which enforces local 2-D sparsity and nonlocal 3-D sparsity simultaneously in an adaptive hybrid space-transform domain, thus substantially exploiting the intrinsic sparsity of natural images and greatly confining the CS solution space. In addition, an efficient augmented-Lagrangian-based technique is developed to solve the resulting optimization problem. Experimental results on a wide range of natural images demonstrate the efficacy of the new CS recovery strategy.

151 citations
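The framework alternates several proximal-style updates; as an illustration of just the local 2-D sparsity piece, here is a single DCT-domain soft-thresholding step. The nonlocal 3-D grouping and the augmented-Lagrangian multiplier updates, which carry the paper's main contribution, are not shown:

```python
import numpy as np
from scipy.fft import dctn, idctn

def local_sparsity_step(img, lam):
    """Soft-threshold the 2-D DCT coefficients of an image estimate: the
    proximal operator of lam * ||DCT(img)||_1, promoting local 2-D sparsity."""
    C = dctn(img, norm='ortho')
    C = np.sign(C) * np.maximum(np.abs(C) - lam, 0.0)   # shrinkage
    return idctn(C, norm='ortho')
```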

References
Book
01 Jan 1983

34,729 citations

Book
D.L. Donoho
01 Jan 2004
TL;DR: It is possible to design n = O(N log(m)) nonadaptive measurements allowing reconstruction with accuracy comparable to that attainable with direct knowledge of the N most important coefficients, and a good approximation to those N important coefficients can be extracted from the n measurements by solving a linear program, known in signal processing as Basis Pursuit.
Abstract: Suppose x is an unknown vector in ℝ^m (a digital image or signal); we plan to measure n general linear functionals of x and then reconstruct. If x is known to be compressible by transform coding with a known transform, and we reconstruct via the nonlinear procedure defined here, the number of measurements n can be dramatically smaller than the size m. Thus, certain natural classes of images with m pixels need only n = O(m^{1/4} log^{5/2}(m)) nonadaptive nonpixel samples for faithful recovery, as opposed to the usual m pixel samples. More specifically, suppose x has a sparse representation in some orthonormal basis (e.g., wavelet, Fourier) or tight frame (e.g., curvelet, Gabor), so that the coefficients belong to an ℓp ball for 0 < p ≤ 1.

18,609 citations
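In symbols, with Φ the n×m measurement matrix, Ψ the sparsifying basis, and x₀ the unknown signal, the nonlinear reconstruction the abstract refers to can be stated as the following program (a standard formulation of the method, not a quote from the paper):

$$ \hat{x} = \arg\min_{x} \|\Psi^{T} x\|_{1} \quad \text{subject to} \quad \Phi x = \Phi x_{0} , $$

which reduces to a linear program of the Basis Pursuit type discussed in the next reference.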

Journal ArticleDOI
TL;DR: Basis Pursuit (BP) is a principle for decomposing a signal into an "optimal" superposition of dictionary elements, where optimal means having the smallest l1 norm of coefficients among all such decompositions.
Abstract: The time-frequency and time-scale communities have recently developed a large number of overcomplete waveform dictionaries --- stationary wavelets, wavelet packets, cosine packets, chirplets, and warplets, to name a few. Decomposition into overcomplete systems is not unique, and several methods for decomposition have been proposed, including the method of frames (MOF), matching pursuit (MP), and, for special dictionaries, the best orthogonal basis (BOB). Basis Pursuit (BP) is a principle for decomposing a signal into an "optimal" superposition of dictionary elements, where optimal means having the smallest l1 norm of coefficients among all such decompositions. We give examples exhibiting several advantages over MOF, MP, and BOB, including better sparsity and superresolution. BP has interesting relations to ideas in areas as diverse as ill-posed problems, abstract harmonic analysis, total variation denoising, and multiscale edge denoising. BP in highly overcomplete dictionaries leads to large-scale optimization problems. With signals of length 8192 and a wavelet packet dictionary, one gets an equivalent linear program of size 8192 by 212,992. Such problems can be attacked successfully only because of recent advances in linear programming by interior-point methods. We obtain reasonable success with a primal-dual logarithmic barrier method and conjugate-gradient solver.

9,950 citations
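Because the l1 objective splits into positive and negative parts, BP is exactly a linear program. A small SciPy sketch (variable names are ours, and it uses the modern HiGHS solver rather than the interior-point methods mentioned above):

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(Phi, y):
    """min ||x||_1 s.t. Phi @ x = y, via the standard split x = u - v with
    u, v >= 0, which turns the l1 objective into a linear one."""
    d = Phi.shape[1]
    c = np.ones(2 * d)                    # sum(u) + sum(v) = ||x||_1
    A_eq = np.hstack([Phi, -Phi])         # Phi @ (u - v) = y
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
    u, v = res.x[:d], res.x[d:]
    return u - v
```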

Journal ArticleDOI
TL;DR: The authors introduce an algorithm, called matching pursuit, that decomposes any signal into a linear expansion of waveforms that are selected from a redundant dictionary of functions, chosen in order to best match the signal structures.
Abstract: The authors introduce an algorithm, called matching pursuit, that decomposes any signal into a linear expansion of waveforms that are selected from a redundant dictionary of functions. These waveforms are chosen in order to best match the signal structures. Matching pursuits are general procedures to compute adaptive signal representations. With a dictionary of Gabor functions a matching pursuit defines an adaptive time-frequency transform. They derive a signal energy distribution in the time-frequency plane, which does not include interference terms, unlike Wigner and Cohen class distributions. A matching pursuit isolates the signal structures that are coherent with respect to a given dictionary. An application to pattern extraction from noisy signals is described. They compare a matching pursuit decomposition with a signal expansion over an optimized wave packet orthonormal basis, selected with the algorithm of Coifman and Wickerhauser (see IEEE Trans. Inform. Theory, vol. 38, Mar. 1992).

9,380 citations
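For contrast with OMP above, plain matching pursuit never re-solves a least-squares problem; it just peels off one atom's contribution per step. A minimal sketch, assuming unit-norm dictionary columns:

```python
import numpy as np

def matching_pursuit(D, signal, n_iter=50, tol=1e-9):
    """Greedy MP: accumulate the best-correlated (unit-norm) atom's coefficient
    and subtract its contribution; the residual norm never increases."""
    residual = signal.astype(float).copy()
    coeffs = np.zeros(D.shape[1])
    for _ in range(n_iter):
        corr = D.T @ residual
        j = int(np.argmax(np.abs(corr)))
        coeffs[j] += corr[j]
        residual -= corr[j] * D[:, j]
        if np.linalg.norm(residual) <= tol:
            break
    return coeffs
```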

Journal ArticleDOI
TL;DR: A publicly available algorithm that requires only the same order of magnitude of computational effort as ordinary least squares applied to the full set of covariates is described.
Abstract: The purpose of model selection algorithms such as All Subsets, Forward Selection and Backward Elimination is to choose a linear model on the basis of the same set of data to which the model will be applied. Typically we have available a large collection of possible covariates from which we hope to select a parsimonious set for the efficient prediction of a response variable. Least Angle Regression (LARS), a new model selection algorithm, is a useful and less greedy version of traditional forward selection methods. Three main properties are derived: (1) A simple modification of the LARS algorithm implements the Lasso, an attractive version of ordinary least squares that constrains the sum of the absolute regression coefficients; the LARS modification calculates all possible Lasso estimates for a given problem, using an order of magnitude less computer time than previous methods. (2) A different LARS modification efficiently implements Forward Stagewise linear regression, another promising new model selection method; this connection explains the similar numerical results previously observed for the Lasso and Stagewise, and helps us understand the properties of both methods, which are seen as constrained versions of the simpler LARS algorithm. (3) A simple approximation for the degrees of freedom of a LARS estimate is available, from which we derive a Cp estimate of prediction error; this allows a principled choice among the range of possible LARS estimates. LARS and its variants are computationally efficient: the paper describes a publicly available algorithm that requires only the same order of magnitude of computational effort as ordinary least squares applied to the full set of covariates.

7,828 citations
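As a usage illustration (scikit-learn's implementation, not the paper's own publicly available code), the Lasso modification described in property (1) is a one-flag change to the LARS path computation; the toy data below are ours:

```python
import numpy as np
from sklearn.linear_model import lars_path

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 20))
beta = np.zeros(20)
beta[:3] = [4.0, -2.0, 1.5]                    # three true covariates
y = X @ beta + 0.1 * rng.standard_normal(100)

# method="lar" gives plain LARS; method="lasso" computes all Lasso solutions
# along the same path, as the abstract describes.
alphas, active, coefs = lars_path(X, y, method="lasso")
print(active)   # order in which covariates enter the model
```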