
Signal Recovery from Random Measurements Via Orthogonal Matching Pursuit: The Gaussian Case

TL;DR: In this paper, the authors demonstrate that a greedy algorithm called Orthogonal Matching Pursuit (OMP) can reliably recover a signal with m nonzero entries in dimension d given O(m ln d) random linear measurements of that signal.
Abstract: This report demonstrates theoretically and empirically that a greedy algorithm called Orthogonal Matching Pursuit (OMP) can reliably recover a signal with m nonzero entries in dimension d given O(m ln d) random linear measurements of that signal. This is a massive improvement over previous results, which require O(m^2) measurements. The new results for OMP are comparable with recent results for another approach called Basis Pursuit (BP). In some settings, the OMP algorithm is faster and easier to implement, so it is an attractive alternative to BP for signal recovery problems.
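For readers who want the algorithm itself, here is a minimal NumPy sketch of the OMP loop the abstract describes; the variable names and toy problem sizes are illustrative choices, not taken from the paper:

```python
import numpy as np

def omp(Phi, y, m):
    """Recover an m-sparse signal from y = Phi @ x by Orthogonal Matching Pursuit."""
    d = Phi.shape[1]
    residual = y.copy()
    support = []
    for _ in range(m):
        # Greedy selection: the column most correlated with the residual.
        j = int(np.argmax(np.abs(Phi.T @ residual)))
        support.append(j)
        # Orthogonal step: re-fit all selected coefficients by least squares.
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat = np.zeros(d)
    x_hat[support] = coef
    return x_hat

# Toy run: dimension d = 256, sparsity m = 8, and n on the order of m ln d
# Gaussian measurements, matching the regime the abstract describes.
rng = np.random.default_rng(0)
d, m = 256, 8
n = 4 * int(m * np.log(d))
Phi = rng.standard_normal((n, d)) / np.sqrt(n)
x = np.zeros(d)
x[rng.choice(d, size=m, replace=False)] = rng.standard_normal(m)
print(np.allclose(omp(Phi, Phi @ x, m), x, atol=1e-6))   # True with high probability
```

The least-squares re-fit over the whole selected support is the "orthogonal" part of OMP; plain matching pursuit (Mallat and Zhang, in the references below) skips it.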


Citations
Journal Article
TL;DR: In this paper, the authors use the space-variant point-spread functions that arise when a point source in the field of view interacts with a bare image sensor to image simple objects displayed on a discrete LED array as well as on an LCD screen.
Abstract: Photography usually requires optics in conjunction with a recording device (an image sensor). Eliminating the optics could lead to new form factors for cameras. Here, we report a simple demonstration of imaging using a bare CMOS sensor that utilizes computation. The technique relies on the space variant point-spread functions resulting from the interaction of a point source in the field of view with the image sensor. These space-variant point-spread functions are combined with a reconstruction algorithm in order to image simple objects displayed on a discrete LED array as well as on an LCD screen. We extended the approach to video imaging. Finally, we performed experiments to analyze the parametric impact of the object distance. Improving the sensor designs and reconstruction algorithms can lead to useful cameras without optics.
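As a rough illustration of the reconstruction step (not the authors' code): if a calibration matrix A stores, column by column, the flattened sensor response to each LED, recovering the displayed pattern reduces to regularized least squares. The sizes below and the random stand-in for A are assumptions made for the sketch:

```python
import numpy as np

# Hypothetical calibration: column j of A is the flattened sensor image
# produced by lighting LED j alone (random stand-in data here).
rng = np.random.default_rng(3)
n_pixels, n_leds = 4096, 64
A = rng.standard_normal((n_pixels, n_leds))

x_true = (rng.random(n_leds) > 0.7).astype(float)        # displayed LED pattern
b = A @ x_true + 0.01 * rng.standard_normal(n_pixels)    # raw sensor reading

# Tikhonov-regularized least squares: argmin ||A x - b||^2 + lam ||x||^2
lam = 1e-2
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(n_leds), A.T @ b)
print(np.round(x_hat[:8], 2))   # estimate of the first few LED intensities
```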

37 citations

Journal Article
Y. Liu, M.Y. Wu, Shunjun Wu
TL;DR: A fast OMP algorithm for 2D sparse signals of this kind is presented and applied to 2D angle estimation in MIMO radar; simulation results verify reconstruction quality close to that of standard OMP with greatly improved computational efficiency.
Abstract: A high-dimensional sparse signal usually must be realigned as a long 1D signal to be recovered by orthogonal matching pursuit (OMP), an efficient algorithm for compressed sensing. Clearly, however, the realigned long signal results in a large amount of computation in OMP. If each atom in the dictionary can be expressed as the Kronecker product of two vectors, it is possible to decompose this dictionary into two sub-dictionaries. By exploiting this property, a fast OMP algorithm for 2D sparse signals of this kind is presented and applied to 2D angle estimation in MIMO radar. Simulation results verify reconstruction quality close to that of standard OMP and greatly improved computational efficiency.
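The speedup rests on a standard Kronecker identity: correlating measurements with every atom of a dictionary D = A ⊗ B never requires forming D, since (A ⊗ B)^T vec(S) = vec(B^T S A). A small NumPy check of this identity (the matrices are random stand-ins, not the radar dictionaries from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((32, 64))   # sub-dictionary along one dimension
B = rng.standard_normal((32, 64))   # sub-dictionary along the other
S = rng.standard_normal((32, 32))   # 2D measurement matrix

# Naive correlation step on the realigned 1D problem: build the full
# Kronecker dictionary (1024 x 4096 here) and multiply.
D = np.kron(A, B)
slow = D.T @ S.flatten(order='F')

# Fast equivalent: (A kron B)^T vec(S) = vec(B^T S A), i.e. two small
# matrix products instead of one huge one.
fast = (B.T @ S @ A).flatten(order='F')

print(np.allclose(slow, fast))   # True
```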

37 citations

Journal Article
TL;DR: This paper evaluates the impact of the compressed sensing paradigm in a cardiac monitoring WSN, discussing the implications for data reliability and energy management, and the improvements accomplished by in-network processing.
Abstract: Mobile solutions for patient cardiac monitoring are viewed with growing interest, and improvements on current implementations are frequently reported, with wireless, and in particular wearable, devices promising to achieve ubiquity. However, due to unavoidable power consumption limitations, the amount of data acquired, processed, and transmitted must be reduced, which is counterproductive for the quality of the information produced. Implementing compressed sensing in wireless sensor networks (WSNs) promises power savings on the devices with only a minor impact on signal quality. Several cardiac signals have a sparse representation in some wavelet transformations. The compressed sensing paradigm states that such signals can be recovered from a few projections onto another basis that is incoherent with the first. This paper evaluates the impact of the compressed sensing paradigm in a cardiac monitoring WSN, discussing the implications for data reliability, energy management, and the improvements accomplished by in-network processing.
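A toy sketch of what the paradigm implies on the sensor node; the sizes and the ±1 measurement ensemble are illustrative assumptions, not the paper's design. The node computes and transmits n << d random projections, while the expensive sparse recovery runs off-node:

```python
import numpy as np

rng = np.random.default_rng(5)
d, n = 512, 128                      # samples per signal window vs. values transmitted
x = rng.standard_normal(d)           # stand-in for one acquired ECG window

# A +/-1 measurement matrix needs only additions and subtractions on-node.
Phi = rng.choice([-1.0, 1.0], size=(n, d))
y = Phi @ x                          # n << d values sent over the radio
print(y.shape)                       # (128,)
```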

37 citations

Journal Article
TL;DR: Greedy recovery algorithms using the proposed MMCA outperform a random measurement matrix and the algorithms introduced by Elad, Vahid, Hang, and Xu, and experiments on real temperature data show the superiority of MMCA over other existing measurement matrix optimization algorithms.
Abstract: A simple but efficient measurement matrix construction algorithm (MMCA) within the compressive sensing (CS) framework is introduced. In the CS framework, a smaller coherence between the measurement matrix Φ and the sparse matrix (basis) Ψ leads to better signal reconstruction performance. In this paper, we achieve this goal by iteratively applying shrinkage and alternating projection. As a result, the coherence among the columns of the optimized measurement matrix Φ and the fixed sparse matrix Ψ can be decreased greatly, even close to the Welch bound. Extensive experiments have been conducted to compare the performance of the proposed algorithm with that of state-of-the-art algorithms. We conclude that the recovery performance of greedy algorithms [e.g., orthogonal matching pursuit (OMP) and regularized OMP] using the proposed MMCA outperforms that of a random measurement matrix and of the algorithms introduced by Elad, Vahid, Hang, and Xu. In addition, real temperature data have been gathered and reconstructed in wireless sensor networks; the experimental results again show the superiority of MMCA over other existing measurement matrix optimization algorithms.
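As a rough illustration of the quantity MMCA optimizes (this is not the authors' algorithm, only the figure of merit): the sketch below measures the mutual coherence of the product Φ Ψ for a random Gaussian Φ and compares it with the Welch bound; the sizes are arbitrary:

```python
import numpy as np

def mutual_coherence(M):
    """Largest absolute inner product between distinct normalized columns."""
    Mn = M / np.linalg.norm(M, axis=0)
    G = np.abs(Mn.T @ Mn)
    np.fill_diagonal(G, 0.0)
    return G.max()

rng = np.random.default_rng(4)
n, d = 30, 120
Phi = rng.standard_normal((n, d))                     # random measurement matrix
Psi = np.linalg.qr(rng.standard_normal((d, d)))[0]    # fixed orthonormal sparse basis

mu = mutual_coherence(Phi @ Psi)
welch = np.sqrt((d - n) / (n * (d - 1)))              # lower bound on coherence
print(f"coherence: {mu:.3f}, Welch bound: {welch:.3f}")   # gap MMCA tries to close
```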

37 citations

Posted Content
TL;DR: The result is that strictly-sparse signals can be reconstructed efficiently with high-probability using a constant oversampling ratio (i.e., when the number of measurements scales linearly with the sparsity of the signal).
Abstract: This paper considers the performance of $(j,k)$-regular low-density parity-check (LDPC) codes with message-passing (MP) decoding algorithms in the high-rate regime. In particular, we derive the high-rate scaling law for MP decoding of LDPC codes on the binary erasure channel (BEC) and the $q$-ary symmetric channel ($q$-SC). For the BEC, the density evolution (DE) threshold of iterative decoding scales like $\Theta(k^{-1})$ and the critical stopping ratio scales like $\Theta(k^{-j/(j-2)})$. For the $q$-SC, the DE threshold of verification decoding depends on the details of the decoder and scales like $\Theta(k^{-1})$ for one decoder. Using the fact that coding over large finite alphabets is very similar to coding over the real numbers, the analysis of verification decoding is also extended to the compressed sensing (CS) of strictly-sparse signals. A DE-based approach is used to analyze CS systems with randomized-reconstruction guarantees. This leads to the result that strictly-sparse signals can be reconstructed efficiently with high probability using a constant oversampling ratio (i.e., when the number of measurements scales linearly with the sparsity of the signal). A stopping-set based approach is also used to obtain stronger (e.g., uniform-in-probability) reconstruction guarantees.
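For the BEC case, the scaling comes from the standard density-evolution recursion for a $(j,k)$-regular ensemble, $x_{t+1} = \epsilon (1-(1-x_t)^{k-1})^{j-1}$. A short sketch that locates the threshold by bisection (iteration counts and tolerances are my choices):

```python
def de_converges(eps, j, k, iters=2000, tol=1e-12):
    """Run BEC density evolution for a (j,k)-regular LDPC ensemble."""
    x = eps
    for _ in range(iters):
        x = eps * (1.0 - (1.0 - x) ** (k - 1)) ** (j - 1)
        if x < tol:
            return True       # erasure fraction driven to zero
    return False

def de_threshold(j, k, steps=40):
    """Bisect for the largest channel erasure probability that still converges."""
    lo, hi = 0.0, 1.0
    for _ in range(steps):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if de_converges(mid, j, k) else (lo, mid)
    return lo

# (3,6)-regular ensemble: prints a value close to the known threshold 0.4294.
# Fixing j and growing k, the threshold decays like Theta(1/k), as in the paper.
print(f"{de_threshold(3, 6):.4f}")
```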

37 citations

References
Book
01 Jan 1983

34,729 citations

Book
D.L. Donoho
01 Jan 2004
TL;DR: It is possible to design n = O(N log(m)) nonadaptive measurements allowing reconstruction with accuracy comparable to that attainable with direct knowledge of the N most important coefficients; a good approximation to those coefficients is extracted from the n measurements by solving a linear program (Basis Pursuit in signal processing).
Abstract: Suppose x is an unknown vector in R^m (a digital image or signal); we plan to measure n general linear functionals of x and then reconstruct. If x is known to be compressible by transform coding with a known transform, and we reconstruct via the nonlinear procedure defined here, the number of measurements n can be dramatically smaller than the size m. Thus, certain natural classes of images with m pixels need only n = O(m^{1/4} log^{5/2}(m)) nonadaptive nonpixel samples for faithful recovery, as opposed to the usual m pixel samples. More specifically, suppose x has a sparse representation in some orthonormal basis (e.g., wavelet, Fourier) or tight frame (e.g., curvelet, Gabor), so that the coefficients belong to an l_p ball for 0 < p <= 1.

18,609 citations

Journal Article
TL;DR: Basis Pursuit (BP) is a principle for decomposing a signal into an "optimal" superposition of dictionary elements, where optimal means having the smallest l1 norm of coefficients among all such decompositions.
Abstract: The time-frequency and time-scale communities have recently developed a large number of overcomplete waveform dictionaries: stationary wavelets, wavelet packets, cosine packets, chirplets, and warplets, to name a few. Decomposition into overcomplete systems is not unique, and several methods for decomposition have been proposed, including the method of frames (MOF), matching pursuit (MP), and, for special dictionaries, the best orthogonal basis (BOB). Basis Pursuit (BP) is a principle for decomposing a signal into an "optimal" superposition of dictionary elements, where optimal means having the smallest l1 norm of coefficients among all such decompositions. We give examples exhibiting several advantages over MOF, MP, and BOB, including better sparsity and superresolution. BP has interesting relations to ideas in areas as diverse as ill-posed problems, abstract harmonic analysis, total variation denoising, and multiscale edge denoising. BP in highly overcomplete dictionaries leads to large-scale optimization problems. With signals of length 8192 and a wavelet packet dictionary, one gets an equivalent linear program of size 8192 by 212,992. Such problems can be attacked successfully only because of recent advances in linear programming by interior-point methods. We obtain reasonable success with a primal-dual logarithmic barrier method and conjugate-gradient solver.
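A minimal sketch of BP as a linear program (the paper's solver is an interior-point method with conjugate gradients; SciPy's HiGHS backend stands in here). Splitting c = u - v with u, v >= 0 turns min ||c||_1 subject to Phi c = s into a standard-form LP:

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(Phi, s):
    """min ||c||_1 subject to Phi @ c = s, via the split c = u - v with u, v >= 0."""
    d = Phi.shape[1]
    res = linprog(c=np.ones(2 * d),
                  A_eq=np.hstack([Phi, -Phi]), b_eq=s,
                  bounds=(0, None), method='highs')
    return res.x[:d] - res.x[d:]

# Toy run: a 3-sparse coefficient vector is recovered exactly.
rng = np.random.default_rng(6)
n, d = 40, 128
Phi = rng.standard_normal((n, d))
c_true = np.zeros(d)
c_true[[3, 40, 77]] = [2.0, -1.0, 0.5]
c_hat = basis_pursuit(Phi, Phi @ c_true)
print(np.allclose(c_hat, c_true, atol=1e-4))   # True
```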

9,950 citations

Journal Article
TL;DR: The authors introduce an algorithm, called matching pursuit, that decomposes any signal into a linear expansion of waveforms that are selected from a redundant dictionary of functions, chosen in order to best match the signal structures.
Abstract: The authors introduce an algorithm, called matching pursuit, that decomposes any signal into a linear expansion of waveforms selected from a redundant dictionary of functions. These waveforms are chosen to best match the signal structures. Matching pursuits are general procedures for computing adaptive signal representations. With a dictionary of Gabor functions, a matching pursuit defines an adaptive time-frequency transform. The authors derive a signal energy distribution in the time-frequency plane which, unlike Wigner and Cohen class distributions, does not include interference terms. A matching pursuit isolates the signal structures that are coherent with respect to a given dictionary. An application to pattern extraction from noisy signals is described. The authors compare a matching pursuit decomposition with a signal expansion over an optimized wavepacket orthonormal basis, selected with the algorithm of Coifman and Wickerhauser (IEEE Trans. Inform. Theory, vol. 38, Mar. 1992).
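A minimal sketch of the MP iteration described here (names are mine, not from the paper). Unlike OMP, previously chosen coefficients are never re-fit, so the residual stays orthogonal only to the most recently selected atom:

```python
import numpy as np

def matching_pursuit(D, s, iters):
    """Greedy MP over a dictionary D with unit-norm columns.

    At each step, project the residual onto the best-matching atom and
    subtract that component; the same atom may be selected more than once.
    """
    residual = s.astype(float).copy()
    coef = np.zeros(D.shape[1])
    for _ in range(iters):
        corr = D.T @ residual              # correlations with every atom
        j = int(np.argmax(np.abs(corr)))   # best-matching atom
        coef[j] += corr[j]                 # accumulate its scalar projection
        residual -= corr[j] * D[:, j]
    return coef, residual                  # s == D @ coef + residual (up to rounding)
```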

9,380 citations

Journal Article
TL;DR: A publicly available algorithm that requires only the same order of magnitude of computational effort as ordinary least squares applied to the full set of covariates is described.
Abstract: The purpose of model selection algorithms such as All Subsets, Forward Selection and Backward Elimination is to choose a linear model on the basis of the same set of data to which the model will be applied. Typically we have available a large collection of possible covariates from which we hope to select a parsimonious set for the efficient prediction of a response variable. Least Angle Regression (LARS), a new model selection algorithm, is a useful and less greedy version of traditional forward selection methods. Three main properties are derived: (1) A simple modification of the LARS algorithm implements the Lasso, an attractive version of ordinary least squares that constrains the sum of the absolute regression coefficients; the LARS modification calculates all possible Lasso estimates for a given problem, using an order of magnitude less computer time than previous methods. (2) A different LARS modification efficiently implements Forward Stagewise linear regression, another promising new model selection method; this connection explains the similar numerical results previously observed for the Lasso and Stagewise, and helps us understand the properties of both methods, which are seen as constrained versions of the simpler LARS algorithm. (3) A simple approximation for the degrees of freedom of a LARS estimate is available, from which we derive a Cp estimate of prediction error; this allows a principled choice among the range of possible LARS estimates. LARS and its variants are computationally efficient: the paper describes a publicly available algorithm that requires only the same order of magnitude of computational effort as ordinary least squares applied to the full set of covariates.
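A usage sketch with scikit-learn's lars_path, which implements the Lasso-path modification described in property (1); the data below are synthetic:

```python
import numpy as np
from sklearn.linear_model import lars_path

rng = np.random.default_rng(2)
X = rng.standard_normal((100, 20))              # candidate covariates
beta = np.zeros(20)
beta[[0, 1, 2]] = [4.0, -2.0, 1.5]              # sparse ground truth
y = X @ beta + 0.1 * rng.standard_normal(100)   # response

# method='lasso' computes every Lasso solution along the regularization
# path in a single forward pass.
alphas, active, coefs = lars_path(X, y, method='lasso')
print(active[:3])    # indices 0, 1, 2 typically enter the path first
print(coefs.shape)   # (n_features, n_alphas): coefficients along the path
```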

7,828 citations