
Signal Recovery from Random Measurements Via Orthogonal Matching Pursuit: The Gaussian Case

TL;DR: In this paper, it is shown that a greedy algorithm called Orthogonal Matching Pursuit (OMP) can reliably recover a signal with m nonzero entries in dimension d given O(m ln d) random linear measurements of that signal.
Abstract: This report demonstrates theoretically and empirically that a greedy algorithm called Orthogonal Matching Pursuit (OMP) can reliably recover a signal with m nonzero entries in dimension d given O(m ln d) random linear measurements of that signal. This is a massive improvement over previous results, which require O(m^2) measurements. The new results for OMP are comparable with recent results for another approach called Basis Pursuit (BP). In some settings, the OMP algorithm is faster and easier to implement, so it is an attractive alternative to BP for signal recovery problems.
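For concreteness, here is a minimal sketch of the OMP iteration the abstract describes: greedily pick the column most correlated with the residual, then re-fit by least squares on the selected columns. The names (Phi, y, sparsity) and the toy problem sizes are our own illustration, not the paper's code.

import numpy as np

def omp(Phi, y, sparsity):
    """Recover a `sparsity`-sparse x from y = Phi @ x (greedy sketch)."""
    residual = y.copy()
    support = []
    x_hat = np.zeros(Phi.shape[1])
    for _ in range(sparsity):
        # Greedy step: column most correlated with the current residual.
        j = int(np.argmax(np.abs(Phi.T @ residual)))
        if j not in support:
            support.append(j)
        # Orthogonal step: least-squares re-fit on all selected columns.
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat[support] = coef
    return x_hat

# Toy check: m = 4 nonzeros in dimension d = 256, n on the order of m ln d.
rng = np.random.default_rng(0)
d, m = 256, 4
n = 4 * m * int(np.log(d))            # heuristic constant, illustration only
Phi = rng.standard_normal((n, d)) / np.sqrt(n)
x = np.zeros(d)
x[rng.choice(d, m, replace=False)] = rng.standard_normal(m)
print(np.allclose(omp(Phi, Phi @ x, m), x, atol=1e-6))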


Citations
Posted Content
TL;DR: A mechanism is designed that can handle the analysis of LASSO-type algorithms used for "solving" noisy under-determined systems; the results agree exactly with the corresponding results from the noiseless case.
Abstract: In this paper we consider solving \emph{noisy} under-determined systems of linear equations with sparse solutions. A noiseless equivalent attracted enormous attention in recent years, above all, due to work of \cite{CRT,CanRomTao06,DonohoPol} where it was shown in a statistical and large dimensional context that a sparse unknown vector (of sparsity proportional to the length of the vector) can be recovered from an under-determined system via a simple polynomial $\ell_1$-optimization algorithm. \cite{CanRomTao06} further established that even when the equations are \emph{noisy}, one can, through an SOCP noisy equivalent of $\ell_1$, obtain an approximate solution that is (in an $\ell_2$-norm sense) no further than a constant times the noise from the sparse unknown vector. In our recent works \cite{StojnicCSetam09,StojnicUpper10}, we created a powerful mechanism that helped us characterize exactly the performance of $\ell_1$ optimization in the noiseless case (as shown in \cite{StojnicEquiv10} and as it must be if the axioms of mathematics are well set, the results of \cite{StojnicCSetam09,StojnicUpper10} are in an absolute agreement with the corresponding exact ones from \cite{DonohoPol}). In this paper we design a mechanism, as powerful as those from \cite{StojnicCSetam09,StojnicUpper10}, that can handle the analysis of a LASSO type of algorithm (and many others) that can be (or typically are) used for "solving" noisy under-determined systems. Using the mechanism we then, in a statistical context, compute the exact worst-case $\ell_2$ norm distance between the unknown sparse vector and the approximate one obtained through such a LASSO. The obtained results match the corresponding exact ones obtained in \cite{BayMon10,DonMalMon10}. Moreover, as a by-product of our analysis framework we recognize existence of an SOCP type of algorithm that achieves the same performance.
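For orientation, the noisy model and the two estimators compared above can be written as follows (standard notation, ours rather than the paper's):

\[
y = Ax + v, \qquad A \in \mathbb{R}^{n \times d}, \; n < d, \; x \text{ sparse},
\]
\[
\hat{x}_{\mathrm{lasso}} = \arg\min_{\tilde{x}} \|y - A\tilde{x}\|_2 + \lambda \|\tilde{x}\|_1,
\qquad
\hat{x}_{\mathrm{socp}} = \arg\min_{\tilde{x} : \, \|y - A\tilde{x}\|_2 \le \epsilon} \|\tilde{x}\|_1.
\]

The quantity the paper characterizes exactly is the worst-case $\ell_2$ distance $\|\hat{x} - x\|_2$ between the sparse unknown vector and such an estimate.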

124 citations

Journal ArticleDOI
TL;DR: A fast and simple recovery algorithm that applies the proposed thresholding approach in the discrete cosine transform domain is proposed; results show that the proposed measurement matrix outperforms random matrices in reconstruction quality.
Abstract: Compressed sensing (CS) is a technique that is suitable for compressing and recovering signals having sparse representations in certain bases. CS has been widely used to optimize the measurement process of bandwidth and power constrained systems like wireless body sensor networks. The central issues with CS are the construction of the measurement matrix and the development of the recovery algorithm. In this paper, we propose a simple deterministic measurement matrix that facilitates hardware implementation. To control the sparsity level of the signals, we apply a thresholding approach in the discrete cosine transform domain. We propose a fast and simple recovery algorithm that performs the proposed thresholding approach. We validate the proposed method by compressing and recovering electrocardiogram and electromyogram signals. We implement the proposed measurement matrix on an MSP-EXP430G2 LaunchPad development board. The simulation and experimental results show that the proposed measurement matrix has a better performance in terms of reconstruction quality compared with random matrices. Depending on the compression ratio, it improves the signal-to-noise ratio of the reconstructed signals from 6 to 20 dB. The obtained results also confirm that the proposed recovery algorithm is, respectively, 23 and 12 times faster than the orthogonal matching pursuit (OMP) and stagewise OMP algorithms.
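As context for the sparsity-control step described above, here is a minimal sketch of thresholding in the DCT domain: transform, keep the largest coefficients, invert. The keep-ratio and function names are our assumptions, not the paper's code.

import numpy as np
from scipy.fftpack import dct, idct

def dct_threshold(signal, keep=0.1):
    """Keep only the largest `keep` fraction of DCT coefficients."""
    coeffs = dct(signal, norm='ortho')
    k = max(1, int(keep * coeffs.size))
    cutoff = np.sort(np.abs(coeffs))[-k]       # k-th largest magnitude
    coeffs[np.abs(coeffs) < cutoff] = 0.0      # enforce the sparsity level
    return idct(coeffs, norm='ortho')          # sparsified signal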

124 citations

Journal ArticleDOI
TL;DR: This paper proposes to perform and assess CS reconstruction of channel RF data using the recently introduced wave atoms [1] representation, which exhibits advantageous properties for sparsely representing such oscillatory patterns, and shows the superiority of the wave atom representation.

124 citations

Journal ArticleDOI
TL;DR: Inspired by algebraic geometry codes, this work introduces a new deterministic construction via algebraic curves over finite fields, which is a natural generalization of DeVore's construction using polynomials over finite fields.
Abstract: Compressed sensing is a sampling technique which provides a fundamentally new approach to data acquisition. Compared with traditional methods, compressed sensing makes full use of sparsity so that a sparse signal can be reconstructed from very few measurements. A central problem in compressed sensing is the construction of sensing matrices. While random sensing matrices have been studied intensively, only a few deterministic constructions are known. Inspired by algebraic geometry codes, we introduce a new deterministic construction via algebraic curves over finite fields, which is a natural generalization of DeVore's construction using polynomials over finite fields. The diversity of algebraic curves provides numerous choices for sensing matrices. By choosing appropriate curves, we are able to construct binary sensing matrices which are superior to DeVore's. We hope this connection between algebraic geometry and compressed sensing will provide a new point of view and stimulate further research in both areas.
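For reference, DeVore's polynomial construction that this work generalizes can be sketched in a few lines: for a prime p and degree bound r, each polynomial Q of degree at most r over F_p yields a binary column with a 1 in the row indexed by each point (x, Q(x)). Two distinct such polynomials agree on at most r points, so normalized columns have coherence at most r/p. The code below is our illustration of that construction, not from either paper.

import itertools
import numpy as np

def devore_matrix(p, r):
    """Binary p^2 x p^(r+1) sensing matrix from polynomials over F_p."""
    cols = []
    for coeffs in itertools.product(range(p), repeat=r + 1):
        col = np.zeros(p * p, dtype=np.uint8)
        for x in range(p):
            y = sum(c * x**k for k, c in enumerate(coeffs)) % p  # Q(x) in F_p
            col[x * p + y] = 1     # row index encodes the point (x, Q(x))
        cols.append(col)
    return np.stack(cols, axis=1)

Phi = devore_matrix(5, 2)          # 25 x 125 matrix with 5 ones per column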

123 citations

Proceedings ArticleDOI
07 Dec 2015
TL;DR: A bimodal tree for clustering is presented that exploits the antipodal invariance of the coarse-to-high-res mapping of natural image patches and scales to finer partitions of the underlying coarse patch space.
Abstract: This paper presents a fast, high-performance method for super resolution with external learning. The first contribution leading to the excellent performance is a bimodal tree for clustering, which successfully exploits the antipodal invariance of the coarse-to-high-res mapping of natural image patches and provides scalability to finer partitions of the underlying coarse patch space. During training, an ensemble of such bimodal trees is computed, providing different linearizations of the mapping. The second and main contribution is a fast inference algorithm, which selects the most suitable mapping function within the tree ensemble for each patch by adopting a Local Naive Bayes formulation. The experimental validation shows promising scalability properties that reflect the suitability of the proposed model, which may also be generalized to other tasks. The resulting method is more than an order of magnitude faster than the current state of the art and performs better both objectively and subjectively.
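As a loose sketch of the per-patch selection idea (our own simplification under stated assumptions, not the paper's Local Naive Bayes inference): antipodal invariance means a patch and its negation share the same mapping, so candidate mappings can be scored by absolute correlation with their cluster anchors.

import numpy as np

def apply_best_mapping(patch, anchors, mappings):
    """anchors: (k, d) unit vectors; mappings: (k, D, d) linear maps."""
    scores = np.abs(anchors @ patch)    # antipodally invariant matching score
    best = int(np.argmax(scores))
    return mappings[best] @ patch       # high-res estimate for this patch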

123 citations

References
Book
01 Jan 1983

34,729 citations

Book
D.L. Donoho
01 Jan 2004
TL;DR: It is possible to design $n = O(N\log(m))$ nonadaptive measurements allowing reconstruction with accuracy comparable to that attainable with direct knowledge of the N most important coefficients; a good approximation to those N important coefficients is extracted from the n measurements by solving a linear program (Basis Pursuit in signal processing).
Abstract: Suppose x is an unknown vector in $\mathbb{R}^m$ (a digital image or signal); we plan to measure n general linear functionals of x and then reconstruct. If x is known to be compressible by transform coding with a known transform, and we reconstruct via the nonlinear procedure defined here, the number of measurements n can be dramatically smaller than the size m. Thus, certain natural classes of images with m pixels need only $n = O(m^{1/4}\log^{5/2}(m))$ nonadaptive nonpixel samples for faithful recovery, as opposed to the usual m pixel samples. More specifically, suppose x has a sparse representation in some orthonormal basis (e.g., wavelet, Fourier) or tight frame (e.g., curvelet, Gabor), so the coefficients belong to an $\ell_p$ ball for $0 < p \le 1$.
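Concretely, the nonlinear reconstruction the TL;DR refers to is an $\ell_1$ minimization (Basis Pursuit) solvable as a linear program; in standard notation (ours, not the abstract's):

\[
\hat{x} = \arg\min_{\tilde{x}} \|\Psi^{T}\tilde{x}\|_1 \quad \text{subject to} \quad \Phi\tilde{x} = y,
\]

where $\Phi$ is the $n \times m$ measurement matrix, $y = \Phi x$ the $n$ measurements, and $\Psi$ the sparsifying basis or frame.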

18,609 citations

Journal ArticleDOI
TL;DR: Basis Pursuit (BP) is a principle for decomposing a signal into an "optimal" superposition of dictionary elements, where optimal means having the smallest $\ell_1$ norm of coefficients among all such decompositions.
Abstract: The time-frequency and time-scale communities have recently developed a large number of overcomplete waveform dictionaries: stationary wavelets, wavelet packets, cosine packets, chirplets, and warplets, to name a few. Decomposition into overcomplete systems is not unique, and several methods for decomposition have been proposed, including the method of frames (MOF), matching pursuit (MP), and, for special dictionaries, the best orthogonal basis (BOB). Basis Pursuit (BP) is a principle for decomposing a signal into an "optimal" superposition of dictionary elements, where optimal means having the smallest $\ell_1$ norm of coefficients among all such decompositions. We give examples exhibiting several advantages over MOF, MP, and BOB, including better sparsity and superresolution. BP has interesting relations to ideas in areas as diverse as ill-posed problems, abstract harmonic analysis, total variation denoising, and multiscale edge denoising. BP in highly overcomplete dictionaries leads to large-scale optimization problems. With signals of length 8192 and a wavelet packet dictionary, one gets an equivalent linear program of size 8192 by 212,992. Such problems can be attacked successfully only because of recent advances in linear programming by interior-point methods. We obtain reasonable success with a primal-dual logarithmic barrier method and conjugate-gradient solver.
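In symbols, BP solves

\[
\min_{\alpha} \|\alpha\|_1 \quad \text{subject to} \quad \Phi\alpha = x,
\]

and splitting $\alpha = u - v$ with $u, v \ge 0$ recasts it as the linear program

\[
\min_{u, v \ge 0} \mathbf{1}^{T}(u + v) \quad \text{subject to} \quad \Phi(u - v) = x.
\]

By our accounting, this doubling of variables is what produces the 8192-by-212,992 problem quoted above, for a length-8192 signal and a wavelet packet dictionary of 106,496 atoms.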

9,950 citations

Journal ArticleDOI
TL;DR: The authors introduce an algorithm, called matching pursuit, that decomposes any signal into a linear expansion of waveforms that are selected from a redundant dictionary of functions, chosen in order to best match the signal structures.
Abstract: The authors introduce an algorithm, called matching pursuit, that decomposes any signal into a linear expansion of waveforms that are selected from a redundant dictionary of functions. These waveforms are chosen in order to best match the signal structures. Matching pursuits are general procedures to compute adaptive signal representations. With a dictionary of Gabor functions, a matching pursuit defines an adaptive time-frequency transform. They derive a signal energy distribution in the time-frequency plane, which does not include interference terms, unlike Wigner and Cohen class distributions. A matching pursuit isolates the signal structures that are coherent with respect to a given dictionary. An application to pattern extraction from noisy signals is described. They compare a matching pursuit decomposition with a signal expansion over an optimized wavepacket orthonormal basis, selected with the algorithm of Coifman and Wickerhauser (see IEEE Trans. Informat. Theory, vol. 38, Mar. 1992).
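A minimal sketch of the matching pursuit loop, assuming a dictionary D with unit-norm columns (the names D, y, n_iters are ours): unlike the orthogonal variant sketched earlier, each atom's coefficient is fixed when the atom is selected, with no least-squares re-fit over the selected set.

import numpy as np

def matching_pursuit(D, y, n_iters):
    residual = y.copy()
    coeffs = np.zeros(D.shape[1])
    for _ in range(n_iters):
        j = int(np.argmax(np.abs(D.T @ residual)))   # best-matching atom
        c = D[:, j] @ residual                       # projection coefficient
        coeffs[j] += c
        residual = residual - c * D[:, j]            # peel the atom off
    return coeffs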

9,380 citations

Journal ArticleDOI
TL;DR: A publicly available algorithm that requires only the same order of magnitude of computational effort as ordinary least squares applied to the full set of covariates is described.
Abstract: The purpose of model selection algorithms such as All Subsets, Forward Selection and Backward Elimination is to choose a linear model on the basis of the same set of data to which the model will be applied. Typically we have available a large collection of possible covariates from which we hope to select a parsimonious set for the efficient prediction of a response variable. Least Angle Regression (LARS), a new model selection algorithm, is a useful and less greedy version of traditional forward selection methods. Three main properties are derived: (1) A simple modification of the LARS algorithm implements the Lasso, an attractive version of ordinary least squares that constrains the sum of the absolute regression coefficients; the LARS modification calculates all possible Lasso estimates for a given problem, using an order of magnitude less computer time than previous methods. (2) A different LARS modification efficiently implements Forward Stagewise linear regression, another promising new model selection method; this connection explains the similar numerical results previously observed for the Lasso and Stagewise, and helps us understand the properties of both methods, which are seen as constrained versions of the simpler LARS algorithm. (3) A simple approximation for the degrees of freedom of a LARS estimate is available, from which we derive a Cp estimate of prediction error; this allows a principled choice among the range of possible LARS estimates. LARS and its variants are computationally efficient: the paper describes a publicly available algorithm that requires only the same order of magnitude of computational effort as ordinary least squares applied to the full set of covariates.
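LARS and its Lasso modification are available in scikit-learn; here is a short usage sketch on synthetic data (the lars_path call is a real scikit-learn function; the data and variable names are illustrative):

import numpy as np
from sklearn.linear_model import lars_path

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 20))
beta = np.zeros(20)
beta[:3] = [2.0, -1.5, 1.0]                 # sparse ground truth
y = X @ beta + 0.1 * rng.standard_normal(100)

# method='lasso' computes all Lasso solutions along the path in one pass.
alphas, active, coefs = lars_path(X, y, method='lasso')
print(active)        # order in which covariates entered the model
print(coefs[:, -1])  # least-regularized end of the path, close to beta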

7,828 citations