
Signal Recovery from Random Measurements Via Orthogonal Matching Pursuit: The Gaussian Case

TL;DR: In this paper, a greedy algorithm called Orthogonal Matching Pursuit (OMP) was proposed to recover a signal with m nonzero entries in dimension d given O(m ln d) random linear measurements of that signal.
Abstract: This report demonstrates theoretically and empirically that a greedy algorithm called Orthogonal Matching Pursuit (OMP) can reliably recover a signal with m nonzero entries in dimension d given O(m ln d) random linear measurements of that signal. This is a massive improvement over previous results, which require O(m^2) measurements. The new results for OMP are comparable with recent results for another approach called Basis Pursuit (BP). In some settings, the OMP algorithm is faster and easier to implement, so it is an attractive alternative to BP for signal recovery problems.
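
As a rough illustration of the algorithm the abstract describes, here is a minimal numpy sketch of the OMP loop: greedily pick the column most correlated with the residual, then re-fit by least squares on the selected support. Names (Phi, m, d) and the measurement count are illustrative, not the paper's exact constants.

```python
import numpy as np

def omp(Phi, y, m):
    """OMP sketch: m rounds of greedy column selection plus a
    least-squares re-fit (the orthogonalization step) each round."""
    residual = y.copy()
    support = []
    for _ in range(m):
        # Column most correlated with the current residual.
        j = int(np.argmax(np.abs(Phi.T @ residual)))
        support.append(j)
        # Re-estimate all coefficients on the chosen support.
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x = np.zeros(Phi.shape[1])
    x[support] = coef
    return x

# Recover an m-sparse signal in dimension d from ~ m ln(d) Gaussian measurements.
rng = np.random.default_rng(0)
d, m = 1024, 10
n = int(4 * m * np.log(d))                      # on the O(m ln d) scale
Phi = rng.standard_normal((n, d)) / np.sqrt(n)  # random measurement matrix
x_true = np.zeros(d)
x_true[rng.choice(d, size=m, replace=False)] = rng.standard_normal(m)
x_hat = omp(Phi, Phi @ x_true, m)
```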


Citations
Journal ArticleDOI
Jianwei Ma
TL;DR: A novel compressed sensing (CS) theory is introduced for surface metrology to reduce data acquisition, and a geometric-wavelet-based recovery algorithm is proposed for scratched and textural surfaces, solving a convex optimization problem with sparsity constraints given by the curvelet transform and the wave atom transform.
Abstract: Surface metrology is the science of measuring small-scale features on surfaces. In this paper, a novel compressed sensing (CS) theory is introduced for surface metrology to reduce data acquisition. We first describe why CS is a natural fit for surface measurement and analysis. Then, a geometric-wavelet-based recovery algorithm is proposed for scratched and textural surfaces, which solves a convex optimization problem with sparsity constraints given by the curvelet transform and the wave atom transform. In the framework of compressed measurement, one can stably recover compressible surfaces from incomplete and inaccurate random measurements by using the recovery algorithm. The necessary number of measurements is far smaller than that required by traditional methods, which must obey the Shannon sampling theorem. Compressed metrology essentially shifts online measurement cost to the computational cost of offline nonlinear recovery. By combining the ideas of sampling, sparsity, and compression, the proposed method suggests a new acquisition protocol and points toward new measurement instruments. This is particularly significant for measurements that are limited by physical constraints or are extremely expensive. Experiments on engineering and bioengineering surfaces demonstrate the good performance of the proposed method.

42 citations
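
The recovery step in the abstract above (a convex program with sparsity constraints in the curvelet and wave atom domains) can be sketched generically. Below is a minimal proximal-gradient (ISTA) loop with an orthonormal DCT standing in for the curvelet/wave atom transforms, which are beyond a short example; the objective 0.5*||y - A x||^2 + lam*||Psi x||_1 and all names are assumptions for illustration, not the paper's algorithm.

```python
import numpy as np
from scipy.fft import dct, idct  # orthonormal DCT as a stand-in sparsifying transform

def ista_recover(A, y, lam=0.05, iters=200):
    """ISTA sketch for min 0.5||y - A x||^2 + lam ||Psi x||_1 with an
    orthonormal Psi (here the DCT), so the proximal step is exact."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2       # 1/L, L = Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - y)                 # gradient of the data-fit term
        z = dct(x - step * grad, norm='ortho')   # move to the sparsifying domain
        z = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold
        x = idct(z, norm='ortho')                # back to the signal domain
    return x
```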

Journal ArticleDOI
TL;DR: It is demonstrated that the proposed CS encoders lead to comparable recovery performance, and efficient VLSI architectures are proposed for the QCAC-CS and (1,s)-SRBM encoders with reduced area and total power consumption.
Abstract: On-chip neural data compression is an enabling technique for wireless neural interfaces that suffer from insufficient bandwidth and power budgets to transmit the raw data. The data compression algorithm and its implementation should be power and area efficient and functionally reliable over different datasets. Compressed sensing is an emerging technique that has been applied to compress various neurophysiological data. However, state-of-the-art compressed sensing (CS) encoders leverage random but dense binary measurement matrices, which incur substantial implementation costs in both power and area that could offset the benefits of the reduced wireless data rate. In this paper, we propose two CS encoder designs based on sparse measurement matrices that lead to efficient hardware implementation. Specifically, two different approaches to the construction of sparse measurement matrices are exploited: the deterministic quasi-cyclic array code (QCAC) matrix and the (1,s)-sparse random binary matrix ((1,s)-SRBM). We demonstrate that the proposed CS encoders lead to comparable recovery performance, and efficient VLSI architecture designs are proposed for the QCAC-CS and (1,s)-SRBM encoders with reduced area and total power consumption.

42 citations
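
The hardware saving comes from the sparsity of the measurement matrix. A sketch of one plausible construction, assuming the (1,s)-SRBM is read as a binary matrix with exactly s ones per column at uniformly random rows (an assumption for illustration; the paper's exact construction may differ):

```python
import numpy as np

def srbm(n_rows, n_cols, s, seed=0):
    """Sparse random binary matrix sketch: each column carries exactly
    s ones at uniformly chosen rows (assumed (1,s)-SRBM reading)."""
    rng = np.random.default_rng(seed)
    Phi = np.zeros((n_rows, n_cols), dtype=np.uint8)
    for j in range(n_cols):
        Phi[rng.choice(n_rows, size=s, replace=False), j] = 1
    return Phi

# Encoding is y = Phi @ x: with a binary, sparse Phi it reduces to s
# additions per input sample and no multipliers, which is what makes
# the VLSI encoder cheap in area and power.
```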

Journal ArticleDOI
TL;DR: In this paper, a 2D pattern-coupled hierarchical Gaussian prior model is proposed to characterize the statistical pattern dependencies among neighboring coefficients, and the generalized approximate message passing (GAMP) algorithm is embedded into the EM framework to efficiently compute an approximation of the posterior distribution of hidden variables, which results in a significant reduction in computational complexity.
Abstract: We consider the problem of recovering two-dimensional (2-D) block-sparse signals with unknown cluster patterns. Two-dimensional block-sparse patterns arise naturally in many practical applications such as foreground detection and inverse synthetic aperture radar imaging. To exploit the block-sparse structure, we introduce a 2-D pattern-coupled hierarchical Gaussian prior model to characterize the statistical pattern dependencies among neighboring coefficients. Unlike the conventional hierarchical Gaussian prior model, where each coefficient is associated independently with a unique hyperparameter, the pattern-coupled prior for each coefficient involves not only its own hyperparameter but also its immediate neighboring hyperparameters. Thus, the sparsity patterns of neighboring coefficients are related to each other and the hierarchical model has the potential to encourage 2-D structured-sparse solutions. An expectation-maximization (EM) strategy is employed to obtain the maximum a posteriori (MAP) estimate of the hyperparameters, along with the posterior distribution of the sparse signal. In addition, the generalized approximate message passing (GAMP) algorithm is embedded into the EM framework to efficiently compute an approximation of the posterior distribution of hidden variables, which results in a significant reduction in computational complexity. Numerical results are provided to illustrate the effectiveness of the proposed algorithm.

42 citations
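
The coupling itself is simple to state. A sketch of the pattern-coupled precision over a 2-D coefficient grid, assuming the form suggested by the abstract (each coefficient's Gaussian precision mixes its own hyperparameter with those of its four immediate neighbors, weighted by a coupling parameter beta); the EM/GAMP machinery around it is omitted:

```python
import numpy as np

def coupled_precision(alpha, beta=0.5):
    """Pattern-coupled prior sketch: coefficient (i, j) gets prior
    N(0, 1/prec[i, j]) where prec[i, j] combines alpha[i, j] with the
    hyperparameters of its 4 immediate neighbors (assumed form)."""
    prec = alpha.copy()
    prec[1:, :]  += beta * alpha[:-1, :]  # neighbor above
    prec[:-1, :] += beta * alpha[1:, :]   # neighbor below
    prec[:, 1:]  += beta * alpha[:, :-1]  # neighbor to the left
    prec[:, :-1] += beta * alpha[:, 1:]   # neighbor to the right
    return prec

# A large alpha in one cell now also shrinks its neighbors toward zero,
# which is what encourages 2-D block-sparse solutions.
```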

Journal ArticleDOI
TL;DR: It is shown that multiple measurement vector methods and block sparsity techniques play a fundamental role in improving signal local frequency representations, leading to what is referred to as sparsity-aware quadratic time-frequency distributions.

42 citations

Proceedings ArticleDOI
17 Mar 2010
TL;DR: This paper considers a very general thresholding policy λ(σ) for the AMP algorithm, uses the maxmin framework to tune it optimally, and shows how one can derive the AMP algorithm from the full message passing algorithm.
Abstract: Finding fast recovery algorithms is a problem of significant interest in compressed sensing. In many applications, extremely large problem sizes are envisioned, with at least tens of thousands of equations and hundreds of thousands of unknowns. The interior point methods for solving the best studied algorithm, l1 minimization, are very slow and hence ineffective for such problem sizes. Faster methods such as LARS or homotopy [1] have been proposed for solving l1 minimization, but there is still a need for algorithms with lower computational complexity. Therefore, there is a fast-growing literature on greedy methods and iterative thresholding for finding the sparsest solution. It was shown by [2] that most of these algorithms perform worse than l1 minimization when it comes to the sparsity-measurement tradeoff. In a recent paper, the authors introduced a new algorithm called AMP, which is as fast as previously proposed iterative soft thresholding algorithms and performs exactly the same as l1 minimization [3]. This algorithm is inspired by message passing algorithms on bipartite graphs. The statistical properties of AMP let the authors propose a theoretical framework to analyze the asymptotic performance of the algorithm, which yields very sharp predictions of different observables in the algorithm. In this paper we address several questions regarding the thresholding policy. We consider a very general thresholding policy λ(σ) for the algorithm and use the maxmin framework to tune it optimally. It is shown that, when formulated in this general form, the maxmin thresholding policy is not unique and many different thresholding policies may lead to the same phase transition. We then propose two very simple thresholding policies that can be implemented easily in practice and prove that they are both maxmin. This analysis also sheds light on several other aspects of the algorithm, such as the least favorable distribution and the similarity of all maxmin optimal thresholding policies. We also show how one can derive the AMP algorithm from the full message passing algorithm.

42 citations
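
For reference, a minimal numpy sketch of the AMP iteration discussed above, using the simple policy λ(σ) = lam * σ with σ estimated from the residual, plus the Onsager correction term that distinguishes AMP from plain iterative soft thresholding. This is the standard textbook form under illustrative parameter choices, not the paper's tuned policy:

```python
import numpy as np

def soft(u, t):
    """Soft-thresholding denoiser eta."""
    return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

def amp(A, y, lam=2.0, iters=30):
    """AMP sketch for y = A x with sparse x."""
    n, N = A.shape
    delta = n / N                                # undersampling ratio
    x, z = np.zeros(N), y.copy()
    for _ in range(iters):
        sigma = np.linalg.norm(z) / np.sqrt(n)   # residual-based state estimate
        x_new = soft(x + A.T @ z, lam * sigma)   # thresholding policy lambda(sigma)
        # Onsager term: (1/delta) * z * <eta'>, with <eta'> = fraction of nonzeros.
        z = y - A @ x_new + (z / delta) * np.mean(x_new != 0)
        x = x_new
    return x
```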

References
Book
01 Jan 1983

34,729 citations

Book
D.L. Donoho
01 Jan 2004
TL;DR: It is possible to design n = O(N log(m)) nonadaptive measurements allowing reconstruction with accuracy comparable to that attainable with direct knowledge of the N most important coefficients, and a good approximation to those N important coefficients is extracted from the n measurements by solving a linear program (Basis Pursuit in signal processing).
Abstract: Suppose x is an unknown vector in R^m (a digital image or signal); we plan to measure n general linear functionals of x and then reconstruct. If x is known to be compressible by transform coding with a known transform, and we reconstruct via the nonlinear procedure defined here, the number of measurements n can be dramatically smaller than the size m. Thus, certain natural classes of images with m pixels need only n = O(m^(1/4) log^(5/2)(m)) nonadaptive nonpixel samples for faithful recovery, as opposed to the usual m pixel samples. More specifically, suppose x has a sparse representation in some orthonormal basis (e.g., wavelet, Fourier) or tight frame (e.g., curvelet, Gabor), so the coefficients belong to an l_p ball for 0 < p <= 1.

18,609 citations
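
A small numpy sketch of the setup in the abstract above: a compressible signal (power-law coefficient decay, i.e., coefficients in an l_p ball), its best N-term approximation as the benchmark, and n = O(N log m) nonadaptive random functionals fixed before seeing the data. The constants are illustrative; the nonlinear reconstruction itself is Basis Pursuit, sketched after the next reference:

```python
import numpy as np

rng = np.random.default_rng(1)
m, N = 4096, 50
# Compressible signal: coefficient magnitudes decay like a power law.
x = rng.standard_normal(m) * np.arange(1, m + 1) ** -1.5
# Benchmark: keep only the N largest-magnitude coefficients.
keep = np.argsort(np.abs(x))[::-1][:N]
x_bestN = np.zeros(m)
x_bestN[keep] = x[keep]
bench_err = np.linalg.norm(x - x_bestN)          # best N-term error
# Nonadaptive sensing: n = O(N log m) random functionals, fixed in advance.
n = int(4 * N * np.log(m))
Phi = rng.standard_normal((n, m)) / np.sqrt(n)
y = Phi @ x                                      # all the decoder will see
```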

Journal ArticleDOI
TL;DR: Basis Pursuit (BP) is a principle for decomposing a signal into an "optimal" superposition of dictionary elements, where optimal means having the smallest l1 norm of coefficients among all such decompositions.
Abstract: The time-frequency and time-scale communities have recently developed a large number of overcomplete waveform dictionaries --- stationary wavelets, wavelet packets, cosine packets, chirplets, and warplets, to name a few. Decomposition into overcomplete systems is not unique, and several methods for decomposition have been proposed, including the method of frames (MOF), matching pursuit (MP), and, for special dictionaries, the best orthogonal basis (BOB). Basis Pursuit (BP) is a principle for decomposing a signal into an "optimal" superposition of dictionary elements, where optimal means having the smallest l1 norm of coefficients among all such decompositions. We give examples exhibiting several advantages over MOF, MP, and BOB, including better sparsity and superresolution. BP has interesting relations to ideas in areas as diverse as ill-posed problems, abstract harmonic analysis, total variation denoising, and multiscale edge denoising. BP in highly overcomplete dictionaries leads to large-scale optimization problems. With signals of length 8192 and a wavelet packet dictionary, one gets an equivalent linear program of size 8192 by 212,992. Such problems can be attacked successfully only because of recent advances in linear programming by interior-point methods. We obtain reasonable success with a primal-dual logarithmic barrier method and conjugate-gradient solver.

9,950 citations
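
The reduction to a linear program mentioned in the abstract is mechanical: split x = u - v with u, v >= 0, so ||x||_1 = sum(u + v). A generic sketch with scipy's linprog (not the paper's primal-dual logarithmic barrier solver):

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, y):
    """BP sketch: min ||x||_1 subject to A x = y, as a linear program."""
    n, d = A.shape
    c = np.ones(2 * d)                # objective: sum of u and v entries
    A_eq = np.hstack([A, -A])         # enforces A(u - v) = y
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method='highs')
    u, v = res.x[:d], res.x[d:]
    return u - v                      # recover x from the split
```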

Journal ArticleDOI
TL;DR: The authors introduce an algorithm, called matching pursuit, that decomposes any signal into a linear expansion of waveforms that are selected from a redundant dictionary of functions, chosen in order to best match the signal structures.
Abstract: The authors introduce an algorithm, called matching pursuit, that decomposes any signal into a linear expansion of waveforms that are selected from a redundant dictionary of functions. These waveforms are chosen in order to best match the signal structures. Matching pursuits are general procedures to compute adaptive signal representations. With a dictionary of Gabor functions, a matching pursuit defines an adaptive time-frequency transform. They derive a signal energy distribution in the time-frequency plane which does not include interference terms, unlike Wigner and Cohen class distributions. A matching pursuit isolates the signal structures that are coherent with respect to a given dictionary. An application to pattern extraction from noisy signals is described. They compare a matching pursuit decomposition with a signal expansion over an optimized wavepacket orthonormal basis, selected with the algorithm of Coifman and Wickerhauser (see IEEE Trans. Inform. Theory, vol. 38, Mar. 1992).

9,380 citations
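
A minimal sketch of the matching pursuit loop itself; unlike the OMP variant sketched at the top of the page, plain MP subtracts only the rank-one projection onto the chosen atom instead of re-solving a least-squares problem over all selected atoms. Dictionary columns are assumed unit-norm:

```python
import numpy as np

def matching_pursuit(D, y, iters=50):
    """MP sketch: greedily peel off the unit-norm atom (column of D)
    best matching the residual; coefficients accumulate because an
    atom may be selected more than once."""
    residual = y.copy()
    coef = np.zeros(D.shape[1])
    for _ in range(iters):
        corr = D.T @ residual
        j = int(np.argmax(np.abs(corr)))
        coef[j] += corr[j]                       # atoms can repeat
        residual = residual - corr[j] * D[:, j]  # subtract rank-one projection
    return coef
```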

Journal ArticleDOI
TL;DR: A publicly available algorithm that requires only the same order of magnitude of computational effort as ordinary least squares applied to the full set of covariates is described.
Abstract: The purpose of model selection algorithms such as All Subsets, Forward Selection and Backward Elimination is to choose a linear model on the basis of the same set of data to which the model will be applied. Typically we have available a large collection of possible covariates from which we hope to select a parsimonious set for the efficient prediction of a response variable. Least Angle Regression (LARS), a new model selection algorithm, is a useful and less greedy version of traditional forward selection methods. Three main properties are derived: (1) A simple modification of the LARS algorithm implements the Lasso, an attractive version of ordinary least squares that constrains the sum of the absolute regression coefficients; the LARS modification calculates all possible Lasso estimates for a given problem, using an order of magnitude less computer time than previous methods. (2) A different LARS modification efficiently implements Forward Stagewise linear regression, another promising new model selection method; this connection explains the similar numerical results previously observed for the Lasso and Stagewise, and helps us understand the properties of both methods, which are seen as constrained versions of the simpler LARS algorithm. (3) A simple approximation for the degrees of freedom of a LARS estimate is available, from which we derive a Cp estimate of prediction error; this allows a principled choice among the range of possible LARS estimates. LARS and its variants are computationally efficient: the paper describes a publicly available algorithm that requires only the same order of magnitude of computational effort as ordinary least squares applied to the full set of covariates.

7,828 citations
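
Property (1) above, the LARS modification that traces out every Lasso solution in one pass, is available off the shelf. A short sketch with scikit-learn's lars_path on synthetic data (assuming a current scikit-learn; the data and sizes are illustrative):

```python
import numpy as np
from sklearn.linear_model import lars_path

rng = np.random.default_rng(2)
n, p = 100, 20
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:3] = [3.0, -2.0, 1.5]                     # sparse ground truth
y = X @ beta + 0.1 * rng.standard_normal(n)

# method='lasso' applies the LARS modification that yields the full
# Lasso path in a single pass, at roughly the cost of one OLS fit.
alphas, active, coefs = lars_path(X, y, method='lasso')
# coefs has one column per path breakpoint; coefs[:, -1] is the least-
# regularized end of the path (close to the OLS solution).
```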