
Signal Recovery from Random Measurements Via Orthogonal Matching Pursuit: The Gaussian Case

TL;DR: In this paper, a greedy algorithm called Orthogonal Matching Pursuit (OMP) was proposed to recover a signal with m nonzero entries in dimension d given O(m ln d) random linear measurements of that signal.
Abstract: This report demonstrates theoretically and empirically that a greedy algorithm called Orthogonal Matching Pursuit (OMP) can reliably recover a signal with m nonzero entries in dimension d given O(m ln d) random linear measurements of that signal. This is a massive improvement over previous results, which require O(m²) measurements. The new results for OMP are comparable with recent results for another approach called Basis Pursuit (BP). In some settings, the OMP algorithm is faster and easier to implement, so it is an attractive alternative to BP for signal recovery problems.
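The greedy select-then-project loop behind OMP can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's code; the problem sizes below (d = 256, m = 4 nonzeros, n = 64 Gaussian measurements) are arbitrary choices consistent with the n = O(m ln d) regime:

```python
import numpy as np

def omp(Phi, y, m):
    """Orthogonal Matching Pursuit sketch: greedily pick the column of
    Phi most correlated with the residual, then re-solve least squares
    on the selected support and update the residual."""
    residual = y.copy()
    support = []
    for _ in range(m):
        # Greedy step: column most correlated with the current residual.
        j = int(np.argmax(np.abs(Phi.T @ residual)))
        support.append(j)
        # Orthogonal step: least-squares fit on all chosen columns.
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[support] = coef
    return x_hat

# Illustrative instance: a 4-sparse signal in dimension 256,
# measured with 64 random Gaussian linear functionals.
rng = np.random.default_rng(0)
d, m, n = 256, 4, 64
x = np.zeros(d)
x[rng.choice(d, m, replace=False)] = rng.standard_normal(m)
Phi = rng.standard_normal((n, d)) / np.sqrt(n)
x_hat = omp(Phi, Phi @ x, m)
```

Because the residual is re-orthogonalized against the whole selected support at every step, each iteration adds a fresh column; when every greedy choice is correct, the final least-squares fit reproduces the m-sparse signal exactly.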


Citations
Journal ArticleDOI
TL;DR: This paper empirically investigates the NP-hard problem of finding sparsest solutions to linear equation systems, i.e., solutions with as few nonzeros as possible, using a branch-and-cut approach via the maximum feasible subsystem problem to compute optimal solutions for small instances, and investigates the uniqueness of the optimal solutions.
Abstract: In this paper, we empirically investigate the NP-hard problem of finding sparsest solutions to linear equation systems, i.e., solutions with as few nonzeros as possible. This problem has recently received considerable interest in the sparse approximation and signal processing literature. We use a branch-and-cut approach via the maximum feasible subsystem problem to compute optimal solutions for small instances and investigate the uniqueness of the optimal solutions. We furthermore discuss six (modifications of) heuristics for this problem that appear in different parts of the literature. For small instances, the exact optimal solutions allow us to evaluate the quality of the heuristics, while for larger instances we compare their relative performance. One outcome is that the so-called basis pursuit heuristic performs worse than the other methods. Among the best heuristics are a method due to Mangasarian and one due to Chinneck.

58 citations

Journal ArticleDOI
TL;DR: Experimental results demonstrate that: 1) the proposed algorithm outperforms the map-drift algorithm and the phase gradient autofocus algorithm in terms of imaging quality and 2) compared to the iterative minimum-entropy autofocus, the proposed algorithm produces comparable imaging quality with less computational complexity in complex motion environments.
Abstract: A method of motion status estimation of airborne synthetic aperture radar (SAR) platform in short subapertures via parametric sparse representation is proposed for high-resolution SAR image autofocusing. The SAR echo is formulated as a jointly sparse signal through a parametric dictionary matrix, which converts the problem of SAR motion status estimation into a problem of dynamic representation of jointly sparse signals. A full synthetic aperture is decomposed into several subapertures to estimate the dynamic motion parameters of a platform, and SAR motion compensation is achieved by refining the estimation of the equivalent platform motion parameters, i.e., the azimuth velocity and the radial acceleration of the radar platform, at each subaperture in an iterative fashion. Experimental results based on both simulated and real data demonstrate that: 1) the proposed algorithm outperforms the map-drift algorithm and the phase gradient autofocus algorithm in terms of imaging quality and 2) compared to the iterative minimum-entropy autofocus, the proposed algorithm produces comparable imaging quality with less computational complexity in complex motion environments.

58 citations

Journal ArticleDOI
TL;DR: This paper proposes an alternative approach based on compressive sensing that exploits the sparsity of simultaneous touches relative to the number of sensor nodes to achieve similar levels of responsiveness.
Abstract: Capacitive touch screens are ubiquitous in today's electronic devices. Improved touch screen responsiveness and resolution can be achieved at the expense of the touch screen controller analog hardware complexity and power consumption. This paper proposes an alternative compressive sensing based approach to exploit the sparsity of simultaneous touches with respect to the number of sensor nodes to achieve similar levels of responsiveness. It is possible to reduce the analog data acquisition complexity at the cost of extra digital computations with less total power consumption. Using compressive sensing, in order to resolve the positions of the sparse touches, the number of measurements required is related to the number of touches rather than the number of nodes. Detailed measurement circuits and methodologies are presented along with the corresponding reconstruction algorithm.

58 citations

Journal ArticleDOI
TL;DR: This tutorial provides an inductive way through this complex field to researchers and practitioners starting from the basics of sparse signal processing up to the most recent and up-to-date methods and signal processing applications.
Abstract: Sparse signals are characterized by a few nonzero coefficients in one of their transformation domains. This was the main premise in designing signal compression algorithms. Compressive sensing as a new approach employs the sparsity property as a precondition for signal recovery. Sparse signals can be fully reconstructed from a reduced set of available measurements. The description and basic definitions of sparse signals, along with the conditions for their reconstruction, are discussed in the first part of this paper. The numerous algorithms developed for sparse signal reconstruction are divided into three classes. The first one is based on the principle of matching components. An analysis of the influence of noise and nonsparsity on reconstruction performance is provided. The second class of reconstruction algorithms is based on the constrained convex form of problem formulation, where linear programming and regression methods can be used to find a solution. The third class of recovery algorithms is based on the Bayesian approach. Applications of the considered approaches are demonstrated through various illustrative and signal processing examples, using common transformation and observation matrices. With pseudocodes of the presented algorithms and compressive sensing principles illustrated on simple signal processing examples, this tutorial provides an inductive way through this complex field to researchers and practitioners, starting from the basics of sparse signal processing up to the most recent and up-to-date methods and signal processing applications.

57 citations

Journal ArticleDOI
TL;DR: Based on the linear relationship between the chirp rate of cross-range inverse synthetic aperture radar (ISAR) signal and the slant range, a parametric sparse representation method is proposed for ISAR imaging of rotating targets.
Abstract: Based on the linear relationship between the chirp rate of cross-range inverse synthetic aperture radar (ISAR) signal and the slant range, a parametric sparse representation method is proposed for ISAR imaging of rotating targets. The ISAR echo is formulated as a parametric joint-sparse signal and the chirp rates at all range bins are estimated by maximizing the contrast of the sparse ISAR image. Compared with homologous algorithms, the computational complexity of the proposed method is significantly reduced.

57 citations

References
Book
01 Jan 1983

34,729 citations

Book
D.L. Donoho
01 Jan 2004
TL;DR: It is possible to design n = O(N log(m)) nonadaptive measurements allowing reconstruction with accuracy comparable to that attainable with direct knowledge of the N most important coefficients, and a good approximation to those N important coefficients is extracted from the n measurements by solving a linear program (Basis Pursuit in signal processing).
Abstract: Suppose x is an unknown vector in ℝ^m (a digital image or signal); we plan to measure n general linear functionals of x and then reconstruct. If x is known to be compressible by transform coding with a known transform, and we reconstruct via the nonlinear procedure defined here, the number of measurements n can be dramatically smaller than the size m. Thus, certain natural classes of images with m pixels need only n = O(m^(1/4) log^(5/2)(m)) nonadaptive nonpixel samples for faithful recovery, as opposed to the usual m pixel samples. More specifically, suppose x has a sparse representation in some orthonormal basis (e.g., wavelet, Fourier) or tight frame (e.g., curvelet, Gabor), so the coefficients belong to an ℓp ball for 0 < p ≤ 1.

18,609 citations

Journal ArticleDOI
TL;DR: Basis Pursuit (BP) is a principle for decomposing a signal into an "optimal" superposition of dictionary elements, where optimal means having the smallest l1 norm of coefficients among all such decompositions.
Abstract: The time-frequency and time-scale communities have recently developed a large number of overcomplete waveform dictionaries --- stationary wavelets, wavelet packets, cosine packets, chirplets, and warplets, to name a few. Decomposition into overcomplete systems is not unique, and several methods for decomposition have been proposed, including the method of frames (MOF), Matching Pursuit (MP), and, for special dictionaries, the best orthogonal basis (BOB). Basis Pursuit (BP) is a principle for decomposing a signal into an "optimal" superposition of dictionary elements, where optimal means having the smallest l1 norm of coefficients among all such decompositions. We give examples exhibiting several advantages over MOF, MP, and BOB, including better sparsity and superresolution. BP has interesting relations to ideas in areas as diverse as ill-posed problems, abstract harmonic analysis, total variation denoising, and multiscale edge denoising. BP in highly overcomplete dictionaries leads to large-scale optimization problems. With signals of length 8192 and a wavelet packet dictionary, one gets an equivalent linear program of size 8192 by 212,992. Such problems can be attacked successfully only because of recent advances in linear programming by interior-point methods. We obtain reasonable success with a primal-dual logarithmic barrier method and conjugate-gradient solver.

9,950 citations
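The l1-minimization principle behind Basis Pursuit becomes a linear program once the coefficient vector is split into nonnegative positive and negative parts. A minimal sketch using SciPy's generic LP solver (not the interior-point code from the paper; the problem sizes here are toy values, far from the 8192-by-212,992 instance it describes):

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(Phi, y):
    """Solve min ||c||_1 subject to Phi @ c = y as a linear program:
    write c = u - v with u, v >= 0, so ||c||_1 = sum(u) + sum(v)."""
    n, d = Phi.shape
    cost = np.ones(2 * d)                 # objective = l1 norm of c
    A_eq = np.hstack([Phi, -Phi])         # Phi @ (u - v) = y
    res = linprog(cost, A_eq=A_eq, b_eq=y, bounds=(0, None))
    u, v = res.x[:d], res.x[d:]
    return u - v

# Toy instance: a 3-sparse coefficient vector in an 80-atom dictionary,
# observed through 30 random linear measurements.
rng = np.random.default_rng(1)
n, d = 30, 80
c_true = np.zeros(d)
c_true[rng.choice(d, 3, replace=False)] = [1.5, -2.0, 0.75]
Phi = rng.standard_normal((n, d))
c_hat = basis_pursuit(Phi, Phi @ c_true)
```

At the LP optimum, at most one of u_j and v_j is nonzero for each index, so the split introduces no spurious mass and the recovered c inherits the sparsity that l1 minimization favors.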

Journal ArticleDOI
TL;DR: The authors introduce an algorithm, called matching pursuit, that decomposes any signal into a linear expansion of waveforms that are selected from a redundant dictionary of functions, chosen in order to best match the signal structures.
Abstract: The authors introduce an algorithm, called matching pursuit, that decomposes any signal into a linear expansion of waveforms that are selected from a redundant dictionary of functions. These waveforms are chosen in order to best match the signal structures. Matching pursuits are general procedures to compute adaptive signal representations. With a dictionary of Gabor functions, a matching pursuit defines an adaptive time-frequency transform. They derive a signal energy distribution in the time-frequency plane, which does not include interference terms, unlike Wigner and Cohen class distributions. A matching pursuit isolates the signal structures that are coherent with respect to a given dictionary. An application to pattern extraction from noisy signals is described. They compare a matching pursuit decomposition with a signal expansion over an optimized wavelet packet orthonormal basis, selected with the algorithm of Coifman and Wickerhauser (see IEEE Trans. Inform. Theory, vol. 38, Mar. 1992).

9,380 citations
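The pursuit iteration the abstract describes (correlate, pick the best-matching atom, subtract its projection, repeat) can be sketched directly. The dictionary below, identity atoms plus random unit-norm atoms, is an illustrative stand-in for the Gabor dictionaries used in the paper:

```python
import numpy as np

def matching_pursuit(D, y, iters):
    """Plain matching pursuit (no orthogonalization, unlike OMP):
    at each step pick the unit-norm atom most correlated with the
    residual and subtract its projection from the residual."""
    residual = y.astype(float).copy()
    coef = np.zeros(D.shape[1])
    for _ in range(iters):
        corr = D.T @ residual
        j = int(np.argmax(np.abs(corr)))
        coef[j] += corr[j]                 # accumulate coefficient of atom j
        residual -= corr[j] * D[:, j]      # residual now orthogonal to atom j
    return coef, residual

# Redundant dictionary: 32 identity atoms plus 16 random unit-norm atoms.
rng = np.random.default_rng(2)
n = 32
extra = rng.standard_normal((n, 16))
extra /= np.linalg.norm(extra, axis=0)
D = np.hstack([np.eye(n), extra])

# Signal built from two dictionary atoms.
y = 3.0 * D[:, 5] - 1.0 * D[:, 40]
coef, residual = matching_pursuit(D, y, iters=200)
```

Because the same atom can be re-selected in later iterations, plain MP converges geometrically rather than in a fixed number of steps; the orthogonal variant (OMP, above) trades extra least-squares work for exact termination on the support size.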

Journal ArticleDOI
TL;DR: A publicly available algorithm that requires only the same order of magnitude of computational effort as ordinary least squares applied to the full set of covariates is described.
Abstract: The purpose of model selection algorithms such as All Subsets, Forward Selection and Backward Elimination is to choose a linear model on the basis of the same set of data to which the model will be applied. Typically we have available a large collection of possible covariates from which we hope to select a parsimonious set for the efficient prediction of a response variable. Least Angle Regression (LARS), a new model selection algorithm, is a useful and less greedy version of traditional forward selection methods. Three main properties are derived: (1) A simple modification of the LARS algorithm implements the Lasso, an attractive version of ordinary least squares that constrains the sum of the absolute regression coefficients; the LARS modification calculates all possible Lasso estimates for a given problem, using an order of magnitude less computer time than previous methods. (2) A different LARS modification efficiently implements Forward Stagewise linear regression, another promising new model selection method; this connection explains the similar numerical results previously observed for the Lasso and Stagewise, and helps us understand the properties of both methods, which are seen as constrained versions of the simpler LARS algorithm. (3) A simple approximation for the degrees of freedom of a LARS estimate is available, from which we derive a Cp estimate of prediction error; this allows a principled choice among the range of possible LARS estimates. LARS and its variants are computationally efficient: the paper describes a publicly available algorithm that requires only the same order of magnitude of computational effort as ordinary least squares applied to the full set of covariates.

7,828 citations
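Of the methods the abstract relates, incremental Forward Stagewise is the simplest to sketch: repeatedly nudge the coefficient of the predictor most correlated with the residual by a small fixed step. This is an illustration of the stagewise idea only, not the LARS algorithm itself; the step size, iteration count, and data below are arbitrary choices:

```python
import numpy as np

def forward_stagewise(X, y, step=0.005, iters=5000):
    """Incremental forward-stagewise regression: at each iteration,
    move the coefficient of the most-correlated predictor by a tiny
    fixed step in the direction of its correlation with the residual."""
    beta = np.zeros(X.shape[1])
    residual = y.astype(float).copy()
    for _ in range(iters):
        corr = X.T @ residual
        j = int(np.argmax(np.abs(corr)))
        delta = step * np.sign(corr[j])
        beta[j] += delta
        residual -= delta * X[:, j]
    return beta

# Toy data: 10 unit-norm predictors, response built from two of them.
rng = np.random.default_rng(3)
n, p = 200, 10
X = rng.standard_normal((n, p))
X /= np.linalg.norm(X, axis=0)
y = 2.0 * X[:, 0] - 1.0 * X[:, 3]
beta = forward_stagewise(X, y)
```

Each tiny step increases the l1 norm of the coefficients by at most the step size, which is why the stagewise path traces out Lasso-like solutions; LARS obtains the same kind of path analytically, in a number of steps comparable to ordinary least squares.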