
Signal Recovery from Random Measurements Via Orthogonal Matching Pursuit: The Gaussian Case

TL;DR: In this paper, a greedy algorithm called Orthogonal Matching Pursuit (OMP) was proposed to recover a signal with m nonzero entries in dimension d given O(m ln d) random linear measurements of that signal.
Abstract: This report demonstrates theoretically and empirically that a greedy algorithm called Orthogonal Matching Pursuit (OMP) can reliably recover a signal with m nonzero entries in dimension d given O(m ln d) random linear measurements of that signal. This is a massive improvement over previous results, which require O(m^2) measurements. The new results for OMP are comparable with recent results for another approach called Basis Pursuit (BP). In some settings, the OMP algorithm is faster and easier to implement, so it is an attractive alternative to BP for signal recovery problems.
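A minimal NumPy sketch of the OMP loop described above, with a small demo on Gaussian measurements. The function name omp and the constant in n = 4·m·ln(d) are illustrative choices, not taken from the paper.

```python
import numpy as np

def omp(A, y, m):
    """Recover an m-sparse x from y = A @ x by Orthogonal Matching Pursuit."""
    support, residual = [], y.copy()
    for _ in range(m):
        # greedy step: pick the column most correlated with the residual
        k = int(np.argmax(np.abs(A.T @ residual)))
        support.append(k)
        # re-project y onto all columns chosen so far (least squares)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[support] = coef
    return x_hat

# demo: m = 4 nonzeros in dimension d = 256 from O(m ln d) Gaussian measurements
rng = np.random.default_rng(0)
d, m = 256, 4
n = int(4 * m * np.log(d))                      # illustrative constant
A = rng.standard_normal((n, d)) / np.sqrt(n)    # random Gaussian measurements
x = np.zeros(d)
x[rng.choice(d, size=m, replace=False)] = rng.standard_normal(m)
print(np.allclose(omp(A, A @ x, m), x, atol=1e-8))   # True with high probability
```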


Citations
Journal ArticleDOI
TL;DR: This letter presents an iterative order-recursive least squares (IORLS) algorithm that exploits the frame-wise sparsity, increases accuracy, and substantially reduces complexity by avoiding the matrix inversions in the OMP and GOMP algorithms.
Abstract: In multiple measurement vector (MMV) problems, the sparsity structure, i.e., the support of the measurement vectors, remains constant over multiple instants. In the machine-type communication (MTC) context, this sparsity structure may remain constant over all symbols in a frame, which can be termed frame-wise sparsity. Instead of employing symbol-by-symbol detection based on algorithms such as orthogonal matching pursuit (OMP), group orthogonal matching pursuit (GOMP) can take advantage of this constant sparsity structure and decode groups of symbols together to improve accuracy. Unfortunately, the computational complexity of the GOMP algorithm grows exponentially with the group size, which prevents it from increasing the group size and fully exploiting the frame-wise sparsity. This letter presents an iterative order-recursive least squares (IORLS) algorithm, which can exploit the frame-wise sparsity and increase accuracy. IORLS iteratively applies modified OMP operations over a frame to gather the sparsity support information with manageable complexity. IORLS substantially reduces complexity by avoiding the matrix inversions in the OMP and GOMP algorithms. Furthermore, it is shown that the proposed algorithm is robust against noise, achieving near-oracle estimation performance.
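To make the MMV idea concrete, here is a minimal sketch of simultaneous OMP, a standard baseline that exploits a support shared across a frame of measurement vectors. This is only an illustration of frame-wise sparsity, not the IORLS algorithm of the letter; the name somp and its interface are assumptions.

```python
import numpy as np

def somp(A, Y, m):
    """Simultaneous OMP: one common support for all T columns of Y = A @ X."""
    support, residual = [], Y.copy()
    for _ in range(m):
        # aggregate correlations over the whole frame before selecting an atom
        scores = np.sum(np.abs(A.T @ residual), axis=1)
        support.append(int(np.argmax(scores)))
        coef, *_ = np.linalg.lstsq(A[:, support], Y, rcond=None)
        residual = Y - A[:, support] @ coef
    X_hat = np.zeros((A.shape[1], Y.shape[1]))
    X_hat[support, :] = coef
    return X_hat
```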

67 citations

Journal ArticleDOI
14 Jul 2011
TL;DR: An efficient learning framework to construct signal dictionaries for sparse representation by selecting the dictionary columns from multiple candidate bases is developed and it is shown that if the available dictionary column vectors are incoherent, the objective function satisfies approximate submodularity.
Abstract: We develop an efficient learning framework to construct signal dictionaries for sparse representation by selecting the dictionary columns from multiple candidate bases. By sparse, we mean that only a few dictionary elements, compared to the ambient signal dimension, can exactly represent or well-approximate the signals of interest. We formulate both the selection of the dictionary columns and the sparse representation of signals as a joint combinatorial optimization problem. The proposed combinatorial objective maximizes variance reduction over the set of training signals by constraining the size of the dictionary as well as the number of dictionary columns that can be used to represent each signal. We show that if the available dictionary column vectors are incoherent, our objective function satisfies approximate submodularity. We exploit this property to develop SDSOMP and SDSMA, two greedy algorithms with approximation guarantees. We also describe how our learning framework enables dictionary selection for structured sparse representations, e.g., where the sparse coefficients occur in restricted patterns. We evaluate our approach on synthetic signals and natural images for representation and inpainting problems.
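A toy sketch of the greedy column-selection idea: each candidate column is scored by the variance it captures under a 1-sparse representation of the training signals, and the best column is added at each step. This is a simplified stand-in, not the paper's SDSOMP or SDSMA algorithms; unit-norm candidate columns and the function name are assumptions.

```python
import numpy as np

def select_columns(candidates, signals, n_cols):
    """Greedy dictionary selection: repeatedly add the candidate column that
    most increases the variance captured by 1-sparse representations."""
    selected, remaining = [], list(range(candidates.shape[1]))
    for _ in range(n_cols):
        best_gain, best_j = -np.inf, remaining[0]
        for j in remaining:
            D = candidates[:, selected + [j]]
            # variance captured when each signal uses its single best atom
            gain = np.sum(np.max((D.T @ signals) ** 2, axis=0))
            if gain > best_gain:
                best_gain, best_j = gain, j
        selected.append(best_j)
        remaining.remove(best_j)
    return selected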

66 citations

Journal ArticleDOI
TL;DR: For two extreme cases of the vector shape, it is shown that, with high probability on the draw of random measurements, a fixed sparse vector is robustly recovered in a number of iterations precisely equal to the sparsity level.

66 citations

Journal ArticleDOI
TL;DR: A novel approach based on compressive sensing and chaos is proposed for simultaneously compressing, fusing and encrypting multi-modal images that reduces data volume but also simplifies keys, which improves the efficiency of transmitting data and distributing keys.

66 citations

Journal ArticleDOI
TL;DR: Reconfigurable, parallel, and pipelined architectures are implemented for three algorithms (OMP, tOMP, and GDOMP) that can reconstruct data vectors of sizes ranging from 128 to 1024, on 65 nm CMOS technology operating at a 1 V supply voltage.
Abstract: Orthogonal Matching Pursuit (OMP) is an important compressive sensing (CS) recovery and sparsity-inducing algorithm with potential in various emerging applications, ranging from wearable and mobile computing to real-time analytics on servers. Application-aware implementation of the OMP algorithm is therefore important. In this paper, we propose two modifications to the OMP algorithm, named Thresholding for OMP (tOMP) and Gradient Descent OMP (GDOMP), to reduce the hardware complexity of OMP. tOMP modifies the identification stage of the OMP algorithm to reduce reconstruction time, and GDOMP modifies the residual-update phase to reduce chip area. To demonstrate the reconstruction efficiency of the proposed OMP modifications, we compare the signal-to-reconstruction-error rate (SRER), peak signal-to-noise ratio (PSNR), and structural similarity index (SSIM) of previously proposed matching pursuit algorithms, such as Subspace Pursuit (SP), Look Ahead OMP (LAOMP), and OMP, with tOMP and GDOMP. We implemented reconfigurable, parallel, and pipelined architectures for three algorithms (OMP, tOMP, and GDOMP) that can reconstruct data vectors of sizes ranging from 128 to 1024, on 65 nm CMOS technology operating at a 1 V supply voltage. Post-place-and-route analysis of area, power, and latency shows that tOMP requires 33% less reconstruction time and GDOMP consumes 44% less chip area than the OMP ASIC implementation. Compared to previously published work, the proposed architectures achieve a 2.1x improvement in area-delay product (ADP) and consume 40% less energy.
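The GDOMP idea of replacing OMP's exact least-squares step with an iterative update can be sketched as follows: a few gradient-descent steps approximate the residual update without an explicit matrix inversion. This illustrates the general technique only, not the paper's hardware algorithm; the function name and step-size rule are assumptions.

```python
import numpy as np

def gd_residual_update(A_S, y, n_steps=20):
    """Approximate OMP's least-squares step by gradient descent on
    0.5 * ||A_S @ x - y||^2, avoiding an explicit matrix inversion."""
    x = np.zeros(A_S.shape[1])
    lr = 1.0 / np.linalg.norm(A_S, 2) ** 2   # step size from the spectral norm
    for _ in range(n_steps):
        x -= lr * (A_S.T @ (A_S @ x - y))    # gradient step
    return x, y - A_S @ x                    # coefficients, updated residual
```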

66 citations

References
Book
01 Jan 1983

34,729 citations

Book
D.L. Donoho
01 Jan 2004
TL;DR: It is possible to design n = O(N log(m)) nonadaptive measurements allowing reconstruction with accuracy comparable to that attainable with direct knowledge of the N most important coefficients, and a good approximation to those N important coefficients is extracted from the n measurements by solving a linear program (Basis Pursuit in signal processing).
Abstract: Suppose x is an unknown vector in R^m (a digital image or signal); we plan to measure n general linear functionals of x and then reconstruct. If x is known to be compressible by transform coding with a known transform, and we reconstruct via the nonlinear procedure defined here, the number of measurements n can be dramatically smaller than the size m. Thus, certain natural classes of images with m pixels need only n = O(m^(1/4) log^(5/2)(m)) nonadaptive nonpixel samples for faithful recovery, as opposed to the usual m pixel samples. More specifically, suppose x has a sparse representation in some orthonormal basis (e.g., wavelet, Fourier) or tight frame (e.g., curvelet, Gabor), so the coefficients belong to an ℓp ball for 0 < p ≤ 1.

18,609 citations

Journal ArticleDOI
TL;DR: Basis Pursuit (BP) is a principle for decomposing a signal into an "optimal" superposition of dictionary elements, where optimal means having the smallest ℓ1 norm of coefficients among all such decompositions.
Abstract: The time-frequency and time-scale communities have recently developed a large number of overcomplete waveform dictionaries: stationary wavelets, wavelet packets, cosine packets, chirplets, and warplets, to name a few. Decomposition into overcomplete systems is not unique, and several methods for decomposition have been proposed, including the method of frames (MOF), matching pursuit (MP), and, for special dictionaries, the best orthogonal basis (BOB). Basis Pursuit (BP) is a principle for decomposing a signal into an "optimal" superposition of dictionary elements, where optimal means having the smallest ℓ1 norm of coefficients among all such decompositions. We give examples exhibiting several advantages over MOF, MP, and BOB, including better sparsity and superresolution. BP has interesting relations to ideas in areas as diverse as ill-posed problems, abstract harmonic analysis, total variation denoising, and multiscale edge denoising. BP in highly overcomplete dictionaries leads to large-scale optimization problems. With signals of length 8192 and a wavelet packet dictionary, one gets an equivalent linear program of size 8192 by 212,992. Such problems can be attacked successfully only because of recent advances in linear programming by interior-point methods. We obtain reasonable success with a primal-dual logarithmic barrier method and conjugate-gradient solver.
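In modern terms, the BP principle is the linear program min ||x||_1 subject to A x = y. A minimal sketch using SciPy's linprog (assumed available), with the standard split of x into positive and negative parts:

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, y):
    """Basis Pursuit: min ||x||_1 s.t. A @ x = y, solved as a linear program.
    Split x = u - v with u, v >= 0, so ||x||_1 = sum(u) + sum(v)."""
    n, d = A.shape
    c = np.ones(2 * d)                  # objective: sum(u) + sum(v)
    A_eq = np.hstack([A, -A])           # equality constraint: A @ (u - v) = y
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None))
    return res.x[:d] - res.x[d:]        # recombine: x = u - v
```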

9,950 citations

Journal ArticleDOI
TL;DR: The authors introduce an algorithm, called matching pursuit, that decomposes any signal into a linear expansion of waveforms that are selected from a redundant dictionary of functions, chosen in order to best match the signal structures.
Abstract: The authors introduce an algorithm, called matching pursuit, that decomposes any signal into a linear expansion of waveforms selected from a redundant dictionary of functions. These waveforms are chosen in order to best match the signal structures. Matching pursuits are general procedures for computing adaptive signal representations. With a dictionary of Gabor functions, a matching pursuit defines an adaptive time-frequency transform. The authors derive a signal energy distribution in the time-frequency plane that does not include interference terms, unlike Wigner and Cohen class distributions. A matching pursuit isolates the signal structures that are coherent with respect to a given dictionary. An application to pattern extraction from noisy signals is described. The authors compare a matching pursuit decomposition with a signal expansion over an optimized wavelet packet orthonormal basis, selected with the algorithm of Coifman and Wickerhauser (see IEEE Trans. Informat. Theory, vol. 38, Mar. 1992).
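A minimal sketch of plain matching pursuit, which, unlike OMP, never re-solves a least-squares problem and may select the same atom repeatedly; the name and interface are illustrative.

```python
import numpy as np

def matching_pursuit(D, y, n_iter):
    """Plain matching pursuit over a dictionary D with unit-norm columns."""
    residual, coefs = y.copy(), np.zeros(D.shape[1])
    for _ in range(n_iter):
        corr = D.T @ residual             # correlate with every atom
        k = int(np.argmax(np.abs(corr)))  # best-matching atom
        coefs[k] += corr[k]               # accumulate (atoms may repeat)
        residual -= corr[k] * D[:, k]     # strip its contribution
    return coefs, residual
```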

9,380 citations

Journal ArticleDOI
TL;DR: A publicly available algorithm that requires only the same order of magnitude of computational effort as ordinary least squares applied to the full set of covariates is described.
Abstract: The purpose of model selection algorithms such as All Subsets, Forward Selection and Backward Elimination is to choose a linear model on the basis of the same set of data to which the model will be applied. Typically we have available a large collection of possible covariates from which we hope to select a parsimonious set for the efficient prediction of a response variable. Least Angle Regression (LARS), a new model selection algorithm, is a useful and less greedy version of traditional forward selection methods. Three main properties are derived: (1) A simple modification of the LARS algorithm implements the Lasso, an attractive version of ordinary least squares that constrains the sum of the absolute regression coefficients; the LARS modification calculates all possible Lasso estimates for a given problem, using an order of magnitude less computer time than previous methods. (2) A different LARS modification efficiently implements Forward Stagewise linear regression, another promising new model selection method; this connection explains the similar numerical results previously observed for the Lasso and Stagewise, and helps us understand the properties of both methods, which are seen as constrained versions of the simpler LARS algorithm. (3) A simple approximation for the degrees of freedom of a LARS estimate is available, from which we derive a Cp estimate of prediction error; this allows a principled choice among the range of possible LARS estimates. LARS and its variants are computationally efficient: the paper describes a publicly available algorithm that requires only the same order of magnitude of computational effort as ordinary least squares applied to the full set of covariates.
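For reference, LARS and its Lasso modification are available in scikit-learn (assumed installed); a short usage sketch on synthetic data:

```python
import numpy as np
from sklearn.linear_model import Lars, LassoLars

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 50))
beta = np.zeros(50)
beta[:5] = rng.standard_normal(5)                # 5-sparse ground truth
y = X @ beta + 0.01 * rng.standard_normal(100)

lars = Lars(n_nonzero_coefs=5).fit(X, y)         # plain LARS, stop at 5 atoms
lasso = LassoLars(alpha=0.01).fit(X, y)          # LARS modification for the Lasso
print(np.flatnonzero(lars.coef_), np.flatnonzero(lasso.coef_))
```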

7,828 citations