
Signal Recovery from Random Measurements Via Orthogonal Matching Pursuit: The Gaussian Case

TL;DR: In this paper, a greedy algorithm called Orthogonal Matching Pursuit (OMP) is shown to reliably recover a signal with m nonzero entries in dimension d given O(m ln d) random linear measurements of that signal.
Abstract: This report demonstrates theoretically and empirically that a greedy algorithm called Orthogonal Matching Pursuit (OMP) can reliably recover a signal with m nonzero entries in dimension d given O(m ln d) random linear measurements of that signal. This is a massive improvement over previous results, which require O(m^2) measurements. The new results for OMP are comparable with recent results for another approach called Basis Pursuit (BP). In some settings, the OMP algorithm is faster and easier to implement, so it is an attractive alternative to BP for signal recovery problems.
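By way of illustration, here is a minimal numpy sketch of the OMP loop the abstract describes: greedily select the column most correlated with the residual, then re-fit all selected coefficients by least squares. The dimensions, the oversampling constant 4, and the noiseless setup are illustrative choices, not the paper's exact parameters.

```python
import numpy as np

def omp(Phi, y, m):
    """Greedy OMP: pick the column most correlated with the residual,
    then re-fit every selected coefficient by least squares."""
    support, residual = [], y.copy()
    for _ in range(m):
        support.append(int(np.argmax(np.abs(Phi.T @ residual))))
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[support] = coef
    return x_hat

# Toy instance in the paper's setting: d = 256, m = 8 nonzeros,
# and N on the order of m ln d Gaussian measurements.
rng = np.random.default_rng(0)
d, m = 256, 8
N = int(4 * m * np.log(d))                      # constant 4 is illustrative
Phi = rng.standard_normal((N, d)) / np.sqrt(N)  # random Gaussian measurements
x = np.zeros(d)
x[rng.choice(d, size=m, replace=False)] = rng.standard_normal(m)
x_hat = omp(Phi, Phi @ x, m)
print(np.max(np.abs(x - x_hat)))                # ~0 when recovery succeeds
```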


Citations
Journal ArticleDOI
TL;DR: An iterative channel estimation algorithm based on least square estimation (LSE) and sparse message passing (SMP) is proposed for millimeter wave (mmWave) MIMO systems; it achieves much better performance than existing sparse estimators, especially when the channel is sparse.
Abstract: We propose an iterative channel estimation algorithm based on the least square estimation (LSE) and sparse message passing (SMP) algorithm for the millimeter wave (mmWave) MIMO systems. The channel coefficients of the mmWave MIMO are approximately modeled as a Bernoulli–Gaussian distribution and the channel matrix is sparse with only a few nonzero entries. By leveraging the advantage of sparseness, we propose an algorithm that iteratively detects the exact locations and values of nonzero entries of the sparse channel matrix. At each iteration, the locations are detected by the SMP, and values are estimated with the LSE. We also analyze the Cramer–Rao Lower Bound (CRLB), and show that the proposed algorithm is a minimum variance unbiased estimator under the assumption that we have partial prior knowledge of the channel. Furthermore, we employ the Gaussian approximation for message densities under density evolution to simplify the analysis of the algorithm, which provides a simple method to predict the performance of the proposed algorithm. Numerical experiments show that the proposed algorithm has much better performance than the existing sparse estimators, especially when the channel is sparse. In addition, our proposed algorithm converges to the CRLB of the genie-aided estimation of sparse channels with only five turbo iterations.
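The paper's location detector is a full sparse message-passing routine; the toy sketch below only mimics the alternating detect-then-estimate structure it describes, with a simple keep-the-k-largest proxy standing in for SMP. All sizes and the function name are hypothetical.

```python
import numpy as np

def detect_then_lse(A, y, k, n_iter=5):
    """Toy alternation in the spirit of the paper: locate the k likeliest
    nonzero taps (stand-in for the SMP detector), then fit their values
    by least squares (the LSE step)."""
    n = A.shape[1]
    h_hat = np.zeros(n)
    for _ in range(n_iter):
        proxy = h_hat + A.T @ (y - A @ h_hat)     # matched-filter refinement
        support = np.argsort(np.abs(proxy))[-k:]  # detected tap locations
        h_hat = np.zeros(n)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        h_hat[support] = coef                     # LSE on the detected support
    return h_hat

# Bernoulli-Gaussian sparse channel, as in the paper's model.
rng = np.random.default_rng(1)
N, n = 64, 128
h = np.where(rng.random(n) < 0.05, rng.standard_normal(n), 0.0)
A = rng.standard_normal((N, n)) / np.sqrt(N)
y = A @ h + 0.01 * rng.standard_normal(N)
h_hat = detect_then_lse(A, y, k=max(1, np.count_nonzero(h)))
```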

140 citations

Journal ArticleDOI
TL;DR: It is shown empirically that the in-crowd algorithm is faster than the best alternative solvers (homotopy, fixed point continuation and spectral projected gradient for l1 minimization) on certain well- and ill-conditioned sparse problems with more than 1000 unknowns.
Abstract: We introduce a fast method, the “in-crowd” algorithm, for finding the exact solution to basis pursuit denoising problems. The in-crowd algorithm discovers a sequence of subspaces guaranteed to arrive at the support set of the final solution of l1-regularized least squares problems. We provide theorems showing that the in-crowd algorithm always converges to the correct global solution to basis pursuit denoising problems. We show empirically that the in-crowd algorithm is faster than the best alternative solvers (homotopy, fixed point continuation and spectral projected gradient for l1 minimization) on certain well- and ill-conditioned sparse problems with more than 1000 unknowns. We compare the in-crowd algorithm's performance in high- and low-noise regimes, demonstrate its performance on more dense problems, and derive expressions giving its computational complexity.
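A rough sketch of the active-set idea follows: grow a small "crowd" from strong residual correlations, solve the l1 subproblem restricted to it, and evict members that go to zero. Plain ISTA stands in for the exact subproblem solver the paper relies on, and the set-growth parameters are illustrative, so this is a caricature of the method rather than the authors' algorithm.

```python
import numpy as np

def soft(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def in_crowd_sketch(A, y, lam, n_add=25, n_outer=50, n_inner=200):
    """Active-set loop for min 0.5*||y - Ax||^2 + lam*||x||_1 (a sketch)."""
    n = A.shape[1]
    x = np.zeros(n)
    active = np.zeros(n, dtype=bool)
    for _ in range(n_outer):
        usefulness = np.abs(A.T @ (y - A @ x))   # correlation with residual
        usefulness[active] = 0.0                 # only look outside the crowd
        candidates = np.argsort(usefulness)[-n_add:]
        newcomers = candidates[usefulness[candidates] > lam]
        if newcomers.size == 0:
            break                                # nothing outside helps: stop
        active[newcomers] = True
        idx = np.flatnonzero(active)
        As = A[:, idx]
        step = 1.0 / np.linalg.norm(As, 2) ** 2  # ISTA step from spectral norm
        xs = x[idx]
        for _ in range(n_inner):                 # solve the small subproblem
            xs = soft(xs + step * As.T @ (y - As @ xs), step * lam)
        x = np.zeros(n)
        x[idx] = xs
        active = x != 0                          # evict zeroed crowd members
    return x
```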

139 citations

Book ChapterDOI
02 Jul 2006
TL;DR: The results prove that there exists a single O(k log n) × n measurement matrix such that any such signal can be reconstructed from these measurements, with error at most O(1) times the worst case error for the class of such signals.
Abstract: In sparse approximation theory, the fundamental problem is to reconstruct a signal A ∈ ℝ^n from linear measurements ⟨A, ψi⟩ with respect to a dictionary of ψi's. Recently, there is focus on the novel direction of Compressed Sensing [9], where the reconstruction can be done with very few (O(k log n)) linear measurements over a modified dictionary if the signal is compressible, that is, its information is concentrated in k coefficients with the original dictionary. In particular, these results [9, 4, 23] prove that there exists a single O(k log n) × n measurement matrix such that any such signal can be reconstructed from these measurements, with error at most O(1) times the worst case error for the class of such signals. Compressed sensing has generated tremendous excitement both because of the sophisticated underlying mathematics and because of its potential applications. In this paper, we address outstanding open problems in Compressed Sensing. Our main result is an explicit construction of a non-adaptive measurement matrix and the corresponding reconstruction algorithm so that with a number of measurements polynomial in k, log n, 1/ε, we can reconstruct compressible signals. This is the first known polynomial time explicit construction of any such measurement matrix. In addition, our result improves the error guarantee from O(1) to 1 + ε and improves the reconstruction time from poly(n) to poly(k log n). Our second result is a randomized construction of O(k polylog(n)) measurements that work for each signal with high probability and gives per-instance approximation guarantees rather than over the class of all signals. Previous work on Compressed Sensing does not provide such per-instance approximation guarantees; our result improves the best known number of measurements from prior work in other areas including Learning Theory [20, 21], Streaming algorithms [11, 12, 6] and Complexity Theory [1] for this case. Our approach is combinatorial. In particular, we use two parallel sets of group tests, one to filter and the other to certify and estimate; the resulting algorithms are quite simple to implement.

138 citations

Journal ArticleDOI
TL;DR: A novel Hierarchical Data Aggregation method using Compressive Sensing (HDACS) is presented, which combines a hierarchical network configuration with CS to optimize the amount of data transmitted, and a new energy model is formulated by factoring both processor and radio energy consumption into the cost.
Abstract: Energy efficiency is one of the key objectives in data gathering in wireless sensor networks (WSNs). Recent research on energy-efficient data gathering in WSNs has explored the use of Compressive Sensing (CS) to parsimoniously represent the data. However, the performance of CS-based data gathering methods has been limited since the approaches failed to take advantage of judicious network configurations and effective CS-based data aggregation procedures. In this article, a novel Hierarchical Data Aggregation method using Compressive Sensing (HDACS) is presented, which combines a hierarchical network configuration with CS. Our key idea is to set multiple compression thresholds adaptively based on cluster sizes at different levels of the data aggregation tree to optimize the amount of data transmitted. The advantages of the proposed model in terms of the total amount of data transmitted and data compression ratio are analytically verified. Moreover, we formulate a new energy model by factoring both processor and radio energy consumption into the cost, especially the computation cost incurred in relatively complex algorithms. We also show that communication cost remains dominant in data aggregation in the practical applications of large-scale networks. We use both real-world and synthetic datasets to test CS-based data aggregation schemes on the SIDnet-SWANS simulation platform. The simulation results demonstrate that the proposed HDACS model guarantees accurate signal recovery performance. It also provides substantial energy savings compared with existing methods.

138 citations

Proceedings ArticleDOI
26 Aug 2007
TL;DR: In this article, a simple encoder with a 2D FFT and a random sampler is used to compress the raw SAR data by sampling the signal below Nyquist rate using ideas from Compressed Sensing.
Abstract: Synthetic Aperture Radar (SAR) is an active, coherent microwave high-resolution imaging system with the capability to image in all weather and day-night conditions. SAR transmits chirp signals, and the received echoes are sampled into In-phase (I) and Quadrature (Q) components, generally referred to as raw SAR data. The various modes of SAR, coupled with the high resolution and wide swath requirements, result in a huge amount of data, which can easily exceed the on-board storage and downlink bandwidth of a satellite. This paper addresses the compression of raw SAR data by sampling the signal below the Nyquist rate using ideas from Compressed Sensing (CS). Due to the low computational resources available on board the satellite, the idea is to use a simple encoder with a 2D FFT and a random sampler. Decoding is then based on convex optimization or greedy algorithms such as Orthogonal Matching Pursuit (OMP).
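A minimal sketch of the encoder described above, assuming synthetic stand-in I/Q data and an illustrative 25% sampling ratio; the ground-segment decoder (convex optimization or OMP) is omitted.

```python
import numpy as np

# Stand-in raw I/Q SAR data; real raw data would come from the receiver.
rng = np.random.default_rng(42)
raw = rng.standard_normal((256, 256)) + 1j * rng.standard_normal((256, 256))

spectrum = np.fft.fft2(raw)                     # cheap on-board 2D FFT
n_total = spectrum.size
keep = rng.choice(n_total, size=n_total // 4, replace=False)  # random sub-Nyquist sampler
measurements = spectrum.ravel()[keep]           # this is what gets downlinked

# On the ground, recovery would solve a sparse-reconstruction problem
# given `keep` and `measurements`; that decoder is omitted here.
```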

138 citations

References

Book
D.L. Donoho
01 Jan 2004
TL;DR: It is possible to design n = O(N log(m)) nonadaptive measurements allowing reconstruction with accuracy comparable to that attainable with direct knowledge of the N most important coefficients, and a good approximation to those N important coefficients is extracted from the n measurements by solving a linear program, Basis Pursuit in signal processing.
Abstract: Suppose x is an unknown vector in ℝ^m (a digital image or signal); we plan to measure n general linear functionals of x and then reconstruct. If x is known to be compressible by transform coding with a known transform, and we reconstruct via the nonlinear procedure defined here, the number of measurements n can be dramatically smaller than the size m. Thus, certain natural classes of images with m pixels need only n = O(m^(1/4) log^(5/2)(m)) nonadaptive nonpixel samples for faithful recovery, as opposed to the usual m pixel samples. More specifically, suppose x has a sparse representation in some orthonormal basis (e.g., wavelet, Fourier) or tight frame (e.g., curvelet, Gabor), so the coefficients belong to an ℓ^p ball for 0 < p ≤ 1.

18,609 citations

Journal ArticleDOI
TL;DR: Basis Pursuit (BP) is a principle for decomposing a signal into an "optimal" superposition of dictionary elements, where optimal means having the smallest l1 norm of coefficients among all such decompositions.
Abstract: The time-frequency and time-scale communities have recently developed a large number of overcomplete waveform dictionaries: stationary wavelets, wavelet packets, cosine packets, chirplets, and warplets, to name a few. Decomposition into overcomplete systems is not unique, and several methods for decomposition have been proposed, including the method of frames (MOF), matching pursuit (MP), and, for special dictionaries, the best orthogonal basis (BOB). Basis Pursuit (BP) is a principle for decomposing a signal into an "optimal" superposition of dictionary elements, where optimal means having the smallest l1 norm of coefficients among all such decompositions. We give examples exhibiting several advantages over MOF, MP, and BOB, including better sparsity and superresolution. BP has interesting relations to ideas in areas as diverse as ill-posed problems, abstract harmonic analysis, total variation denoising, and multiscale edge denoising. BP in highly overcomplete dictionaries leads to large-scale optimization problems. With signals of length 8192 and a wavelet packet dictionary, one gets an equivalent linear program of size 8192 by 212,992. Such problems can be attacked successfully only because of recent advances in linear programming by interior-point methods. We obtain reasonable success with a primal-dual logarithmic barrier method and conjugate-gradient solver.
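To make the linear-programming connection concrete, here is a hedged sketch that casts basis pursuit as an LP via the standard split c = u - v with u, v ≥ 0, handing it to scipy's solver (which plays the role of the interior-point methods mentioned above). The toy dictionary is illustrative.

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(Phi, s):
    """Minimize ||c||_1 subject to Phi @ c = s, as a linear program:
    with c = u - v and u, v >= 0, the l1 norm becomes sum(u) + sum(v)."""
    n_atoms = Phi.shape[1]
    cost = np.ones(2 * n_atoms)              # objective: sum(u) + sum(v)
    A_eq = np.hstack([Phi, -Phi])            # equality constraint Phi(u - v) = s
    res = linprog(cost, A_eq=A_eq, b_eq=s, bounds=(0, None), method="highs")
    u, v = res.x[:n_atoms], res.x[n_atoms:]
    return u - v

# Tiny overcomplete dictionary: identity atoms plus random atoms.
rng = np.random.default_rng(3)
Phi = np.hstack([np.eye(32), rng.standard_normal((32, 64))])
c_true = np.zeros(96)
c_true[[5, 40, 77]] = [1.0, -2.0, 0.5]
c_hat = basis_pursuit(Phi, Phi @ c_true)
```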

9,950 citations

Journal ArticleDOI
TL;DR: The authors introduce an algorithm, called matching pursuit, that decomposes any signal into a linear expansion of waveforms that are selected from a redundant dictionary of functions, chosen in order to best match the signal structures.
Abstract: The authors introduce an algorithm, called matching pursuit, that decomposes any signal into a linear expansion of waveforms selected from a redundant dictionary of functions. These waveforms are chosen in order to best match the signal structures. Matching pursuits are general procedures to compute adaptive signal representations. With a dictionary of Gabor functions, a matching pursuit defines an adaptive time-frequency transform. The authors derive a signal energy distribution in the time-frequency plane which does not include interference terms, unlike Wigner and Cohen class distributions. A matching pursuit isolates the signal structures that are coherent with respect to a given dictionary. An application to pattern extraction from noisy signals is described. They compare a matching pursuit decomposition with a signal expansion over an optimized wavepacket orthonormal basis, selected with the algorithm of Coifman and Wickerhauser (IEEE Trans. Inform. Theory, vol. 38, Mar. 1992).
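A minimal sketch of the matching pursuit iteration described above, assuming a dictionary with unit-norm columns (so the inner product is the projection coefficient). Unlike OMP, previously selected coefficients are never re-fit.

```python
import numpy as np

def matching_pursuit(D, f, n_iter=50, tol=1e-6):
    """Greedy MP: repeatedly peel off the projection onto the single
    best-matching atom of the dictionary D (columns assumed unit-norm)."""
    residual = f.copy()
    coeffs = np.zeros(D.shape[1])
    for _ in range(n_iter):
        inner = D.T @ residual
        k = int(np.argmax(np.abs(inner)))    # best-matching atom
        coeffs[k] += inner[k]                # accumulate its coefficient
        residual -= inner[k] * D[:, k]       # subtract that structure
        if np.linalg.norm(residual) < tol:
            break
    return coeffs, residual

rng = np.random.default_rng(2)
D = rng.standard_normal((64, 256))
D /= np.linalg.norm(D, axis=0)               # normalize atoms
f = 2.0 * D[:, 10] - 1.5 * D[:, 100]
coeffs, residual = matching_pursuit(D, f)
```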

9,380 citations

Journal ArticleDOI
TL;DR: A publicly available algorithm that requires only the same order of magnitude of computational effort as ordinary least squares applied to the full set of covariates is described.
Abstract: The purpose of model selection algorithms such as All Subsets, Forward Selection and Backward Elimination is to choose a linear model on the basis of the same set of data to which the model will be applied. Typically we have available a large collection of possible covariates from which we hope to select a parsimonious set for the efficient prediction of a response variable. Least Angle Regression (LARS), a new model selection algorithm, is a useful and less greedy version of traditional forward selection methods. Three main properties are derived: (1) A simple modification of the LARS algorithm implements the Lasso, an attractive version of ordinary least squares that constrains the sum of the absolute regression coefficients; the LARS modification calculates all possible Lasso estimates for a given problem, using an order of magnitude less computer time than previous methods. (2) A different LARS modification efficiently implements Forward Stagewise linear regression, another promising new model selection method; this connection explains the similar numerical results previously observed for the Lasso and Stagewise, and helps us understand the properties of both methods, which are seen as constrained versions of the simpler LARS algorithm. (3) A simple approximation for the degrees of freedom of a LARS estimate is available, from which we derive a C_p estimate of prediction error; this allows a principled choice among the range of possible LARS estimates. LARS and its variants are computationally efficient: the paper describes a publicly available algorithm that requires only the same order of magnitude of computational effort as ordinary least squares applied to the full set of covariates.
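For a quick hands-on view, scikit-learn's lars_path traces the full regularization path; with method="lasso" it applies the modification, described above, that yields every Lasso solution. The data here are synthetic.

```python
import numpy as np
from sklearn.linear_model import lars_path

rng = np.random.default_rng(7)
X = rng.standard_normal((100, 20))
beta = np.zeros(20)
beta[[2, 5, 11]] = [3.0, -2.0, 1.5]
y = X @ beta + 0.1 * rng.standard_normal(100)

# alphas: breakpoints of the path; active: order in which covariates enter;
# coefs: coefficient values at each breakpoint.
alphas, active, coefs = lars_path(X, y, method="lasso")
print(list(active[:3]))      # covariates enter by correlation with the residual
print(coefs[:, -1])          # coefficients at the least-regularized end
```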

7,828 citations