
Signal Recovery from Random Measurements Via Orthogonal Matching Pursuit: The Gaussian Case

TL;DR: In this paper, a greedy algorithm called Orthogonal Matching Pursuit (OMP) was proposed to recover a signal with m nonzero entries in dimension d given O(m ln d) random linear measurements of that signal.
Abstract: This report demonstrates theoretically and empirically that a greedy algorithm called Orthogonal Matching Pursuit (OMP) can reliably recover a signal with m nonzero entries in dimension d given O(m ln d) random linear measurements of that signal. This is a massive improvement over previous results, which require O(m^2) measurements. The new results for OMP are comparable with recent results for another approach called Basis Pursuit (BP). In some settings, the OMP algorithm is faster and easier to implement, so it is an attractive alternative to BP for signal recovery problems.
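
The OMP iteration the abstract refers to is short enough to sketch. Below is a minimal numpy implementation with a demo at roughly the N = O(m ln d) Gaussian-measurement scaling the paper analyzes; the oversampling constant and the random seed are arbitrary demo choices, not values from the paper.

```python
import numpy as np

def omp(Phi, y, m, tol=1e-8):
    """Orthogonal Matching Pursuit: recover an m-sparse x from y = Phi @ x."""
    d = Phi.shape[1]
    support, residual = [], y.copy()
    coef = np.zeros(0)
    for _ in range(m):
        # Greedy step: pick the column most correlated with the residual.
        j = int(np.argmax(np.abs(Phi.T @ residual)))
        if j not in support:
            support.append(j)
        # Orthogonal step: least-squares re-fit on the whole support.
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
        if np.linalg.norm(residual) < tol:
            break
    x_hat = np.zeros(d)
    x_hat[support] = coef
    return x_hat

# Gaussian measurements at roughly N = O(m ln d); the constant 4 is arbitrary.
rng = np.random.default_rng(0)
d, m = 1024, 8
N = int(4 * m * np.log(d))
Phi = rng.standard_normal((N, d)) / np.sqrt(N)
x = np.zeros(d)
x[rng.choice(d, size=m, replace=False)] = rng.standard_normal(m)
print(np.max(np.abs(omp(Phi, Phi @ x, m) - x)))  # ~0 with high probability
```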


Citations
Journal ArticleDOI
TL;DR: In one- and two-dimensional numerical SR examples, the local optimal solutions from difference-of-convex-function algorithms outperform the global $L_1$ solutions near or below Rayleigh length scales, either in the accuracy of ground-truth recovery or in finding a sparse solution that satisfies the constraints more accurately.
Abstract: We study the super-resolution (SR) problem of recovering point sources consisting of a collection of isolated and suitably separated spikes from only the low frequency measurements. If the peak separation is above a factor in (1, 2) of the Rayleigh length (physical resolution limit), $L_1$ minimization is guaranteed to recover such sparse signals. However, below such critical length scale, especially the Rayleigh length, the $L_1$ certificate no longer exists. We show several local properties (local minimum, directional stationarity, and sparsity) of the limit points of minimizing two $L_1$-based nonconvex penalties, the difference of $L_1$ and $L_2$ norms ($L_{1-2}$) and capped $L_1$ (C$L_1$), subject to the measurement constraints. In one- and two-dimensional numerical SR examples, the local optimal solutions from difference-of-convex-function algorithms outperform the global $L_1$ solutions near or below Rayleigh length scales, either in the accuracy of ground-truth recovery or in finding a sparse solution satisfying the constraints more accurately.
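
The linearization at the heart of a difference-of-convex-function algorithm (DCA) for the $L_{1-2}$ penalty is compact enough to sketch. The version below uses the cvxpy modeling library, treats only the noiseless equality-constrained case, and illustrates the DCA splitting rather than the authors' exact algorithm; the iteration count and tolerances are arbitrary.

```python
import numpy as np
import cvxpy as cp

def dca_l1_minus_l2(A, b, iters=20):
    """DCA for min ||x||_1 - ||x||_2 subject to Ax = b.

    ||x||_1 - ||x||_2 is a difference of convex functions; each DCA step
    linearizes the concave part -||x||_2 at the current iterate and solves
    the resulting convex problem exactly.
    """
    x_k = np.linalg.lstsq(A, b, rcond=None)[0]  # a feasible starting point
    for _ in range(iters):
        g = x_k / max(np.linalg.norm(x_k), 1e-12)  # subgradient of ||x||_2
        x = cp.Variable(A.shape[1])
        cp.Problem(cp.Minimize(cp.norm1(x) - g @ x), [A @ x == b]).solve()
        x_k = x.value
    return x_k
```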

46 citations

Journal ArticleDOI
TL;DR: A structured sparsity-based hyperspectral blind compressive sensing method that outperforms several state-of-the-art HCS methods in terms of the reconstruction accuracy achieved and is robust to noise corruption in the measurements.
Abstract: The ability to accurately represent a hyperspectral image (HSI) as a combination of a small number of elements from an appropriate dictionary underpins much of the recent progress in hyperspectral compressive sensing (HCS). Preserving structure in the sparse representation is critical to achieving an accurate reconstruction but has thus far only been partially exploited because existing methods assume a predefined dictionary. To address this problem, a structured sparsity-based hyperspectral blind compressive sensing method is presented in this study. For the reconstructed HSI, a data-adaptive dictionary is learned directly from its noisy measurements, which promotes the underlying structured sparsity and markedly improves reconstruction accuracy. Specifically, a fully structured dictionary prior is first proposed to jointly depict the structure in each dictionary atom as well as the correlation between atoms, where the magnitude of each atom is also regularized. Then, a reweighted Laplace prior is employed to model the structured sparsity in the representation of the HSI. Based on these two priors, a unified optimization framework is proposed to learn both the dictionary and sparse representation from the measurements by alternately optimizing two separate latent variable Bayes models. With the learned dictionary, the structured sparsity of HSIs can be well described by the reweighted Laplace prior. In addition, both the learned dictionary and sparse representation are robust to noise corruption in the measurements. Extensive experiments on three hyperspectral data sets demonstrate that the proposed method outperforms several state-of-the-art HCS methods in terms of the reconstruction accuracy achieved.
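
A heavily simplified sketch of the blind-CS alternation: a plain ISTA sparse-coding step stands in for the paper's reweighted-Laplace model, and a least-squares update stands in for the structured dictionary prior. Every name and parameter value here is an illustrative assumption, not the authors' algorithm.

```python
import numpy as np

def soft(Z, t):
    """Soft-thresholding: the proximal map of an l1 (Laplace-prior) penalty."""
    return np.sign(Z) * np.maximum(np.abs(Z) - t, 0.0)

def blind_cs(Y, Phi, n_atoms, lam=0.1, outer=30, inner=15):
    """Toy blind CS: learn dictionary D and codes X from Y ~ Phi @ D @ X."""
    rng = np.random.default_rng(0)
    D = rng.standard_normal((Phi.shape[1], n_atoms))
    D /= np.linalg.norm(D, axis=0)
    X = np.zeros((n_atoms, Y.shape[1]))
    for _ in range(outer):
        A = Phi @ D
        step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1 / Lipschitz constant
        for _ in range(inner):  # ISTA on the sparse codes
            X = soft(X - step * A.T @ (A @ X - Y), step * lam)
        # Least-squares dictionary update, then renormalize the atoms.
        D = np.linalg.pinv(Phi) @ Y @ np.linalg.pinv(X)
        D /= np.linalg.norm(D, axis=0) + 1e-12
    return D, X
```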

46 citations

Journal ArticleDOI
TL;DR: A new framework for model-guided adaptive recovery of compressive sensing (MARX) is proposed and it is shown how a 2-D piecewise autoregressive model can be integrated into the MARX framework to make CS recovery adaptive to spatially varying second order statistics of an image.
Abstract: In compressive sensing (CS), a challenge is to find a space in which the signal is sparse and, hence, faithfully recoverable. Since many natural signals such as images have locally varying statistics, the sparse space varies in time/spatial domain. As such, CS recovery should be conducted in locally adaptive signal-dependent spaces to counter the fact that the CS measurements are global and irrespective of signal structures. In contrast, existing CS reconstruction methods use a fixed set of bases (e.g., wavelets, DCT, and gradient spaces) for the entirety of a signal. To rectify this problem, we propose a new framework for model-guided adaptive recovery of compressive sensing (MARX) and show how a 2-D piecewise autoregressive model can be integrated into the MARX framework to make CS recovery adaptive to spatially varying second-order statistics of an image. In addition, MARX offers a mechanism of characterizing and exploiting structured sparsities of natural images, greatly restricting the CS solution space. Simulation results over a wide range of natural images show that the proposed MARX technique can improve the reconstruction quality of existing CS methods by 2-7 dB.
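
For a feel of the model class MARX builds on, the sketch below fits a single (stationary) 2-D autoregressive model to an image window by regressing each interior pixel on its four diagonal neighbors. MARX itself uses piecewise models, re-fitted locally inside the recovery optimization, so this is only the building block.

```python
import numpy as np

def fit_ar2d(patch):
    """Fit a 2-D autoregressive model over a window.

    Each interior pixel is regressed on its four diagonal neighbors;
    locally re-fitting such coefficients is what adapts a recovery to
    spatially varying second-order image statistics.
    """
    H, W = patch.shape
    rows, cols = np.meshgrid(range(1, H - 1), range(1, W - 1), indexing="ij")
    r, c = rows.ravel(), cols.ravel()
    X = np.stack([patch[r - 1, c - 1], patch[r - 1, c + 1],
                  patch[r + 1, c - 1], patch[r + 1, c + 1]], axis=1)
    y = patch[r, c]
    a, *_ = np.linalg.lstsq(X, y, rcond=None)
    return a  # AR coefficients; the model's prediction is X @ a
```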

46 citations

Proceedings ArticleDOI
07 Nov 2009
TL;DR: This paper applies compressive sampling to reduce the sampling rate of images/video, exploiting intra- and inter-frame correlation to improve signal recovery.
Abstract: Compressive sampling is a novel framework that exploits sparsity of a signal in a transform domain to perform sampling below the Nyquist rate. In this paper, we apply compressive sampling to reduce the sampling rate of images/video. The key idea is to exploit the intra- and inter-frame correlation to improve signal recovery algorithms. The image is split into non-overlapping blocks of fixed size, which are independently compressively sampled exploiting the sparsity of natural scenes in the Discrete Cosine Transform (DCT) domain. At the decoder, each block is recovered using useful information extracted from the recovery of a neighboring block. In the case of video, a previous frame is used to help recovery of consecutive frames. An iterative algorithm for signal recovery with side information, extending the standard orthogonal matching pursuit (OMP) algorithm, is employed. Simulation results are given for Magnetic Resonance Imaging (MRI) and video sequences to illustrate the advantages of the proposed solution compared to the case where side information is not used.
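
One simple way to realize "recovery with side information" is to warm-start OMP with the support estimated from a neighboring block (or, for video, from the co-located block of the previous frame). The sketch below takes that route on a block's DCT coefficients; it is a plausible stand-in for, not a reproduction of, the paper's extended OMP.

```python
import numpy as np

def omp_side(Phi, y, k, init_support=()):
    """OMP seeded with a support estimate from a neighboring block.

    init_support holds DCT-coefficient indices found significant in an
    already-recovered neighbor; seeding them lets the residual steps
    spend the measurement budget on what differs between blocks.
    """
    support = list(init_support)
    residual = y.copy()
    for _ in range(k):
        if support:  # re-fit on the current support before each greedy pick
            coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
            residual = y - Phi[:, support] @ coef
        j = int(np.argmax(np.abs(Phi.T @ residual)))
        if j not in support:
            support.append(j)
    coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
    x = np.zeros(Phi.shape[1])
    x[support] = coef
    return x  # sparse DCT coefficients; invert the DCT to get pixels back
```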

46 citations

Journal ArticleDOI
TL;DR: An uplink/downlink channel estimation method for millimeter wave (mmWave) massive multiple-input multiple-output (MIMO) systems is designed, adopting an off-grid sparse Bayesian learning (SBL) approach that works directly on the continuous angle-delay parameter domain and avoids the grid mismatch problem.
Abstract: In this paper, we design an uplink/downlink channel estimation method for millimeter wave (mmWave) massive multiple-input multiple-output (MIMO) systems and investigate the impact of the beam squint effect that accompanies large array configurations. Specifically, we adopt off-grid sparse Bayesian learning (SBL), which works directly on the continuous angle-delay parameter domain and avoids the grid mismatch problem. Hence, the proposed method achieves good channel estimation accuracy and handles the wideband direction of arrival (DOA) estimation problem for mmWave massive MIMO communications, where the beam squint effect has previously been ignored in much of the existing literature. The Cramer-Rao bound for the unknown parameters is derived to make the study complete. More importantly, a much simplified downlink channel estimation scheme is designed with the aid of angle-delay reciprocity, which significantly reduces training and feedback overhead. Simulation results are provided to demonstrate the superior performance of the proposed method over existing ones.
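
For orientation, the classic on-grid SBL iteration (EM updates of per-atom prior variances, after Tipping) is sketched below; the paper's off-grid variant additionally updates the continuous angle-delay parameters themselves, which this fixed-grid sketch does not attempt.

```python
import numpy as np

def sbl(A, y, sigma2=1e-2, iters=50):
    """On-grid sparse Bayesian learning via EM.

    Prior x_i ~ N(0, gamma_i); atoms whose gamma_i shrinks toward zero
    are effectively pruned, which is what induces sparsity.
    """
    n = A.shape[1]
    gamma = np.ones(n)
    for _ in range(iters):
        # Gaussian posterior over x given the current hyperparameters.
        Sigma = np.linalg.inv(A.T @ A / sigma2 + np.diag(1.0 / gamma))
        mu = Sigma @ A.T @ y / sigma2
        # EM update of the per-atom prior variances.
        gamma = mu**2 + np.diag(Sigma)
    return mu, gamma
```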

46 citations

References

Book
D.L. Donoho1
01 Jan 2004
TL;DR: It is possible to design $n = O(N \log(m))$ nonadaptive measurements allowing reconstruction with accuracy comparable to that attainable with direct knowledge of the N most important coefficients; a good approximation to those N coefficients is extracted from the n measurements by solving a linear program, known as Basis Pursuit in signal processing.
Abstract: Suppose x is an unknown vector in $\mathbb{R}^m$ (a digital image or signal); we plan to measure n general linear functionals of x and then reconstruct. If x is known to be compressible by transform coding with a known transform, and we reconstruct via the nonlinear procedure defined here, the number of measurements n can be dramatically smaller than the size m. Thus, certain natural classes of images with m pixels need only $n = O(m^{1/4} \log^{5/2}(m))$ nonadaptive nonpixel samples for faithful recovery, as opposed to the usual m pixel samples. More specifically, suppose x has a sparse representation in some orthonormal basis (e.g., wavelet, Fourier) or tight frame (e.g., curvelet, Gabor), so the coefficients belong to an $\ell_p$ ball for $0 < p \le 1$.

18,609 citations

Journal ArticleDOI
TL;DR: Basis Pursuit (BP) is a principle for decomposing a signal into an "optimal" superposition of dictionary elements, where optimal means having the smallest l1 norm of coefficients among all such decompositions.
Abstract: The time-frequency and time-scale communities have recently developed a large number of overcomplete waveform dictionaries --- stationary wavelets, wavelet packets, cosine packets, chirplets, and warplets, to name a few. Decomposition into overcomplete systems is not unique, and several methods for decomposition have been proposed, including the method of frames (MOF), matching pursuit (MP), and, for special dictionaries, the best orthogonal basis (BOB). Basis Pursuit (BP) is a principle for decomposing a signal into an "optimal" superposition of dictionary elements, where optimal means having the smallest l1 norm of coefficients among all such decompositions. We give examples exhibiting several advantages over MOF, MP, and BOB, including better sparsity and superresolution. BP has interesting relations to ideas in areas as diverse as ill-posed problems, abstract harmonic analysis, total variation denoising, and multiscale edge denoising. BP in highly overcomplete dictionaries leads to large-scale optimization problems. With signals of length 8192 and a wavelet packet dictionary, one gets an equivalent linear program of size 8192 by 212,992. Such problems can be attacked successfully only because of recent advances in linear programming by interior-point methods. We obtain reasonable success with a primal-dual logarithmic barrier method and conjugate-gradient solver.
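
The LP reformulation behind BP is compact: split the coefficient vector into positive and negative parts and minimize their sum subject to the synthesis constraint. A sketch with scipy's linprog, whose HiGHS backend stands in for the interior-point methods the abstract mentions:

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(Phi, y):
    """Basis Pursuit as a linear program: min ||c||_1 s.t. Phi @ c = y.

    Writing c = u - v with u, v >= 0 turns the l1 objective into the
    linear objective 1^T u + 1^T v, doubling the variable count.
    """
    n = Phi.shape[1]
    res = linprog(c=np.ones(2 * n),
                  A_eq=np.hstack([Phi, -Phi]), b_eq=y,
                  bounds=(0, None), method="highs")
    u, v = res.x[:n], res.x[n:]
    return u - v
```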

9,950 citations

Journal ArticleDOI
TL;DR: The authors introduce an algorithm, called matching pursuit, that decomposes any signal into a linear expansion of waveforms that are selected from a redundant dictionary of functions, chosen in order to best match the signal structures.
Abstract: The authors introduce an algorithm, called matching pursuit, that decomposes any signal into a linear expansion of waveforms that are selected from a redundant dictionary of functions. These waveforms are chosen in order to best match the signal structures. Matching pursuits are general procedures to compute adaptive signal representations. With a dictionary of Gabor functions, a matching pursuit defines an adaptive time-frequency transform. They derive a signal energy distribution in the time-frequency plane, which does not include interference terms, unlike Wigner and Cohen class distributions. A matching pursuit isolates the signal structures that are coherent with respect to a given dictionary. An application to pattern extraction from noisy signals is described. They compare a matching pursuit decomposition with a signal expansion over an optimized wavepacket orthonormal basis, selected with the algorithm of Coifman and Wickerhauser (see IEEE Trans. Inform. Theory, vol. 38, Mar. 1992).
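
A bare-bones matching pursuit, to contrast with the OMP sketch earlier on this page: each pass subtracts the projection onto the single best-matching atom and never re-fits past coefficients, so the same atom can be selected more than once. Atoms are assumed unit-norm.

```python
import numpy as np

def matching_pursuit(D, signal, n_iter):
    """Plain matching pursuit over a (possibly redundant) dictionary D.

    Greedily peels energy off the residual one atom at a time; unlike
    OMP there is no least-squares re-fit over the selected set.
    """
    residual = signal.astype(float).copy()
    coeffs = np.zeros(D.shape[1])
    for _ in range(n_iter):
        inner = D.T @ residual             # correlation with every atom
        j = int(np.argmax(np.abs(inner)))  # best-matching atom
        coeffs[j] += inner[j]
        residual -= inner[j] * D[:, j]
    return coeffs, residual
```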

9,380 citations

Journal ArticleDOI
TL;DR: A publicly available algorithm is described that requires only the same order of magnitude of computational effort as ordinary least squares applied to the full set of covariates.
Abstract: The purpose of model selection algorithms such as All Subsets, Forward Selection and Backward Elimination is to choose a linear model on the basis of the same set of data to which the model will be applied. Typically we have available a large collection of possible covariates from which we hope to select a parsimonious set for the efficient prediction of a response variable. Least Angle Regression (LARS), a new model selection algorithm, is a useful and less greedy version of traditional forward selection methods. Three main properties are derived: (1) A simple modification of the LARS algorithm implements the Lasso, an attractive version of ordinary least squares that constrains the sum of the absolute regression coefficients; the LARS modification calculates all possible Lasso estimates for a given problem, using an order of magnitude less computer time than previous methods. (2) A different LARS modification efficiently implements Forward Stagewise linear regression, another promising new model selection method; this connection explains the similar numerical results previously observed for the Lasso and Stagewise, and helps us understand the properties of both methods, which are seen as constrained versions of the simpler LARS algorithm. (3) A simple approximation for the degrees of freedom of a LARS estimate is available, from which we derive a Cp estimate of prediction error; this allows a principled choice among the range of possible LARS estimates. LARS and its variants are computationally efficient: the paper describes a publicly available algorithm that requires only the same order of magnitude of computational effort as ordinary least squares applied to the full set of covariates.
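
scikit-learn ships an implementation of the LARS path; the snippet below traces the full path on synthetic data (all sizes and values are illustrative) with method="lasso", i.e., the Lasso modification described in the paper, and inspects the resulting path.

```python
import numpy as np
from sklearn.linear_model import lars_path

# Synthetic regression problem with three truly active covariates.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 25))
beta = np.zeros(25)
beta[:3] = [4.0, -2.0, 1.5]
y = X @ beta + 0.1 * rng.standard_normal(100)

# method="lasso" makes the path contain every Lasso solution.
alphas, active, coefs = lars_path(X, y, method="lasso")
print(active[:3])    # active set indices (entry order for the early steps)
print(coefs.shape)   # (n_features, n_steps): coefficients along the path
```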

7,828 citations