
Signal Recovery from Random Measurements Via Orthogonal Matching Pursuit: The Gaussian Case

TL;DR: In this paper, a greedy algorithm called Orthogonal Matching Pursuit (OMP) was proposed to recover a signal with m nonzero entries in dimension d given O(m ln d) random linear measurements of that signal.
Abstract: This report demonstrates theoretically and empirically that a greedy algorithm called Orthogonal Matching Pursuit (OMP) can reliably recover a signal with m nonzero entries in dimension d given O(m ln d) random linear measurements of that signal. This is a massive improvement over previous results, which require O(m²) measurements. The new results for OMP are comparable with recent results for another approach called Basis Pursuit (BP). In some settings, the OMP algorithm is faster and easier to implement, so it is an attractive alternative to BP for signal recovery problems.
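As a rough illustration of the recovery problem in the abstract, the following sketch runs a basic OMP loop on Gaussian random measurements. It is a minimal sketch of the general technique; the dimensions, sparsity level, and measurement count are illustrative choices, not values from the paper.

import numpy as np

def omp(Phi, y, m):
    """Greedy OMP: select m columns of Phi that best explain y."""
    residual = y.copy()
    support = []
    x_hat = np.zeros(Phi.shape[1])
    for _ in range(m):
        # The column most correlated with the current residual joins the support.
        j = int(np.argmax(np.abs(Phi.T @ residual)))
        support.append(j)
        # Re-fit on the enlarged support by least squares (the "orthogonal" step).
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat[support] = coef
    return x_hat

# Illustrative sizes: d-dimensional signal, m nonzeros, roughly m ln d Gaussian measurements.
rng = np.random.default_rng(0)
d, m = 1024, 10
n = int(2 * m * np.log(d))
x = np.zeros(d)
x[rng.choice(d, size=m, replace=False)] = rng.standard_normal(m)
Phi = rng.standard_normal((n, d)) / np.sqrt(n)
y = Phi @ x
print(np.max(np.abs(omp(Phi, y, m) - x)))   # near zero when recovery succeeds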


Citations
Proceedings ArticleDOI
01 Dec 2011
TL;DR: This paper uses the compressive sensing framework to establish secure physical layer communication over a Wyner wiretap channel to exploit channel asymmetry so that a message, encoded as a sparse vector, is decodable with high probability at the legitimate receiver while it is impossible to decode it at the eavesdropper.
Abstract: This paper uses the compressive sensing framework to establish secure physical layer communication over a Wyner wiretap channel. The idea, at its core, is simple — the paper shows that compressive sensing can exploit channel asymmetry so that a message, encoded as a sparse vector, is decodable with high probability at the legitimate receiver while it is impossible to decode it with high probability at the eavesdropper.

60 citations

Journal ArticleDOI
TL;DR: This paper considers a video system where acquisition is carried out in the form of direct compressive sampling with no other form of sophisticated encoding, and shows that effective implicit motion estimation and decoding can be carried out at the receiver or decoder side via sparsity-aware recovery.
Abstract: Compressed sensing is the theory and practice of sub-Nyquist sampling of sparse signals of interest. Perfect reconstruction may then be possible with much fewer than the Nyquist required number of data. In this paper, in particular, we consider a video system where acquisition is carried out in the form of direct compressive sampling (CS) with no other form of sophisticated encoding. Therefore, the burden of quality video sequence reconstruction falls solely on the receiver side. We show that effective implicit motion estimation and decoding can be carried out at the receiver or decoder side via sparsity-aware recovery. The receiver performs sliding-window interframe decoding that adaptively estimates Karhunen-Loeve bases from adjacent previously reconstructed frames to enhance the sparse representation of each video frame block, such that the overall reconstruction quality is improved at any given fixed CS rate. Experimental results included in this paper illustrate the presented developments.
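A minimal sketch of the adaptive-basis idea in this abstract: estimate a Karhunen-Loeve (eigenvector) basis from blocks of neighboring, already-reconstructed frames and use it as the sparsifying transform for the current frame's blocks. The block size, number of reference frames, and function name below are illustrative assumptions, not the paper's actual implementation.

import numpy as np

def klt_basis_from_frames(frames, block=8):
    """Estimate a Karhunen-Loeve basis from the blocks of previously reconstructed frames."""
    patches = []
    for f in frames:
        H, W = f.shape
        for i in range(0, H - block + 1, block):
            for j in range(0, W - block + 1, block):
                patches.append(f[i:i + block, j:j + block].ravel())
    X = np.stack(patches)               # one row per vectorized block
    X = X - X.mean(axis=0)              # remove the mean block
    cov = X.T @ X / len(X)              # sample covariance of the blocks
    _, vecs = np.linalg.eigh(cov)       # eigenvectors in ascending eigenvalue order
    return vecs[:, ::-1]                # columns sorted by decreasing variance

# Illustrative use: two reference frames provide the basis for 8x8 blocks of the next frame.
rng = np.random.default_rng(1)
prev_frames = [rng.standard_normal((64, 64)) for _ in range(2)]
Psi = klt_basis_from_frames(prev_frames)    # 64 x 64 sparsifying basis
print(Psi.shape)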

60 citations

Journal ArticleDOI
TL;DR: This paper introduces a basic problem in compressive sensing and some disadvantages of random sensing matrices, and discusses some recent results on the construction of deterministic sensing matrices.
Abstract: Compressive sensing is a sampling method which provides a new approach to efficient signal compression and recovery by exploiting the fact that a sparse signal can be suitably reconstructed from very few measurements. One of the main concerns in compressive sensing is the construction of the sensing matrices. While random sensing matrices have been widely studied, only a few deterministic sensing matrices have been considered. Deterministic matrices are highly desirable because their structure allows fast implementation with reduced storage requirements. In this paper, a survey of deterministic sensing matrices for compressive sensing is presented. We introduce a basic problem in compressive sensing and some disadvantages of random sensing matrices. Some recent results on the construction of deterministic sensing matrices are discussed.
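As one concrete example of a structured, fully deterministic sensing matrix, the sketch below builds a chirp sensing matrix whose K x K² columns are discrete chirps. This family is offered only as a plausible illustration of the kind of construction such surveys discuss, not necessarily one covered in this particular paper.

import numpy as np

def chirp_sensing_matrix(K):
    """K x K^2 deterministic matrix whose columns are the discrete chirps
    exp(2*pi*i*(r*x**2 + m*x)/K) / sqrt(K), indexed by chirp rate r and base frequency m."""
    x = np.arange(K).reshape(-1, 1, 1)      # sample index
    r = np.arange(K).reshape(1, -1, 1)      # chirp rate
    m = np.arange(K).reshape(1, 1, -1)      # base frequency
    phase = 2j * np.pi * (r * x**2 + m * x) / K
    return np.exp(phase).reshape(K, K * K) / np.sqrt(K)

A = chirp_sensing_matrix(13)                # 13 x 169, no randomness, trivial to regenerate
G = np.abs(A.conj().T @ A)                  # column coherence: small values enable sparse recovery
np.fill_diagonal(G, 0)
print(A.shape, G.max())                     # worst-case coherence is about 1/sqrt(13) for prime K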

60 citations

Proceedings ArticleDOI
09 Sep 2013
TL;DR: It is shown that the sparse signal reconstruction methods applied to the time-lag domain improve the TFR over the direct application of Fourier transform to the IAF and that the use of signal-adaptive kernels provides superior performance compared to data-independent kernels when missing data are present.
Abstract: In this paper, we examine the time-frequency representation (TFR) and sparse reconstruction of non-stationary signals in the presence of missing data samples. The missing samples produce missing entries in the instantaneous auto-correlation function (IAF) which, in turn, induce artifacts in the time-frequency distribution and ambiguity function. The artifacts are additive and noise-like and, as such, can be mitigated by using proper time-frequency kernels. We show that sparse signal reconstruction methods applied in the time-lag domain improve the TFR over the direct application of the Fourier transform to the IAF. Additionally, the paper demonstrates that the use of signal-adaptive kernels provides superior performance compared to data-independent kernels when missing data are present.
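For orientation, here is a minimal sketch of the direct route this abstract improves upon: form the instantaneous auto-correlation function from whatever sample pairs are available and Fourier transform along the lag axis to obtain a Wigner-type TFR. The signal model, missing-sample rate, and function name are illustrative assumptions, and the sparse-reconstruction and kernel steps from the paper are deliberately omitted.

import numpy as np

def iaf_and_tfr(x, observed):
    """IAF C[t, tau] = x[t+tau] * conj(x[t-tau]), filled only where both samples
    were observed, followed by an FFT along the lag axis to obtain a TFR."""
    N = len(x)
    C = np.zeros((N, N), dtype=complex)
    for t in range(N):
        for tau in range(-(N // 2), N // 2):
            a, b = t + tau, t - tau
            if 0 <= a < N and 0 <= b < N and observed[a] and observed[b]:
                C[t, tau % N] = x[a] * np.conj(x[b])
    return C, np.fft.fft(C, axis=1)         # rows: time, columns: frequency

# Illustrative linear-FM chirp with roughly 30% of its samples missing.
rng = np.random.default_rng(2)
N = 128
n = np.arange(N)
x = np.exp(1j * 2 * np.pi * (0.1 * n + 0.001 * n**2))
observed = rng.random(N) > 0.3
x = np.where(observed, x, 0)                # missing samples set to zero
C, tfr = iaf_and_tfr(x, observed)
print(tfr.shape)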

60 citations

Journal ArticleDOI
TL;DR: A rapid interferer detector that uses compressed sampling (CS) with a quadrature analog-to-information converter (QAIC), a blind sub-Nyquist sampling approach, that is two orders of magnitude more energy efficient than traditional Nyquist-rate architectures and one order of magnitude faster than existing low-pass CS methods.
Abstract: We introduce a rapid interferer detector that uses compressed sampling (CS) with a quadrature analog-to-information converter (QAIC). By exploiting bandpass CS, a blind sub-Nyquist sampling approach, the QAIC offers energy efficient and rapid interferer detection over a wide instantaneous bandwidth. The QAIC front end is implemented in 65 nm CMOS in 0.43 mm² and consumes 81 mW from a 1.1 V supply. It senses a frequency span of 1 GHz ranging from 2.7 to 3.7 GHz (PCAST band) with a resolution bandwidth of 20 MHz in 4.4 µs, 50 times faster than traditional sweeping spectrum scanners. The rapid interferer detector with the bandpass QAIC is two orders of magnitude more energy efficient than traditional Nyquist-rate architectures and one order of magnitude more energy efficient than existing low-pass CS methods. Thanks to CS, the aggregate sampling rate of the QAIC interferer detector is compressed by 6.3× compared to traditional Nyquist-rate architectures for the same instantaneous bandwidth.

59 citations

References

Book
D.L. Donoho
01 Jan 2004
TL;DR: It is possible to design n = O(N log(m)) nonadaptive measurements allowing reconstruction with accuracy comparable to that attainable with direct knowledge of the N most important coefficients, and a good approximation to those N important coefficients is extracted from the n measurements by solving a linear program, Basis Pursuit in signal processing.
Abstract: Suppose x is an unknown vector in R^m (a digital image or signal); we plan to measure n general linear functionals of x and then reconstruct. If x is known to be compressible by transform coding with a known transform, and we reconstruct via the nonlinear procedure defined here, the number of measurements n can be dramatically smaller than the size m. Thus, certain natural classes of images with m pixels need only n = O(m^(1/4) log^(5/2)(m)) nonadaptive nonpixel samples for faithful recovery, as opposed to the usual m pixel samples. More specifically, suppose x has a sparse representation in some orthonormal basis (e.g., wavelet, Fourier) or tight frame (e.g., curvelet, Gabor), so the coefficients belong to an l^p ball for 0 < p ≤ 1.

18,609 citations

Journal ArticleDOI
TL;DR: Basis Pursuit (BP) is a principle for decomposing a signal into an "optimal" superposition of dictionary elements, where optimal means having the smallest l1 norm of coefficients among all such decompositions.
Abstract: The time-frequency and time-scale communities have recently developed a large number of overcomplete waveform dictionaries: stationary wavelets, wavelet packets, cosine packets, chirplets, and warplets, to name a few. Decomposition into overcomplete systems is not unique, and several methods for decomposition have been proposed, including the method of frames (MOF), matching pursuit (MP), and, for special dictionaries, the best orthogonal basis (BOB). Basis Pursuit (BP) is a principle for decomposing a signal into an "optimal" superposition of dictionary elements, where optimal means having the smallest l1 norm of coefficients among all such decompositions. We give examples exhibiting several advantages over MOF, MP, and BOB, including better sparsity and superresolution. BP has interesting relations to ideas in areas as diverse as ill-posed problems, abstract harmonic analysis, total variation denoising, and multiscale edge denoising. BP in highly overcomplete dictionaries leads to large-scale optimization problems. With signals of length 8192 and a wavelet packet dictionary, one gets an equivalent linear program of size 8192 by 212,992. Such problems can be attacked successfully only because of recent advances in linear programming by interior-point methods. We obtain reasonable success with a primal-dual logarithmic barrier method and conjugate-gradient solver.
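Since the abstract reduces Basis Pursuit to a linear program, a small sketch may help: splitting x = u - v with u, v >= 0 turns min ||x||_1 subject to Ax = b into a standard-form LP. The SciPy solver and the toy problem sizes below are illustrative choices, far smaller than the 8192 by 212,992 program mentioned above.

import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, b):
    """min ||x||_1 subject to A x = b, posed as an LP over x = u - v with u, v >= 0."""
    n, d = A.shape
    c = np.ones(2 * d)                      # objective: sum(u) + sum(v) = ||x||_1
    A_eq = np.hstack([A, -A])               # A u - A v = b
    res = linprog(c, A_eq=A_eq, b_eq=b, bounds=(0, None), method="highs")
    uv = res.x
    return uv[:d] - uv[d:]

# Toy example: recover a 3-sparse vector from 20 random equations in 40 unknowns.
rng = np.random.default_rng(3)
A = rng.standard_normal((20, 40))
x_true = np.zeros(40)
x_true[[2, 17, 33]] = [1.0, -2.0, 0.5]
x_hat = basis_pursuit(A, A @ x_true)
print(np.max(np.abs(x_hat - x_true)))       # near zero when the LP recovers the sparse vector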

9,950 citations

Journal ArticleDOI
TL;DR: The authors introduce an algorithm, called matching pursuit, that decomposes any signal into a linear expansion of waveforms that are selected from a redundant dictionary of functions, chosen in order to best match the signal structures.
Abstract: The authors introduce an algorithm, called matching pursuit, that decomposes any signal into a linear expansion of waveforms that are selected from a redundant dictionary of functions. These waveforms are chosen in order to best match the signal structures. Matching pursuits are general procedures to compute adaptive signal representations. With a dictionary of Gabor functions a matching pursuit defines an adaptive time-frequency transform. They derive a signal energy distribution in the time-frequency plane, which does not include interference terms, unlike Wigner and Cohen class distributions. A matching pursuit isolates the signal structures that are coherent with respect to a given dictionary. An application to pattern extraction from noisy signals is described. They compare a matching pursuit decomposition with a signal expansion over an optimized wave packet orthonormal basis, selected with the algorithm of Coifman and Wickerhauser (see IEEE Trans. Inform. Theory, vol. 38, Mar. 1992).
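A minimal sketch of the matching pursuit loop described here: greedily pick the best-matching atom, subtract its contribution, and repeat. An arbitrary unit-norm random dictionary stands in for the Gabor dictionary, and all sizes are illustrative.

import numpy as np

def matching_pursuit(D, y, n_iter):
    """Greedily expand y over the unit-norm columns (atoms) of dictionary D."""
    residual = y.copy()
    coeffs = np.zeros(D.shape[1])
    for _ in range(n_iter):
        corr = D.T @ residual                    # inner product with every atom
        j = int(np.argmax(np.abs(corr)))         # best-matching atom
        coeffs[j] += corr[j]                     # accumulate its coefficient
        residual = residual - corr[j] * D[:, j]  # remove its contribution from the residual
    return coeffs, residual

# Toy redundant dictionary (256 atoms in 64 dimensions) and a two-atom signal.
rng = np.random.default_rng(4)
D = rng.standard_normal((64, 256))
D /= np.linalg.norm(D, axis=0)
y = 2.0 * D[:, 5] - 1.0 * D[:, 100]
coeffs, residual = matching_pursuit(D, y, n_iter=10)
print(np.linalg.norm(residual))                  # residual energy shrinks with each iteration

Unlike the OMP sketch earlier, plain matching pursuit never re-solves a least-squares problem on the selected atoms, so the same atom can be picked more than once.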

9,380 citations

Journal ArticleDOI
TL;DR: A publicly available algorithm that requires only the same order of magnitude of computational effort as ordinary least squares applied to the full set of covariates is described.
Abstract: The purpose of model selection algorithms such as All Subsets, Forward Selection and Backward Elimination is to choose a linear model on the basis of the same set of data to which the model will be applied. Typically we have available a large collection of possible covariates from which we hope to select a parsimonious set for the efficient prediction of a response variable. Least Angle Regression (LARS), a new model selection algorithm, is a useful and less greedy version of traditional forward selection methods. Three main properties are derived: (1) A simple modification of the LARS algorithm implements the Lasso, an attractive version of ordinary least squares that constrains the sum of the absolute regression coefficients; the LARS modification calculates all possible Lasso estimates for a given problem, using an order of magnitude less computer time than previous methods. (2) A different LARS modification efficiently implements Forward Stagewise linear regression, another promising new model selection method; this connection explains the similar numerical results previously observed for the Lasso and Stagewise, and helps us understand the properties of both methods, which are seen as constrained versions of the simpler LARS algorithm. (3) A simple approximation for the degrees of freedom of a LARS estimate is available, from which we derive a Cp estimate of prediction error; this allows a principled choice among the range of possible LARS estimates. LARS and its variants are computationally efficient: the paper describes a publicly available algorithm that requires only the same order of magnitude of computational effort as ordinary least squares applied to the full set of covariates.
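Point (1) of this abstract, the LARS-Lasso connection, can be tried directly with scikit-learn's lars_path, which traces the whole coefficient path for roughly the cost of one ordinary least squares fit; the data below are synthetic and purely illustrative.

import numpy as np
from sklearn.linear_model import lars_path

# Toy regression: 100 samples, 20 candidate covariates, 4 of them truly active.
rng = np.random.default_rng(5)
X = rng.standard_normal((100, 20))
beta = np.zeros(20)
beta[[0, 3, 7, 12]] = [2.0, -1.5, 1.0, 0.5]
y = X @ beta + 0.1 * rng.standard_normal(100)

# method="lasso" computes the full Lasso path via the LARS modification described in the abstract.
alphas, active, coefs = lars_path(X, y, method="lasso")
print(coefs.shape)      # (n_features, n_breakpoints): one column per breakpoint of the path
print(active)           # the order in which covariates enter the model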

7,828 citations