
Signal Recovery from Random Measurements Via Orthogonal Matching Pursuit: The Gaussian Case

TL;DR: In this paper, it is shown that a greedy algorithm called Orthogonal Matching Pursuit (OMP) can reliably recover a signal with m nonzero entries in dimension d given O(m ln d) random linear measurements of that signal.
Abstract: This report demonstrates theoretically and empirically that a greedy algorithm called Orthogonal Matching Pursuit (OMP) can reliably recover a signal with m nonzero entries in dimension d given O(m ln d) random linear measurements of that signal. This is a massive improvement over previous results, which require O(m²) measurements. The new results for OMP are comparable with recent results for another approach called Basis Pursuit (BP). In some settings, the OMP algorithm is faster and easier to implement, so it is an attractive alternative to BP for signal recovery problems.
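As a concrete illustration of the greedy loop the abstract describes, here is a minimal NumPy sketch of OMP, assuming a measurement matrix Phi with n rows and d columns and measurements y = Phi x; the function and variable names are illustrative, not the authors' reference implementation.

```python
import numpy as np

def omp(Phi, y, m, tol=1e-10):
    """Orthogonal Matching Pursuit sketch: recover an m-sparse x
    from measurements y = Phi @ x by greedily selecting the column
    of Phi most correlated with the residual, then re-fitting all
    selected columns by least squares."""
    n, d = Phi.shape
    support, residual = [], y.copy()
    coef = np.zeros(0)
    for _ in range(m):
        k = int(np.argmax(np.abs(Phi.T @ residual)))   # best-matching atom
        if k not in support:
            support.append(k)
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef          # orthogonal update
        if np.linalg.norm(residual) < tol:
            break
    x_hat = np.zeros(d)
    x_hat[support] = coef
    return x_hat

# Toy check: d = 256, m = 8, and n on the order of m ln d Gaussian rows.
rng = np.random.default_rng(0)
d, m = 256, 8
n = int(4 * m * np.log(d))
Phi = rng.standard_normal((n, d)) / np.sqrt(n)
x = np.zeros(d)
x[rng.choice(d, size=m, replace=False)] = rng.standard_normal(m)
print(np.allclose(omp(Phi, Phi @ x, m), x, atol=1e-6))  # expect True
```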


Citations
Journal Article
TL;DR: The theory of compressive sampling (also known as compressed sensing, or CS), a novel sensing/sampling paradigm that goes against the common wisdom in data acquisition, is surveyed.
Abstract: Conventional approaches to sampling signals or images follow Shannon's theorem: the sampling rate must be at least twice the maximum frequency present in the signal (Nyquist rate). In the field of data conversion, standard analog-to-digital converter (ADC) technology implements the usual quantized Shannon representation - the signal is uniformly sampled at or above the Nyquist rate. This article surveys the theory of compressive sampling, also known as compressed sensing or CS, a novel sensing/sampling paradigm that goes against the common wisdom in data acquisition. CS theory asserts that one can recover certain signals and images from far fewer samples or measurements than traditional methods use.
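To make the sampling paradigm concrete, the following display is a standard schematic of the CS measurement model and the associated convex recovery program; it paraphrases textbook formulations rather than quoting the survey.

```latex
% Schematic CS model (standard formulation, not quoted from the article):
% an m-sparse signal x in R^d is observed through n << d random linear
% measurements, far below the Nyquist count,
\[
  y = \Phi x, \qquad \Phi \in \mathbb{R}^{n \times d}, \qquad n = O(m \log d),
\]
% and recovery seeks the sparsest signal consistent with the data, e.g.
\[
  \hat{x} = \arg\min_{z \in \mathbb{R}^{d}} \|z\|_{1}
  \quad \text{subject to} \quad \Phi z = y .
\]
```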

9,686 citations

Journal Article
TL;DR: It is demonstrated theoretically and empirically that a greedy algorithm called orthogonal matching pursuit (OMP) can reliably recover a signal with m nonzero entries in dimension d given O(m ln d) random linear measurements of that signal.
Abstract: This paper demonstrates theoretically and empirically that a greedy algorithm called orthogonal matching pursuit (OMP) can reliably recover a signal with m nonzero entries in dimension d given O(m ln d) random linear measurements of that signal. This is a massive improvement over previous results, which require O(m²) measurements. The new results for OMP are comparable with recent results for another approach called basis pursuit (BP). In some settings, the OMP algorithm is faster and easier to implement, so it is an attractive alternative to BP for signal recovery problems.

8,604 citations

Journal Article
TL;DR: A new iterative recovery algorithm called CoSaMP is described that delivers the same guarantees as the best optimization-based approaches and offers rigorous bounds on computational cost and storage.

3,970 citations

Journal Article
TL;DR: This paper considers transmit precoding and receiver combining in mmWave systems with large antenna arrays and develops algorithms that accurately approximate optimal unconstrained precoders and combiners such that they can be implemented in low-cost RF hardware.
Abstract: Millimeter wave (mmWave) signals experience orders-of-magnitude more pathloss than the microwave signals currently used in most wireless applications and all cellular systems. MmWave systems must therefore leverage large antenna arrays, made possible by the decrease in wavelength, to combat pathloss with beamforming gain. Beamforming with multiple data streams, known as precoding, can be used to further improve mmWave spectral efficiency. Both beamforming and precoding are done digitally at baseband in traditional multi-antenna systems. The high cost and power consumption of mixed-signal devices in mmWave systems, however, make analog processing in the RF domain more attractive. This hardware limitation restricts the feasible set of precoders and combiners that can be applied by practical mmWave transceivers. In this paper, we consider transmit precoding and receiver combining in mmWave systems with large antenna arrays. We exploit the spatial structure of mmWave channels to formulate the precoding/combining problem as a sparse reconstruction problem. Using the principle of basis pursuit, we develop algorithms that accurately approximate optimal unconstrained precoders and combiners such that they can be implemented in low-cost RF hardware. We present numerical results on the performance of the proposed algorithms and show that they allow mmWave systems to approach their unconstrained performance limits, even when transceiver hardware constraints are considered.
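The sparse-reconstruction formulation described above lends itself to an orthogonal-matching-pursuit-style selection loop. The sketch below illustrates that idea under stated assumptions (a dictionary A_t of candidate array response vectors, a target unconstrained precoder F_opt, and n_rf RF chains); it omits the paper's normalization steps and hardware constraints, and the names are illustrative.

```python
import numpy as np

def sparse_precoder(F_opt, A_t, n_rf):
    """OMP-style sketch of sparse precoding: approximate an
    unconstrained precoder F_opt by n_rf columns drawn from a
    dictionary A_t of candidate RF beamforming vectors (e.g. array
    response vectors). Normalization and hardware constraints from
    the paper are omitted."""
    F_rf = np.zeros((A_t.shape[0], 0), dtype=complex)
    F_res = F_opt
    for _ in range(n_rf):
        Psi = A_t.conj().T @ F_res                 # correlate candidates
        k = int(np.argmax(np.sum(np.abs(Psi) ** 2, axis=1)))
        F_rf = np.hstack([F_rf, A_t[:, [k]]])      # add the best beam
        # Least-squares baseband precoder for the beams chosen so far.
        F_bb, *_ = np.linalg.lstsq(F_rf, F_opt, rcond=None)
        F_res = F_opt - F_rf @ F_bb                # residual precoder
    return F_rf, F_bb
```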

3,146 citations

Journal Article
TL;DR: This extended abstract describes a recent algorithm, called CoSaMP, that accomplishes the data recovery task and was the first known method to offer near-optimal guarantees on resource usage.
Abstract: Compressive sampling (CoSa) is a new paradigm for developing data sampling technologies. It is based on the principle that many types of vector-space data are compressible, which is a term of art in mathematical signal processing. The key ideas are that randomized dimension reduction preserves the information in a compressible signal and that it is possible to develop hardware devices that implement this dimension reduction efficiently. The main computational challenge in CoSa is to reconstruct a compressible signal from the reduced representation acquired by the sampling device. This extended abstract describes a recent algorithm, called CoSaMP, that accomplishes the data recovery task. It was the first known method to offer near-optimal guarantees on resource usage.
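The published CoSaMP pseudocode follows a fixed identification / support-merge / estimation / pruning / sample-update cycle; the NumPy sketch below mirrors that structure and is illustrative rather than the authors' reference code.

```python
import numpy as np

def cosamp(Phi, y, s, n_iter=30, tol=1e-10):
    """CoSaMP sketch mirroring the published pseudocode:
    identification (2s largest proxy entries), support merge,
    least-squares estimation, pruning to s terms, sample update."""
    n, d = Phi.shape
    x, residual = np.zeros(d), y.copy()
    for _ in range(n_iter):
        proxy = Phi.T @ residual
        omega = np.argsort(np.abs(proxy))[-2 * s:]        # identification
        T = np.union1d(omega, np.flatnonzero(x))          # support merge
        sol, *_ = np.linalg.lstsq(Phi[:, T], y, rcond=None)
        b = np.zeros(d)
        b[T] = sol                                        # estimation
        keep = np.argsort(np.abs(b))[-s:]                 # pruning
        x = np.zeros(d)
        x[keep] = b[keep]
        residual = y - Phi @ x                            # sample update
        if np.linalg.norm(residual) < tol:
            break
    return x
```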

2,928 citations

References
Journal Article
TL;DR: It is proved that the probability Pn that a random n × n ±1 matrix is singular satisfies Pn < (1 − ε)^n for some positive constant ε, a considerable improvement on the previous best bound Pn = O(1/√n), though still short of the conjectured asymptotic Pn = (1 + o(1)) n² 2^(1−n).
Abstract: We report some progress on the old problem of estimating the probability, Pn, that a random n × n ±1 matrix is singular: Theorem. There is a positive constant ε for which Pn < (1 − ε)^n. This is a considerable improvement on the best previous bound, Pn = O(1/√n), given by Komlós in 1977, but still falls short of the often-conjectured asymptotic formula Pn = (1 + o(1)) n² 2^(1−n). The proof combines ideas from combinatorial number theory, Fourier analysis, and combinatorics, together with some probabilistic constructions. A key ingredient, based on a Fourier-analytic idea of Halász, is an inequality (Theorem 2) relating the probability that a vector a ∈ R^n is orthogonal to a random ε ∈ {±1}^n to the corresponding probability when ε is drawn from {−1, 0, 1}^n with Pr(ε_i = −1) = Pr(ε_i = 1) = p and the ε_i chosen independently.
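As a quick empirical companion to these bounds, the snippet below estimates Pn by Monte Carlo for small n; it is an illustration, not an experiment from the paper, and since the stated bounds are asymptotic, tight agreement at small n is not expected.

```python
import numpy as np

# Monte Carlo estimate of P_n, the probability that a random n x n
# +-1 matrix is singular. Illustrative only (not from the paper): the
# bounds above are asymptotic, so agreement at these small n is loose.
rng = np.random.default_rng(0)
trials = 50_000
for n in (2, 3, 4, 6, 8):
    mats = rng.choice([-1.0, 1.0], size=(trials, n, n))
    # det of a +-1 matrix is an integer, so testing for (near-)zero
    # determinants is reliable at these sizes.
    p_hat = np.mean(np.isclose(np.linalg.det(mats), 0.0))
    print(f"n={n}: estimated P_n ~ {p_hat:.4f}")
```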

247 citations

Journal Article
TL;DR: An approach to error-correcting codes from the viewpoint of geometric functional analysis (asymptotic convex geometry) is developed, which belongs to a common ground of coding theory, signal processing, combinatorial geometry, and geometricfunctional analysis.
Abstract: The results of this paper can be stated in three equivalent ways: in terms of the sparse recovery problem, the error-correction problem, and the problem of existence of certain extremal (neighborly) polytopes. Error-correcting codes are used in modern technology to protect information from errors. Information is formed by finite words over some alphabet F. The encoder transforms an n-letter word x into an m-letter word y with m > n. The decoder must be able to recover x correctly when up to r letters of y are corrupted in any way. Such an encoder-decoder pair is called an (n, m, r)-error-correcting code. The development of algorithmically efficient error-correcting codes has attracted the attention of engineers, computer scientists, and applied mathematicians for the past five decades. Known constructions involve deep algebraic and combinatorial methods; see [34, 35, 36]. This paper develops an approach to error-correcting codes from the viewpoint of geometric functional analysis (asymptotic convex geometry). It thus belongs to a common ground of coding theory, signal processing, combinatorial geometry, and geometric functional analysis. Our argument, outlined in Section 3, may be of independent interest in geometric functional analysis. Our main focus is on words over the alphabet F = R or C; in applications, these words may be formed of the coefficients of some signal (such as an image or audio).
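The equivalence between error correction and sparse recovery that drives this viewpoint can be stated in one display; the following is the standard reduction, paraphrased rather than quoted from the paper.

```latex
% Schematic reduction (standard in this literature, paraphrased):
% encode x as y = Ax with A an m x n matrix, m > n, and receive
% y' = Ax + e, where the corruption e has at most r nonzero entries.
% For any B with BA = 0, the syndrome z = By' = Be depends only on e,
% so decoding reduces to recovering a sparse error vector, e.g. via
\[
  \hat{e} = \arg\min_{u} \|u\|_{1}
  \quad \text{subject to} \quad Bu = By' ,
\]
% after which x is read off by solving Ax = y' - \hat{e}.
```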

193 citations

Journal Article
TL;DR: In this article, the authors investigate the problem of reconstructing sparse multivariate trigonometric polynomials from few randomly taken samples by Basis Pursuit and greedy algorithms such as Orthogonal Matching Pursuit (OMP) and Thresholding.
Abstract: We investigate the problem of reconstructing sparse multivariate trigonometric polynomials from few randomly taken samples by Basis Pursuit and greedy algorithms such as Orthogonal Matching Pursuit (OMP) and Thresholding. While recovery by Basis Pursuit has recently been studied by several authors, we provide theoretical results on the success probability of reconstruction via Thresholding and OMP for both a continuous and a discrete probability model for the sampling points. We present numerical experiments, which indicate that usually Basis Pursuit is significantly slower than greedy algorithms, while the recovery rates are very similar.
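A minimal sketch of the thresholding estimator discussed above, assuming samples of the polynomial at known random points and a finite candidate frequency set; the least-squares refit on the selected support is a common variant, and all names are illustrative rather than the authors'.

```python
import numpy as np

def threshold_recover(samples, points, freqs, m):
    """Thresholding sketch for sparse trigonometric polynomials:
    correlate the samples against every candidate frequency, keep the
    m most correlated frequencies, then least-squares fit on them."""
    # Fourier-type dictionary: one column per candidate frequency k,
    # with entries e^{2 pi i k x_j} at the sampling points x_j.
    A = np.exp(2j * np.pi * np.outer(points, freqs))
    correlations = A.conj().T @ samples
    support = np.argsort(np.abs(correlations))[-m:]
    coef, *_ = np.linalg.lstsq(A[:, support], samples, rcond=None)
    return freqs[support], coef
```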

182 citations

Journal Article
TL;DR: In this article, it is shown that recovery by a BP variant is stable under perturbation of the sample values by noise, and a similar partial result for OMP is provided.
Abstract: Recently, it has been observed that a sparse trigonometric polynomial, i.e., one having only a small number of nonzero coefficients, can be reconstructed exactly from a small number of random samples using basis pursuit (BP) or orthogonal matching pursuit (OMP). In this paper, it is shown that recovery by a BP variant is stable under perturbation of the sample values by noise. A similar partial result for OMP is provided. For BP, in addition, the stability result is extended to (nonsparse) trigonometric polynomials that can be well approximated by sparse ones. The theoretical findings are illustrated by numerical experiments.
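Schematically, a noise-aware BP variant can be written as the standard basis pursuit denoising program below; the paper's exact program and constants are not reproduced here.

```latex
% Schematic noise-aware BP program (standard basis pursuit denoising;
% not the paper's exact formulation): given noisy samples
% y = Ac + \eta with a known noise budget \|\eta\|_2 \le \epsilon,
% recover the coefficient vector by
\[
  \hat{c} = \arg\min_{z} \|z\|_{1}
  \quad \text{subject to} \quad \|Az - y\|_{2} \le \epsilon ,
\]
% and stability means the error \|\hat{c} - c\| is controlled by
% \epsilon (and, in the nonsparse extension, by the best m-term
% approximation error of c).
```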

128 citations

Book Chapter
01 Jan 2001
TL;DR: The Loomis-Whitney inequality, which already resembles an isoperimetric inequality, is discussed; a simple generalization of it is the main technical tool in Gagliardo's proof of the Sobolev embedding theorem.
Abstract: This chapter describes three topics that lie at the intersection of functional analysis, harmonic analysis, probability theory, and convex geometry. The classical isoperimetric inequality states that among the bodies of a given volume in R^n, the Euclidean balls have the least surface area. This principle seems to have been recognized, at least in two dimensions, by the ancients, and by the end of the past century there were a number of proofs that worked in arbitrary dimension. The simplex has the largest volume ratio among the convex bodies of a given dimension, while among the symmetric bodies the cube is extremal. The Loomis–Whitney inequality already looks a bit like an isoperimetric inequality, because it estimates the volume of K in terms of the volumes of 1-codimensional sets derived from K. In fact, a simple generalization of the Loomis–Whitney inequality is the main technical tool in Gagliardo's proof of the Sobolev embedding theorem. For a convex body whose maximal ellipsoid is known, these volume ratio estimates automatically provide upper bounds for the volume of the body.
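For reference, the Loomis–Whitney inequality mentioned above reads, in its standard form:

```latex
% Loomis-Whitney inequality (standard statement): for a compact body
% K in R^n, with P_i the orthogonal projection onto the coordinate
% hyperplane perpendicular to the basis vector e_i,
\[
  |K|^{\,n-1} \;\le\; \prod_{i=1}^{n} |P_i K| ,
\]
% bounding the volume of K by the volumes of its n one-codimensional
% shadows, in the spirit of an isoperimetric inequality.
```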

79 citations