
Signal Recovery from Random Measurements Via Orthogonal Matching Pursuit: The Gaussian Case

TL;DR: In this paper, a greedy algorithm called Orthogonal Matching Pursuit (OMP) was proposed to recover a signal with m nonzero entries in dimension d given O(m ln d) random linear measurements of that signal.
Abstract: This report demonstrates theoretically and empirically that a greedy algorithm called Orthogonal Matching Pursuit (OMP) can reliably recover a signal with m nonzero entries in dimension d given O(m ln d) random linear measurements of that signal. This is a massive improvement over previous results, which require O(m^2) measurements. The new results for OMP are comparable with recent results for another approach called Basis Pursuit (BP). In some settings, the OMP algorithm is faster and easier to implement, so it is an attractive alternative to BP for signal recovery problems.
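The greedy procedure the abstract describes can be sketched compactly. The following is a minimal NumPy illustration (not the paper's reference code): at each step OMP selects the measurement-matrix column most correlated with the current residual, then re-solves a least-squares problem over all selected columns. The demo uses a number of Gaussian measurements on the order of m ln d, matching the paper's scaling; the constant 4 is an illustrative choice.

```python
import numpy as np

def omp(Phi, y, m, tol=1e-10):
    """Recover an m-sparse x from y = Phi @ x by Orthogonal Matching Pursuit."""
    residual = y.copy()
    support = []
    x = np.zeros(Phi.shape[1])
    for _ in range(m):
        # Greedy step: column of Phi most correlated with the residual.
        k = int(np.argmax(np.abs(Phi.T @ residual)))
        if k not in support:
            support.append(k)
        # Orthogonalization step: least squares over the selected columns.
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
        if np.linalg.norm(residual) < tol:
            break
    x[support] = coef
    return x

# Demo: d = 256, m = 5 nonzeros, N ~ 4 m ln d random Gaussian measurements.
rng = np.random.default_rng(0)
d, m = 256, 5
N = int(4 * m * np.log(d))
Phi = rng.standard_normal((N, d)) / np.sqrt(N)
x_true = np.zeros(d)
x_true[rng.choice(d, m, replace=False)] = rng.standard_normal(m)
x_hat = omp(Phi, Phi @ x_true, m)
print(np.allclose(x_hat, x_true, atol=1e-6))
```

Because the residual is re-orthogonalized against every selected column, OMP never picks the same column twice in exact arithmetic, which is the key difference from plain matching pursuit.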


Citations
01 Jul 2008
TL;DR: The purpose is to introduce an elementary treatment of the CS theory free of RIP and easily accessible to a broad audience, and to extend the current recoverability and stability results so that prior knowledge can be utilized to enhance recovery via ℓ1-minimization.
Abstract: Compressive (or compressed) sensing (CS) is an emerging methodology in computational signal processing that has recently attracted intensive research activities. At present, the basic CS theory includes recoverability and stability: the former quantifies the central fact that a sparse signal of length n can be exactly recovered from far fewer than n measurements via ℓ1-minimization or other recovery techniques, while the latter specifies the stability of a recovery technique in the presence of measurement errors and inexact sparsity. So far, most analyses in CS rely heavily on the Restricted Isometry Property (RIP) for matrices. In this paper, we present an alternative, non-RIP analysis for CS via ℓ1-minimization. Our purpose is three-fold: (a) to introduce an elementary treatment of the CS theory free of RIP and easily accessible to a broad audience; (b) to extend the current recoverability and stability results so that prior knowledge can be utilized to enhance recovery via ℓ1-minimization; and (c) to substantiate a property called uniform recoverability of ℓ1-minimization; that is, for almost all random measurement matrices recoverability is asymptotically identical. With the aid of two classic results, the non-RIP approach enables us to quickly derive from scratch all basic results for the extended theory.

43 citations

Journal ArticleDOI
TL;DR: This paper introduces a dedicated beam training strategy which sends the training beams separately to a specific high mobility user without changing the periodicity of the conventional beam training, and proposes the optimal training beam selection strategy which finds the best beamforming vectors yielding the lowest channel estimation error based on the target user's probabilistic channel information.
Abstract: In this paper, we propose an efficient beam training technique for millimeter-wave (mmWave) communications. Beam training should be performed frequently when some mobile users are under high mobility to ensure the accurate acquisition of the channel state information. To reduce the resource overhead caused by frequent beam training, we introduce a dedicated beam training strategy which sends the training beams separately to a specific high mobility user (called a target user) without changing the periodicity of the conventional beam training. The dedicated beam training requires a small amount of resources because the training beams can be optimized for the target user. To satisfy the performance requirement with a low training overhead, we propose the optimal training beam selection strategy which finds the best beamforming vectors yielding the lowest channel estimation error based on the target user’s probabilistic channel information. This dedicated beam training is combined with the greedy channel estimation algorithm that accounts for sparse characteristics and temporal dynamics of the target user’s channel. Our numerical evaluation demonstrates that the proposed scheme can maintain good channel estimation performance with significantly less training overhead compared to the conventional beam training protocols.

43 citations

Journal ArticleDOI
TL;DR: Theoretical analyses and simulations verify that the proposed QuadCS is a valid system to acquire the I and Q components in the received radar signals and prove that the QuadCS system satisfies the restricted isometry property with overwhelming probability.
Abstract: Quadrature sampling has been widely applied in coherent radar systems to extract in-phase and quadrature ( $I$ and $Q$ ) components in the received radar signal. However, the sampling is inefficient because the received signal contains only a small number of significant target signals. This paper incorporates the compressive sampling (CS) theory into the design of the quadrature sampling system, and develops a quadrature compressive sampling (QuadCS) system to acquire the $I$ and $Q$ components with low sampling rate. The QuadCS system first randomly projects the received signal into a compressive bandpass signal and then utilizes the quadrature sampling to output compressive $I$ and $Q$ components. The compressive outputs are used to reconstruct the $I$ and $Q$ components. To understand the system performance, we establish the frequency domain representation of the QuadCS system. With the waveform-matched dictionary, we prove that the QuadCS system satisfies the restricted isometry property with overwhelming probability. For $K$ target signals in the observation interval $T$ , simulations show that the QuadCS requires just ${\cal O}(K\log(BT/K))$ samples to stably reconstruct the signal, where $B$ is the signal bandwidth. The reconstructed signal-to-noise ratio decreases by 3 dB for every octave increase in the target number $K$ and increases by 3 dB for every octave increase in the compressive bandwidth. Theoretical analyses and simulations verify that the proposed QuadCS is a valid system to acquire the $I$ and $Q$ components in the received radar signals.

42 citations

Dissertation
10 Jul 2008
TL;DR: In this article, the authors extend their gratitude towards their advisor, Amjad Luna, whose guidance has been instrumental in every possible way, and also thank all their teachers (and later colleagues) at UET Lahore, without whom they would not have been here.
Abstract: To my parents with utmost respect, Haiqa and Dayan with best dreams. ACKNOWLEDGEMENTS First and foremost, I would like to thank my advisor, Justin Romberg, for all the inspiration, motivation and guidance. Without his invaluable insight and constant mentoring this thesis would have not been possible. I will always be grateful to him for introducing me to this research area with so many new and exciting problems and helping me all along the way. I cannot thank him enough for the long hours of discussion on any problem I brought to him; anytime, anywhere. I also want to thank him for reading all the drafts of this thesis; his suggestions helped a lot in improving its content and presentation. I am grateful to him for being so friendly, patient and kind to me all the time (not to mention all the squash games he beats me in, ruthlessly!). I want to thank my thesis committee members Prof. James McClellan and Prof. Russell Mersereau for their encouraging remarks about this work. I would like to thank my teachers here at Georgia Tech, all of whom influenced me a lot. I would like to thank Profs. William Green and Michael Westdickenberg who taught me about mathematical analysis. I would also like to thank Profs. John Barry and Faramarz Fekri for their exciting classes in my first semester here. I would like to extend my gratitude towards my undergraduate advisor, Amjad Luna, whose guidance has been instrumental in every possible way. Thank you! I would also like to thank all my teachers (and later colleagues) at UET Lahore, without whom I would not have been here. Many thanks to all my friends who made my time here a lot more enjoyable than I had anticipated. First of all, I must thank Farasat Munir and Mohammad Omer for being a huge support to me whenever I needed them. I cherish their friendship a lot. I especially want to thank Omer for his help and consideration at all those times when I have nobody else to talk to.
I also want to thank my roommate Umair Bin Altaf ("patti") for all the great time so far, for forcing me to learn LaTeX (along with many other things) and carefully reading the initial drafts of my thesis. William Mantzel, with whom I discuss almost all of my research problems …

42 citations

Journal ArticleDOI
TL;DR: A DOA recovery technique that relies only on magnitude measurements is proposed that is inspired by phase retrieval for applications in other fields and demonstrates good DOA estimation performance.
Abstract: We consider the classical direction of arrival (DOA) estimation problem when random phase errors are present at each sensor. To eliminate the effect of these phase errors, we propose a DOA recovery technique that relies only on magnitude measurements. This approach is inspired by phase retrieval for applications in other fields. Ambiguities typically associated with phase retrieval methods are resolved by introducing reference targets with known DOA. The DOA estimation problem is formulated as a nonlinear optimization in a sparse framework, and is solved by the recently proposed GESPAR algorithm modified to accommodate multiple snapshots. Numerical results demonstrate good DOA estimation performance. For example, the probability of error in locating a single target within 2 degrees is less than 0.1 for ${\rm SNR} \geq 15~\hbox{dB}$ and one snapshot, and negligible for ${\rm SNR} \geq 10~\hbox{dB}$ and five snapshots.

42 citations

References
Book
01 Jan 1983

34,729 citations

Book
D.L. Donoho
01 Jan 2004
TL;DR: It is possible to design n = O(N log(m)) nonadaptive measurements allowing reconstruction with accuracy comparable to that attainable with direct knowledge of the N most important coefficients, and a good approximation to those N important coefficients is extracted from the n measurements by solving a linear program, Basis Pursuit in signal processing.
Abstract: Suppose x is an unknown vector in R^m (a digital image or signal); we plan to measure n general linear functionals of x and then reconstruct. If x is known to be compressible by transform coding with a known transform, and we reconstruct via the nonlinear procedure defined here, the number of measurements n can be dramatically smaller than the size m. Thus, certain natural classes of images with m pixels need only n = O(m^{1/4} log^{5/2}(m)) nonadaptive nonpixel samples for faithful recovery, as opposed to the usual m pixel samples. More specifically, suppose x has a sparse representation in some orthonormal basis (e.g., wavelet, Fourier) or tight frame (e.g., curvelet, Gabor), so the coefficients belong to an ℓ^p ball for 0 …

18,609 citations

Journal ArticleDOI
TL;DR: Basis Pursuit (BP) is a principle for decomposing a signal into an "optimal" superposition of dictionary elements, where optimal means having the smallest l1 norm of coefficients among all such decompositions.
Abstract: The time-frequency and time-scale communities have recently developed a large number of overcomplete waveform dictionaries --- stationary wavelets, wavelet packets, cosine packets, chirplets, and warplets, to name a few. Decomposition into overcomplete systems is not unique, and several methods for decomposition have been proposed, including the method of frames (MOF), matching pursuit (MP), and, for special dictionaries, the best orthogonal basis (BOB). Basis Pursuit (BP) is a principle for decomposing a signal into an "optimal" superposition of dictionary elements, where optimal means having the smallest l1 norm of coefficients among all such decompositions. We give examples exhibiting several advantages over MOF, MP, and BOB, including better sparsity and superresolution. BP has interesting relations to ideas in areas as diverse as ill-posed problems, abstract harmonic analysis, total variation denoising, and multiscale edge denoising. BP in highly overcomplete dictionaries leads to large-scale optimization problems. With signals of length 8192 and a wavelet packet dictionary, one gets an equivalent linear program of size 8192 by 212,992. Such problems can be attacked successfully only because of recent advances in linear programming by interior-point methods. We obtain reasonable success with a primal-dual logarithmic barrier method and conjugate-gradient solver.
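The reduction of BP to a linear program that the abstract mentions is the standard split c = u - v with u, v ≥ 0, which doubles the number of variables (hence 8192 by 212,992 for a dictionary of 106,496 atoms). A small sketch, assuming NumPy and SciPy's `linprog` (with the HiGHS solver) rather than the paper's interior-point barrier code:

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(Phi, y):
    """min ||c||_1  s.t.  Phi @ c = y, via the LP split c = u - v, u, v >= 0."""
    N, d = Phi.shape
    res = linprog(
        c=np.ones(2 * d),                 # objective: sum(u) + sum(v) = ||c||_1
        A_eq=np.hstack([Phi, -Phi]),      # equality constraint: Phi (u - v) = y
        b_eq=y,
        bounds=[(0, None)] * (2 * d),     # u, v nonnegative
        method="highs",
    )
    return res.x[:d] - res.x[d:]

# Demo: recover a 4-sparse coefficient vector from 60 random measurements.
rng = np.random.default_rng(1)
d, m, N = 128, 4, 60
Phi = rng.standard_normal((N, d)) / np.sqrt(N)
c_true = np.zeros(d)
c_true[rng.choice(d, m, replace=False)] = rng.standard_normal(m)
c_hat = basis_pursuit(Phi, Phi @ c_true)
print(np.allclose(c_hat, c_true, atol=1e-5))
```

At realistic dictionary sizes this dense formulation is impractical; the paper's point is that structured dictionaries admit fast implicit matrix-vector products inside the LP solver.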

9,950 citations

Journal ArticleDOI
TL;DR: The authors introduce an algorithm, called matching pursuit, that decomposes any signal into a linear expansion of waveforms that are selected from a redundant dictionary of functions, chosen in order to best match the signal structures.
Abstract: The authors introduce an algorithm, called matching pursuit, that decomposes any signal into a linear expansion of waveforms that are selected from a redundant dictionary of functions. These waveforms are chosen in order to best match the signal structures. Matching pursuits are general procedures to compute adaptive signal representations. With a dictionary of Gabor functions a matching pursuit defines an adaptive time-frequency transform. They derive a signal energy distribution in the time-frequency plane, which does not include interference terms, unlike Wigner and Cohen class distributions. A matching pursuit isolates the signal structures that are coherent with respect to a given dictionary. An application to pattern extraction from noisy signals is described. They compare a matching pursuit decomposition with a signal expansion over an optimized wavepacket orthonormal basis, selected with the algorithm of Coifman and Wickerhauser (IEEE Trans. Inform. Theory, vol. 38, Mar. 1992).
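The greedy expansion described above is short to state in code. This is an illustrative NumPy sketch with a made-up overcomplete dictionary (identity atoms plus random atoms); unlike OMP, plain matching pursuit only subtracts the projection onto the single selected atom and never re-optimizes earlier coefficients:

```python
import numpy as np

def matching_pursuit(D, signal, n_iter=200, tol=1e-6):
    """Greedily expand `signal` over the columns (atoms) of dictionary D."""
    D = D / np.linalg.norm(D, axis=0)      # work with unit-norm atoms
    residual = signal.astype(float).copy()
    coefs = np.zeros(D.shape[1])
    for _ in range(n_iter):
        inner = D.T @ residual
        k = int(np.argmax(np.abs(inner)))  # atom best matching the residual
        coefs[k] += inner[k]
        residual -= inner[k] * D[:, k]     # subtract projection on that atom only
        if np.linalg.norm(residual) < tol:
            break
    return coefs, residual

# Demo with a hypothetical redundant dictionary: identity + Gaussian atoms.
rng = np.random.default_rng(2)
n = 64
D = np.hstack([np.eye(n), rng.standard_normal((n, n))])
signal = np.zeros(n)
signal[3], signal[17] = 2.0, -1.0
coefs, residual = matching_pursuit(D, signal)
print(np.linalg.norm(residual) < 1e-3)
```

For a general dictionary the residual energy decays geometrically but the same atom may be revisited; that slow tail is exactly what the orthogonalized variant (OMP) removes.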

9,380 citations

Journal ArticleDOI
TL;DR: A publicly available algorithm that requires only the same order of magnitude of computational effort as ordinary least squares applied to the full set of covariates is described.
Abstract: The purpose of model selection algorithms such as All Subsets, Forward Selection and Backward Elimination is to choose a linear model on the basis of the same set of data to which the model will be applied. Typically we have available a large collection of possible covariates from which we hope to select a parsimonious set for the efficient prediction of a response variable. Least Angle Regression (LARS), a new model selection algorithm, is a useful and less greedy version of traditional forward selection methods. Three main properties are derived: (1) A simple modification of the LARS algorithm implements the Lasso, an attractive version of ordinary least squares that constrains the sum of the absolute regression coefficients; the LARS modification calculates all possible Lasso estimates for a given problem, using an order of magnitude less computer time than previous methods. (2) A different LARS modification efficiently implements Forward Stagewise linear regression, another promising new model selection method; this connection explains the similar numerical results previously observed for the Lasso and Stagewise, and helps us understand the properties of both methods, which are seen as constrained versions of the simpler LARS algorithm. (3) A simple approximation for the degrees of freedom of a LARS estimate is available, from which we derive a Cp estimate of prediction error; this allows a principled choice among the range of possible LARS estimates. LARS and its variants are computationally efficient: the paper describes a publicly available algorithm that requires only the same order of magnitude of computational effort as ordinary least squares applied to the full set of covariates.
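The incremental forward stagewise procedure that the abstract connects to LARS can be sketched in a few lines. This is a hedged toy illustration in NumPy, not the paper's algorithm: each step nudges, by a small eps, the coefficient of the predictor most correlated with the current residual; as eps shrinks, the coefficient path approaches the LARS/Lasso path. The data setup (unit-norm columns, two informative predictors) is an assumed example.

```python
import numpy as np

def forward_stagewise(X, y, eps=0.001, n_steps=100_000):
    """Incremental forward stagewise regression with step size eps."""
    beta = np.zeros(X.shape[1])
    residual = y.astype(float).copy()
    for _ in range(n_steps):
        corr = X.T @ residual
        j = int(np.argmax(np.abs(corr)))
        if abs(corr[j]) < eps:             # no predictor left to exploit: stop
            break
        delta = eps * np.sign(corr[j])     # tiny move in the winning coordinate
        beta[j] += delta
        residual -= delta * X[:, j]
    return beta

# Demo: 10 unit-norm predictors, response built from columns 0 and 4.
rng = np.random.default_rng(3)
n, p = 200, 10
X = rng.standard_normal((n, p))
X /= np.linalg.norm(X, axis=0)
y = 3.0 * X[:, 0] - 2.0 * X[:, 4]
beta = forward_stagewise(X, y)
print(abs(beta[0] - 3.0) < 0.05, abs(beta[4] + 2.0) < 0.05)
```

The LARS insight is that these many tiny steps can be replaced by a few exactly computed piecewise-linear moves, which is what makes it as cheap as a single least-squares fit.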

7,828 citations