
Signal Recovery from Random Measurements Via Orthogonal Matching Pursuit: The Gaussian Case

TL;DR: In this paper, the greedy algorithm Orthogonal Matching Pursuit (OMP) is shown to reliably recover a signal with m nonzero entries in dimension d given O(m ln d) random linear measurements of that signal.
Abstract: This report demonstrates theoretically and empirically that a greedy algorithm called Orthogonal Matching Pursuit (OMP) can reliably recover a signal with m nonzero entries in dimension d given O(m ln d) random linear measurements of that signal. This is a massive improvement over previous results, which require O(m²) measurements. The new results for OMP are comparable with recent results for another approach called Basis Pursuit (BP). In some settings, the OMP algorithm is faster and easier to implement, so it is an attractive alternative to BP for signal recovery problems.
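To make the recovery procedure concrete, here is a minimal sketch of OMP in Python (NumPy only). The greedy selection and least-squares re-fit follow the standard description of the algorithm; the dimensions, the scaling of the Gaussian measurement matrix, and the constant in front of m ln d are illustrative choices, not values prescribed by the paper.

```python
import numpy as np

def omp(Phi, y, m):
    """Recover an m-sparse signal from measurements y = Phi @ x via greedy OMP."""
    residual = y.copy()
    support = []
    for _ in range(m):
        # Select the column most correlated with the current residual.
        j = int(np.argmax(np.abs(Phi.T @ residual)))
        support.append(j)
        # Re-estimate all selected coefficients by least squares.
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[support] = coef
    return x_hat

# Toy example: d = 256, m = 8, and on the order of m ln d Gaussian measurements.
rng = np.random.default_rng(0)
d, m = 256, 8
N = int(4 * m * np.log(d))                      # constant 4 is illustrative
Phi = rng.standard_normal((N, d)) / np.sqrt(N)  # random Gaussian measurements
x = np.zeros(d)
x[rng.choice(d, m, replace=False)] = rng.standard_normal(m)
x_hat = omp(Phi, Phi @ x, m)
print("max recovery error:", np.abs(x - x_hat).max())
```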


Citations
Posted Content
TL;DR: This work presents for the first time a design and implementation of an Xampling-based hardware prototype that allows sampling of radar signals at rates much lower than Nyquist, and demonstrates by real-time analog experiments that the system is able to maintain reasonable recovery capabilities, while sampling radar signals that require sampling at a rate of about 30 MHz at a total rate of 1 MHz.
Abstract: Traditional radar sensing typically involves matched filtering between the received signal and the shape of the transmitted pulse. Under the classical sampling theorem, this requires that the received signal first be sampled at twice the baseband bandwidth in order to avoid aliasing. The growing demands for target-distinction capability and spatial resolution imply significant growth in the bandwidth of the transmitted pulse. Thus, correlation-based radar systems require high sampling rates, and the large amounts of sampled data also necessitate vast memory capacity. In addition, real-time processing of the data typically results in high power consumption. Recently, new approaches for radar sensing and detection were introduced, based on the Finite Rate of Innovation and Xampling frameworks. These techniques allow a significant reduction in sampling rate, implying potential power savings, while maintaining the system's detection capabilities at sufficiently high SNR. Here we present for the first time a design and implementation of an Xampling-based hardware prototype that allows sampling of radar signals at rates much lower than Nyquist. We demonstrate by real-time analog experiments that our system is able to maintain reasonable detection capabilities while sampling radar signals that require a sampling rate of about 30 MHz at a total rate of 1 MHz.
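For context, a minimal sketch of the classical matched-filtering baseline mentioned above: sample the received signal and correlate it with the transmitted pulse, so the correlation peaks at the echo delay. The pulse shape, noise level, and delay below are illustrative only and have nothing to do with the prototype hardware.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy transmitted pulse: a windowed sinusoid (purely illustrative).
n = np.arange(128)
pulse = np.hamming(128) * np.cos(2 * np.pi * 0.2 * n)

# Received signal: one attenuated echo at a delay of 700 samples, plus noise.
rx = np.zeros(4096)
rx[700:700 + pulse.size] += 0.8 * pulse
rx += 0.05 * rng.standard_normal(rx.size)

# Matched filter = correlation of the received samples with the pulse.
mf = np.correlate(rx, pulse, mode="valid")
print("estimated echo delay (samples):", int(np.argmax(np.abs(mf))))
```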

92 citations

Proceedings ArticleDOI
20 Jun 2010
TL;DR: A theoretical result is obtained showing that signals compressively sensed with random binary matrices can be exactly reconstructed with high probability.
Abstract: Compressed sensing seeks to recover a sparse or compressible signal from a small number of linear and non-adaptive measurements. While most studies so far focus on the prominent Gaussian random measurements, we investigate the performance of matrices with Bernoulli distribution. As extensions of the symmetric signs ensemble, the random binary ensemble and the semi-Hadamard ensemble are proposed as sensing matrices with simplex structures. Based on some results for the symmetric signs ensemble and the concept of compressed sensing matrices, we obtain a theoretical result that signals compressively sensed with random binary matrices can be exactly reconstructed with high probability. In the reconstruction process, the fast and computationally inexpensive orthogonal matching pursuit is adopted. Numerical results show that such matrices perform as well as Gaussian matrices.
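As a small numerical illustration of the kind of comparison described above (assuming the "symmetric signs" ensemble means i.i.d. ±1 entries), the sketch below compares the mutual coherence of a ±1 Bernoulli matrix with that of a Gaussian matrix of the same size; low coherence is the property that sparse recovery by OMP relies on. Sizes and names are illustrative, not the paper's.

```python
import numpy as np

def mutual_coherence(A):
    """Largest absolute inner product between distinct normalized columns."""
    A = A / np.linalg.norm(A, axis=0)
    G = np.abs(A.T @ A)
    np.fill_diagonal(G, 0.0)
    return float(G.max())

rng = np.random.default_rng(0)
N, d = 64, 256
bernoulli = rng.choice([-1.0, 1.0], size=(N, d))    # symmetric signs (+/-1)
gaussian = rng.standard_normal((N, d))
print("coherence, Bernoulli +/-1:", round(mutual_coherence(bernoulli), 3))
print("coherence, Gaussian      :", round(mutual_coherence(gaussian), 3))
```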

92 citations

Journal ArticleDOI
TL;DR: This paper shows that, in the presence of noise, a relaxed RIC upper bound together with a relaxed requirement on the minimal signal entry magnitude suffices to achieve perfect support identification using OMP.
Abstract: A sufficient condition reported very recently for perfect recovery of a K-sparse vector via orthogonal matching pursuit (OMP) in K iterations (when there is no noise) is that the restricted isometry constant (RIC) of the sensing matrix satisfies δ_{K+1} < 1/(√K + 1). In the noisy case, this RIC upper bound, along with a requirement on the minimal signal entry magnitude, is known to guarantee exact support identification. In this paper, we show that, in the presence of noise, a relaxed RIC upper bound δ_{K+1} < (√(4K + 1) - 1)/(2K), together with a relaxed requirement on the minimal signal entry magnitude, suffices to achieve perfect support identification using OMP. In the noiseless case, our result asserts that such a relaxed RIC upper bound can ensure exact support recovery in K iterations: this narrows the gap between the best known bound so far, δ_{K+1} < 1/(√K + 1), and the ultimate performance guarantee δ_{K+1} = 1/√K. Our approach relies on a newly established near-orthogonality condition, characterized via the achievable angles between two orthogonal sparse vectors upon compression, and thus better exploits the knowledge about the geometry of the compressed space. The proposed near-orthogonality condition can also be exploited to derive less restrictive sufficient conditions for signal reconstruction in two other compressive sensing problems, namely, compressive domain interference cancellation and support identification via the subspace pursuit algorithm.
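Taking the two thresholds as reconstructed above, a quick numerical check shows that the relaxed bound (√(4K+1) - 1)/(2K) is indeed less restrictive than 1/(√K + 1) for every K, while staying below the 1/√K level at which recovery can no longer be guaranteed:

```python
import numpy as np

K = np.arange(1, 7)
previous = 1.0 / (np.sqrt(K) + 1.0)                    # earlier sufficient condition
relaxed = (np.sqrt(4.0 * K + 1.0) - 1.0) / (2.0 * K)   # relaxed bound from this paper
ultimate = 1.0 / np.sqrt(K)                            # level at which OMP can fail
for k, a, b, c in zip(K, previous, relaxed, ultimate):
    print(f"K={k}:  1/(sqrt(K)+1)={a:.4f}  relaxed={b:.4f}  1/sqrt(K)={c:.4f}")
```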

92 citations

Journal ArticleDOI
TL;DR: This article reviews the literature on the design and analysis of recursive algorithms for reconstructing a time sequence of sparse signals from compressive measurements, where the signals are assumed to be sparse in some transform domain or dictionary.
Abstract: In this overview article, we review the literature on design and analysis of recursive algorithms for reconstructing a time sequence of sparse signals from compressive measurements. The signals are assumed to be sparse in some transform domain or in some dictionary. Their sparsity patterns can change with time, although, in many practical applications, the changes are gradual. An important class of applications where this problem occurs is dynamic projection imaging, e.g., dynamic magnetic resonance imaging (MRI) for real-time medical applications such as interventional radiology, or dynamic computed tomography.

92 citations

Journal ArticleDOI
TL;DR: This paper proposes an iterative matrix decomposition based hybrid beamforming (IMD-HBF) scheme for a single-user scenario, which accurately approximates the unconstrained beamforming solution, and shows that the knowledge of the angle of departure of the various channel paths is sufficient for the block diagonalization (BD) of the downlink mm-wave channel.
Abstract: Considering the dearth of spectrum in the congested microwave band, the next generation of cellular communication systems is envisaged to incorporate part of the millimeter wave (mm-wave) band. Hence, recently, there has been a significant interest in beamforming aided mm-wave systems. We consider a downlink multiuser mm-wave system employing a large number of antennas combined with fewer radio frequency chains both at the base station (BS) and at each of the user equipments (UEs). The BS and each of the UEs are assumed to have a hybrid beamforming architecture, where a set of analog phase shifters is followed by digital precoding/combining blocks. In this paper, we propose an iterative matrix decomposition based hybrid beamforming (IMD-HBF) scheme for a single-user scenario, which accurately approximates the unconstrained beamforming solution. We show that knowledge of the angle of departure (AoD) of the various channel paths is sufficient for block diagonalization (BD) of the downlink mm-wave channel, and hence for achieving interference-free channels for each of the UEs. We then propose a novel subspace projection based AoD-aided BD (SP-AoD-BD) that achieves significantly better performance than conventional BD while still requiring only knowledge of the AoD of the various channel paths, and we use IMD-HBF in order to employ SP-AoD-BD in the hybrid beamforming architecture and study its performance with respect to the unconstrained system. We demonstrate using simulation results that the proposed IMD-HBF gives the same spectral efficiency as the unconstrained system in the single-user scenario. Furthermore, we study the achievable sum rate of the users when employing SP-AoD-BD with the aid of IMD-HBF, and show that the loss in performance with respect to the unconstrained system, as well as the existing schemes, is negligible, provided that the number of users is not excessive.
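To illustrate the hybrid architecture referred to above (not the IMD-HBF algorithm itself), the sketch below builds an effective precoder as the product of a phase-only analog beamformer and a small unconstrained digital precoder; all dimensions and variable names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_antennas, n_rf, n_streams = 64, 4, 2     # antennas >> RF chains >= streams

# Analog stage: phase shifters only, so every entry has the same magnitude.
phases = rng.uniform(0.0, 2.0 * np.pi, size=(n_antennas, n_rf))
F_rf = np.exp(1j * phases) / np.sqrt(n_antennas)

# Digital baseband stage: small complex matrix with no modulus constraint.
F_bb = (rng.standard_normal((n_rf, n_streams))
        + 1j * rng.standard_normal((n_rf, n_streams))) / np.sqrt(2.0)

F = F_rf @ F_bb                            # effective n_antennas x n_streams precoder
print("precoder shape:", F.shape)
print("analog entries constant-modulus:",
      bool(np.allclose(np.abs(F_rf), 1.0 / np.sqrt(n_antennas))))
```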

91 citations

References
Book
01 Jan 1983

34,729 citations

Book
D.L. Donoho
01 Jan 2004
TL;DR: It is possible to design n = O(N log(m)) nonadaptive measurements allowing reconstruction with accuracy comparable to that attainable with direct knowledge of the N most important coefficients, and a good approximation to those N important coefficients is extracted from the n measurements by solving a linear program, Basis Pursuit in signal processing.
Abstract: Suppose x is an unknown vector in ℝ^m (a digital image or signal); we plan to measure n general linear functionals of x and then reconstruct. If x is known to be compressible by transform coding with a known transform, and we reconstruct via the nonlinear procedure defined here, the number of measurements n can be dramatically smaller than the size m. Thus, certain natural classes of images with m pixels need only n = O(m^(1/4) log^(5/2)(m)) nonadaptive nonpixel samples for faithful recovery, as opposed to the usual m pixel samples. More specifically, suppose x has a sparse representation in some orthonormal basis (e.g., wavelet, Fourier) or tight frame (e.g., curvelet, Gabor), so the coefficients belong to an ℓ_p ball for 0 < p ≤ 1.
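To give a feel for the scaling quoted above, the snippet below compares m with m^(1/4) log^(5/2)(m) for a few image sizes; the implied constant is set to 1 purely for illustration, so only the orders of magnitude are meaningful.

```python
import numpy as np

for m in [2**16, 2**20, 2**24]:            # ~65K, ~1M, ~16M pixels
    n = m ** 0.25 * np.log(m) ** 2.5       # constant factor omitted
    print(f"m = {m:>10,d}   n ~ {n:>9,.0f}   ratio m/n ~ {m / n:>6,.0f}")
```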

18,609 citations

Journal ArticleDOI
TL;DR: Basis Pursuit (BP) is a principle for decomposing a signal into an "optimal" superposition of dictionary elements, where optimal means having the smallest l1 norm of coefficients among all such decompositions.
Abstract: The time-frequency and time-scale communities have recently developed a large number of overcomplete waveform dictionaries --- stationary wavelets, wavelet packets, cosine packets, chirplets, and warplets, to name a few. Decomposition into overcomplete systems is not unique, and several methods for decomposition have been proposed, including the method of frames (MOF), matching pursuit (MP), and, for special dictionaries, the best orthogonal basis (BOB). Basis Pursuit (BP) is a principle for decomposing a signal into an "optimal" superposition of dictionary elements, where optimal means having the smallest l1 norm of coefficients among all such decompositions. We give examples exhibiting several advantages over MOF, MP, and BOB, including better sparsity and superresolution. BP has interesting relations to ideas in areas as diverse as ill-posed problems, abstract harmonic analysis, total variation denoising, and multiscale edge denoising. BP in highly overcomplete dictionaries leads to large-scale optimization problems. With signals of length 8192 and a wavelet packet dictionary, one gets an equivalent linear program of size 8192 by 212,992. Such problems can be attacked successfully only because of recent advances in linear programming by interior-point methods. We obtain reasonable success with a primal-dual logarithmic barrier method and conjugate-gradient solver.
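A minimal sketch of how Basis Pursuit becomes a linear program, under the standard split of each coefficient into nonnegative parts c = u - v (which is also why the 8192-sample example above yields roughly twice as many LP columns as dictionary atoms). The random dictionary, sizes, and SciPy solver below are illustrative assumptions, not the setup used in the paper.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, d, m = 40, 120, 5                        # measurements, atoms, sparsity
Phi = rng.standard_normal((n, d)) / np.sqrt(n)
c_true = np.zeros(d)
c_true[rng.choice(d, m, replace=False)] = rng.standard_normal(m)
y = Phi @ c_true

# min ||c||_1  subject to  Phi c = y,  with c = u - v and u, v >= 0:
cost = np.ones(2 * d)                       # objective: sum(u) + sum(v)
A_eq = np.hstack([Phi, -Phi])               # constraint: Phi u - Phi v = y
res = linprog(cost, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
c_hat = res.x[:d] - res.x[d:]
print("max coefficient error:", np.abs(c_hat - c_true).max())
```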

9,950 citations

Journal ArticleDOI
TL;DR: The authors introduce an algorithm, called matching pursuit, that decomposes any signal into a linear expansion of waveforms that are selected from a redundant dictionary of functions, chosen in order to best match the signal structures.
Abstract: The authors introduce an algorithm, called matching pursuit, that decomposes any signal into a linear expansion of waveforms that are selected from a redundant dictionary of functions. These waveforms are chosen in order to best match the signal structures. Matching pursuits are general procedures to compute adaptive signal representations. With a dictionary of Gabor functions, a matching pursuit defines an adaptive time-frequency transform. They derive a signal energy distribution in the time-frequency plane, which does not include interference terms, unlike Wigner and Cohen class distributions. A matching pursuit isolates the signal structures that are coherent with respect to a given dictionary. An application to pattern extraction from noisy signals is described. They compare a matching pursuit decomposition with a signal expansion over an optimized wavepacket orthonormal basis, selected with the algorithm of Coifman and Wickerhauser (see IEEE Trans. Informat. Theory, vol. 38, Mar. 1992).
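A minimal sketch of plain matching pursuit over a generic (random, unit-norm) dictionary: repeatedly pick the atom best matched to the residual, record its coefficient, and subtract its contribution. Unlike the OMP sketch earlier, previously selected coefficients are not re-fitted at each step. The dictionary and signal below are illustrative.

```python
import numpy as np

def matching_pursuit(D, signal, n_iter):
    """Greedy decomposition of `signal` over the columns (atoms) of D."""
    residual = signal.copy()
    coeffs = np.zeros(D.shape[1])
    for _ in range(n_iter):
        inner = D.T @ residual
        j = int(np.argmax(np.abs(inner)))   # best-matching atom
        coeffs[j] += inner[j]               # accumulate its coefficient
        residual = residual - inner[j] * D[:, j]
    return coeffs, residual

rng = np.random.default_rng(0)
n, d = 64, 256
D = rng.standard_normal((n, d))
D /= np.linalg.norm(D, axis=0)              # unit-norm atoms
signal = 2.0 * D[:, 3] - 1.0 * D[:, 100]    # superposition of two atoms
coeffs, residual = matching_pursuit(D, signal, n_iter=10)
print("residual energy after 10 iterations:", float(residual @ residual))
```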

9,380 citations

Journal ArticleDOI
TL;DR: A publicly available algorithm that requires only the same order of magnitude of computational effort as ordinary least squares applied to the full set of covariates is described.
Abstract: The purpose of model selection algorithms such as All Subsets, Forward Selection and Backward Elimination is to choose a linear model on the basis of the same set of data to which the model will be applied. Typically we have available a large collection of possible covariates from which we hope to select a parsimonious set for the efficient prediction of a response variable. Least Angle Regression (LARS), a new model selection algorithm, is a useful and less greedy version of traditional forward selection methods. Three main properties are derived: (1) A simple modification of the LARS algorithm implements the Lasso, an attractive version of ordinary least squares that constrains the sum of the absolute regression coefficients; the LARS modification calculates all possible Lasso estimates for a given problem, using an order of magnitude less computer time than previous methods. (2) A different LARS modification efficiently implements Forward Stagewise linear regression, another promising new model selection method; this connection explains the similar numerical results previously observed for the Lasso and Stagewise, and helps us understand the properties of both methods, which are seen as constrained versions of the simpler LARS algorithm. (3) A simple approximation for the degrees of freedom of a LARS estimate is available, from which we derive a Cp estimate of prediction error; this allows a principled choice among the range of possible LARS estimates. LARS and its variants are computationally efficient: the paper describes a publicly available algorithm that requires only the same order of magnitude of computational effort as ordinary least squares applied to the full set of covariates.
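As a usage-level illustration of the LARS-Lasso connection described above, the sketch below traces the full Lasso coefficient path with scikit-learn's lars_path on synthetic data; the data, noise level, and library choice are assumptions for the example, not part of the paper.

```python
import numpy as np
from sklearn.linear_model import lars_path

rng = np.random.default_rng(0)
n, p = 100, 20
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:3] = [2.0, -1.5, 1.0]                 # three active covariates
y = X @ beta + 0.1 * rng.standard_normal(n)

# method="lasso" applies the LARS modification that yields all Lasso solutions.
alphas, active, coefs = lars_path(X, y, method="lasso")
print("order in which covariates enter the model:", list(active)[:5])
print("number of breakpoints along the path:", coefs.shape[1])
```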

7,828 citations