
Signal Recovery from Random Measurements Via Orthogonal Matching Pursuit: The Gaussian Case

TL;DR: In this paper, the greedy algorithm Orthogonal Matching Pursuit (OMP) is shown to reliably recover a signal with m nonzero entries in dimension d given O(m ln d) random linear measurements of that signal.
Abstract: This report demonstrates theoretically and empirically that a greedy algorithm called Orthogonal Matching Pursuit (OMP) can reliably recover a signal with m nonzero entries in dimension d given O(m ln d) random linear measurements of that signal. This is a massive improvement over previous results, which require O(m^2) measurements. The new results for OMP are comparable with recent results for another approach called Basis Pursuit (BP). In some settings, the OMP algorithm is faster and easier to implement, so it is an attractive alternative to BP for signal recovery problems.
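
The OMP iteration analyzed in the report is short enough to sketch directly. The following is a minimal, self-contained NumPy illustration (not the authors' code): at each step the column of the measurement matrix most correlated with the current residual is selected, and the estimate is re-fit by least squares on the selected columns. The problem sizes and the constant in front of m ln d are arbitrary choices for the demo.

    import numpy as np

    def omp(Phi, y, m):
        """Estimate an m-sparse x from y = Phi @ x by Orthogonal Matching Pursuit."""
        residual = y.copy()
        support = []
        for _ in range(m):
            # Greedy step: column most correlated with the current residual.
            support.append(int(np.argmax(np.abs(Phi.T @ residual))))
            # Least-squares re-fit on all selected columns, then update the residual.
            coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
            residual = y - Phi[:, support] @ coef
        x_hat = np.zeros(Phi.shape[1])
        x_hat[support] = coef
        return x_hat

    rng = np.random.default_rng(0)
    d, m = 256, 8
    n = 4 * m * int(np.ceil(np.log(d)))          # on the order of m ln d measurements
    x = np.zeros(d)
    x[rng.choice(d, m, replace=False)] = rng.standard_normal(m)
    Phi = rng.standard_normal((n, d)) / np.sqrt(n)
    print(np.allclose(omp(Phi, Phi @ x, m), x))  # True whenever the support is recovered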


Citations
Posted Content
TL;DR: Two algorithms, the fast iterative shrinkage-thresholding algorithm (FISTA) and orthogonal matching pursuit (OMP), are extended to solve the problem of recovering an unknown sparse matrix X from the matrix sketch Y = A X B^T without employing the Kronecker product.
Abstract: In this paper, we consider the problem of recovering an unknown sparse matrix X from the matrix sketch Y = A X B^T. The dimension of Y is less than that of X, and A and B are known matrices. This problem can be solved using standard compressive sensing (CS) theory after converting it to vector form using the Kronecker operation. In this case, the measurement matrix assumes a Kronecker product structure. However, as the matrix dimension increases, the associated computational complexity makes its use prohibitive. We extend two algorithms, the fast iterative shrinkage-thresholding algorithm (FISTA) and orthogonal matching pursuit (OMP), to solve this problem in matrix form without employing the Kronecker product. While both FISTA and OMP with matrix inputs are shown to be equivalent in performance to their vector counterparts with the Kronecker product, solving them in matrix form is shown to be computationally more efficient. We show that the computational gain achieved by FISTA with matrix inputs over its vector form is more significant than that achieved by OMP.
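
The computational point rests on the Kronecker identity vec(A X B^T) = (B ⊗ A) vec(X) (column-major vectorization): the vector-form algorithms act on B ⊗ A, whereas the matrix-form versions only ever multiply by A and B. A small NumPy check of the identity, with illustrative sizes only:

    import numpy as np

    rng = np.random.default_rng(1)
    A = rng.standard_normal((5, 8))
    X = rng.standard_normal((8, 6))
    B = rng.standard_normal((4, 6))

    # Matrix form: computed without ever forming a Kronecker product.
    Y = A @ X @ B.T

    # Vector form: vec(Y) = (B kron A) vec(X), using column-major (Fortran) vectorization.
    vec = lambda M: M.flatten(order="F")
    print(np.allclose(vec(Y), np.kron(B, A) @ vec(X)))  # True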

48 citations

Posted Content
TL;DR: This paper introduces a novel channel estimation strategy using the compressive sampling matching pursuit (CoSaMP) algorithm, which combines the greedy approach with the convex-programming method to exploit the sparsity of the multipath channel (MPC).
Abstract: Wideband wireless channels are time dispersive and become strongly frequency-selective. In most cases, however, the channel is composed of a few dominant taps, while the remaining taps are zero or approximately zero. To exploit the sparsity of the multipath channel (MPC), two classes of methods have been proposed: greedy algorithms and convex programs. Greedy algorithms are easy to implement but not stable; on the other hand, convex-program methods are stable but difficult to apply to practical channel estimation problems. In this paper, we introduce a novel channel estimation strategy using the compressive sampling matching pursuit (CoSaMP) algorithm proposed in [1]. This algorithm combines the greedy approach with the convex-program method. The effectiveness of the proposed algorithm is confirmed through comparisons with existing methods. I. INTRODUCTION: Coherent detection in wideband mobile communication systems often requires accurate channel state information at the receiver. The study of channel estimation for the purposes of channel equalization has a long history. In many studies, a densely distributed channel impulse response was assumed. Under this assumption, it is necessary to use a long training sequence. In addition, linear channel estimation methods, such as the least squares (LS) algorithm, lead to bandwidth inefficiency. It is therefore of interest to develop more bandwidth-efficient methods to acquire channel information. Recently, compressive sensing (CS) has been developed as a new technique. It is regarded as an efficient signal acquisition framework for signals characterized as sparse or compressible in the time or frequency domain. One application of the CS technique is channel estimation: if the channel impulse response follows a sparse distribution, we can apply the CS technique. As a result, the training sequence length can be shortened compared with linear estimation methods. Recent measurements show that the sparse or approximately sparse distribution assumption is reasonable [2, 3]. In other words, wireless channels in real propagation environments are characterized as sparse or sparse-clustered; such channels are frequently termed sparse multipath channels (SMPC). An example of an SMPC impulse response is shown in Fig. 1. Recently, the study of SMPC has drawn much attention, and related results can be found in the literature [4-6]. Correspondingly, sparse channel estimation techniques have also received considerable interest for their advantages in high-bit-rate transmission over multipath channels [7].
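
For orientation, the CoSaMP iteration referenced as [1] above can be organized as: form a correlation proxy, keep its 2s largest coordinates, merge them with the current support, solve least squares on the merged set, and prune back to s entries. The NumPy sketch below is a generic textbook-style version under the assumption that the sparsity level s (the number of dominant taps) is known; it is not the paper's channel-estimation code.

    import numpy as np

    def cosamp(Phi, y, s, n_iter=20):
        """Generic CoSaMP: estimate an s-sparse x from y = Phi @ x (possibly noisy)."""
        d = Phi.shape[1]
        x = np.zeros(d)
        for _ in range(n_iter):
            residual = y - Phi @ x
            # Proxy: correlate the residual with every column, keep the 2s largest.
            omega = np.argsort(np.abs(Phi.T @ residual))[-2 * s:]
            # Merge with the current support and solve least squares on the union.
            support = np.union1d(omega, np.flatnonzero(x)).astype(int)
            b, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
            # Prune: keep only the s largest coefficients.
            x = np.zeros(d)
            keep = np.argsort(np.abs(b))[-s:]
            x[support[keep]] = b[keep]
        return x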

48 citations

Proceedings ArticleDOI
22 May 2011
TL;DR: A look-ahead strategy is shown to provide a significant improvement in the recovery performance of the existing orthogonal matching pursuit (OMP) algorithm.
Abstract: For compressive sensing, we endeavor to improve the recovery performance of the existing orthogonal matching pursuit (OMP) algorithm. To achieve a better estimate of the underlying support set progressively through iterations, we use a look ahead strategy. The choice of an atom in the current iteration is performed by checking its effect on the future iterations (look ahead strategy). Through experimental evaluations, the effect of look ahead strategy is shown to provide a significant improvement in performance.
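
A hedged sketch of the selection rule just described: rather than committing to the single most correlated atom, each of the top few candidates is scored by the residual left after a small number of further plain OMP steps, and the candidate with the smallest look-ahead residual is kept. The parameters n_candidates and lookahead below are illustrative choices, not values from the paper.

    import numpy as np

    def residual(Phi, y, support):
        """Least-squares residual of y on the columns of Phi indexed by support."""
        if not support:
            return y
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        return y - Phi[:, support] @ coef

    def lookahead_select(Phi, y, support, n_candidates=3, lookahead=2):
        """Pick the next atom by checking its effect on a few future OMP iterations."""
        r = residual(Phi, y, support)
        candidates = np.argsort(np.abs(Phi.T @ r))[-n_candidates:]
        scores = []
        for j in candidates:
            trial = list(support) + [int(j)]
            for _ in range(lookahead):  # run a few plain OMP steps ahead
                trial.append(int(np.argmax(np.abs(Phi.T @ residual(Phi, y, trial)))))
            scores.append(np.linalg.norm(residual(Phi, y, trial)))
        return int(candidates[int(np.argmin(scores))])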

48 citations

Journal ArticleDOI
TL;DR: The proposed solution fully exploits the multiaccess nature of the wireless medium and addresses the half-duplex constraint at the fundamental level; in a network of Poisson-distributed nodes, numerical results demonstrate that the proposed scheme often achieves several times the rate of slotted ALOHA and CSMA at the same packet error rate.
Abstract: A novel solution is proposed to undertake a frequent task in wireless networks, which is to let all nodes broadcast information to and receive information from their respective one-hop neighboring nodes. The contribution in this paper is twofold. First, as each neighbor selects one message-bearing codeword from its unique codebook for transmission, it is shown that decoding their messages based on a superposition of those codewords through the multiaccess channel is fundamentally a problem of compressed sensing. In the case where each message is designed to consist of a small number of bits, an iterative algorithm based on belief propagation is developed for efficient decoding. Second, to satisfy the half-duplex constraint, each codeword consists of randomly distributed on-slots and off-slots. A node transmits during its on-slots and listens to its neighbors only through its own off-slots. Over one frame interval, each node broadcasts a message to its neighbors and simultaneously receives the superposition of neighbors' signals through its own off-slots and then decodes all messages. The proposed solution fully exploits the multiaccess nature of the wireless medium and addresses the half-duplex constraint at the fundamental level. In a network consisting of Poisson distributed nodes, numerical results demonstrate that the proposed scheme often achieves several times the rate of slotted ALOHA and CSMA with the same packet error rate.
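
A toy NumPy sketch of the half-duplex signaling pattern described above: every node transmits only on its randomly chosen on-slots and observes the superposition of its neighbors' transmissions only during its own off-slots. The frame length, duty cycle, and Bernoulli slot selection are illustrative assumptions, not the paper's codebook construction.

    import numpy as np

    rng = np.random.default_rng(2)
    n_nodes, frame_len, duty = 4, 20, 0.5

    # Row k is node k's codeword: nonzero only on that node's randomly chosen on-slots.
    on_slots = rng.random((n_nodes, frame_len)) < duty
    codewords = on_slots * rng.standard_normal((n_nodes, frame_len))

    node = 0
    superposition = codewords[np.arange(n_nodes) != node].sum(axis=0)
    # Half-duplex constraint: node 0 hears its neighbors only during its own off-slots.
    observed = np.where(on_slots[node], 0.0, superposition)
    print(observed)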

48 citations

Proceedings ArticleDOI
01 Oct 2010
TL;DR: In this paper, the feasibility of the compressive sensing (CS) technique for raw RF signal reconstruction in medical ultrasound is addressed. The authors propose the use of the recently introduced directional wave atoms, which show good properties for sparsely representing warped oscillatory patterns.
Abstract: We address in this paper the feasibility of the recent compressive sensing (CS) technique for raw RF signal reconstruction in medical ultrasound. Successful CS reconstruction implies selecting a basis in which the signal to be recovered has a sparse expansion. RF signals represent a specific challenge, because their oscillatory nature does not easily lend itself to a sparse representation. In that perspective, we propose to use the recently introduced directional wave atoms [1], which show good properties for sparsely representing warped oscillatory patterns. Experiments were performed on simulated RF data from a numerical cyst phantom. CS reconstruction was applied to subsampled RF data obtained by removing 50% to 90% of the original samples. Reconstructions using wave atoms were compared to reconstructions obtained from Fourier and Daubechies wavelet bases. The obtained accuracies were in the ranges [0.4–4.0]×10^-3, [0.2–2.8]×10^-3 and [0.1–1.7]×10^-3 using wavelets, Fourier and wave atoms, respectively, showing the superiority of the wave-atom representation and the feasibility of CS for the reconstruction of US RF data.
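
A hedged sketch of this kind of reconstruction experiment: keep a random subset of the RF samples and recover the full line by iterative soft thresholding in a sparsifying transform. An orthonormal DCT stands in here for the wave-atom transform of the paper, purely to keep the sketch self-contained; the threshold and iteration count are illustrative.

    import numpy as np
    from scipy.fft import dct, idct

    def soft(v, t):
        """Soft-thresholding (shrinkage) operator."""
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def cs_reconstruct(y, mask, lam=0.02, n_iter=300):
        """ISTA sketch: recover an RF line from the samples flagged by mask,
        assuming sparsity in an orthonormal DCT basis (stand-in for wave atoms)."""
        c = dct(y, norm="ortho")                            # initial coefficient estimate
        for _ in range(n_iter):
            r = mask * idct(c, norm="ortho") - y            # residual on the kept samples
            c = soft(c - dct(mask * r, norm="ortho"), lam)  # gradient step + shrinkage
        return idct(c, norm="ortho")

    # y is the RF line with removed samples set to zero; mask is 1 on kept samples, 0 elsewhere.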

47 citations

References
Book
01 Jan 1983

34,729 citations

Book
D.L. Donoho
01 Jan 2004
TL;DR: It is possible to design n = O(N log(m)) nonadaptive measurements allowing reconstruction with accuracy comparable to that attainable with direct knowledge of the N most important coefficients, and a good approximation to those N important coefficients is extracted from the n measurements by solving a linear program (Basis Pursuit in signal processing).
Abstract: Suppose x is an unknown vector in R^m (a digital image or signal); we plan to measure n general linear functionals of x and then reconstruct. If x is known to be compressible by transform coding with a known transform, and we reconstruct via the nonlinear procedure defined here, the number of measurements n can be dramatically smaller than the size m. Thus, certain natural classes of images with m pixels need only n = O(m^(1/4) log^(5/2)(m)) nonadaptive nonpixel samples for faithful recovery, as opposed to the usual m pixel samples. More specifically, suppose x has a sparse representation in some orthonormal basis (e.g., wavelet, Fourier) or tight frame (e.g., curvelet, Gabor), so that the coefficients belong to an ℓ^p ball for 0 < p ≤ 1 …

18,609 citations

Journal ArticleDOI
TL;DR: Basis Pursuit (BP) is a principle for decomposing a signal into an "optimal" superposition of dictionary elements, where optimal means having the smallest l1 norm of coefficients among all such decompositions.
Abstract: The time-frequency and time-scale communities have recently developed a large number of overcomplete waveform dictionaries --- stationary wavelets, wavelet packets, cosine packets, chirplets, and warplets, to name a few. Decomposition into overcomplete systems is not unique, and several methods for decomposition have been proposed, including the method of frames (MOF), matching pursuit (MP), and, for special dictionaries, the best orthogonal basis (BOB). Basis Pursuit (BP) is a principle for decomposing a signal into an "optimal" superposition of dictionary elements, where optimal means having the smallest l1 norm of coefficients among all such decompositions. We give examples exhibiting several advantages over MOF, MP, and BOB, including better sparsity and superresolution. BP has interesting relations to ideas in areas as diverse as ill-posed problems, abstract harmonic analysis, total variation denoising, and multiscale edge denoising. BP in highly overcomplete dictionaries leads to large-scale optimization problems. With signals of length 8192 and a wavelet packet dictionary, one gets an equivalent linear program of size 8192 by 212,992. Such problems can be attacked successfully only because of recent advances in linear programming by interior-point methods. We obtain reasonable success with a primal-dual logarithmic barrier method and conjugate-gradient solver.
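
Concretely, the BP principle is min ||alpha||_1 subject to Phi @ alpha = s, and the large-scale linear programs mentioned above arise from the standard split alpha = u - v with u, v >= 0. A small SciPy sketch of that reformulation on a tiny generic dictionary (it uses the default LP solver, not the paper's primal-dual logarithmic barrier method):

    import numpy as np
    from scipy.optimize import linprog

    def basis_pursuit(Phi, s):
        """min ||alpha||_1  s.t.  Phi @ alpha = s, via the split alpha = u - v with u, v >= 0."""
        n, d = Phi.shape
        c = np.ones(2 * d)                    # objective: sum(u) + sum(v) = ||alpha||_1
        A_eq = np.hstack([Phi, -Phi])         # Phi @ (u - v) = s
        res = linprog(c, A_eq=A_eq, b_eq=s, bounds=(0, None))
        return res.x[:d] - res.x[d:]

    rng = np.random.default_rng(3)
    Phi = rng.standard_normal((20, 60))
    alpha = np.zeros(60)
    alpha[rng.choice(60, 3, replace=False)] = rng.standard_normal(3)
    # Maximum recovery error; close to zero whenever BP recovers the sparse decomposition.
    print(np.abs(basis_pursuit(Phi, Phi @ alpha) - alpha).max())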

9,950 citations

Journal ArticleDOI
TL;DR: The authors introduce an algorithm, called matching pursuit, that decomposes any signal into a linear expansion of waveforms that are selected from a redundant dictionary of functions, chosen in order to best match the signal structures.
Abstract: The authors introduce an algorithm, called matching pursuit, that decomposes any signal into a linear expansion of waveforms that are selected from a redundant dictionary of functions. These waveforms are chosen in order to best match the signal structures. Matching pursuits are general procedures to compute adaptive signal representations. With a dictionary of Gabor functions, a matching pursuit defines an adaptive time-frequency transform. They derive a signal energy distribution in the time-frequency plane, which does not include interference terms, unlike Wigner and Cohen class distributions. A matching pursuit isolates the signal structures that are coherent with respect to a given dictionary. An application to pattern extraction from noisy signals is described. They compare a matching pursuit decomposition with a signal expansion over an optimized wave-packet orthonormal basis, selected with the algorithm of Coifman and Wickerhauser (see IEEE Trans. Inform. Theory, vol. 38, Mar. 1992).
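
A minimal NumPy sketch of the matching pursuit iteration described above: greedily pick the atom most correlated with the residual and subtract its projection, without the re-fitting of earlier coefficients that distinguishes orthogonal MP. The dictionary is assumed to have unit-norm columns; this is a generic illustration, not the authors' Gabor-dictionary implementation.

    import numpy as np

    def matching_pursuit(D, signal, n_iter):
        """Greedy MP: repeatedly select the best-correlated atom and peel off its contribution."""
        residual = signal.astype(float).copy()
        coeffs = np.zeros(D.shape[1])
        for _ in range(n_iter):
            correlations = D.T @ residual            # atoms assumed unit-norm
            j = int(np.argmax(np.abs(correlations)))
            coeffs[j] += correlations[j]             # accumulate the coefficient of atom j
            residual -= correlations[j] * D[:, j]    # remove its projection from the residual
        return coeffs, residual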

9,380 citations

Journal ArticleDOI
TL;DR: A publicly available algorithm that requires only the same order of magnitude of computational effort as ordinary least squares applied to the full set of covariates is described.
Abstract: The purpose of model selection algorithms such as All Subsets, Forward Selection and Backward Elimination is to choose a linear model on the basis of the same set of data to which the model will be applied. Typically we have available a large collection of possible covariates from which we hope to select a parsimonious set for the efficient prediction of a response variable. Least Angle Regression (LARS), a new model selection algorithm, is a useful and less greedy version of traditional forward selection methods. Three main properties are derived: (1) A simple modification of the LARS algorithm implements the Lasso, an attractive version of ordinary least squares that constrains the sum of the absolute regression coefficients; the LARS modification calculates all possible Lasso estimates for a given problem, using an order of magnitude less computer time than previous methods. (2) A different LARS modification efficiently implements Forward Stagewise linear regression, another promising new model selection method; this connection explains the similar numerical results previously observed for the Lasso and Stagewise, and helps us understand the properties of both methods, which are seen as constrained versions of the simpler LARS algorithm. (3) A simple approximation for the degrees of freedom of a LARS estimate is available, from which we derive a Cp estimate of prediction error; this allows a principled choice among the range of possible LARS estimates. LARS and its variants are computationally efficient: the paper describes a publicly available algorithm that requires only the same order of magnitude of computational effort as ordinary least squares applied to the full set of covariates.
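
As a usage-level illustration (assuming scikit-learn is available; this is not code from the paper), the full Lasso path produced by the LARS modification described above can be computed with lars_path:

    import numpy as np
    from sklearn.linear_model import lars_path

    rng = np.random.default_rng(4)
    X = rng.standard_normal((100, 25))
    beta = np.zeros(25)
    beta[:3] = [2.0, -1.5, 1.0]                     # sparse true coefficients
    y = X @ beta + 0.1 * rng.standard_normal(100)

    # method="lasso" gives the Lasso modification of LARS; coefs has one column per knot of the path.
    alphas, active, coefs = lars_path(X, y, method="lasso")
    print(active)          # order in which covariates enter the model
    print(coefs[:, -1])    # coefficients at the end of the path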

7,828 citations