
Signal Recovery from Random Measurements Via Orthogonal Matching Pursuit: The Gaussian Case

TL;DR: In this paper, a greedy algorithm called Orthogonal Matching Pursuit (OMP) was proposed to recover a signal with m nonzero entries in dimension d given O(m ln d) random linear measurements of that signal.
Abstract: This report demonstrates theoretically and empirically that a greedy algorithm called Orthogonal Matching Pursuit (OMP) can reliably recover a signal with m nonzero entries in dimension d given O(m ln d) random linear measurements of that signal. This is a massive improvement over previous results, which require O(m^2) measurements. The new results for OMP are comparable with recent results for another approach called Basis Pursuit (BP). In some settings, the OMP algorithm is faster and easier to implement, so it is an attractive alternative to BP for signal recovery problems.
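The greedy select-and-refit loop the abstract describes can be sketched in a few lines. The following is a minimal illustration assuming NumPy is available; the names (`omp`, `Phi`, `y`) and the constant in the measurement count are illustrative choices, not taken from the paper.

```python
import numpy as np

def omp(Phi, y, m):
    """Recover an m-sparse signal x from measurements y = Phi @ x.

    Each iteration picks the column of Phi most correlated with the
    residual, then re-fits ALL selected coefficients by least squares
    (the "orthogonal" step that distinguishes OMP from plain MP).
    """
    residual = y.copy()
    support = []
    for _ in range(m):
        # Column most correlated with the current residual.
        idx = int(np.argmax(np.abs(Phi.T @ residual)))
        support.append(idx)
        # Least-squares re-fit over all selected columns.
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[support] = coef
    return x_hat

# Toy example: d = 256, m = 4 nonzeros, n on the order of m ln d
# Gaussian measurements (generous constant for a reliable demo).
rng = np.random.default_rng(0)
d, m = 256, 4
n = 4 * m * int(np.log(d))
Phi = rng.standard_normal((n, d)) / np.sqrt(n)
x = np.zeros(d)
x[rng.choice(d, m, replace=False)] = rng.standard_normal(m)
y = Phi @ x
x_hat = omp(Phi, y, m)
print(np.allclose(x, x_hat, atol=1e-6))
```

With far fewer than d measurements, the greedy loop identifies the true support and the final least-squares fit recovers the coefficients exactly (up to numerical precision).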


Citations
Journal ArticleDOI
TL;DR: This review discusses the current implementations of MRF and their use in a clinical setting and highlights areas of need that must be addressed before MRF can be fully adopted into the clinic and makes recommendations to the MRF community on standardization and validation strategies.
Abstract: Magnetic resonance fingerprinting (MRF) is a powerful quantitative MRI technique capable of acquiring multiple property maps simultaneously in a short timeframe. The MRF framework has been adapted to a wide variety of clinical applications, but faces challenges in technical development, and to date has only demonstrated repeatability and reproducibility in small studies. In this review, we discuss the current implementations of MRF and their use in a clinical setting. Based on this analysis, we highlight areas of need that must be addressed before MRF can be fully adopted into the clinic and make recommendations to the MRF community on standardization and validation strategies of MRF techniques. Level of Evidence: 2. Technical Efficacy: Stage 2. J. Magn. Reson. Imaging 2020;51:675-692.

57 citations

Journal ArticleDOI
TL;DR: A detailed review for the state-of-the-art of secure CS according to different types of random measurement matrices such as Gaussian matrix, circulant matrix, and other special random matrices establishes theoretical foundations for applications in secure wireless communications.
Abstract: Compressive sensing (CS) has become a popular signal processing technique and has extensive applications in numerous fields such as wireless communications, image processing, magnetic resonance imaging, remote sensing imaging, and analog-to-information conversion, since it can realize simultaneous sampling and compression. In the information security field, secure CS has received much attention due to the fact that CS can be regarded as a cryptosystem to attain simultaneous sampling, compression and encryption when maintaining the secret measurement matrix. Considering that there are increasing works focusing on secure wireless communications based on CS in recent years, we produce a detailed review for the state-of-the-art in this paper. To be specific, the survey proceeds with two phases. The first phase reviews the security aspects of CS according to different types of random measurement matrices such as Gaussian matrix, circulant matrix, and other special random matrices, which establishes theoretical foundations for applications in secure wireless communications. The second phase reviews the applications of secure CS depending on communication scenarios such as wireless wiretap channel, wireless sensor network, Internet of Things, crowdsensing, smart grid, and wireless body area networks. Finally, some concluding remarks are given.
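The survey's premise, that a secret Gaussian measurement matrix lets one linear step act as sampling, compression, and encryption at once, can be illustrated with a toy sketch. NumPy is assumed, and the seed-as-key convention and the `keygen` name are illustrative, not an API from the paper.

```python
import numpy as np

def keygen(seed, n, d):
    # The shared secret is just the RNG seed: both parties can
    # regenerate the same n x d Gaussian measurement matrix from it.
    return np.random.default_rng(seed).standard_normal((n, d)) / np.sqrt(n)

d, n = 128, 48
Phi = keygen(seed=1234, n=n, d=d)                  # sender's secret matrix

x = np.zeros(d)
x[[5, 40, 99]] = [1.0, -2.0, 0.5]                  # 3-sparse "plaintext"

# Sampling, compression (d -> n), and "encryption" in one step.
y = Phi @ x

# An eavesdropper sees only n incoherent projections; the receiver
# regenerates Phi from the seed and runs any CS decoder (e.g. OMP).
Phi_rx = keygen(seed=1234, n=n, d=d)
print(np.allclose(Phi, Phi_rx), y.shape)
```

The decoding step itself is a standard sparse-recovery problem, which is why the survey's security analysis reduces to properties of the random measurement matrix.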

57 citations

Proceedings ArticleDOI
16 Mar 2009
TL;DR: A piecewise stationary autoregressive model is integrated into the recovery process for CS-coded images, and is able to increase the reconstruction quality by 2 to 7 dB over existing methods.
Abstract: For the new signal acquisition methodology of compressive sensing (CS) a challenge is to find a space in which the signal is sparse and hence recoverable faithfully. Given the nonstationarity of many natural signals such as images, the sparse space is varying in time or spatial domain. As such, CS recovery should be conducted in locally adaptive, signal-dependent spaces to counter the fact that the CS measurements are global and irrespective of signal structures. On the contrary, existing CS reconstruction methods use a fixed set of bases (e.g., wavelets, DCT, and gradient spaces) for the entirety of a signal. To rectify this problem we propose a new model-based framework to facilitate the use of adaptive bases in CS recovery. In a case study we integrate a piecewise stationary autoregressive model into the recovery process for CS-coded images, and are able to increase the reconstruction quality by 2 to 7 dB over existing methods. The new CS recovery framework can readily incorporate prior knowledge to boost reconstruction quality.

56 citations

Journal ArticleDOI
TL;DR: An information-theoretic framework for analyzing the performance limits of support recovery is obtained and it is shown that the proposed methodology can deal with a variety of models of sparse signal recovery, hence demonstrating its potential as an effective analytical tool.
Abstract: In this paper, we consider the problem of exact support recovery of sparse signals via noisy linear measurements. The main focus is finding the sufficient and necessary condition on the number of measurements for support recovery to be reliable. By drawing an analogy between the problem of support recovery and the problem of channel coding over the Gaussian multiple-access channel (MAC), and exploiting mathematical tools developed for the latter problem, we obtain an information-theoretic framework for analyzing the performance limits of support recovery. Specifically, when the number of nonzero entries of the sparse signal is held fixed, the exact asymptotics on the number of measurements sufficient and necessary for support recovery is characterized. In addition, we show that the proposed methodology can deal with a variety of models of sparse signal recovery, hence demonstrating its potential as an effective analytical tool.

56 citations

Journal ArticleDOI
TL;DR: A novel perturbed orthogonal matching pursuit (POMP) algorithm is presented that performs controlled perturbation of selected support vectors to decrease the orthogonal residual at each iteration.
Abstract: Compressive Sensing theory details how a sparsely represented signal in a known basis can be reconstructed with an underdetermined linear measurement model. However, in reality there is a mismatch between the assumed and the actual bases due to factors such as discretization of the parameter space defining basis components, sampling jitter in A/D conversion, and model errors. Due to this mismatch, a signal may not be sparse in the assumed basis, which causes significant performance degradation in sparse reconstruction algorithms. To eliminate the mismatch problem, this paper presents a novel perturbed orthogonal matching pursuit (POMP) algorithm that performs controlled perturbation of selected support vectors to decrease the orthogonal residual at each iteration. Based on detailed mathematical analysis, conditions for successful reconstruction are derived. Simulations show that robust results with much smaller reconstruction errors in the case of perturbed bases can be obtained as compared to standard sparse reconstruction techniques.

56 citations

References

Book
D.L. Donoho
01 Jan 2004
TL;DR: It is possible to design n = O(N log(m)) nonadaptive measurements allowing reconstruction with accuracy comparable to that attainable with direct knowledge of the N most important coefficients, and a good approximation to those N important coefficients is extracted from the n measurements by solving a linear program, known in signal processing as Basis Pursuit.
Abstract: Suppose x is an unknown vector in R^m (a digital image or signal); we plan to measure n general linear functionals of x and then reconstruct. If x is known to be compressible by transform coding with a known transform, and we reconstruct via the nonlinear procedure defined here, the number of measurements n can be dramatically smaller than the size m. Thus, certain natural classes of images with m pixels need only n = O(m^{1/4} log^{5/2}(m)) nonadaptive nonpixel samples for faithful recovery, as opposed to the usual m pixel samples. More specifically, suppose x has a sparse representation in some orthonormal basis (e.g., wavelet, Fourier) or tight frame (e.g., curvelet, Gabor), so the coefficients belong to an l_p ball for 0 < p <= 1.

18,609 citations

Journal ArticleDOI
TL;DR: Basis Pursuit (BP) is a principle for decomposing a signal into an "optimal" superposition of dictionary elements, where optimal means having the smallest l1 norm of coefficients among all such decompositions.
Abstract: The time-frequency and time-scale communities have recently developed a large number of overcomplete waveform dictionaries --- stationary wavelets, wavelet packets, cosine packets, chirplets, and warplets, to name a few. Decomposition into overcomplete systems is not unique, and several methods for decomposition have been proposed, including the method of frames (MOF), matching pursuit (MP), and, for special dictionaries, the best orthogonal basis (BOB). Basis Pursuit (BP) is a principle for decomposing a signal into an "optimal" superposition of dictionary elements, where optimal means having the smallest l1 norm of coefficients among all such decompositions. We give examples exhibiting several advantages over MOF, MP, and BOB, including better sparsity and superresolution. BP has interesting relations to ideas in areas as diverse as ill-posed problems, abstract harmonic analysis, total variation denoising, and multiscale edge denoising. BP in highly overcomplete dictionaries leads to large-scale optimization problems. With signals of length 8192 and a wavelet packet dictionary, one gets an equivalent linear program of size 8192 by 212,992. Such problems can be attacked successfully only because of recent advances in linear programming by interior-point methods. We obtain reasonable success with a primal-dual logarithmic barrier method and conjugate-gradient solver.
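The reduction of Basis Pursuit to a linear program, which the abstract invokes for its 8192-by-212,992 example, can be sketched on a small problem. This assumes SciPy's `linprog` is available; the variable splitting x = p - q is the standard textbook reformulation, not code from the paper.

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, b):
    """min ||x||_1 subject to A x = b.

    Split x = p - q with p, q >= 0, so ||x||_1 = sum(p) + sum(q)
    and the problem becomes a standard-form linear program.
    """
    n, d = A.shape
    c = np.ones(2 * d)               # objective: sum(p) + sum(q)
    A_eq = np.hstack([A, -A])        # A p - A q = b
    res = linprog(c, A_eq=A_eq, b_eq=b, bounds=(0, None), method="highs")
    p, q = res.x[:d], res.x[d:]
    return p - q

# Toy instance: 3-sparse signal in dimension 80, 30 Gaussian measurements.
rng = np.random.default_rng(7)
n, d = 30, 80
A = rng.standard_normal((n, d)) / np.sqrt(n)
x = np.zeros(d)
x[[3, 17, 61]] = [2.0, -1.0, 0.7]
x_hat = basis_pursuit(A, A @ x)
print(np.max(np.abs(x_hat - x)))
```

The l1 objective, unlike l2, drives most coefficients exactly to zero, which is why BP recovers the sparse decomposition here while a least-squares fit would smear energy over all 80 columns.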

9,950 citations

Journal ArticleDOI
TL;DR: The authors introduce an algorithm, called matching pursuit, that decomposes any signal into a linear expansion of waveforms that are selected from a redundant dictionary of functions, chosen in order to best match the signal structures.
Abstract: The authors introduce an algorithm, called matching pursuit, that decomposes any signal into a linear expansion of waveforms that are selected from a redundant dictionary of functions. These waveforms are chosen in order to best match the signal structures. Matching pursuits are general procedures to compute adaptive signal representations. With a dictionary of Gabor functions a matching pursuit defines an adaptive time-frequency transform. They derive a signal energy distribution in the time-frequency plane, which does not include interference terms, unlike Wigner and Cohen class distributions. A matching pursuit isolates the signal structures that are coherent with respect to a given dictionary. An application to pattern extraction from noisy signals is described. They compare a matching pursuit decomposition with a signal expansion over an optimized wavepacket orthonormal basis, selected with the algorithm of Coifman and Wickerhauser (see IEEE Trans. Inform. Theory, vol. 38, Mar. 1992).
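The matching pursuit iteration described here (correlate, pick the best-matching atom, subtract its contribution) can be sketched as follows, assuming NumPy; note that, unlike OMP above, the selected coefficients are never re-fit by least squares.

```python
import numpy as np

def matching_pursuit(D, y, n_iter):
    """Greedy expansion of y over a dictionary D with unit-norm columns.

    Each step adds the single best-matching atom's correlation to its
    coefficient and subtracts that atom's contribution from the residual.
    """
    residual = y.copy()
    coeffs = np.zeros(D.shape[1])
    for _ in range(n_iter):
        corr = D.T @ residual
        k = int(np.argmax(np.abs(corr)))
        coeffs[k] += corr[k]
        residual = residual - corr[k] * D[:, k]
    return coeffs, residual

# Toy example with an orthonormal dictionary, where each pick is exact.
rng = np.random.default_rng(3)
Q, _ = np.linalg.qr(rng.standard_normal((16, 16)))
y = 2.0 * Q[:, 1] - 0.5 * Q[:, 9]
coeffs, residual = matching_pursuit(Q, y, n_iter=2)
print(np.linalg.norm(residual))
```

For a redundant (non-orthogonal) dictionary the residual only decays asymptotically, and previously chosen atoms may be revisited; that is precisely the inefficiency OMP's least-squares re-fit removes.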

9,380 citations

Journal ArticleDOI
TL;DR: A publicly available algorithm that requires only the same order of magnitude of computational effort as ordinary least squares applied to the full set of covariates is described.
Abstract: The purpose of model selection algorithms such as All Subsets, Forward Selection and Backward Elimination is to choose a linear model on the basis of the same set of data to which the model will be applied. Typically we have available a large collection of possible covariates from which we hope to select a parsimonious set for the efficient prediction of a response variable. Least Angle Regression (LARS), a new model selection algorithm, is a useful and less greedy version of traditional forward selection methods. Three main properties are derived: (1) A simple modification of the LARS algorithm implements the Lasso, an attractive version of ordinary least squares that constrains the sum of the absolute regression coefficients; the LARS modification calculates all possible Lasso estimates for a given problem, using an order of magnitude less computer time than previous methods. (2) A different LARS modification efficiently implements Forward Stagewise linear regression, another promising new model selection method; this connection explains the similar numerical results previously observed for the Lasso and Stagewise, and helps us understand the properties of both methods, which are seen as constrained versions of the simpler LARS algorithm. (3) A simple approximation for the degrees of freedom of a LARS estimate is available, from which we derive a Cp estimate of prediction error; this allows a principled choice among the range of possible LARS estimates. LARS and its variants are computationally efficient: the paper describes a publicly available algorithm that requires only the same order of magnitude of computational effort as ordinary least squares applied to the full set of covariates.
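The Forward Stagewise procedure that the abstract connects to LARS and the Lasso admits a very short sketch: repeatedly nudge the coefficient most correlated with the residual by a small step eps. NumPy is assumed and all names are illustrative; this is the incremental variant, not the exact LARS path algorithm.

```python
import numpy as np

def forward_stagewise(X, y, eps=0.01, n_steps=5000):
    """Incremental forward stagewise regression.

    Each step moves the coefficient whose covariate is most correlated
    with the current residual by +/- eps. As eps -> 0 its solution path
    approaches the LARS/Lasso path under the conditions in the paper.
    """
    beta = np.zeros(X.shape[1])
    r = y.copy()
    for _ in range(n_steps):
        corr = X.T @ r
        j = int(np.argmax(np.abs(corr)))
        delta = eps * np.sign(corr[j])
        beta[j] += delta
        r = r - delta * X[:, j]
    return beta

# Toy example: 2 active covariates out of 10, unit-norm columns.
rng = np.random.default_rng(0)
n, p = 100, 10
X = rng.standard_normal((n, p))
X /= np.linalg.norm(X, axis=0)
beta_true = np.zeros(p)
beta_true[[0, 4]] = [3.0, -1.5]
y = X @ beta_true + 0.01 * rng.standard_normal(n)
beta = forward_stagewise(X, y)
```

Because each step is tiny and greedy, the path is "less greedy" than classic forward selection: coefficients grow gradually and can share credit among correlated covariates, which is the behavior LARS reproduces in closed form.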

7,828 citations