
Signal Recovery from Random Measurements Via Orthogonal Matching Pursuit: The Gaussian Case

TL;DR: In this paper, a greedy algorithm called Orthogonal Matching Pursuit (OMP) was proposed to recover a signal with m nonzero entries in dimension d given O(m ln d) random linear measurements of that signal.
Abstract: This report demonstrates theoretically and empirically that a greedy algorithm called Orthogonal Matching Pursuit (OMP) can reliably recover a signal with m nonzero entries in dimension d given O(m ln d) random linear measurements of that signal. This is a massive improvement over previous results, which require O(m²) measurements. The new results for OMP are comparable with recent results for another approach called Basis Pursuit (BP). In some settings, the OMP algorithm is faster and easier to implement, so it is an attractive alternative to BP for signal recovery problems.
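As an illustrative sketch of the setting the abstract describes (not the authors' reference implementation), the greedy selection and least-squares re-fit at the heart of OMP fit in a few lines of NumPy; the dimensions d, m, and measurement count n on the order of m ln d below are this example's assumptions:

```python
import numpy as np

def omp(Phi, y, m):
    """Recover an m-sparse signal from y = Phi @ x by greedily
    selecting columns and re-fitting by least squares."""
    residual = y.copy()
    support = []
    for _ in range(m):
        # Greedy step: pick the column most correlated with the residual.
        j = int(np.argmax(np.abs(Phi.T @ residual)))
        support.append(j)
        # Orthogonal step: least-squares fit on the selected columns.
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[support] = coef
    return x_hat

# Gaussian example: d = 256, m = 4, n on the order of m ln d.
rng = np.random.default_rng(0)
d, m = 256, 4
n = int(4 * m * np.log(d))  # 88 measurements
x = np.zeros(d)
x[rng.choice(d, size=m, replace=False)] = rng.standard_normal(m)
Phi = rng.standard_normal((n, d)) / np.sqrt(n)
x_hat = omp(Phi, Phi @ x, m)
```

With these dimensions, OMP typically recovers the support exactly, illustrating the O(m ln d) measurement regime the abstract claims.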


Citations
Journal ArticleDOI
TL;DR: In this article, a bipartite graph representation is proposed for sparse signal ensembles that quantifies the intra- and inter-signal dependences within and among the signals.
Abstract: In compressive sensing, a small collection of linear projections of a sparse signal contains enough information to permit signal recovery. Distributed compressive sensing extends this framework by defining ensemble sparsity models, allowing a correlated ensemble of sparse signals to be jointly recovered from a collection of separately acquired compressive measurements. In this paper, we introduce a framework for modeling sparse signal ensembles that quantifies the intra- and intersignal dependences within and among the signals. This framework is based on a novel bipartite graph representation that links the sparse signal coefficients with the measurements obtained for each signal. Using our framework, we provide fundamental bounds on the number of noiseless measurements that each sensor must collect to ensure that the signals are jointly recoverable.

55 citations

Journal ArticleDOI
TL;DR: An enhanced compressive sensing-based algorithm for the estimation of Taylor–Fourier coefficients in a multifrequency dynamic phasor model is proposed and characterized, allowing multiple functions to be combined into a promising new-generation measurement, monitoring, and diagnostic device.
Abstract: To effectively manage the growing number of distributed energy resources, more extensive monitoring is needed at distribution network levels, where a challenging combination of constraints and cost-effectiveness requirements must be taken into account. In this paper, we consider the idea of combining accurate phasor measurement with the capability to analyze harmonic and interharmonic phasors. This creates the need to deal with multiple phasor components having different amplitudes, including interharmonics with unknown frequency locations. For this purpose, we propose an enhanced compressive sensing-based algorithm for the estimation of Taylor–Fourier coefficients in a multifrequency dynamic phasor model. We characterize its accuracy in harmonic and interharmonic phasor measurement and discuss reporting rate and latency performance. We show that very good accuracy can also be achieved as a standard phasor measurement unit, allowing multiple functions to be combined into a promising new-generation measurement, monitoring, and diagnostic device.

55 citations

Journal ArticleDOI
TL;DR: Two simple methods are investigated for block CS (BCS) with a discrete cosine transform (DCT)-based image representation for compression applications; both are efficacious in reducing the dimension of the BCS-based image representation and/or improving the recovered image quality.

55 citations

Journal ArticleDOI
TL;DR: This paper addresses the energy burden of acquiring and streaming the acoustic respiratory signal and lessens it by applying the concept of compressed sensing (CS); the authors envision a mobile, personalized asthma monitoring system comprising a wearable, energy-constrained acoustic sensor and a smartphone.
Abstract: Current medical practice in the long-term treatment of chronic respiratory diseases lacks a convenient method of empowering patients and caregivers to continuously and quantitatively track the intensity of respiratory symptoms, such as asthmatic wheezing occurring in respiratory sounds. We envision a mobile, personalized asthma monitoring system comprising a wearable, energy-constrained acoustic sensor and a smartphone. In this paper, we address the energy burden of acquiring and streaming the acoustic respiratory signal, and lessen it by applying the concept of compressed sensing (CS). First, we analyse how well the frequency representations of normal and pathologic respiratory sounds (discrete Fourier transform and discrete cosine transform) adhere to the sparse signal model. Given the pseudo-random non-uniform subsampling encoder implemented on an MSP430 microcontroller, we review the accuracy and execution-time tradeoffs of different CS algorithms suitable for real-time respiratory spectrum recovery on a smartphone. A working CS respiratory-spectrum acquisition prototype is demonstrated and evaluated. The prototype enables real-time reconstruction of spectra dominated by approximately eight frequency components with more than 80% accuracy, on an Android smartphone using the Orthogonal Matching Pursuit algorithm, from only 25% of the signal samples (with respect to the Nyquist rate) acquired and streamed by the sensor at 8 kb/s.

54 citations

Journal ArticleDOI
TL;DR: It is shown that the data correlations for different tasks are taken into account more effectively by using the hierarchical model with a common prediction‐error precision parameter across all related tasks, which leads to a better learning performance.
Abstract: We focus on a Bayesian approach to learn sparse models by simultaneously utilizing multiple groups of measurements that are marked by a similar sparseness profile. Joint learning of sparse representations for multiple models has been mostly overlooked, although it is a useful tool for exploiting data redundancy by modeling informative relationships within groups of measurements. To this end, two hierarchical Bayesian models are introduced and associated algorithms are proposed for multitask sparse Bayesian learning (SBL). It is shown that the data correlations for different tasks are taken into account more effectively by using the hierarchical model with a common prediction‐error precision parameter across all related tasks, which leads to a better learning performance. Numerical experiments verify that exploiting common information among multiple related tasks leads to better performance, for both models that are highly and approximately sparse. Then, we examine two applications of multitask SBL in structural health monitoring: identifying structural stiffness losses and recovering missing data occurring during wireless transmission, which exploit information about relationships in the temporal and spatial domains, respectively. These illustrative examples demonstrate the potential of multitask SBL for solving a wide range of sparse approximation problems in science and technology.

54 citations

References
Book
01 Jan 1983

34,729 citations

Book
D.L. Donoho
01 Jan 2004
TL;DR: It is possible to design n = O(N log(m)) nonadaptive measurements allowing reconstruction with accuracy comparable to that attainable with direct knowledge of the N most important coefficients, and a good approximation to those N important coefficients is extracted from the n measurements by solving a linear program (Basis Pursuit in signal processing).
Abstract: Suppose x is an unknown vector in R^m (a digital image or signal); we plan to measure n general linear functionals of x and then reconstruct. If x is known to be compressible by transform coding with a known transform, and we reconstruct via the nonlinear procedure defined here, the number of measurements n can be dramatically smaller than the size m. Thus, certain natural classes of images with m pixels need only n = O(m^(1/4) log^(5/2)(m)) nonadaptive nonpixel samples for faithful recovery, as opposed to the usual m pixel samples. More specifically, suppose x has a sparse representation in some orthonormal basis (e.g., wavelet, Fourier) or tight frame (e.g., curvelet, Gabor), so the coefficients belong to an l^p ball for 0 …

18,609 citations

Journal ArticleDOI
TL;DR: Basis Pursuit (BP) is a principle for decomposing a signal into an "optimal" superposition of dictionary elements, where optimal means having the smallest l1 norm of coefficients among all such decompositions.
Abstract: The time-frequency and time-scale communities have recently developed a large number of overcomplete waveform dictionaries --- stationary wavelets, wavelet packets, cosine packets, chirplets, and warplets, to name a few. Decomposition into overcomplete systems is not unique, and several methods for decomposition have been proposed, including the method of frames (MOF), matching pursuit (MP), and, for special dictionaries, the best orthogonal basis (BOB). Basis Pursuit (BP) is a principle for decomposing a signal into an "optimal" superposition of dictionary elements, where optimal means having the smallest l1 norm of coefficients among all such decompositions. We give examples exhibiting several advantages over MOF, MP, and BOB, including better sparsity and superresolution. BP has interesting relations to ideas in areas as diverse as ill-posed problems, abstract harmonic analysis, total variation denoising, and multiscale edge denoising. BP in highly overcomplete dictionaries leads to large-scale optimization problems. With signals of length 8192 and a wavelet packet dictionary, one gets an equivalent linear program of size 8192 by 212,992. Such problems can be attacked successfully only because of recent advances in linear programming by interior-point methods. We obtain reasonable success with a primal-dual logarithmic barrier method and conjugate-gradient solver.
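To make the l1 principle concrete, here is a minimal toy sketch (not the paper's interior-point solver): Basis Pursuit cast as the standard linear program min Σ(u + v) subject to Phi(u − v) = y, u, v ≥ 0, solved with SciPy's HiGHS backend; the dimensions below are this example's assumptions:

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(Phi, y):
    """min ||c||_1 subject to Phi @ c = y, as a linear program:
    split c = u - v with u, v >= 0 and minimize sum(u) + sum(v)."""
    n, d = Phi.shape
    res = linprog(c=np.ones(2 * d),
                  A_eq=np.hstack([Phi, -Phi]), b_eq=y,
                  bounds=(0, None), method="highs")
    u, v = res.x[:d], res.x[d:]
    return u - v

# Toy recovery: a 3-sparse vector in dimension 64 from 28 measurements.
rng = np.random.default_rng(1)
d, m, n = 64, 3, 28
x = np.zeros(d)
x[rng.choice(d, size=m, replace=False)] = rng.standard_normal(m)
Phi = rng.standard_normal((n, d))
c_hat = basis_pursuit(Phi, Phi @ x)
```

Because the l1 minimizer of an underdetermined system is a vertex of the feasible polytope, the LP typically returns the sparse generating vector exactly at this scale.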

9,950 citations

Journal ArticleDOI
TL;DR: The authors introduce an algorithm, called matching pursuit, that decomposes any signal into a linear expansion of waveforms that are selected from a redundant dictionary of functions, chosen in order to best match the signal structures.
Abstract: The authors introduce an algorithm, called matching pursuit, that decomposes any signal into a linear expansion of waveforms that are selected from a redundant dictionary of functions. These waveforms are chosen in order to best match the signal structures. Matching pursuits are general procedures to compute adaptive signal representations. With a dictionary of Gabor functions, a matching pursuit defines an adaptive time-frequency transform. They derive a signal energy distribution in the time-frequency plane, which does not include interference terms, unlike Wigner and Cohen class distributions. A matching pursuit isolates the signal structures that are coherent with respect to a given dictionary. An application to pattern extraction from noisy signals is described. They compare a matching pursuit decomposition with a signal expansion over an optimized wavepacket orthonormal basis, selected with the algorithm of Coifman and Wickerhauser (see IEEE Trans. Inform. Theory, vol. 38, Mar. 1992).
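A minimal sketch of the pure matching pursuit iteration described here, which subtracts one atom's projection per step and, unlike OMP, never re-fits by least squares; the small dictionary (identity atoms plus random unit-norm atoms) is this example's assumption, not the paper's:

```python
import numpy as np

def matching_pursuit(D, y, iters):
    """Pure matching pursuit: repeatedly project the residual onto
    the single best-matching atom (columns of D are unit-norm)."""
    residual = y.astype(float).copy()
    coeffs = np.zeros(D.shape[1])
    for _ in range(iters):
        inner = D.T @ residual
        j = int(np.argmax(np.abs(inner)))   # best-matching atom
        coeffs[j] += inner[j]               # accumulate its coefficient
        residual -= inner[j] * D[:, j]      # subtract the projection
    return coeffs, residual

# Redundant dictionary: 32 identity atoms plus 32 random unit-norm atoms.
rng = np.random.default_rng(2)
n = 32
R = rng.standard_normal((n, n))
D = np.hstack([np.eye(n), R / np.linalg.norm(R, axis=0)])
y = 2.0 * D[:, 3] - 1.5 * D[:, 40]   # synthetic two-atom signal
coeffs, residual = matching_pursuit(D, y, iters=200)
```

The residual energy decreases monotonically and, for signals coherent with the dictionary, decays geometrically toward zero.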

9,380 citations

Journal ArticleDOI
TL;DR: A publicly available algorithm that requires only the same order of magnitude of computational effort as ordinary least squares applied to the full set of covariates is described.
Abstract: The purpose of model selection algorithms such as All Subsets, Forward Selection and Backward Elimination is to choose a linear model on the basis of the same set of data to which the model will be applied. Typically we have available a large collection of possible covariates from which we hope to select a parsimonious set for the efficient prediction of a response variable. Least Angle Regression (LARS), a new model selection algorithm, is a useful and less greedy version of traditional forward selection methods. Three main properties are derived: (1) A simple modification of the LARS algorithm implements the Lasso, an attractive version of ordinary least squares that constrains the sum of the absolute regression coefficients; the LARS modification calculates all possible Lasso estimates for a given problem, using an order of magnitude less computer time than previous methods. (2) A different LARS modification efficiently implements Forward Stagewise linear regression, another promising new model selection method; this connection explains the similar numerical results previously observed for the Lasso and Stagewise, and helps us understand the properties of both methods, which are seen as constrained versions of the simpler LARS algorithm. (3) A simple approximation for the degrees of freedom of a LARS estimate is available, from which we derive a Cp estimate of prediction error; this allows a principled choice among the range of possible LARS estimates. LARS and its variants are computationally efficient: the paper describes a publicly available algorithm that requires only the same order of magnitude of computational effort as ordinary least squares applied to the full set of covariates.
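LARS itself traces the entire regularization path; as a simpler hedged illustration of the l1-penalized least-squares objective that its Lasso modification solves, here is a coordinate-descent solver with soft-thresholding updates (this is not the LARS algorithm, and the problem sizes below are this example's assumptions):

```python
import numpy as np

def lasso_cd(X, y, lam, iters=500):
    """Coordinate descent for (1/2)||y - X b||^2 + lam * ||b||_1.
    Each pass soft-thresholds one coefficient at a time."""
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(iters):
        for j in range(p):
            # Partial residual with feature j's contribution removed.
            r = y - X @ b + X[:, j] * b[j]
            rho = X[:, j] @ r
            # Soft-thresholding: shrink toward zero by lam.
            b[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
    return b

# Sparse regression: only the first two of ten covariates matter.
rng = np.random.default_rng(3)
n, p = 100, 10
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:2] = [3.0, -2.0]
y = X @ beta + 0.01 * rng.standard_normal(n)
b = lasso_cd(X, y, lam=1.0)  # sparse estimate: support on features 0 and 1
```

The l1 penalty zeroes out the irrelevant coefficients while only slightly shrinking the active ones, the same constrained-least-squares behavior the abstract attributes to the Lasso.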

7,828 citations