
Signal Recovery from Random Measurements Via Orthogonal Matching Pursuit: The Gaussian Case

TL;DR: In this paper, a greedy algorithm called Orthogonal Matching Pursuit (OMP) was proposed to recover a signal with m nonzero entries in dimension d given O(m ln d) random linear measurements of that signal.
Abstract: This report demonstrates theoretically and empirically that a greedy algorithm called Orthogonal Matching Pursuit (OMP) can reliably recover a signal with m nonzero entries in dimension d given O(m ln d) random linear measurements of that signal. This is a massive improvement over previous results, which require O(m^2) measurements. The new results for OMP are comparable with recent results for another approach called Basis Pursuit (BP). In some settings, the OMP algorithm is faster and easier to implement, so it is an attractive alternative to BP for signal recovery problems.
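The greedy recovery loop the abstract describes can be sketched in a few lines of NumPy. This is an illustrative sketch, not the authors' implementation; the problem sizes below are chosen to sit inside the O(m ln d) measurement regime the paper analyzes.

```python
import numpy as np

def omp(A, y, m):
    """Orthogonal Matching Pursuit: greedily estimate an m-sparse x from y = A @ x."""
    d = A.shape[1]
    support = []
    residual = y.copy()
    coef = np.zeros(0)
    for _ in range(m):
        # Greedy step: pick the column most correlated with the residual.
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # Orthogonal step: re-fit ALL selected coefficients by least squares.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x_hat = np.zeros(d)
    x_hat[support] = coef
    return x_hat

# m = 4 nonzeros in dimension d = 256, with n on the order of m ln d
# Gaussian measurements (comfortably above the recovery threshold).
rng = np.random.default_rng(0)
d, m, n = 256, 4, 100
A = rng.standard_normal((n, d)) / np.sqrt(n)
x_true = np.zeros(d)
x_true[rng.choice(d, size=m, replace=False)] = rng.standard_normal(m)
x_hat = omp(A, A @ x_true, m)
```

The least-squares re-fit on the whole selected support is what distinguishes OMP from plain matching pursuit: once a column enters the support, the residual stays orthogonal to it.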


Citations
Journal ArticleDOI
TL;DR: The experiments confirm that the proposed framework may be used for compression of abdominal f-ECG and to obtain real-time information of the fHR, providing a suitable solution for real-time, very low-power f-ECG monitoring.
Abstract: Objective: Analysis of fetal electrocardiogram (f-ECG) waveforms as well as fetal heart-rate (fHR) evaluation provide important information about the condition of the fetus during pregnancy. Continuous monitoring of f-ECG, for example using the technologies already applied for adult ECG tele-monitoring (e.g., Wireless Body Sensor Networks (WBSNs)), may increase early detection of fetal arrhythmias. In this study, we propose a novel framework, based on compressive sensing (CS) theory, for the compression and joint detection/classification of mother and fetal heart beats. Methods: Our scheme is based on the sparse representation of the components derived from independent component analysis (ICA), which we propose to apply directly in the compressed domain. Detection and classification are based on the activated atoms in a specifically designed reconstruction dictionary. Results: Validation of the proposed compression and detection framework has been done on two publicly available datasets, showing promising results (sensitivity S = 92.5%, P+ = 92%, F1 = 92.2% for the Silesia dataset and S = 78%, P+ = 77%, F1 = 77.5% for the Challenge dataset A, with average reconstruction quality PRD = 8.5% and PRD = 7.5%, respectively). Conclusion: The experiments confirm that the proposed framework may be used for compression of abdominal f-ECG and to obtain real-time information of the fHR, providing a suitable solution for real-time, very low-power f-ECG monitoring. Significance: To the authors' knowledge, this is the first time that a framework for the low-power CS compression of fetal abdominal ECG is proposed combined with a beat detector for fHR estimation.

79 citations

Journal Article
TL;DR: This article generalizes HTP from compressed sensing to a generic problem setup of sparsity-constrained convex optimization and demonstrates the superiority of the method to the state-of-the-art greedy selection methods in sparse linear regression, sparse logistic regression and sparse precision matrix estimation problems.
Abstract: Hard Thresholding Pursuit (HTP) is an iterative greedy selection procedure for finding sparse solutions of underdetermined linear systems. This method has been shown to have strong theoretical guarantee and impressive numerical performance. In this article, we generalize HTP from compressed sensing to a generic problem setup of sparsity-constrained convex optimization. The proposed algorithm iterates between a standard gradient descent step and a hard-thresholding step with or without debiasing. We analyze the parameter estimation and sparsity recovery performance of the proposed method. Extensive numerical results confirm our theoretical predictions and demonstrate the superiority of our method to the state-of-the-art greedy selection methods in sparse linear regression, sparse logistic regression and sparse precision matrix estimation problems.
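For the least-squares case, the HTP iteration the abstract outlines (gradient descent step, keep the k largest entries, then debias on that support) can be sketched as follows. Step size and iteration count here are illustrative choices, not values from the article.

```python
import numpy as np

def htp(A, y, k, n_iter=50, step=1.0):
    """Hard Thresholding Pursuit sketch for min ||y - A x||^2 s.t. ||x||_0 <= k."""
    d = A.shape[1]
    x = np.zeros(d)
    for _ in range(n_iter):
        g = x + step * (A.T @ (y - A @ x))    # standard gradient descent step
        support = np.argsort(np.abs(g))[-k:]  # hard thresholding: keep top-k entries
        x = np.zeros(d)
        # Debiasing: least-squares re-fit restricted to the chosen support.
        x[support], *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    return x

rng = np.random.default_rng(1)
n, d, k = 128, 256, 4
A = rng.standard_normal((n, d)) / np.sqrt(n)
x_true = np.zeros(d)
x_true[rng.choice(d, size=k, replace=False)] = rng.standard_normal(k)
x_hat = htp(A, A @ x_true, k)
```

Once the thresholding step identifies the true support, the debiasing re-fit makes the iterate a fixed point, which is why HTP typically terminates after very few iterations.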

79 citations

Journal ArticleDOI
TL;DR: A novel cluster sparsity field based HSI reconstruction framework which explicitly models both the intrinsic correlation between measurements within the spectrum for a particular pixel, and the similarity between pixels due to the spatial structure of the HSI, thus combating the effects of noise corruption or undersampling.
Abstract: Hyperspectral images (HSIs) have significant advantages over more traditional image types for a variety of computer vision applications due to the extra information available. The practical reality of capturing and transmitting HSIs, however, means that they often exhibit large amounts of noise, or are undersampled to reduce the data volume. Methods for combating such image corruption are thus critical to many HSI applications. Here we devise a novel cluster sparsity field (CSF) based HSI reconstruction framework which explicitly models both the intrinsic correlation between measurements within the spectrum for a particular pixel, and the similarity between pixels due to the spatial structure of the HSI. These two priors have been shown to be effective previously, but have always been considered separately. By dividing pixels of the HSI into a group of spatial clusters on the basis of spectrum characteristics, we define CSF, a Markov random field based prior. In CSF, a structured sparsity potential models the correlation between measurements within each spectrum, and a graph structure potential models the similarity between pixels in each spatial cluster. Then, we integrate the CSF prior learning and image reconstruction into a unified variational framework for optimization, which makes the CSF prior image-specific, and robust to noise. It also results in more accurate image reconstruction compared with existing HSI reconstruction methods, thus combating the effects of noise corruption or undersampling. Extensive experiments on HSI denoising and HSI compressive sensing demonstrate the effectiveness of the proposed method.

79 citations

Journal ArticleDOI
TL;DR: In the proposed scheme, the DOAs of the uncorrelated targets are first estimated using subspace-based methods, whereas those of the coherent targets are resolved using Bayesian compressive sensing, achieving improved DOA estimation accuracy with a flexible coprime array configuration.

79 citations

Journal ArticleDOI
TL;DR: A novel pruned orthogonal matching pursuit (POMP) algorithm is proposed, in which the pruning operation is embedded into the iterative process of the Orthogonal Matching Pursuit algorithm.
Abstract: The rotation, vibration, or coning motion of a target may produce periodic Doppler modulation, which is called the micro-Doppler phenomenon and is widely used for target classification and recognition. In this paper, the signal of interest is decomposed into a family of parametric basis-signals that are generated by discretizing the micro-Doppler parameter domain and synthesizing the micro-Doppler components with over-complete time–frequency characteristics. In this manner, micro-Doppler parameter estimation is converted into the problem of sparse signal recovery with a parametric dictionary. This problem can be considered as a specific case of dictionary learning, i.e., we need to solve for both the sparse solution and the parameter inside the dictionary matrix. To solve this problem, a novel pruned orthogonal matching pursuit (POMP) algorithm is proposed, in which the pruning operation is embedded into the iterative process of the orthogonal matching pursuit (OMP) algorithm. The effectiveness of the proposed approach is validated by simulations.

79 citations

References
Book
01 Jan 1983

34,729 citations

Book
D.L. Donoho
01 Jan 2004
TL;DR: It is possible to design n = O(N log(m)) nonadaptive measurements allowing reconstruction with accuracy comparable to that attainable with direct knowledge of the N most important coefficients, and a good approximation to those N important coefficients is extracted from the n measurements by solving a linear program - Basis Pursuit in signal processing.
Abstract: Suppose x is an unknown vector in R^m (a digital image or signal); we plan to measure n general linear functionals of x and then reconstruct. If x is known to be compressible by transform coding with a known transform, and we reconstruct via the nonlinear procedure defined here, the number of measurements n can be dramatically smaller than the size m. Thus, certain natural classes of images with m pixels need only n = O(m^(1/4) log^(5/2)(m)) nonadaptive nonpixel samples for faithful recovery, as opposed to the usual m pixel samples. More specifically, suppose x has a sparse representation in some orthonormal basis (e.g., wavelet, Fourier) or tight frame (e.g., curvelet, Gabor), so the coefficients belong to an l_p ball for 0 < p <= 1.

18,609 citations

Journal ArticleDOI
TL;DR: Basis Pursuit (BP) is a principle for decomposing a signal into an "optimal" superposition of dictionary elements, where optimal means having the smallest l1 norm of coefficients among all such decompositions.
Abstract: The time-frequency and time-scale communities have recently developed a large number of overcomplete waveform dictionaries --- stationary wavelets, wavelet packets, cosine packets, chirplets, and warplets, to name a few. Decomposition into overcomplete systems is not unique, and several methods for decomposition have been proposed, including the method of frames (MOF), matching pursuit (MP), and, for special dictionaries, the best orthogonal basis (BOB). Basis Pursuit (BP) is a principle for decomposing a signal into an "optimal" superposition of dictionary elements, where optimal means having the smallest l1 norm of coefficients among all such decompositions. We give examples exhibiting several advantages over MOF, MP, and BOB, including better sparsity and superresolution. BP has interesting relations to ideas in areas as diverse as ill-posed problems, abstract harmonic analysis, total variation denoising, and multiscale edge denoising. BP in highly overcomplete dictionaries leads to large-scale optimization problems. With signals of length 8192 and a wavelet packet dictionary, one gets an equivalent linear program of size 8192 by 212,992. Such problems can be attacked successfully only because of recent advances in linear programming by interior-point methods. We obtain reasonable success with a primal-dual logarithmic barrier method and conjugate-gradient solver.
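The l1-minimization principle described above is equivalent to a linear program, which is why the paper leans on interior-point LP solvers. A small SciPy sketch of that reduction, using the standard x = u - v splitting (illustrative only; the paper's own solver is a primal-dual log-barrier method):

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, y):
    """Solve min ||x||_1 subject to A x = y as a linear program.

    Split x = u - v with u, v >= 0, so ||x||_1 = sum(u) + sum(v) at the optimum.
    """
    n, d = A.shape
    c = np.ones(2 * d)                 # objective: sum(u) + sum(v)
    A_eq = np.hstack([A, -A])          # equality constraint: A(u - v) = y
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
    return res.x[:d] - res.x[d:]

rng = np.random.default_rng(2)
n, d, k = 40, 128, 3
A = rng.standard_normal((n, d)) / np.sqrt(n)
x_true = np.zeros(d)
x_true[rng.choice(d, size=k, replace=False)] = rng.standard_normal(k)
x_bp = basis_pursuit(A, A @ x_true)
```

The doubled variable count (2d columns in the LP) is exactly the blow-up the abstract alludes to: a length-8192 signal over a wavelet packet dictionary already yields an 8192-by-212,992 program.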

9,950 citations

Journal ArticleDOI
TL;DR: The authors introduce an algorithm, called matching pursuit, that decomposes any signal into a linear expansion of waveforms that are selected from a redundant dictionary of functions, chosen in order to best match the signal structures.
Abstract: The authors introduce an algorithm, called matching pursuit, that decomposes any signal into a linear expansion of waveforms that are selected from a redundant dictionary of functions. These waveforms are chosen in order to best match the signal structures. Matching pursuits are general procedures to compute adaptive signal representations. With a dictionary of Gabor functions, a matching pursuit defines an adaptive time-frequency transform. They derive a signal energy distribution in the time-frequency plane, which does not include interference terms, unlike Wigner and Cohen class distributions. A matching pursuit isolates the signal structures that are coherent with respect to a given dictionary. An application to pattern extraction from noisy signals is described. They compare a matching pursuit decomposition with a signal expansion over an optimized wavepacket orthonormal basis, selected with the algorithm of Coifman and Wickerhauser (see IEEE Trans. Informat. Theory, vol. 38, Mar. 1992).
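The greedy selection loop described above can be sketched over a generic dictionary in a few lines of NumPy (Gabor atom generation is omitted; a random unit-norm dictionary stands in). Unlike OMP, plain matching pursuit never re-fits previously chosen coefficients, so an atom may be revisited.

```python
import numpy as np

def matching_pursuit(D, signal, n_atoms):
    """Greedily expand `signal` over the columns (atoms) of dictionary D."""
    residual = signal.astype(float).copy()
    coeffs = {}
    for _ in range(n_atoms):
        corr = D.T @ residual                 # correlate every atom with the residual
        j = int(np.argmax(np.abs(corr)))      # best-matching atom
        c = corr[j] / (D[:, j] @ D[:, j])     # projection coefficient onto that atom
        coeffs[j] = coeffs.get(j, 0.0) + c    # accumulate (atoms may repeat)
        residual = residual - c * D[:, j]     # peel that component off the residual
    return coeffs, residual

rng = np.random.default_rng(3)
D = rng.standard_normal((64, 200))
D /= np.linalg.norm(D, axis=0)                # unit-norm atoms
signal = 2.0 * D[:, 10] - 1.5 * D[:, 50]      # two-atom synthetic signal
coeffs, residual = matching_pursuit(D, signal, n_atoms=20)
```

Each step removes the projection onto the single best atom, so the residual energy decreases monotonically; the "coherent structures" the abstract mentions are exactly the components the dictionary can absorb quickly.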

9,380 citations

Journal ArticleDOI
TL;DR: A publicly available algorithm that requires only the same order of magnitude of computational effort as ordinary least squares applied to the full set of covariates is described.
Abstract: The purpose of model selection algorithms such as All Subsets, Forward Selection and Backward Elimination is to choose a linear model on the basis of the same set of data to which the model will be applied. Typically we have available a large collection of possible covariates from which we hope to select a parsimonious set for the efficient prediction of a response variable. Least Angle Regression (LARS), a new model selection algorithm, is a useful and less greedy version of traditional forward selection methods. Three main properties are derived: (1) A simple modification of the LARS algorithm implements the Lasso, an attractive version of ordinary least squares that constrains the sum of the absolute regression coefficients; the LARS modification calculates all possible Lasso estimates for a given problem, using an order of magnitude less computer time than previous methods. (2) A different LARS modification efficiently implements Forward Stagewise linear regression, another promising new model selection method; this connection explains the similar numerical results previously observed for the Lasso and Stagewise, and helps us understand the properties of both methods, which are seen as constrained versions of the simpler LARS algorithm. (3) A simple approximation for the degrees of freedom of a LARS estimate is available, from which we derive a Cp estimate of prediction error; this allows a principled choice among the range of possible LARS estimates. LARS and its variants are computationally efficient: the paper describes a publicly available algorithm that requires only the same order of magnitude of computational effort as ordinary least squares applied to the full set of covariates.

7,828 citations