
Signal Recovery from Random Measurements Via Orthogonal Matching Pursuit: The Gaussian Case

TL;DR: This paper shows, theoretically and empirically, that the greedy Orthogonal Matching Pursuit (OMP) algorithm can reliably recover a signal with m nonzero entries in dimension d given O(m ln d) random linear measurements of that signal.
Abstract: This report demonstrates theoretically and empirically that a greedy algorithm called Orthogonal Matching Pursuit (OMP) can reliably recover a signal with m nonzero entries in dimension d given O(m ln d) random linear measurements of that signal. This is a massive improvement over previous results, which require O(m^2) measurements. The new results for OMP are comparable with recent results for another approach called Basis Pursuit (BP). In some settings, the OMP algorithm is faster and easier to implement, so it is an attractive alternative to BP for signal recovery problems.
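To make the greedy procedure concrete, here is a minimal NumPy sketch of OMP as the abstract describes it: greedy column selection followed by a least-squares re-fit over the selected support. The problem sizes (d = 256, m = 8, n on the order of m ln d) are illustrative choices, not values taken from the paper.

```python
import numpy as np

def omp(Phi, y, m):
    """Recover an m-sparse x from y = Phi @ x by Orthogonal Matching Pursuit."""
    d = Phi.shape[1]
    residual = y.copy()
    support = []
    for _ in range(m):
        # Greedy selection: the column most correlated with the residual.
        j = int(np.argmax(np.abs(Phi.T @ residual)))
        support.append(j)
        # Orthogonalization: least-squares re-fit over the selected columns.
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat = np.zeros(d)
    x_hat[support] = coef
    return x_hat

# Illustrative sizes: d = 256, m = 8, n on the order of m ln d.
rng = np.random.default_rng(0)
d, m = 256, 8
n = 4 * int(m * np.log(d))
Phi = rng.standard_normal((n, d)) / np.sqrt(n)
x = np.zeros(d)
x[rng.choice(d, size=m, replace=False)] = rng.standard_normal(m)
y = Phi @ x
print(np.allclose(omp(Phi, y, m), x, atol=1e-8))
```

With Gaussian measurement matrices of this size, the final check prints True with high probability.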


Citations
Journal ArticleDOI
TL;DR: A modification of standard compressive sensing algorithms for sparse signal reconstruction in the presence of impulse noise is proposed. It is based on L-estimate statistics, which provide appropriate initial conditions that lead to improved performance and efficient convergence of the reconstruction algorithms.
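For context, an L-estimate is a weighted combination of order statistics; a common instance is the alpha-trimmed mean sketched below. This is the generic form of such an estimator, not necessarily the exact variant the cited paper uses.

```python
import numpy as np

def l_estimate(samples, alpha=0.1):
    """Alpha-trimmed L-estimate: sort the samples, discard the alpha
    fraction of extreme values at each end, and average the rest.
    The trimming makes the estimate robust to impulse noise."""
    s = np.sort(np.asarray(samples))
    k = int(alpha * len(s))
    return s[k:len(s) - k].mean() if len(s) > 2 * k else s.mean()
```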

49 citations

Journal ArticleDOI
TL;DR: The sparsity of the fluorescent sources is taken as a priori information and promoted by incorporating L1 regularization, and a reconstruction algorithm based on stagewise orthogonal matching pursuit is proposed that treats the FMT problem as a basis pursuit problem.
Abstract: Fluorescence molecular tomography (FMT) is a promising technique for in vivo small animal imaging. In this paper, the sparsity of the fluorescent sources is taken as a priori information and is promoted by incorporating L1 regularization. A reconstruction algorithm based on stagewise orthogonal matching pursuit is then proposed, which treats the FMT problem as a basis pursuit problem. To evaluate this method, we compare it to the iterated-shrinkage-based algorithm with L1 regularization. Numerical simulations and physical experiments show that the proposed method obtains comparable or even slightly better results. More importantly, the proposed method was at least two orders of magnitude faster in these experiments, which makes it a practical reconstruction algorithm.
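The stagewise selection that distinguishes StOMP from plain OMP can be sketched as follows: instead of admitting one column per iteration, every column whose correlation with the residual exceeds a threshold is admitted at each stage. This is a generic StOMP sketch under an assumed threshold factor t, not the authors' FMT-specific implementation.

```python
import numpy as np

def stomp(A, y, n_stages=10, t=2.0):
    """Stagewise OMP: each stage admits all columns whose correlation with
    the residual exceeds a threshold scaled to the residual norm, then
    re-fits by least squares over the enlarged support."""
    n, d = A.shape
    support = np.zeros(d, dtype=bool)
    x = np.zeros(d)
    residual = y.copy()
    for _ in range(n_stages):
        corr = A.T @ residual
        thresh = t * np.linalg.norm(residual) / np.sqrt(n)
        new = np.abs(corr) > thresh
        if not new.any():
            break                                 # no new columns: converged
        support |= new
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        x = np.zeros(d)
        x[support] = coef
        residual = y - A @ x
    return x
```

Admitting many columns per stage is what yields the order-of-magnitude speedups the abstract reports.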

48 citations

Proceedings ArticleDOI
27 Jun 2016
TL;DR: A discriminative sparse coding method is proposed that jointly learns a dictionary for sparse coding and an ensemble classifier for discrimination; it outperformed several recent methods on several recognition tasks.
Abstract: Discriminative sparse coding has emerged as a promising technique in image analysis and recognition, coupling the training of a classifier with the learning of a dictionary to improve the discriminability of sparse codes. Many existing approaches consider only a single linear classifier, whose discriminative power is rather weak. In this paper, we propose a discriminative sparse coding method that jointly learns a dictionary for sparse coding and an ensemble classifier for discrimination. The ensemble classifier is composed of a set of linear predictors and is constructed via both subsampling of the data and subspace projection of the sparse codes. The advantages of the proposed method over existing ones are manifold: better discriminability of sparse codes, weaker dependence on peculiarities of the training data, and greater expressive power of the classifier. These advantages are borne out in the experiments, where our method outperformed several recent methods on several recognition tasks.
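The two ingredients the abstract describes (sparse codes from a learned dictionary, plus an ensemble of linear predictors built by subsampling the data and projecting the codes into subspaces) can be sketched as below. Note that this toy version trains the dictionary and the ensemble sequentially rather than jointly, and all sizes and the use of scikit-learn components are assumptions for illustration.

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning
from sklearn.linear_model import RidgeClassifier

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 64))      # toy feature vectors
labels = rng.integers(0, 2, 200)        # toy binary labels

# Sparse coding stage: learn a dictionary and compute sparse codes.
dl = DictionaryLearning(n_components=32, alpha=1.0, random_state=0)
codes = dl.fit_transform(X)

# Ensemble stage: linear predictors over subsampled data and projected codes.
ensemble = []
for _ in range(5):
    idx = rng.choice(len(codes), len(codes) // 2, replace=False)  # data subsample
    P = rng.standard_normal((codes.shape[1], 16))                 # subspace projection
    clf = RidgeClassifier().fit(codes[idx] @ P, labels[idx])
    ensemble.append((P, clf))

# Classification by majority vote over the ensemble.
votes = np.stack([clf.predict(codes @ P) for P, clf in ensemble])
pred = (votes.mean(axis=0) > 0.5).astype(int)
```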

48 citations

Journal ArticleDOI
TL;DR: A compressibility-based clustering algorithm (CBCA) for hierarchical compressive data gathering (HCDG) is presented that requires significantly less data transmission than the random clustering method, with only a small loss in recovery accuracy.
Abstract: Data gathering in wireless sensor networks (WSNs) is one of the major sources of power consumption. Compression is often employed to reduce the number of packet transmissions required for data gathering. However, conventional data compression techniques can introduce heavy in-node computation, and thus the use of compressive sensing (CS) for WSN data gathering has recently attracted growing attention. Among existing CS-based data gathering approaches, hierarchical compressive data gathering (HCDG) methods currently offer the most transmission-efficient architectures. When employing HCDG, the clustering algorithm affects the number of data transmissions. Most existing HCDG works use the random clustering (RC) method, which can produce a significant number of transmissions in some cases. In this paper, we present a compressibility-based clustering algorithm (CBCA) for HCDG. In CBCA, the network topology is first converted into a logical chain, similar to the idea proposed in PEGASIS [1], and the spatial correlation of the cluster nodes' readings is then exploited for CS. We show that CBCA requires significantly less data transmission than the RC method, with only a small loss in recovery accuracy. We also identify optimal parameters of CBCA via mathematical analysis and validate them by simulation. Finally, we use water-level data collected from a real-world flood inundation monitoring system to drive our simulation experiments and show the effectiveness of CBCA.
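The transmission saving in compressive data gathering comes from each node forwarding a fixed-size m-vector rather than all upstream readings; along a PEGASIS-style chain the sink ends up with y = Φx. A minimal sketch of that aggregation step (sizes illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
d, m = 100, 20                        # d chain nodes, m << d measurements
x = rng.standard_normal(d)            # node readings
Phi = rng.standard_normal((m, d))     # random matrix shared via a common seed

# Each node along the chain adds its weighted reading to the running m-vector,
# so every hop transmits m values regardless of how many nodes are upstream.
running = np.zeros(m)
for i in range(d):
    running = running + Phi[:, i] * x[i]

assert np.allclose(running, Phi @ x)  # the sink receives y = Phi @ x
# The sink then recovers x with any CS decoder (e.g., OMP).
```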

48 citations

Journal ArticleDOI
TL;DR: The proposed method uses an iterative algorithm that cycles through steps of target reconstruction and observation-position error estimation and compensation; it estimates the observation position errors accurately and significantly improves the reconstruction quality of the target images.
Abstract: Compressed sensing (CS) based radar imaging requires a mathematical model of the observation process. Inaccuracies in the observation model may cause defocusing in the reconstructed images. In practice, the observation positions are usually not known perfectly, and imperfect knowledge of the observation positions is a major source of model error in imaging. In this paper, a method is proposed to compensate for observation position errors in CS-based radar imaging. Instead of treating the observation-position-induced model errors as phase errors in the data, the proposed method determines the observation position errors as part of the imaging process. It uses an iterative algorithm that cycles through steps of target reconstruction and observation position error estimation and compensation. The proposed method estimates the observation position errors accurately, and the reconstruction quality of the target images is improved significantly. Simulation results and experimental results from rail-mounted radar and airborne synthetic aperture radar are presented to show the effectiveness of the proposed method.
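The alternation the abstract describes can be sketched as follows. For brevity this stand-in models the error as a per-measurement phase, which is exactly the simplification the paper improves upon by estimating the positions themselves; the sparse solver and iteration count are likewise illustrative.

```python
import numpy as np

def omp_c(A, y, m):
    """Complex-valued OMP used here as the sparse reconstruction step."""
    support, r = [], y.copy()
    for _ in range(m):
        support.append(int(np.argmax(np.abs(A.conj().T @ r))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        r = y - A[:, support] @ coef
    x = np.zeros(A.shape[1], dtype=complex)
    x[support] = coef
    return x

def autofocus_cs(A, y, sparsity, n_iters=10):
    """Cycle between target reconstruction and model-error estimation."""
    phi = np.zeros(A.shape[0])                         # current error estimate
    for _ in range(n_iters):
        x = omp_c(A, y * np.exp(-1j * phi), sparsity)  # reconstruction step
        phi = np.angle(y * np.conj(A @ x))             # error re-estimation step
    return x, phi
```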

48 citations

References

Book
D.L. Donoho
01 Jan 2004
TL;DR: It is possible to design n = O(N log(m)) nonadaptive measurements allowing reconstruction with accuracy comparable to that attainable with direct knowledge of the N most important coefficients; a good approximation to those N important coefficients is extracted from the n measurements by solving a linear program, known as Basis Pursuit in signal processing.
Abstract: Suppose x is an unknown vector in R^m (a digital image or signal); we plan to measure n general linear functionals of x and then reconstruct. If x is known to be compressible by transform coding with a known transform, and we reconstruct via the nonlinear procedure defined here, the number of measurements n can be dramatically smaller than the size m. Thus, certain natural classes of images with m pixels need only n = O(m^(1/4) log^(5/2)(m)) nonadaptive nonpixel samples for faithful recovery, as opposed to the usual m pixel samples. More specifically, suppose x has a sparse representation in some orthonormal basis (e.g., wavelet, Fourier) or tight frame (e.g., curvelet, Gabor), so the coefficients belong to an l^p ball for 0 < p <= 1.
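In the abstract's notation, with Phi the n-by-m measurement matrix and theta the coefficient vector of x in the sparsifying basis Psi (symbols supplied here for illustration), the linear program referred to is the basis pursuit problem

\[
\hat{\theta} \;=\; \arg\min_{\theta}\ \|\theta\|_{1}
\quad \text{subject to} \quad \Phi \Psi \theta \;=\; \Phi x .
\]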

18,609 citations

Journal ArticleDOI
TL;DR: Basis Pursuit (BP) is a principle for decomposing a signal into an "optimal" superposition of dictionary elements, where optimal means having the smallest l1 norm of coefficients among all such decompositions.
Abstract: The time-frequency and time-scale communities have recently developed a large number of overcomplete waveform dictionaries --- stationary wavelets, wavelet packets, cosine packets, chirplets, and warplets, to name a few. Decomposition into overcomplete systems is not unique, and several methods for decomposition have been proposed, including the method of frames (MOF), matching pursuit (MP), and, for special dictionaries, the best orthogonal basis (BOB). Basis Pursuit (BP) is a principle for decomposing a signal into an "optimal" superposition of dictionary elements, where optimal means having the smallest l1 norm of coefficients among all such decompositions. We give examples exhibiting several advantages over MOF, MP, and BOB, including better sparsity and superresolution. BP has interesting relations to ideas in areas as diverse as ill-posed problems, abstract harmonic analysis, total variation denoising, and multiscale edge denoising. BP in highly overcomplete dictionaries leads to large-scale optimization problems. With signals of length 8192 and a wavelet packet dictionary, one gets an equivalent linear program of size 8192 by 212,992. Such problems can be attacked successfully only because of recent advances in linear programming by interior-point methods. We obtain reasonable success with a primal-dual logarithmic barrier method and conjugate-gradient solver.
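The reduction to a linear program mentioned in the abstract follows from the standard variable split c = u - v with u, v >= 0. A compact sketch using SciPy's LP solver; the function name and problem sizes are illustrative:

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, y):
    """Solve min ||c||_1 subject to A c = y as a linear program,
    using the split c = u - v with u, v >= 0."""
    n, d = A.shape
    cost = np.ones(2 * d)                  # minimize sum(u) + sum(v) = ||c||_1
    A_eq = np.hstack([A, -A])              # A u - A v = y
    res = linprog(cost, A_eq=A_eq, b_eq=y,
                  bounds=[(0, None)] * (2 * d), method="highs")
    return res.x[:d] - res.x[d:]

# Tiny demo: recover a 3-sparse vector from 20 random measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 40))
c_true = np.zeros(40)
c_true[[3, 17, 29]] = [1.0, -2.0, 0.5]
print(np.allclose(basis_pursuit(A, A @ c_true), c_true, atol=1e-6))
```

At this toy scale the LP is trivial; the 8192-by-212,992 problems the abstract mentions are exactly why interior-point methods were needed.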

9,950 citations

Journal ArticleDOI
TL;DR: The authors introduce an algorithm, called matching pursuit, that decomposes any signal into a linear expansion of waveforms that are selected from a redundant dictionary of functions, chosen in order to best match the signal structures.
Abstract: The authors introduce an algorithm, called matching pursuit, that decomposes any signal into a linear expansion of waveforms selected from a redundant dictionary of functions. These waveforms are chosen in order to best match the signal structures. Matching pursuits are general procedures to compute adaptive signal representations. With a dictionary of Gabor functions, a matching pursuit defines an adaptive time-frequency transform. They derive a signal energy distribution in the time-frequency plane which does not include interference terms, unlike Wigner and Cohen class distributions. A matching pursuit isolates the signal structures that are coherent with respect to a given dictionary. An application to pattern extraction from noisy signals is described. They compare a matching pursuit decomposition with a signal expansion over an optimized wavepacket orthonormal basis, selected with the algorithm of Coifman and Wickerhauser (see IEEE Trans. Informat. Theory, vol. 38, Mar. 1992).
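Plain matching pursuit differs from OMP in that coefficients are accumulated greedily and never re-fit by least squares. A minimal sketch, assuming a dictionary D with unit-norm columns:

```python
import numpy as np

def matching_pursuit(D, x, n_steps):
    """Greedy MP: repeatedly subtract the best-matching dictionary atom.
    D must have unit-norm columns; returns expansion coefficients."""
    r = x.copy()
    coeffs = np.zeros(D.shape[1])
    for _ in range(n_steps):
        inner = D.T @ r                 # correlations with all atoms
        j = int(np.argmax(np.abs(inner)))
        coeffs[j] += inner[j]           # accumulate (atoms may repeat)
        r = r - inner[j] * D[:, j]      # update the residual
    return coeffs
```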

9,380 citations

Journal ArticleDOI
TL;DR: A publicly available algorithm that requires only the same order of magnitude of computational effort as ordinary least squares applied to the full set of covariates is described.
Abstract: The purpose of model selection algorithms such as All Subsets, Forward Selection and Backward Elimination is to choose a linear model on the basis of the same set of data to which the model will be applied. Typically we have available a large collection of possible covariates from which we hope to select a parsimonious set for the efficient prediction of a response variable. Least Angle Regression (LARS), a new model selection algorithm, is a useful and less greedy version of traditional forward selection methods. Three main properties are derived: (1) A simple modification of the LARS algorithm implements the Lasso, an attractive version of ordinary least squares that constrains the sum of the absolute regression coefficients; the LARS modification calculates all possible Lasso estimates for a given problem, using an order of magnitude less computer time than previous methods. (2) A different LARS modification efficiently implements Forward Stagewise linear regression, another promising new model selection method; this connection explains the similar numerical results previously observed for the Lasso and Stagewise, and helps us understand the properties of both methods, which are seen as constrained versions of the simpler LARS algorithm. (3) A simple approximation for the degrees of freedom of a LARS estimate is available, from which we derive a Cp estimate of prediction error; this allows a principled choice among the range of possible LARS estimates. LARS and its variants are computationally efficient: the paper describes a publicly available algorithm that requires only the same order of magnitude of computational effort as ordinary least squares applied to the full set of covariates.
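A publicly available implementation of this kind of path algorithm is, for example, scikit-learn's lars_path; the snippet below (with toy data) traces the full path, and method="lasso" applies the modification described in the abstract that yields all Lasso estimates:

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import lars_path

X, y = make_regression(n_samples=100, n_features=20, n_informative=5,
                       noise=1.0, random_state=0)
alphas, active, coefs = lars_path(X, y, method="lasso")
print(coefs.shape)  # (n_features, n_steps): coefficients along the path
```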

7,828 citations