
Signal Recovery from Random Measurements Via Orthogonal Matching Pursuit: The Gaussian Case

TL;DR: In this paper, a greedy algorithm called Orthogonal Matching Pursuit (OMP) was proposed to recover a signal with m nonzero entries in dimension d given O(m ln d) random linear measurements of that signal.
Abstract: This report demonstrates theoretically and empirically that a greedy algorithm called Orthogonal Matching Pursuit (OMP) can reliably recover a signal with m nonzero entries in dimension d given O(m ln d) random linear measurements of that signal. This is a massive improvement over previous results, which require O(m²) measurements. The new results for OMP are comparable with recent results for another approach called Basis Pursuit (BP). In some settings, the OMP algorithm is faster and easier to implement, so it is an attractive alternative to BP for signal recovery problems.
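
The OMP iteration itself is short: pick the column most correlated with the current residual, re-fit all selected coefficients by least squares, and repeat m times. A minimal NumPy sketch under illustrative assumptions (the dimensions d and m and the constant in n = O(m ln d) are arbitrary choices for the demo, not values from the paper):

import numpy as np

def omp(Phi, y, m):
    # Greedily recover an m-sparse x from y = Phi @ x.
    residual, support = y.copy(), []
    for _ in range(m):
        # Select the column most correlated with the residual.
        support.append(int(np.argmax(np.abs(Phi.T @ residual))))
        # Re-fit all selected coefficients by least squares (the "orthogonal" step).
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[support] = coef
    return x_hat

rng = np.random.default_rng(0)
d, m = 256, 8
n = int(4 * m * np.log(d))                   # n = O(m ln d) Gaussian measurements (constant 4 is arbitrary)
Phi = rng.standard_normal((n, d)) / np.sqrt(n)
x = np.zeros(d)
x[rng.choice(d, size=m, replace=False)] = rng.standard_normal(m)
y = Phi @ x
print(np.max(np.abs(omp(Phi, y, m) - x)))    # near machine precision when recovery succeeds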


Citations
Proceedings ArticleDOI
13 Jun 2010
TL;DR: It is shown that with hashing, the sparse representation can be recovered with high probability because hashing preserves the restricted isometry property; a theoretical analysis of the recognition rate is also presented.
Abstract: We propose a face recognition approach based on hashing. The approach yields recognition rates comparable with the random ℓ1 approach [18], which is considered the state-of-the-art. But our method is much faster: it is up to 150 times faster than [18] on the YaleB dataset. We show that with hashing, the sparse representation can be recovered with a high probability because hashing preserves the restricted isometry property. Moreover, we present a theoretical analysis of the recognition rate of the proposed hashing approach. Experiments show a very competitive recognition rate and significant speedup compared with the state-of-the-art.
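
The restricted-isometry argument rests on the fact that a random, hashing-style projection approximately preserves distances between sparse vectors. A small NumPy sketch of that geometric fact using a generic sparse sign projection; the matrix construction and dimensions here are illustrative assumptions, not the paper's hashing scheme:

import numpy as np

rng = np.random.default_rng(1)
d, k = 1024, 128                        # ambient and hashed dimensions (illustrative)
# Sparse sign projection: entries in {-1, 0, +1}, mostly zero, scaled to unit variance.
H = rng.choice([-1.0, 0.0, 1.0], size=(k, d), p=[1/6, 2/3, 1/6]) * np.sqrt(3 / k)

u, v = np.zeros(d), np.zeros(d)         # two sparse feature vectors
u[rng.choice(d, 10, replace=False)] = rng.standard_normal(10)
v[rng.choice(d, 10, replace=False)] = rng.standard_normal(10)
# Distances before and after projection are approximately equal.
print(np.linalg.norm(u - v), np.linalg.norm(H @ u - H @ v))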

61 citations

Journal ArticleDOI
TL;DR: Numerical results indicate that the proposed training-free detectors offer improved detection performance over covariance matrix based detectors when the latter are provided with only a moderate amount of training signals.
Abstract: This paper examines moving target detection in distributed multi-input multi-output radar with sensors placed on moving platforms. Unlike previous works which were focused on stationary platforms, we consider explicitly the effects of platform motion, which exacerbate the location-induced clutter non-homogeneity inherent in such systems and thus make the problem significantly more challenging. Two new detectors are proposed. The first is a sparsity based detector which, by exploiting a sparse representation of the clutter in the Doppler domain, adaptively estimates from the test signal the clutter subspace, which is in general distinct for different transmit/receive pairs and, moreover, may spread over the entire Doppler bandwidth. The second is a fully adaptive parametric detector which employs a parametric autoregressive clutter model and offers joint model order selection, clutter estimation/mitigation, and target detection in an integrated and fully adaptive process. Both detectors are developed within the generalized likelihood ratio test (GLRT) framework, obviating the need for training signals that are indispensable for conventional detectors but are difficult to obtain in practice due to clutter non-homogeneity. Numerical results indicate that the proposed training-free detectors offer improved detection performance over covariance matrix based detectors when the latter have a moderate amount of training signals.
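
As a toy illustration of the GLRT framework the detectors are built on (deliberately far simpler than the paper's clutter models): for a known steering vector s with unknown amplitude in white Gaussian noise, maximizing the likelihood over the amplitude yields a matched-filter statistic. All names and sizes below are hypothetical:

import numpy as np

def glrt_stat(y, s, sigma2):
    # GLRT for H1: y = a*s + n vs. H0: y = n, noise CN(0, sigma2*I), amplitude a unknown.
    # Maximizing the H1 likelihood over a gives the matched-filter statistic.
    return np.abs(np.vdot(s, y)) ** 2 / (np.vdot(s, s).real * sigma2)

rng = np.random.default_rng(2)
N, sigma2 = 64, 1.0
s = np.exp(2j * np.pi * 0.1 * np.arange(N))          # toy Doppler steering vector
noise = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) * np.sqrt(sigma2 / 2)
y0, y1 = noise, 0.5 * s + noise                      # target absent / present
print(glrt_stat(y0, s, sigma2), glrt_stat(y1, s, sigma2))  # compare against a threshold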

61 citations

Journal ArticleDOI
M. Stojnic
TL;DR: Sharp lower bounds are determined on the allowable sparsity for any given number of equations (proportional to the length of the unknown vector) in the case of the block-sparse unknown vectors considered in "On the reconstruction of block-sparse signals with an optimal number of measurements."
Abstract: It has been known for a while that l1-norm relaxation can in certain cases solve an under-determined system of linear equations. Recently, E. Candes ("Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information," IEEE Trans. Information Theory, vol. 52, no. 2, pp. 489-509, Feb. 2006) and D. Donoho ("High-dimensional centrally symmetric polytopes with neighborliness proportional to dimension," Disc. Comput. Geometry, vol. 35, no. 4, pp. 617-652, 2006) proved (in a large dimensional and statistical context) that if the number of equations (measurements in the compressed sensing terminology) in the system is proportional to the length of the unknown vector then there is a sparsity (number of nonzero elements of the unknown vector) also proportional to the length of the unknown vector such that l1-norm relaxation succeeds in solving the system. In this paper, in a large dimensional and statistical context, we determine sharp lower bounds on the values of allowable sparsity for any given number (proportional to the length of the unknown vector) of equations for the case of the so-called block-sparse unknown vectors considered in "On the reconstruction of block-sparse signals with an optimal number of measurements" (M. Stojnic et al., IEEE Trans. Signal Processing, submitted for publication).
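
The standard convex formulation behind block-sparse recovery replaces the l1 norm with a mixed l2/l1 norm, i.e. the sum of per-block l2 norms. A small CVXPY sketch of that generic relaxation under illustrative dimensions (this shows the optimization being studied, not the paper's bounds):

import cvxpy as cp
import numpy as np

rng = np.random.default_rng(3)
n_blocks, blk = 20, 4                        # 80-dim unknown, blocks of length 4
d, n = n_blocks * blk, 40                    # 40 linear measurements
A = rng.standard_normal((n, d))
x0 = np.zeros(d)
x0[:2 * blk] = rng.standard_normal(2 * blk)  # two active blocks
y = A @ x0

x = cp.Variable(d)
# Mixed l2/l1 norm: sum of the l2 norms of the blocks.
block_norms = sum(cp.norm(x[i * blk:(i + 1) * blk]) for i in range(n_blocks))
cp.Problem(cp.Minimize(block_norms), [A @ x == y]).solve()
print(np.max(np.abs(x.value - x0)))          # small when recovery succeeds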

61 citations

Journal ArticleDOI
TL;DR: An enhanced ensemble based ELM and SRC algorithm (En-SRC) that incorporates multiple ensembles to enhance the reliability of the classifier and achieves better classification performance with lower computational complexity than the ELM-SRC approach.
Abstract: Extreme learning machine (ELM) combining with sparse representation classification (ELM-SRC) has been developed for image classification recently. However, employing a single ELM network with random hidden parameters may lead to unstable generalization and data partition performance in ELM-SRC. To alleviate this deficiency, we propose an enhanced ensemble based ELM and SRC algorithm (En-SRC) in this paper. Rather than using the output of a single ELM to decide the threshold for data partition, En-SRC incorporates multiple ensembles to enhance the reliability of the classifier. Different from ELM-SRC, a theoretical analysis on the data partition threshold selection of En-SRC is given. Extension to the ensemble based regularized ELM with SRC (EnR-SRC) is also presented in the paper. Experiments on a number of benchmark classification databases show that the proposed methods achieve better classification performance with lower computational complexity than the ELM-SRC approach.
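
The ELM building block is simple: a fixed random hidden layer followed by a least-squares output layer, and the ensemble averages several independently initialized networks. A minimal NumPy sketch of that idea on toy binary data; the sizes and the averaging rule are illustrative assumptions, not the paper's full En-SRC pipeline:

import numpy as np

def train_elm(X, T, hidden, rng):
    # Random hidden weights; only the output weights are learned (least squares).
    W = rng.standard_normal((X.shape[1], hidden))
    H = np.tanh(X @ W)
    beta, *_ = np.linalg.lstsq(H, T, rcond=None)
    return W, beta

def elm_predict(X, W, beta):
    return np.tanh(X @ W) @ beta

rng = np.random.default_rng(4)
X = rng.standard_normal((200, 10))
T = (X[:, 0] + 0.1 * rng.standard_normal(200) > 0).astype(float)
# Ensemble: average the outputs of several independently initialized ELMs.
models = [train_elm(X, T, hidden=50, rng=rng) for _ in range(5)]
scores = np.mean([elm_predict(X, W, b) for W, b in models], axis=0)
print(np.mean((scores > 0.5) == (T > 0.5)))   # training accuracy of the ensemble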

61 citations

Journal ArticleDOI
TL;DR: The proposed CS system generally yields much better performance than systems designed using previous methods, in terms of peak signal-to-noise ratio, when applied to image compression.
Abstract: This paper deals with alternating optimization of sensing matrix and sparsifying dictionary for compressed sensing systems. Under the same framework proposed by J. M. Duarte-Carvajalino and G. Sapiro, a novel algorithm for optimal sparsifying dictionary design is derived with an optimized sensing matrix embedded. A closed-form solution to the optimal dictionary design problem is obtained. A new measure is proposed for optimizing the sensing matrix and an algorithm is developed for solving the corresponding optimization problem. Experiments are carried out with synthetic data and real images, which demonstrate promising performance of the proposed algorithms and superiority of the CS system designed with the optimized sensing matrix and dictionary over existing ones in terms of signal reconstruction accuracy. In particular, the proposed CS system generally yields much better performance than those designed using previous methods, in terms of peak signal-to-noise ratio, for the application to image compression.
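
A standard measure in sensing-matrix/dictionary co-design is the mutual coherence of the equivalent dictionary Phi D, i.e. how far its normalized Gram matrix is from the identity. A small NumPy sketch of that generic measure; the sizes are illustrative and this is not the paper's exact cost function:

import numpy as np

def mutual_coherence(Phi, D):
    # Largest off-diagonal entry of the normalized Gram matrix of Phi @ D.
    E = Phi @ D
    E = E / np.linalg.norm(E, axis=0, keepdims=True)
    G = np.abs(E.T @ E)
    np.fill_diagonal(G, 0.0)
    return G.max()

rng = np.random.default_rng(5)
D = rng.standard_normal((64, 128))     # sparsifying dictionary (illustrative)
Phi = rng.standard_normal((20, 64))    # sensing matrix (illustrative)
print(mutual_coherence(Phi, D))        # smaller coherence favors CS recovery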

61 citations

References
Book
01 Jan 1983

34,729 citations

Book
D.L. Donoho
01 Jan 2004
TL;DR: It is possible to design n = O(N log(m)) nonadaptive measurements allowing reconstruction with accuracy comparable to that attainable with direct knowledge of the N most important coefficients; a good approximation to those N important coefficients is extracted from the n measurements by solving a linear program (Basis Pursuit in signal processing).
Abstract: Suppose x is an unknown vector in R^m (a digital image or signal); we plan to measure n general linear functionals of x and then reconstruct. If x is known to be compressible by transform coding with a known transform, and we reconstruct via the nonlinear procedure defined here, the number of measurements n can be dramatically smaller than the size m. Thus, certain natural classes of images with m pixels need only n = O(m^(1/4) log^(5/2)(m)) nonadaptive nonpixel samples for faithful recovery, as opposed to the usual m pixel samples. More specifically, suppose x has a sparse representation in some orthonormal basis (e.g., wavelet, Fourier) or tight frame (e.g., curvelet, Gabor), so the coefficients belong to an l^p ball for 0 < p <= 1.
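
Compressibility here means the best N-term approximation error in the basis decays rapidly with N. A small SciPy sketch using the DCT as the orthonormal basis (the signal and sizes are illustrative assumptions):

import numpy as np
from scipy.fft import dct, idct

rng = np.random.default_rng(6)
m = 1024
t = np.linspace(0, 1, m)
x = np.sin(6 * np.pi * t) + 0.5 * np.sin(20 * np.pi * t)   # smooth, hence compressible
c = dct(x, norm="ortho")
for N in (10, 40, 160):
    keep = np.zeros_like(c)
    idx = np.argsort(np.abs(c))[-N:]               # indices of the N largest coefficients
    keep[idx] = c[idx]
    err = np.linalg.norm(x - idct(keep, norm="ortho")) / np.linalg.norm(x)
    print(N, err)                                   # relative error drops rapidly with N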

18,609 citations

Journal ArticleDOI
TL;DR: Basis Pursuit (BP) is a principle for decomposing a signal into an "optimal" superposition of dictionary elements, where optimal means having the smallest l1 norm of coefficients among all such decompositions.
Abstract: The time-frequency and time-scale communities have recently developed a large number of overcomplete waveform dictionaries --- stationary wavelets, wavelet packets, cosine packets, chirplets, and warplets, to name a few. Decomposition into overcomplete systems is not unique, and several methods for decomposition have been proposed, including the method of frames (MOF), matching pursuit (MP), and, for special dictionaries, the best orthogonal basis (BOB). Basis Pursuit (BP) is a principle for decomposing a signal into an "optimal" superposition of dictionary elements, where optimal means having the smallest l1 norm of coefficients among all such decompositions. We give examples exhibiting several advantages over MOF, MP, and BOB, including better sparsity and superresolution. BP has interesting relations to ideas in areas as diverse as ill-posed problems, abstract harmonic analysis, total variation denoising, and multiscale edge denoising. BP in highly overcomplete dictionaries leads to large-scale optimization problems. With signals of length 8192 and a wavelet packet dictionary, one gets an equivalent linear program of size 8192 by 212,992. Such problems can be attacked successfully only because of recent advances in linear programming by interior-point methods. We obtain reasonable success with a primal-dual logarithmic barrier method and conjugate-gradient solver.
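
The l1 problem min ||c||_1 subject to Phi c = y becomes a linear program after the standard split c = p - q with p, q >= 0, which is why a length-8192 signal with a wavelet packet dictionary produces an 8192-by-212,992 LP. A small sketch of that reduction with SciPy's LP solver, at illustrative (much smaller) sizes:

import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(7)
n, d = 30, 100
Phi = rng.standard_normal((n, d))
c0 = np.zeros(d)
c0[rng.choice(d, 4, replace=False)] = rng.standard_normal(4)
y = Phi @ c0

# min 1'(p + q)  s.t.  Phi p - Phi q = y,  p, q >= 0;  then c = p - q.
res = linprog(np.ones(2 * d), A_eq=np.hstack([Phi, -Phi]), b_eq=y,
              bounds=(0, None), method="highs")
c_hat = res.x[:d] - res.x[d:]
print(np.max(np.abs(c_hat - c0)))   # small when BP recovers c0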

9,950 citations

Journal ArticleDOI
TL;DR: The authors introduce an algorithm, called matching pursuit, that decomposes any signal into a linear expansion of waveforms that are selected from a redundant dictionary of functions, chosen in order to best match the signal structures.
Abstract: The authors introduce an algorithm, called matching pursuit, that decomposes any signal into a linear expansion of waveforms that are selected from a redundant dictionary of functions. These waveforms are chosen in order to best match the signal structures. Matching pursuits are general procedures to compute adaptive signal representations. With a dictionary of Gabor functions a matching pursuit defines an adaptive time-frequency transform. They derive a signal energy distribution in the time-frequency plane, which does not include interference terms, unlike Wigner and Cohen class distributions. A matching pursuit isolates the signal structures that are coherent with respect to a given dictionary. An application to pattern extraction from noisy signals is described. They compare a matching pursuit decomposition with a signal expansion over an optimized wavepacket orthonormal basis, selected with the algorithm of Coifman and Wickerhauser (see IEEE Trans. Inform. Theory, vol. 38, Mar. 1992).
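
Unlike OMP, plain matching pursuit never re-solves a least-squares problem: each step subtracts one atom's contribution from the residual and moves on. A minimal NumPy sketch over a generic unit-norm dictionary (illustrative; a random dictionary rather than the Gabor dictionary discussed above):

import numpy as np

def matching_pursuit(D, y, steps):
    # D has unit-norm columns (atoms); greedily peel off one atom per step.
    residual, coeffs = y.copy(), np.zeros(D.shape[1])
    for _ in range(steps):
        corr = D.T @ residual
        k = int(np.argmax(np.abs(corr)))
        coeffs[k] += corr[k]              # atom's contribution (atoms are unit norm)
        residual -= corr[k] * D[:, k]
        # Unlike OMP, previously chosen coefficients are never re-optimized.
    return coeffs, residual

rng = np.random.default_rng(8)
D = rng.standard_normal((64, 256))
D /= np.linalg.norm(D, axis=0)            # redundant dictionary, unit-norm atoms
y = 2.0 * D[:, 3] - 1.0 * D[:, 100]
coeffs, r = matching_pursuit(D, y, steps=10)
print(np.linalg.norm(r))                  # residual energy shrinks with each step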

9,380 citations

Journal ArticleDOI
TL;DR: A publicly available algorithm that requires only the same order of magnitude of computational effort as ordinary least squares applied to the full set of covariates is described.
Abstract: The purpose of model selection algorithms such as All Subsets, Forward Selection and Backward Elimination is to choose a linear model on the basis of the same set of data to which the model will be applied. Typically we have available a large collection of possible covariates from which we hope to select a parsimonious set for the efficient prediction of a response variable. Least Angle Regression (LARS), a new model selection algorithm, is a useful and less greedy version of traditional forward selection methods. Three main properties are derived: (1) A simple modification of the LARS algorithm implements the Lasso, an attractive version of ordinary least squares that constrains the sum of the absolute regression coefficients; the LARS modification calculates all possible Lasso estimates for a given problem, using an order of magnitude less computer time than previous methods. (2) A different LARS modification efficiently implements Forward Stagewise linear regression, another promising new model selection method; this connection explains the similar numerical results previously observed for the Lasso and Stagewise, and helps us understand the properties of both methods, which are seen as constrained versions of the simpler LARS algorithm. (3) A simple approximation for the degrees of freedom of a LARS estimate is available, from which we derive a Cp estimate of prediction error; this allows a principled choice among the range of possible LARS estimates. LARS and its variants are computationally efficient: the paper describes a publicly available algorithm that requires only the same order of magnitude of computational effort as ordinary least squares applied to the full set of covariates.
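
scikit-learn provides an implementation of the LARS/Lasso path described here; a minimal sketch computing the full Lasso path with lars_path on toy data (the data and true model are illustrative):

import numpy as np
from sklearn.linear_model import lars_path

rng = np.random.default_rng(9)
X = rng.standard_normal((100, 20))
beta = np.zeros(20)
beta[:3] = [2.0, -1.5, 1.0]                      # a parsimonious true model
y = X @ beta + 0.1 * rng.standard_normal(100)

# method="lasso" returns all Lasso solutions along the regularization path.
alphas, active, coefs = lars_path(X, y, method="lasso")
print(active[:3])         # order in which covariates enter the model
print(coefs[:, -1][:5])   # coefficients at the least-squares end of the path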

7,828 citations