
Signal Recovery from Random Measurements Via Orthogonal Matching Pursuit: The Gaussian Case

TL;DR: In this paper, a greedy algorithm called Orthogonal Matching Pursuit (OMP) is shown to reliably recover a signal with m nonzero entries in dimension d given O(m ln d) random linear measurements of that signal.
Abstract: This report demonstrates theoretically and empirically that a greedy algorithm called Orthogonal Matching Pursuit (OMP) can reliably recover a signal with m nonzero entries in dimension d given O(m ln d) random linear measurements of that signal. This is a massive improvement over previous results, which require O(m²) measurements. The new results for OMP are comparable with recent results for another approach called Basis Pursuit (BP). In some settings, the OMP algorithm is faster and easier to implement, so it is an attractive alternative to BP for signal recovery problems.
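As a rough illustration of the algorithm the report analyzes, here is a minimal NumPy sketch of the OMP iteration: pick the column most correlated with the residual, re-fit the selected coefficients by least squares, repeat. The problem sizes, the measurement count, and the stopping rule (a fixed number of iterations equal to the sparsity) are illustrative choices, not taken from the report.

import numpy as np

def omp(Phi, y, m):
    # Greedy recovery of an m-sparse x from y = Phi @ x: at each step pick the
    # column most correlated with the residual, then re-fit all selected
    # coefficients by least squares (the "orthogonal" part of OMP).
    n, d = Phi.shape
    residual = y.copy()
    support = []
    x_hat = np.zeros(d)
    for _ in range(m):
        j = int(np.argmax(np.abs(Phi.T @ residual)))
        support.append(j)
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat[support] = coef
    return x_hat

# Toy example: d = 256, m = 8 nonzeros, n on the order of m ln d measurements.
rng = np.random.default_rng(0)
d, m = 256, 8
n = int(4 * m * np.log(d))
x = np.zeros(d)
x[rng.choice(d, m, replace=False)] = rng.standard_normal(m)
Phi = rng.standard_normal((n, d)) / np.sqrt(n)
print(np.allclose(omp(Phi, Phi @ x, m), x, atol=1e-6))   # exact recovery expected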


Citations
Journal ArticleDOI
TL;DR: A novel approach to motion parameter estimation with low pulse repetition frequency (PRF) sampling, based on compressed sensing (CS) theory, is introduced, and the Radon transform is adopted to obtain unambiguous across-track velocities and range positions.
Abstract: A novel approach to motion parameter estimation with low pulse repetition frequency (PRF) sampling, based on compressed sensing (CS) theory, is introduced. When the PRF is lower than the Doppler spectrum bandwidth, moving targets suffer both Doppler centroid frequency ambiguity and Doppler spectrum ambiguity, and traditional parameter estimation in the Doppler domain fails. The key idea of this letter is to convert motion parameter estimation in a synthetic aperture radar system with low-PRF sampling into an optimization problem based on CS theory. Because moving targets in the scene can be regarded as sparse signals after clutter cancellation, an optimization algorithm based on CS theory is proposed to reconstruct the sparse signals and simultaneously estimate the along-track velocities and azimuth positions of moving targets. Since range cell migration of moving targets is not subject to PRF limitations, the Radon transform is adopted to obtain unambiguous across-track velocities and range positions. Results on simulated and real data show the effectiveness of the method.
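The sparse-reconstruction step described above can be pictured with a toy experiment: slow-time samples taken at a low, non-uniform rate still determine a sparse Doppler spectrum. The sketch below uses iterative hard thresholding as a generic sparse solver standing in for the letter's CS optimization; the partial-DFT model, sample counts, and target positions are invented for illustration.

import numpy as np

# Two moving targets occupy two Doppler bins; slow-time samples are taken at a
# low, non-uniform rate (a partial DFT measurement model).
rng = np.random.default_rng(1)
N = 256
true = np.zeros(N, dtype=complex)
true[[40, 45]] = [1.0, 0.8]
t = np.sort(rng.choice(N, 48, replace=False))
F = np.exp(2j * np.pi * np.outer(t, np.arange(N)) / N) / np.sqrt(N)
y = F @ true

# Iterative hard thresholding: gradient step, then keep the 2 largest bins.
x = np.zeros(N, dtype=complex)
step = 1.0 / np.linalg.norm(F, 2) ** 2
for _ in range(200):
    x = x + step * (F.conj().T @ (y - F @ x))
    keep = np.argsort(np.abs(x))[-2:]
    mask = np.zeros(N, dtype=bool)
    mask[keep] = True
    x[~mask] = 0
print(sorted(int(k) for k in keep), "estimated Doppler bins (true bins: 40, 45)")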

42 citations

Proceedings ArticleDOI
10 May 2010
TL;DR: This work imposes a Laplacian prior on the targets themselves which encourages sparsity in the resulting reconstruction of the angle/Doppler plane, and demonstrates that this approach allows closely spaced targets to be more easily distinguished.
Abstract: Traditional Space Time Adaptive Processing (STAP) formulations cast the problem as a detection task which results in an optimal decision statistic for a single target in colored Gaussian noise. In the present work, inspired by recent theoretical and algorithmic advances in the field known as compressed sensing, we impose a Laplacian prior on the targets themselves which encourages sparsity in the resulting reconstruction of the angle/Doppler plane. By casting the problem in a Bayesian framework, it becomes readily apparent that sparse regularization can be applied as a post-processing step after the use of a traditional STAP algorithm for clutter estimation. Simulation results demonstrate that this approach allows closely spaced targets to be more easily distinguished.
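Under a Laplacian prior, the MAP estimate of the scene is an l1-regularized least-squares fit, which is the kind of sparse post-processing the abstract refers to. The sketch below solves that problem with ISTA on a generic linear model y = A x + noise; the matrix, sizes, and regularization weight are illustrative, not the authors' STAP formulation.

import numpy as np

def ista(A, y, lam, iters=300):
    # Solve min_x 0.5*||A x - y||^2 + lam*||x||_1 (the MAP problem under a
    # Laplacian prior) by iterative soft thresholding.
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        z = x - A.T @ (A @ x - y) / L          # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold
    return x

# Toy angle/Doppler grid with two closely spaced "targets".
rng = np.random.default_rng(2)
A = rng.standard_normal((64, 200)) / 8.0       # columns roughly unit norm
x_true = np.zeros(200)
x_true[[90, 93]] = [1.0, 0.7]
y = A @ x_true + 0.01 * rng.standard_normal(64)
x_hat = ista(A, y, lam=0.02)
print(np.flatnonzero(np.abs(x_hat) > 0.1))     # indices near 90 and 93 expected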

42 citations

Journal ArticleDOI
TL;DR: By employing sparse signal reconstruction algorithms, ideal time-frequency representations are obtained, and the presented theory is illustrated on several examples dealing with different auto-correlation functions and the corresponding TFDs.
Abstract: The estimation of time-varying instantaneous frequency (IF) for monocomponent signals with an incomplete set of samples is considered. A suitable time-frequency distribution (TFD) reduces the non-stationary signal to a local sinusoid over the lag variable prior to the Fourier transform. Accordingly, the observed spectral content becomes sparse and suitable for compressive sensing reconstruction in the case of missing samples. Although the local bilinear or higher-order auto-correlation functions increase the number of missing samples, the analysis shows that accurate IF estimation can be achieved even with only a few samples, as long as the auto-correlation function is chosen to match the signal's phase non-linearity. In addition, by employing sparse signal reconstruction algorithms, ideal time-frequency representations are obtained. The presented theory is illustrated on several examples dealing with different auto-correlation functions and the corresponding TFDs.
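A minimal sketch of the IF-estimation recipe above, in the fully sampled case (no missing samples, so no CS reconstruction step): form a local bilinear auto-correlation at each time instant, Fourier transform over the lag, and read the IF off the spectral peak. The linear-FM test signal, window length, and FFT size are illustrative choices.

import numpy as np

# Linear-FM (chirp) test signal with instantaneous frequency IF(n) = f0 + k*n.
N, f0 = 256, 0.05
k = 0.15 / N
n = np.arange(N)
x = np.exp(2j * np.pi * (f0 * n + 0.5 * k * n ** 2))

L = 32                                        # half-length of the lag window
lags = np.arange(-L, L)
if_est = np.zeros(N - 2 * L)
for t in range(L, N - L):
    # Bilinear (Wigner-type) local auto-correlation over the lag variable:
    # for a linear-FM signal it is a pure sinusoid at frequency 2*IF(t).
    r = x[t + lags] * np.conj(x[t - lags])
    spec = np.abs(np.fft.fft(r, 512))
    if_est[t - L] = np.argmax(spec) / 512 / 2  # halve the doubled frequency
true_if = f0 + k * np.arange(L, N - L)
print(np.max(np.abs(if_est - true_if)))       # small (sub-bin) estimation error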

42 citations

Proceedings ArticleDOI
07 Dec 2011
TL;DR: The complexity of each module in the orthogonal matching pursuit (OMP) is analyzed, the bottlenecks are shown to lie in the projection module and the least-squares module, and Fujimoto's matrix-vector multiplication algorithm is adopted to accelerate the projection step.
Abstract: Recovery algorithms play a key role in compressive sampling (CS). A popular recovery algorithm for CS is the orthogonal matching pursuit (OMP), which combines low complexity with good recovery quality. Because the OMP involves massive matrix/vector operations, it is well suited to parallel implementation on a graphics processing unit (GPU). In this paper, we first analyze the complexity of each module in the OMP and point out that its bottlenecks lie in the projection module and the least-squares module. To speed up the projection module, Fujimoto's matrix-vector multiplication algorithm is adopted. To speed up the least-squares module, the matrix-inverse-update algorithm is adopted. Experimental results show that a speedup of more than 40x is achieved by our implementation of OMP on a GTX480 GPU over an Intel(R) Core(TM) i7 CPU. Since the projection module occupies more than 2/3 of the total run time, we are looking for a faster matrix-vector multiplication algorithm.
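The least-squares bottleneck mentioned above is commonly removed by updating the inverse of the Gram matrix incrementally rather than re-solving from scratch when a new atom joins the support. The NumPy sketch below shows the block-matrix (Schur-complement) update, independent of any GPU specifics; it is a generic identity, not the authors' exact kernel.

import numpy as np

def inverse_update(G_inv, A_S, a_new):
    # Given G_inv = inv(A_S.T @ A_S) for the current support columns A_S,
    # return the inverse of the enlarged Gram matrix [[A_S.T A_S, b], [b.T, c]]
    # via the block-matrix (Schur complement) inversion identity.
    b = A_S.T @ a_new
    c = a_new @ a_new
    d = G_inv @ b
    s = 1.0 / (c - b @ d)                      # scalar Schur complement
    top_left = G_inv + s * np.outer(d, d)
    return np.block([[top_left, -s * d[:, None]],
                     [-s * d[None, :], np.array([[s]])]])

# Check against a direct inverse on a random example.
rng = np.random.default_rng(3)
A = rng.standard_normal((50, 6))
A_S, a_new = A[:, :5], A[:, 5]
G_inv = np.linalg.inv(A_S.T @ A_S)
updated = inverse_update(G_inv, A_S, a_new)
direct = np.linalg.inv(A[:, :6].T @ A[:, :6])
print(np.allclose(updated, direct))            # True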

42 citations

Journal ArticleDOI
TL;DR: Both theoretical and experimental analyses validate low overhead, confidentiality, and effective authentication of the proposed data acquisition framework for a number of industrial-informatics-based applications, such as IoT.
Abstract: In the presence of several critical issues during data acquisition in industrial-informatics-based applications, like Internet of Things (IoT) and smart grid, this article proposes a novel framework based on compressive sensing (CS) and a cascade chaotic system (CCS). This framework can ensure low overhead, confidentiality, and authentication. Based on CS and the CCS, three technologies, including CCS-driven CS, CCS-driven local perturbation, and authentication mechanism, are introduced in the proposed data acquisition framework in this article. CCS-driven CS generates the measurement matrix with chaotic initial conditions and avoids the transmission of a large-size measurement matrix. CCS-driven local perturbation only perturbs a small number of elements in the original measurement matrix for each sampling and avoids the regeneration of the large-size measurement matrix. The authentication mechanism employs the authentication password and the access password to deal with the passive tampering attack and the active tampering attack, respectively. Moreover, the permutation-diffusion structure is used to encrypt the obtained measurements to enhance the security. Both theoretical and experimental analyses validate low overhead, confidentiality, and effective authentication of the proposed data acquisition framework for a number of industrial-informatics-based applications, such as IoT.
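A minimal sketch of the chaos-driven measurement matrix idea: both ends regenerate the same matrix from a shared initial condition, so the full matrix never has to be transmitted. The logistic map below is a stand-in for the paper's cascade chaotic system, and the perturbation and authentication mechanisms are not reproduced; all parameter values are illustrative.

import numpy as np

def chaotic_measurement_matrix(n, d, x0=0.37, r=3.99, burn_in=1000):
    # Build an n-by-d measurement matrix from a logistic-map sequence; only the
    # initial condition x0 (the shared key) has to be exchanged, and the
    # receiver regenerates the identical matrix for reconstruction.
    x = x0
    for _ in range(burn_in):                   # discard the transient
        x = r * x * (1.0 - x)
    vals = np.empty(n * d)
    for i in range(n * d):
        x = r * x * (1.0 - x)
        vals[i] = x
    Phi = vals.reshape(n, d) - 0.5             # center the chaotic samples
    return Phi / np.linalg.norm(Phi, axis=0, keepdims=True)

Phi_sender = chaotic_measurement_matrix(64, 256)
Phi_receiver = chaotic_measurement_matrix(64, 256)   # same key, same matrix
print(np.array_equal(Phi_sender, Phi_receiver))      # True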

42 citations

References
Book
01 Jan 1983

34,729 citations

Book
D.L. Donoho
01 Jan 2004
TL;DR: It is possible to design n = O(N log(m)) nonadaptive measurements allowing reconstruction with accuracy comparable to that attainable with direct knowledge of the N most important coefficients, and a good approximation to those N important coefficients is extracted from the n measurements by solving a linear program, known in signal processing as Basis Pursuit.
Abstract: Suppose x is an unknown vector in R^m (a digital image or signal); we plan to measure n general linear functionals of x and then reconstruct. If x is known to be compressible by transform coding with a known transform, and we reconstruct via the nonlinear procedure defined here, the number of measurements n can be dramatically smaller than the size m. Thus, certain natural classes of images with m pixels need only n = O(m^(1/4) log^(5/2)(m)) nonadaptive nonpixel samples for faithful recovery, as opposed to the usual m pixel samples. More specifically, suppose x has a sparse representation in some orthonormal basis (e.g., wavelet, Fourier) or tight frame (e.g., curvelet, Gabor), so that the coefficients belong to an l^p ball for 0 < p ≤ 1.

18,609 citations

Journal ArticleDOI
TL;DR: Basis Pursuit (BP) is a principle for decomposing a signal into an "optimal" superposition of dictionary elements, where optimal means having the smallest l1 norm of coefficients among all such decompositions.
Abstract: The time-frequency and time-scale communities have recently developed a large number of overcomplete waveform dictionaries --- stationary wavelets, wavelet packets, cosine packets, chirplets, and warplets, to name a few. Decomposition into overcomplete systems is not unique, and several methods for decomposition have been proposed, including the method of frames (MOF), Matching pursuit (MP), and, for special dictionaries, the best orthogonal basis (BOB). Basis Pursuit (BP) is a principle for decomposing a signal into an "optimal" superposition of dictionary elements, where optimal means having the smallest l1 norm of coefficients among all such decompositions. We give examples exhibiting several advantages over MOF, MP, and BOB, including better sparsity and superresolution. BP has interesting relations to ideas in areas as diverse as ill-posed problems, abstract harmonic analysis, total variation denoising, and multiscale edge denoising. BP in highly overcomplete dictionaries leads to large-scale optimization problems. With signals of length 8192 and a wavelet packet dictionary, one gets an equivalent linear program of size 8192 by 212,992. Such problems can be attacked successfully only because of recent advances in linear programming by interior-point methods. We obtain reasonable success with a primal-dual logarithmic barrier method and conjugate-gradient solver.
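As the abstract notes, BP in a finite dictionary reduces to a linear program: splitting x = u - v with u, v >= 0 turns min ||x||_1 subject to Phi x = y into a standard-form LP. The sketch below solves it with scipy.optimize.linprog (SciPy's HiGHS backend rather than the primal-dual log-barrier method described in the paper); the dictionary and signal are synthetic.

import numpy as np
from scipy.optimize import linprog

def basis_pursuit(Phi, y):
    # min ||x||_1 subject to Phi @ x = y, as an LP over x = u - v, u, v >= 0.
    n, d = Phi.shape
    c = np.ones(2 * d)                         # objective: sum(u) + sum(v) = ||x||_1
    A_eq = np.hstack([Phi, -Phi])              # Phi @ (u - v) = y
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
    return res.x[:d] - res.x[d:]

# Toy example: a 3-sparse signal, 40 Gaussian measurements in dimension 128.
rng = np.random.default_rng(4)
d, n = 128, 40
x = np.zeros(d)
x[[5, 60, 100]] = [1.0, -2.0, 0.5]
Phi = rng.standard_normal((n, d)) / np.sqrt(n)
x_hat = basis_pursuit(Phi, Phi @ x)
print(np.allclose(x_hat, x, atol=1e-5))        # exact recovery expected here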

9,950 citations

Journal ArticleDOI
TL;DR: The authors introduce an algorithm, called matching pursuit, that decomposes any signal into a linear expansion of waveforms that are selected from a redundant dictionary of functions, chosen in order to best match the signal structures.
Abstract: The authors introduce an algorithm, called matching pursuit, that decomposes any signal into a linear expansion of waveforms that are selected from a redundant dictionary of functions. These waveforms are chosen in order to best match the signal structures. Matching pursuits are general procedures to compute adaptive signal representations. With a dictionary of Gabor functions a matching pursuit defines an adaptive time-frequency transform. They derive a signal energy distribution in the time-frequency plane, which does not include interference terms, unlike Wigner and Cohen class distributions. A matching pursuit isolates the signal structures that are coherent with respect to a given dictionary. An application to pattern extraction from noisy signals is described. They compare a matching pursuit decomposition with a signal expansion over an optimized wavepacket orthonormal basis, selected with the algorithm of Coifman and Wickerhauser (see IEEE Trans. Inform. Theory, vol. 38, Mar. 1992).
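A minimal sketch of the plain matching pursuit iteration described above: repeatedly select the dictionary atom most correlated with the residual and subtract its contribution, without the orthogonal re-projection that OMP adds. The random unit-norm dictionary here stands in for the Gabor dictionary of the paper.

import numpy as np

def matching_pursuit(D, y, n_iter=20):
    # Greedy expansion of y over the unit-norm columns (atoms) of D. Each step
    # subtracts the projection of the residual onto the single best-matching
    # atom; unlike OMP, previously chosen coefficients are never re-fitted.
    coeffs = np.zeros(D.shape[1])
    residual = y.astype(float)
    for _ in range(n_iter):
        corr = D.T @ residual
        j = int(np.argmax(np.abs(corr)))
        coeffs[j] += corr[j]
        residual = residual - corr[j] * D[:, j]
    return coeffs, residual

# Overcomplete random dictionary with unit-norm atoms.
rng = np.random.default_rng(5)
D = rng.standard_normal((64, 256))
D /= np.linalg.norm(D, axis=0)
y = 2.0 * D[:, 10] - 1.0 * D[:, 200]
coeffs, residual = matching_pursuit(D, y)
print(np.linalg.norm(residual))                # residual energy decays with iterations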

9,380 citations

Journal ArticleDOI
TL;DR: A publicly available algorithm that requires only the same order of magnitude of computational effort as ordinary least squares applied to the full set of covariates is described.
Abstract: The purpose of model selection algorithms such as All Subsets, Forward Selection and Backward Elimination is to choose a linear model on the basis of the same set of data to which the model will be applied. Typically we have available a large collection of possible covariates from which we hope to select a parsimonious set for the efficient prediction of a response variable. Least Angle Regression (LARS), a new model selection algorithm, is a useful and less greedy version of traditional forward selection methods. Three main properties are derived: (1) A simple modification of the LARS algorithm implements the Lasso, an attractive version of ordinary least squares that constrains the sum of the absolute regression coefficients; the LARS modification calculates all possible Lasso estimates for a given problem, using an order of magnitude less computer time than previous methods. (2) A different LARS modification efficiently implements Forward Stagewise linear regression, another promising new model selection method; this connection explains the similar numerical results previously observed for the Lasso and Stagewise, and helps us understand the properties of both methods, which are seen as constrained versions of the simpler LARS algorithm. (3) A simple approximation for the degrees of freedom of a LARS estimate is available, from which we derive a Cp estimate of prediction error; this allows a principled choice among the range of possible LARS estimates. LARS and its variants are computationally efficient: the paper describes a publicly available algorithm that requires only the same order of magnitude of computational effort as ordinary least squares applied to the full set of covariates.
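A short sketch of the Lasso path computed via the LARS modification described above, using scikit-learn's lars_path (a tooling choice assumed here, not part of the paper); the regression data are synthetic, with four truly active covariates.

import numpy as np
from sklearn.linear_model import lars_path

# Synthetic regression: 200 samples, 30 covariates, only 4 truly active.
rng = np.random.default_rng(6)
X = rng.standard_normal((200, 30))
beta = np.zeros(30)
beta[[2, 7, 11, 25]] = [3.0, -2.0, 1.5, 2.5]
y = X @ beta + 0.5 * rng.standard_normal(200)

# method="lasso" returns the full Lasso path via the LARS modification; coefs
# has one column per breakpoint of the piecewise-linear coefficient path.
alphas, active, coefs = lars_path(X, y, method="lasso")
print("covariates enter the model in order:", list(active[:4]))
print("number of path breakpoints:", len(alphas))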

7,828 citations