
Signal Recovery from Random Measurements Via Orthogonal Matching Pursuit: The Gaussian Case

TL;DR: This paper shows that a greedy algorithm called Orthogonal Matching Pursuit (OMP) can reliably recover a signal with m nonzero entries in dimension d given O(m ln d) random linear measurements of that signal.
Abstract: This report demonstrates theoretically and empirically that a greedy algorithm called Orthogonal Matching Pursuit (OMP) can reliably recover a signal with m nonzero entries in dimension d given O(m ln d) random linear measurements of that signal. This is a massive improvement over previous results, which require O(m²) measurements. The new results for OMP are comparable with recent results for another approach called Basis Pursuit (BP). In some settings, the OMP algorithm is faster and easier to implement, so it is an attractive alternative to BP for signal recovery problems.
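The recovery procedure the abstract describes is short enough to sketch directly. Below is a minimal OMP loop under the Gaussian measurement model; the function and variable names are illustrative, not the paper's:

```python
import numpy as np

def omp(Phi, y, m):
    """Greedy recovery of an m-sparse x from y = Phi @ x."""
    residual, support = y.copy(), []
    for _ in range(m):
        # Greedy selection: column most correlated with the residual.
        support.append(int(np.argmax(np.abs(Phi.T @ residual))))
        # Orthogonalization: least-squares fit over the chosen columns.
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[support] = coef
    return x_hat

# Toy instance: d = 256, m = 4 nonzeros, n on the order of m ln d.
rng = np.random.default_rng(0)
d, m = 256, 4
n = int(4 * m * np.log(d))                    # the constant 4 is arbitrary
Phi = rng.standard_normal((n, d)) / np.sqrt(n)
x = np.zeros(d)
x[rng.choice(d, m, replace=False)] = rng.standard_normal(m)
print(np.allclose(omp(Phi, Phi @ x, m), x))   # True with high probability
```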


Citations
Proceedings ArticleDOI
15 Aug 2008
TL;DR: This paper analyzes compressed sensing using tools from coding theory because CS can also be viewed as syndrome-based source coding of sparse vectors using linear codes over real numbers.
Abstract: Compressed sensing (CS) is a relatively new area of signal processing and statistics that focuses on signal reconstruction from a small number of linear (e.g., dot product) measurements. In this paper, we analyze CS using tools from coding theory because CS can also be viewed as syndrome-based source coding of sparse vectors using linear codes over real numbers. While coding theory does not typically deal with codes over real numbers, there is actually a very close relationship between CS and error-correcting codes over large discrete alphabets. This connection leads naturally to new reconstruction methods and analysis. In some cases, the resulting methods provably require many fewer measurements than previous approaches.

59 citations
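The syndrome viewpoint in this abstract can be made concrete in a few lines of numpy; the matrix and sizes below are illustrative only:

```python
import numpy as np

# In channel decoding one recovers a sparse error e from its syndrome
# s = H @ e; in CS one recovers a sparse x from measurements y = Phi @ x.
# Algebraically these are the same problem, with a real-valued H playing
# the role of the measurement matrix.
rng = np.random.default_rng(1)
n, d, m = 30, 100, 3
H = rng.standard_normal((n, d))           # real-valued "parity-check" matrix
e = np.zeros(d)
e[rng.choice(d, m, replace=False)] = 1.0  # sparse "error pattern"
syndrome = H @ e                          # identical in form to a CS measurement
print(syndrome.shape)                     # (30,): far fewer numbers than d = 100
```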

Journal ArticleDOI
TL;DR: This framework can be viewed as a collection of practical sparsity-driven preprocessing algorithms for radar applications that restore and denoise raw radar signals at each aperture position independently, leading to a significant reduction in the memory requirement as well as the computational complexity of the sparse recovery process.
Abstract: This paper presents a simple yet very effective time-domain sparse representation and associated sparse recovery techniques that can robustly process raw data-intensive ultra-wideband (UWB) synthetic aperture radar (SAR) records in challenging noisy and bandwidth management environments. Unlike most previous approaches in compressed sensing for radar in general and SAR in particular, we take advantage of the sparsity of the scene and the correlation between the transmitted and received signal directly in the raw time domain, even before attempting image formation. Our framework can be viewed as a collection of practical sparsity-driven preprocessing algorithms for radar applications that restore and denoise raw radar signals at each aperture position independently, leading to a significant reduction in the memory requirement as well as the computational complexity of the sparse recovery process. Recovery results from real-world data collected by the U.S. Army Research Laboratory (ARL) UWB SAR systems illustrate the robustness and effectiveness of our proposed framework on two critical applications: 1) recovery of missing spectral information in multiple frequency bands and 2) adaptive extraction and/or suppression of radio frequency interference (RFI) signals from SAR data records.

59 citations
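As a rough illustration of application 1) above, the snippet below restores a record that is sparse in the time domain from frequency samples with a missing (notched) band, processing a single aperture position independently. The greedy solver, the DFT measurement model, and all sizes are our own stand-ins, not the authors' algorithms:

```python
import numpy as np

def omp_complex(Phi, y, m):
    # Greedy sparse recovery with complex data (same loop as the OMP sketch above).
    r, support = y.copy(), []
    for _ in range(m):
        support.append(int(np.argmax(np.abs(Phi.conj().T @ r))))
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        r = y - Phi[:, support] @ coef
    x = np.zeros(Phi.shape[1], dtype=complex)
    x[support] = coef
    return x

# One aperture position: a record sparse in time, observed only on the
# frequency bins outside a missing band.
rng = np.random.default_rng(2)
d, m = 128, 3
x = np.zeros(d, dtype=complex)
x[rng.choice(d, m, replace=False)] = rng.standard_normal(m)
F = np.fft.fft(np.eye(d)) / np.sqrt(d)      # unitary DFT matrix
kept = np.r_[0:40, 88:128]                  # bins 40..87 are missing
x_hat = omp_complex(F[kept, :], F[kept, :] @ x, m)
print(np.allclose(x_hat, x, atol=1e-6))     # True: the notched band is restored
```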

Proceedings ArticleDOI
28 Jan 2013
TL;DR: A novel thresholding method reduces the processing time of the optimization problem by at least 25%, and the proposed architecture reconstructs a 256-length signal with maximum sparsity of 8 using 64 measurements.
Abstract: Compressive sensing (CS) is a novel technology which allows sampling of sparse signals at sub-Nyquist rates and reconstructing the signal using computationally intensive algorithms. Reconstruction algorithms are complex, and software implementations of these algorithms are extremely slow and power consuming. In this paper, a low-complexity architecture for the reconstruction of compressively sampled signals is proposed. The algorithm used here is Orthogonal Matching Pursuit (OMP), which can be divided into two major processes: an optimization problem and a least squares problem. The most complex part of OMP is solving the least squares problem, and a scalable QR decomposition (QRD) core is implemented to perform this. A novel thresholding method is used to reduce the processing time of the optimization problem by at least 25%. The proposed architecture reconstructs a 256-length signal with maximum sparsity of 8 using 64 measurements. An implementation on a Xilinx Virtex-5 FPGA runs at two clock rates (85 MHz and 69 MHz) and occupies 15% of the slices and 80% of the DSP cores. The total reconstruction of a 128-length signal takes 7.13 μs, which is 3.4 times faster than the state-of-the-art implementation.

59 citations
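The QRD core pays off because OMP's active set grows by one column per iteration, so the least-squares solution can be updated incrementally rather than re-solved from scratch. A software analogue of that idea, with illustrative names (this is not the paper's hardware design):

```python
import numpy as np

def omp_qr(Phi, y, m):
    # OMP where the active-set least squares is maintained by incremental QR:
    # one Gram-Schmidt step per iteration, one back-substitution at the end.
    n, d = Phi.shape
    Q = np.zeros((n, m))
    R = np.zeros((m, m))
    support, residual = [], y.copy()
    for k in range(m):
        j = int(np.argmax(np.abs(Phi.T @ residual)))
        support.append(j)
        # Incremental QR update: orthogonalize the new column against Q.
        v = Phi[:, j].copy()
        R[:k, k] = Q[:, :k].T @ v
        v -= Q[:, :k] @ R[:k, k]
        R[k, k] = np.linalg.norm(v)
        Q[:, k] = v / R[k, k]
        # Residual update needs only the newest orthonormal direction.
        residual -= Q[:, k] * (Q[:, k] @ residual)
    # Back-substitution R a = Qᵀ y gives the least-squares coefficients.
    a = np.linalg.solve(R, Q.T @ y)
    x_hat = np.zeros(d)
    x_hat[support] = a
    return x_hat

rng = np.random.default_rng(3)
Phi = rng.standard_normal((64, 256)) / 8.0
x = np.zeros(256); x[[3, 77, 200]] = [1.0, -2.0, 0.5]
print(np.allclose(omp_qr(Phi, Phi @ x, 3), x))  # True with high probability
```

The triangular structure of R is what a systolic QRD array exploits in hardware; in software the same structure keeps each iteration at one matrix-vector product plus one rank-one update.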

Proceedings ArticleDOI
01 Nov 2009
TL;DR: A new algorithm to generate a super-resolution image from a single, low-resolution input without the use of a training data set is proposed by exploiting the fact that the image is highly compressible in the wavelet domain and leveraging recent results of compressed sensing theory.
Abstract: This paper proposes a new algorithm to generate a super-resolution image from a single, low-resolution input without the use of a training data set. We do this by exploiting the fact that the image is highly compressible in the wavelet domain and leverage recent results of compressed sensing (CS) theory to make an accurate estimate of the original high-resolution image. Unfortunately, traditional CS approaches do not allow direct use of a wavelet compression basis because of the coherency between the point-samples from the downsampling process and the wavelet basis. To overcome this problem, we incorporate the downsampling low-pass filter into our measurement matrix, which decreases coherency between the bases. To invert the downsampling process, we use the appropriate inverse filter and solve for the high-resolution image using a greedy, matching-pursuit algorithm. The result is a simple and efficient algorithm that can generate high quality, high-resolution images without the use of training data. We present results that show the improved performance of our method over existing super-resolution approaches.

59 citations
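A 1-D toy version of the pipeline this abstract describes, under simplifying assumptions: the high-resolution signal is sparse in a Haar wavelet basis, the "downsampling low-pass filter" is a 4-tap moving average folded into the measurement matrix, and the pursuit is a generic OMP loop. Everything here is a stand-in for illustration:

```python
import numpy as np

def haar_analysis(d):
    # Orthonormal Haar analysis matrix (d a power of two); rows are basis vectors.
    if d == 1:
        return np.ones((1, 1))
    H = haar_analysis(d // 2)
    return np.vstack([np.kron(H, [1, 1]),
                      np.kron(np.eye(d // 2), [1, -1])]) / np.sqrt(2)

def omp(A, y, m):
    # Generic greedy pursuit over the columns of A.
    r, S = y.copy(), []
    for _ in range(m):
        S.append(int(np.argmax(np.abs(A.T @ r))))
        c, *_ = np.linalg.lstsq(A[:, S], y, rcond=None)
        r = y - A[:, S] @ c
    w = np.zeros(A.shape[1])
    w[S] = c
    return w

d, m = 64, 3
W = haar_analysis(d)                  # analysis transform; synthesis is W.T
B = np.zeros((d // 2, d))             # blur (4-tap average) + sample every 2nd point
for i in range(d // 2):
    B[i, (2 * i + np.arange(4)) % d] = 0.25
rng = np.random.default_rng(4)
w = np.zeros(d)
# Coarse-scale coefficients survive the low-pass filter; the finest scale is
# destroyed by blurring, which is exactly why the filter must be modeled.
w[rng.choice(d // 2, m, replace=False)] = rng.standard_normal(m)
x = W.T @ w                           # "high-resolution" signal, sparse in wavelets
y = B @ x                             # observed "low-resolution" signal
x_hat = W.T @ omp(B @ W.T, y, m)      # pursue coefficients, then synthesize
print(np.linalg.norm(x - x_hat) / np.linalg.norm(x))  # relative error, typically small
```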

Proceedings Article
27 Sep 2012
TL;DR: The resting state with closed eyes acquisition protocol is used and investigated in depth by varying the employed electrode configuration in both number and location, optimizing recognition performance while still guaranteeing sufficient user convenience.
Abstract: In this paper EEG signals are employed for the purpose of automatic user recognition. Specifically, the resting state with closed eyes acquisition protocol has been used and investigated in depth by varying the employed electrode configuration in both number and location, optimizing the recognition performance while still guaranteeing sufficient user convenience. A database of 45 healthy subjects has been employed in the analysis. Autoregressive stochastic modeling and polynomial regression based classification have been applied to the extracted brain rhythms in order to identify the most distinctive contributions of the different subbands in the recognition process. Our analysis has shown that significantly high recognition rates, up to 98.73%, can be achieved when using proper triplets of electrodes, which cannot be achieved by employing pairs of electrodes, whereas sets of five electrodes in the central posterior region of the scalp can guarantee very high recognition performance while limiting user inconvenience.

59 citations
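A compact sketch of the autoregressive feature-extraction step this abstract relies on. The AR fit is ordinary least squares on lagged samples; the paper's polynomial-regression classifier is replaced by a nearest-centroid stand-in, and the "subjects" are synthetic AR processes, purely for illustration:

```python
import numpy as np

def ar_features(x, p=8):
    # Fit x[t] ≈ a1*x[t-1] + ... + ap*x[t-p] by least squares; return (a1..ap).
    X = np.column_stack([x[p - k - 1 : len(x) - k - 1] for k in range(p)])
    a, *_ = np.linalg.lstsq(X, x[p:], rcond=None)
    return a

rng = np.random.default_rng(5)

def simulate(a, n=1024):
    # Synthetic "subject": an AR process driven by white noise.
    x = np.zeros(n)
    for t in range(len(a), n):
        x[t] = a @ x[t - len(a):t][::-1] + rng.standard_normal()
    return x

subjects = {"A": np.array([0.6, -0.3]), "B": np.array([0.2, 0.5])}
# Enroll: average AR features over a few segments per subject.
train = {s: np.mean([ar_features(simulate(a)) for _ in range(5)], axis=0)
         for s, a in subjects.items()}
# Recognize: nearest centroid in AR-coefficient space.
probe = ar_features(simulate(subjects["A"]))
print(min(train, key=lambda s: np.linalg.norm(train[s] - probe)))  # likely "A"
```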

References
Book
01 Jan 1983

34,729 citations

Book
D.L. Donoho1
01 Jan 2004
TL;DR: It is possible to design n = O(N log(m)) nonadaptive measurements allowing reconstruction with accuracy comparable to that attainable with direct knowledge of the N most important coefficients; a good approximation to those N important coefficients is extracted from the n measurements by solving a linear program, Basis Pursuit in signal processing.
Abstract: Suppose x is an unknown vector in R^m (a digital image or signal); we plan to measure n general linear functionals of x and then reconstruct. If x is known to be compressible by transform coding with a known transform, and we reconstruct via the nonlinear procedure defined here, the number of measurements n can be dramatically smaller than the size m. Thus, certain natural classes of images with m pixels need only n = O(m^(1/4) log^(5/2)(m)) nonadaptive nonpixel samples for faithful recovery, as opposed to the usual m pixel samples. More specifically, suppose x has a sparse representation in some orthonormal basis (e.g., wavelet, Fourier) or tight frame (e.g., curvelet, Gabor), so that the coefficients belong to an l_p ball for 0 < p ≤ 1.

18,609 citations
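A quick numerical check of the compressibility premise behind this abstract: when the sorted coefficient magnitudes obey a power-law (l_p ball) decay, the best-N-term approximation error falls off polynomially in N, which is what makes so few measurements sufficient. The decay profile below is idealized, not taken from the paper:

```python
import numpy as np

# Idealized sorted coefficients |c|_(k) = k^(-1/p) for p = 0.5 (an l_p ball).
p = 0.5
k = np.arange(1, 10001)
c = k ** (-1.0 / p)
for N in (10, 100, 1000):
    err = np.sqrt(np.sum(c[N:] ** 2))     # l2 error of keeping the N largest
    print(N, err, N ** (0.5 - 1.0 / p))   # error tracks N^(1/2 - 1/p) up to a constant
```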

Journal ArticleDOI
TL;DR: Basis Pursuit (BP) is a principle for decomposing a signal into an "optimal" superposition of dictionary elements, where optimal means having the smallest l1 norm of coefficients among all such decompositions.
Abstract: The time-frequency and time-scale communities have recently developed a large number of overcomplete waveform dictionaries --- stationary wavelets, wavelet packets, cosine packets, chirplets, and warplets, to name a few. Decomposition into overcomplete systems is not unique, and several methods for decomposition have been proposed, including the method of frames (MOF), matching pursuit (MP), and, for special dictionaries, the best orthogonal basis (BOB). Basis Pursuit (BP) is a principle for decomposing a signal into an "optimal" superposition of dictionary elements, where optimal means having the smallest l1 norm of coefficients among all such decompositions. We give examples exhibiting several advantages over MOF, MP, and BOB, including better sparsity and superresolution. BP has interesting relations to ideas in areas as diverse as ill-posed problems, abstract harmonic analysis, total variation denoising, and multiscale edge denoising. BP in highly overcomplete dictionaries leads to large-scale optimization problems. With signals of length 8192 and a wavelet packet dictionary, one gets an equivalent linear program of size 8192 by 212,992. Such problems can be attacked successfully only because of recent advances in linear programming by interior-point methods. We obtain reasonable success with a primal-dual logarithmic barrier method and conjugate-gradient solver.

9,950 citations
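The "equivalent linear program" arises from the standard split of the coefficients into positive and negative parts, x = u − v with u, v ≥ 0, which doubles the number of columns; that doubling is consistent with the 8192 × 212,992 figure quoted above (2 × 106,496 dictionary atoms). A small sketch of the same formulation using scipy's LP solver, with illustrative sizes:

```python
import numpy as np
from scipy.optimize import linprog

# Basis Pursuit as an LP:  min 1ᵀu + 1ᵀv  s.t.  A u − A v = y,  u, v ≥ 0,
# then x = u − v. A is a small random dictionary here, not a wavelet packet one.
rng = np.random.default_rng(6)
n, d, m = 20, 60, 3
A = rng.standard_normal((n, d))
x = np.zeros(d)
x[rng.choice(d, m, replace=False)] = rng.standard_normal(m)
y = A @ x
res = linprog(c=np.ones(2 * d), A_eq=np.hstack([A, -A]), b_eq=y,
              bounds=(0, None), method="highs")
x_hat = res.x[:d] - res.x[d:]
print(np.allclose(x_hat, x, atol=1e-6))   # exact recovery on this toy problem
```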

Journal ArticleDOI
TL;DR: The authors introduce an algorithm, called matching pursuit, that decomposes any signal into a linear expansion of waveforms that are selected from a redundant dictionary of functions, chosen in order to best match the signal structures.
Abstract: The authors introduce an algorithm, called matching pursuit, that decomposes any signal into a linear expansion of waveforms that are selected from a redundant dictionary of functions. These waveforms are chosen in order to best match the signal structures. Matching pursuits are general procedures to compute adaptive signal representations. With a dictionary of Gabor functions, a matching pursuit defines an adaptive time-frequency transform. The authors derive a signal energy distribution in the time-frequency plane which does not include interference terms, unlike Wigner and Cohen class distributions. A matching pursuit isolates the signal structures that are coherent with respect to a given dictionary. An application to pattern extraction from noisy signals is described. The authors compare a matching pursuit decomposition with a signal expansion over an optimized wavepacket orthonormal basis, selected with the algorithm of Coifman and Wickerhauser (IEEE Trans. Informat. Theory, vol. 38, Mar. 1992).

9,380 citations
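For contrast with the OMP sketches above: plain matching pursuit subtracts only the projection onto the newly chosen atom, so the same atom may be selected more than once and no least-squares step is needed. A minimal sketch with a stand-in dictionary of normalized random atoms (not the paper's Gabor dictionary):

```python
import numpy as np

def matching_pursuit(D, y, iters):
    # D: dictionary with unit-norm columns; y: signal to decompose.
    r = y.copy()
    coeffs = np.zeros(D.shape[1])
    for _ in range(iters):
        j = int(np.argmax(np.abs(D.T @ r)))  # best-matching atom
        a = D[:, j] @ r
        coeffs[j] += a                        # accumulate (atoms can repeat)
        r -= a * D[:, j]                      # one-atom residual update
    return coeffs, r

rng = np.random.default_rng(7)
D = rng.standard_normal((64, 256))
D /= np.linalg.norm(D, axis=0)                # unit-norm atoms
y = 2.0 * D[:, 7] - 1.0 * D[:, 42]            # signal built from two atoms
coeffs, r = matching_pursuit(D, y, 20)
print(np.linalg.norm(r))                      # residual energy decays with iterations
```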

Journal ArticleDOI
TL;DR: A publicly available algorithm that requires only the same order of magnitude of computational effort as ordinary least squares applied to the full set of covariates is described.
Abstract: The purpose of model selection algorithms such as All Subsets, Forward Selection and Backward Elimination is to choose a linear model on the basis of the same set of data to which the model will be applied. Typically we have available a large collection of possible covariates from which we hope to select a parsimonious set for the efficient prediction of a response variable. Least Angle Regression (LARS), a new model selection algorithm, is a useful and less greedy version of traditional forward selection methods. Three main properties are derived: (1) A simple modification of the LARS algorithm implements the Lasso, an attractive version of ordinary least squares that constrains the sum of the absolute regression coefficients; the LARS modification calculates all possible Lasso estimates for a given problem, using an order of magnitude less computer time than previous methods. (2) A different LARS modification efficiently implements Forward Stagewise linear regression, another promising new model selection method; this connection explains the similar numerical results previously observed for the Lasso and Stagewise, and helps us understand the properties of both methods, which are seen as constrained versions of the simpler LARS algorithm. (3) A simple approximation for the degrees of freedom of a LARS estimate is available, from which we derive a Cp estimate of prediction error; this allows a principled choice among the range of possible LARS estimates. LARS and its variants are computationally efficient: the paper describes a publicly available algorithm that requires only the same order of magnitude of computational effort as ordinary least squares applied to the full set of covariates.

7,828 citations
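Property (1) of the abstract, tracing the entire Lasso path in one LARS pass, can be exercised with an existing implementation; the example below assumes scikit-learn's lars_path is available, and all data are synthetic:

```python
import numpy as np
from sklearn.linear_model import lars_path

# Synthetic regression with two true covariates.
rng = np.random.default_rng(8)
X = rng.standard_normal((100, 10))
beta = np.zeros(10)
beta[[2, 5]] = [3.0, -2.0]
y = X @ beta + 0.1 * rng.standard_normal(100)

# One pass computes every Lasso breakpoint along the regularization path.
alphas, active, coefs = lars_path(X, y, method="lasso")
print(active)          # order in which covariates enter the model
print(coefs[:, -1])    # least-squares end of the path, close to the true beta
```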