
Signal Recovery from Random Measurements Via Orthogonal Matching Pursuit: The Gaussian Case

TL;DR: In this paper, a greedy algorithm called Orthogonal Matching Pursuit (OMP) was proposed to recover a signal with m nonzero entries in dimension d given O(m ln d) random linear measurements of that signal.
Abstract: This report demonstrates theoretically and empirically that a greedy algorithm called Orthogonal Matching Pursuit (OMP) can reliably recover a signal with m nonzero entries in dimension d given O(m ln d) random linear measurements of that signal. This is a massive improvement over previous results, which require O(m^2) measurements. The new results for OMP are comparable with recent results for another approach called Basis Pursuit (BP). In some settings, the OMP algorithm is faster and easier to implement, so it is an attractive alternative to BP for signal recovery problems.
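The greedy selection-and-projection loop behind OMP can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' reference implementation; the dimensions d and m, the measurement count n, and the constant in front of m ln d are arbitrary demo choices.

```python
import numpy as np

def omp(Phi, y, m):
    """Recover an m-sparse x from y = Phi @ x by Orthogonal Matching Pursuit.

    Each iteration greedily selects the column of Phi most correlated with
    the current residual, then re-fits all selected coefficients by least
    squares (the "orthogonal" step).
    """
    residual = y.copy()
    support = []
    coef = np.zeros(0)
    for _ in range(m):
        # Greedy selection: column with the largest correlation to the residual.
        j = int(np.argmax(np.abs(Phi.T @ residual)))
        if j not in support:
            support.append(j)
        # Orthogonal projection: least-squares fit on the current support.
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[support] = coef
    return x_hat

# Demo: d = 256, m = 4 nonzeros, n on the order of m ln d Gaussian measurements.
rng = np.random.default_rng(0)
d, m = 256, 4
n = 4 * m * int(np.log(d))  # constant 4 is an arbitrary demo choice
Phi = rng.standard_normal((n, d)) / np.sqrt(n)
x = np.zeros(d)
x[rng.choice(d, size=m, replace=False)] = rng.choice([-1.0, 1.0], size=m)
y = Phi @ x
x_hat = omp(Phi, y, m)
```

With far fewer measurements than the ambient dimension (here 80 versus 256), the support is identified exactly and the least-squares step then recovers the coefficients to machine precision.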


Citations
Journal ArticleDOI
TL;DR: An algorithm for vehicle detection in high-resolution aerial images through a fast sparse representation classification method and a multiorder feature descriptor that contains information of texture, color, and high-order context is presented.
Abstract: This paper presents an algorithm for vehicle detection in high-resolution aerial images through a fast sparse representation classification method and a multiorder feature descriptor that contains information of texture, color, and high-order context. To speed up computation of sparse representation, a set of small dictionaries, instead of a large dictionary containing all training items, is used for classification. To extract the context information of a patch, we propose a high-order context information extraction method based on the fast sparse representation classification method. To effectively extract the color information, the RGB color space is transformed into color name space. Then, the color name information is embedded into the grids of the histogram of oriented gradients feature to represent the low-order feature of vehicles. By combining low- and high-order features together, a multiorder feature is used to describe vehicles. We also propose a sample selection strategy based on our fast sparse representation classification method to construct a complete training subset. Finally, a set of dictionaries, which are trained by the multiorder features of the selected training subset, is used to detect vehicles based on superpixel segmentation results of aerial images. Experimental results illustrate the satisfactory performance of our algorithm.

62 citations

Journal ArticleDOI
Chu He, Longzhu Liu, Lianyu Xu, Ming Liu, Mingsheng Liao
TL;DR: A novel approach for the reconstruction of super-resolution (SR) synthetic aperture radar (SAR) images in the compressed sensing (CS) theory framework is presented, using a framework that combines CS with a multi-dictionary.
Abstract: This paper presents a novel approach for the reconstruction of super-resolution (SR) synthetic aperture radar (SAR) images in the compressed sensing (CS) theory framework. Recent research has shown that super-resolved data can be reconstructed from an extremely small set of measurements compared to that currently required. Therefore, a CS method to produce SAR super-resolution images is introduced in the present work. The proposed approach contributes in three ways. First, enhanced SR results are achieved using a framework that combines CS with a multi-dictionary. Then, the multi-dictionary pairs are trained after classifying the training images through a sparse coding spatial pyramid machine. Each dictionary pair, containing low- and high-resolution dictionaries, is jointly trained. Finally, the gradient-descent optimization approach is applied to decrease the mutual coherence between the measurement matrix and the representation basis, since the CS reconstruction quality is related to incoherence. The effectiveness of this method is demonstrated on TerraSAR-X data.

61 citations

Journal ArticleDOI
TL;DR: A novel web enabled disease detection system (WEDDS) based on compressed sensing (CS) is proposed to detect and classify the diseases in leaves, and a statistics-based thresholding strategy is proposed for segmentation of the diseased leaf.
Abstract: Plant disease detection attracts significant attention in the field of agriculture, where image based disease detection plays an important role. To improve the yield of plants, it is necessary to detect the onset of diseases in plants and advise the farmers to act based on the suggestions. In this paper, a novel web enabled disease detection system (WEDDS) based on compressed sensing (CS) is proposed to detect and classify the diseases in leaves. A statistics-based thresholding strategy is proposed for segmentation of the diseased leaf. CS measurements of the segmented leaf are uploaded to the cloud to reduce the storage complexity. At the monitoring site, the measurements are retrieved and the features are extracted from the reconstructed segmented image. The analysis and classification are done using a support vector machine classifier. The performance of the proposed WEDDS has been evaluated in terms of accuracy and is compared with the existing techniques. The WEDDS was also evaluated experimentally using a Raspberry Pi 3 board. The results show that the proposed method provides an overall detection accuracy of 98.5% and classification accuracy of 98.4%.

61 citations

Journal ArticleDOI
TL;DR: A recently proposed Fast Approximate Discrete Fourier Transform (FADFT) algorithm, FADFT-2, is empirically evaluated for the first time, and it is shown that FADFT-2 not only generally outperforms FADFT-1 on all but the sparsest signals, but is also significantly faster than FFTW 3.1 on large sparse signals.
Abstract: In this paper we empirically evaluate a recently proposed Fast Approximate Discrete Fourier Transform (FADFT) algorithm, FADFT-2, for the first time. FADFT-2 returns approximate Fourier representations for frequency-sparse signals and works by random sampling. Its implementation is benchmarked against two competing methods. The first is the popular exact FFT implementation FFTW Version 3.1. The second is an implementation of FADFT-2's ancestor, FADFT-1. Experiments verify the theoretical runtimes of both FADFT-1 and FADFT-2. In doing so it is shown that FADFT-2 not only generally outperforms FADFT-1 on all but the sparsest signals, but is also significantly faster than FFTW 3.1 on large sparse signals. Furthermore, it is demonstrated that FADFT-2 is indistinguishable from FADFT-1 in terms of noise tolerance despite FADFT-2's better execution time.

61 citations

Journal ArticleDOI
TL;DR: This work establishes the design, analysis, and fine-tuning of a peak-to-average-power-ratio (PAPR) reducing system, based on compressed sensing at the receiver of a peak-reducing sparse clipper applied to an orthogonal frequency-division multiplexing (OFDM) signal at the transmitter.
Abstract: This work establishes the design, analysis, and fine-tuning of a peak-to-average-power-ratio (PAPR) reducing system, based on compressed sensing (CS) at the receiver of a peak-reducing sparse clipper applied to an orthogonal frequency-division multiplexing (OFDM) signal at the transmitter. By exploiting the sparsity of clipping events in the time domain relative to a predefined clipping threshold, the method depends on partially observing the frequency content of the clipping distortion over reserved tones to estimate the remaining distortion.

61 citations

References
Book
01 Jan 1983

34,729 citations

Book
D.L. Donoho
01 Jan 2004
TL;DR: It is possible to design n = O(N log(m)) nonadaptive measurements allowing reconstruction with accuracy comparable to that attainable with direct knowledge of the N most important coefficients; a good approximation to those N important coefficients is extracted from the n measurements by solving a linear program (Basis Pursuit in signal processing).
Abstract: Suppose x is an unknown vector in R^m (a digital image or signal); we plan to measure n general linear functionals of x and then reconstruct. If x is known to be compressible by transform coding with a known transform, and we reconstruct via the nonlinear procedure defined here, the number of measurements n can be dramatically smaller than the size m. Thus, certain natural classes of images with m pixels need only n = O(m^(1/4) log^(5/2)(m)) nonadaptive nonpixel samples for faithful recovery, as opposed to the usual m pixel samples. More specifically, suppose x has a sparse representation in some orthonormal basis (e.g., wavelet, Fourier) or tight frame (e.g., curvelet, Gabor), so the coefficients belong to an l_p ball for 0 < p <= 1.

18,609 citations

Journal ArticleDOI
TL;DR: Basis Pursuit (BP) is a principle for decomposing a signal into an "optimal" superposition of dictionary elements, where optimal means having the smallest l1 norm of coefficients among all such decompositions.
Abstract: The time-frequency and time-scale communities have recently developed a large number of overcomplete waveform dictionaries --- stationary wavelets, wavelet packets, cosine packets, chirplets, and warplets, to name a few. Decomposition into overcomplete systems is not unique, and several methods for decomposition have been proposed, including the method of frames (MOF), matching pursuit (MP), and, for special dictionaries, the best orthogonal basis (BOB). Basis Pursuit (BP) is a principle for decomposing a signal into an "optimal" superposition of dictionary elements, where optimal means having the smallest l1 norm of coefficients among all such decompositions. We give examples exhibiting several advantages over MOF, MP, and BOB, including better sparsity and superresolution. BP has interesting relations to ideas in areas as diverse as ill-posed problems, abstract harmonic analysis, total variation denoising, and multiscale edge denoising. BP in highly overcomplete dictionaries leads to large-scale optimization problems. With signals of length 8192 and a wavelet packet dictionary, one gets an equivalent linear program of size 8192 by 212,992. Such problems can be attacked successfully only because of recent advances in linear programming by interior-point methods. We obtain reasonable success with a primal-dual logarithmic barrier method and conjugate-gradient solver.

9,950 citations
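The l1-minimization principle behind Basis Pursuit can be cast as a linear program by splitting the coefficient vector into nonnegative parts, x = u - v. The sketch below is a small-scale illustration assuming SciPy's general-purpose linprog solver, not the interior-point machinery the paper describes; the problem sizes are arbitrary demo choices.

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(Phi, y):
    """min ||x||_1 subject to Phi @ x == y, via the LP split x = u - v, u, v >= 0."""
    n, d = Phi.shape
    c = np.ones(2 * d)             # objective: sum(u) + sum(v) == ||x||_1
    A_eq = np.hstack([Phi, -Phi])  # equality constraint: Phi @ (u - v) == y
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
    u, v = res.x[:d], res.x[d:]
    return u - v

# A 3-sparse vector in dimension 64 is recovered from 30 Gaussian measurements.
rng = np.random.default_rng(3)
Phi = rng.standard_normal((30, 64))
x = np.zeros(64)
x[[5, 20, 40]] = [1.0, -2.0, 3.0]
x_bp = basis_pursuit(Phi, Phi @ x)
```

The split doubles the number of variables, which is why, as the abstract notes, realistic dictionary sizes lead to very large linear programs.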

Journal ArticleDOI
TL;DR: The authors introduce an algorithm, called matching pursuit, that decomposes any signal into a linear expansion of waveforms that are selected from a redundant dictionary of functions, chosen in order to best match the signal structures.
Abstract: The authors introduce an algorithm, called matching pursuit, that decomposes any signal into a linear expansion of waveforms that are selected from a redundant dictionary of functions. These waveforms are chosen in order to best match the signal structures. Matching pursuits are general procedures to compute adaptive signal representations. With a dictionary of Gabor functions, a matching pursuit defines an adaptive time-frequency transform. They derive a signal energy distribution in the time-frequency plane, which does not include interference terms, unlike Wigner and Cohen class distributions. A matching pursuit isolates the signal structures that are coherent with respect to a given dictionary. An application to pattern extraction from noisy signals is described. They compare a matching pursuit decomposition with a signal expansion over an optimized wavelet-packet orthonormal basis, selected with the algorithm of Coifman and Wickerhauser (see IEEE Trans. Inform. Theory, vol. 38, Mar. 1992).

9,380 citations
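The matching pursuit iteration itself is short: greedily pick the atom best correlated with the residual, subtract its projection, and repeat. The sketch below is my own illustration, assuming unit-norm dictionary columns; note that, unlike OMP, previously selected coefficients are never re-fit by least squares.

```python
import numpy as np

def matching_pursuit(D, y, n_iter):
    """Greedy matching pursuit over a dictionary D with unit-norm columns.

    Each step finds the atom best correlated with the residual and subtracts
    its projection, so the residual norm is non-increasing.
    """
    residual = y.astype(float).copy()
    coefs = np.zeros(D.shape[1])
    for _ in range(n_iter):
        corr = D.T @ residual
        j = int(np.argmax(np.abs(corr)))
        coefs[j] += corr[j]            # accumulate; never re-fit old atoms
        residual -= corr[j] * D[:, j]  # rank-one residual update
    return coefs, residual

# 50 iterations over a random overcomplete dictionary (a stand-in for the
# Gabor dictionaries discussed in the abstract).
rng = np.random.default_rng(1)
D = rng.standard_normal((32, 128))
D /= np.linalg.norm(D, axis=0)         # unit-norm atoms
y = rng.standard_normal(32)
coefs, r = matching_pursuit(D, y, 50)
```

The invariant y == D @ coefs + residual holds at every step, and each update removes the component of the residual along the chosen atom.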

Journal ArticleDOI
TL;DR: A publicly available algorithm that requires only the same order of magnitude of computational effort as ordinary least squares applied to the full set of covariates is described.
Abstract: The purpose of model selection algorithms such as All Subsets, Forward Selection and Backward Elimination is to choose a linear model on the basis of the same set of data to which the model will be applied. Typically we have available a large collection of possible covariates from which we hope to select a parsimonious set for the efficient prediction of a response variable. Least Angle Regression (LARS), a new model selection algorithm, is a useful and less greedy version of traditional forward selection methods. Three main properties are derived: (1) A simple modification of the LARS algorithm implements the Lasso, an attractive version of ordinary least squares that constrains the sum of the absolute regression coefficients; the LARS modification calculates all possible Lasso estimates for a given problem, using an order of magnitude less computer time than previous methods. (2) A different LARS modification efficiently implements Forward Stagewise linear regression, another promising new model selection method; this connection explains the similar numerical results previously observed for the Lasso and Stagewise, and helps us understand the properties of both methods, which are seen as constrained versions of the simpler LARS algorithm. (3) A simple approximation for the degrees of freedom of a LARS estimate is available, from which we derive a Cp estimate of prediction error; this allows a principled choice among the range of possible LARS estimates. LARS and its variants are computationally efficient: the paper describes a publicly available algorithm that requires only the same order of magnitude of computational effort as ordinary least squares applied to the full set of covariates.

7,828 citations