Signal Recovery from Random Measurements Via Orthogonal Matching Pursuit: The Gaussian Case

TL;DR: In this paper, a greedy algorithm called Orthogonal Matching Pursuit (OMP) was proposed to recover a signal with m nonzero entries in dimension d given O(m ln d) random linear measurements of that signal.
Abstract: This report demonstrates theoretically and empirically that a greedy algorithm called Orthogonal Matching Pursuit (OMP) can reliably recover a signal with m nonzero entries in dimension d given O(m ln d) random linear measurements of that signal. This is a massive improvement over previous results, which require O(m^2) measurements. The new results for OMP are comparable with recent results for another approach called Basis Pursuit (BP). In some settings, the OMP algorithm is faster and easier to implement, so it is an attractive alternative to BP for signal recovery problems.
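For readers who want the mechanics, here is a minimal sketch of the OMP iteration the abstract describes: greedily pick the column most correlated with the residual, re-fit by least squares on the selected support, and repeat. The Gaussian measurement setup and all names (Phi, omp, the constant 4 in the measurement count) are illustrative assumptions, not the paper's code.

```python
import numpy as np

def omp(Phi, y, m):
    """Recover an m-sparse x from y = Phi @ x by greedy column selection."""
    residual = y.copy()
    support = []
    x_hat = np.zeros(Phi.shape[1])
    for _ in range(m):
        # Pick the column most correlated with the current residual.
        j = int(np.argmax(np.abs(Phi.T @ residual)))
        support.append(j)
        # Least-squares re-fit on the selected support, then update the residual.
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat[support] = coef
    return x_hat

rng = np.random.default_rng(0)
d, m = 256, 8
n = int(4 * m * np.log(d))            # measurement count on the order of m ln d
Phi = rng.standard_normal((n, d)) / np.sqrt(n)
x = np.zeros(d)
x[rng.choice(d, size=m, replace=False)] = rng.standard_normal(m)
y = Phi @ x
print(np.allclose(omp(Phi, y, m), x))  # typically prints True at these sizes
```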


Citations
Journal ArticleDOI
TL;DR: This letter presents a new method for the digital predistortion (DPD) of power amplifiers (PAs) based on sparse behavioral models, in which Gram–Schmidt orthogonalization is integrated into the orthogonal matching pursuit algorithm to decorrelate the selected model regressors from the components still to be selected.
Abstract: This letter presents a new method for the digital predistortion (DPD) of power amplifiers (PAs) based on sparse behavioral models. The Gram–Schmidt orthogonalization is synergistically integrated into the orthogonal matching pursuit algorithm to decorrelate the selected model regressors against the components still to be selected. Experiments on a test bench based on a GaN PA driven by a 15-MHz orthogonal frequency division multiplexing signal were conducted in order to validate the algorithm. Experimental results in a DPD application and a comparison with other state-of-the-art algorithms highlight the enhancement of its pruning capabilities, reducing the number of coefficients while maintaining the performance.
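The letter's selection idea can be sketched as an order-recursive variant of OMP: after each pick, the chosen regressor's component is projected out of every remaining candidate (Gram–Schmidt), so later correlation scores are computed against decorrelated columns. This is a hedged reconstruction of the general technique, not the authors' implementation; gs_omp and all sizes are made up for illustration.

```python
import numpy as np

def gs_omp(X, y, k):
    """Select k columns of the regressor matrix X greedily; after each pick,
    remove the picked column's contribution from all remaining columns."""
    X = X.astype(float).copy()
    r = y.astype(float).copy()
    selected = []
    for _ in range(k):
        norms = np.linalg.norm(X, axis=0)
        norms[norms < 1e-12] = np.inf        # skip spent (fully deflated) columns
        scores = np.abs(X.T @ r) / norms     # normalized correlation with residual
        scores[selected] = -np.inf           # never reselect a column
        j = int(np.argmax(scores))
        selected.append(j)
        q = X[:, j] / np.linalg.norm(X[:, j])
        r = r - q * (q @ r)                  # update the residual
        X = X - np.outer(q, q @ X)           # decorrelate the remaining columns
    return selected

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 50))
y = X[:, [3, 17, 31]] @ np.array([1.0, -2.0, 0.5])
print(gs_omp(X, y, 3))                       # typically recovers {3, 17, 31}
```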

41 citations

Posted Content
TL;DR: In this article, near-tight lower bounds were shown for the sparsity required in several dimensionality-reducing linear maps, improving the best previously known lower bounds in the literature.
Abstract: We give near-tight lower bounds for the sparsity required in several dimensionality-reducing linear maps. First, consider the JL lemma, which states that for any set of n vectors in R^d there is a matrix A in R^{m x d} with m = O(eps^{-2} log n) such that mapping by A preserves pairwise Euclidean distances of these n vectors up to a 1 +/- eps factor. We show that there exists a set of n vectors such that any such matrix A with at most s non-zero entries per column must have s = Omega(eps^{-1} log n / log(1/eps)) as long as m = O(n/log(1/eps)). This bound improves the lower bound of Omega(min{eps^{-2}, eps^{-1} sqrt{log_m d}}) by [Dasgupta-Kumar-Sarlos, STOC 2010], which held only against the stronger property of distributional JL, and only against a certain restricted class of distributions; our lower bound is against the JL lemma itself, with no restrictions. Our lower bound matches the sparse Johnson-Lindenstrauss upper bound of [Kane-Nelson, SODA 2012] up to an O(log(1/eps)) factor. Next, we show that any m x n matrix with the k-restricted isometry property (RIP) with constant distortion must have at least Omega(k log(n/k)) non-zeroes per column if the number of rows is the optimal value m = O(k log(n/k)) and k < n/polylog(n). This improves the previous lower bound of Omega(min{k, n/m}) by [Chandar, 2010] and shows that for virtually all k it is impossible to have a sparse RIP matrix with an optimal number of rows. Lastly, we show that any oblivious distribution over subspace-embedding matrices with one non-zero per column that preserves all distances in a d-dimensional subspace up to a constant factor with constant probability must have at least Omega(d^2) rows. This matches one of the upper bounds in [Nelson-Nguyen, 2012] and shows the impossibility of combining the best of both constructions in that work, namely one non-zero per column and O(d) rows.
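To make the object under discussion concrete, here is a toy sparse JL map with s nonzero signs per column, with m and s chosen to follow the shape of the upper bounds the abstract compares against; the constants are arbitrary and not from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, eps = 200, 1000, 0.5
m = int(np.ceil(4 * eps**-2 * np.log(n)))      # target dimension, O(eps^-2 log n)
s = max(1, int(np.ceil(eps**-1 * np.log(n))))  # nonzeros per column, O(eps^-1 log n)

# Build a sparse sign matrix: each column has s random +/- 1/sqrt(s) entries.
A = np.zeros((m, d))
for col in range(d):
    rows = rng.choice(m, size=s, replace=False)
    A[rows, col] = rng.choice([-1.0, 1.0], size=s) / np.sqrt(s)

X = rng.standard_normal((n, d))
# Compare one pairwise distance before and after mapping.
i, j = 3, 7
orig = np.linalg.norm(X[i] - X[j])
mapped = np.linalg.norm(A @ (X[i] - X[j]))
print(orig, mapped)   # mapped is roughly within a (1 +/- eps) factor of orig
```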

41 citations

Journal ArticleDOI
07 Aug 2018 - Sensors
TL;DR: A novel single-image super-resolution method for infrared images is presented, combining compressive sensing theory and deep learning; it achieves better results on super-resolution tasks for infrared images than SRCNN and ScSR.
Abstract: Super-resolution methods alleviate the high cost and high difficulty of applying high-resolution infrared image sensors. In this paper we present a novel single-image super-resolution method for infrared images that combines compressive sensing theory and deep learning. In compressive sensing, low-resolution images can be regarded as compressed sampling results of the high-resolution ones; given sparsity, higher-resolution images can be reconstructed. However, because different images exhibit different levels of sparsity, the output contains noise and loses high-frequency information. A deep convolutional neural network provides a solution that relieves the noise and restores some of the missing high-frequency information. By cascading the two methods, we produce better results in super-resolution tasks for infrared images than SRCNN and ScSR. PSNR and SSIM values are used to quantify the performance. Applying our method to open datasets and actual infrared imaging experiments, we also find that better visual results are preserved.
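Read through the compressive-sensing lens the abstract describes, the low-resolution image plays the role of the measurement vector. A hedged sketch of the standard formulation, in our notation rather than necessarily the paper's (D a downsampling operator, H a blur, Psi a sparsifying basis):

```latex
y = DHx, \qquad x = \Psi\alpha \quad (\alpha \text{ sparse}),
\qquad
\hat{\alpha} = \arg\min_{\alpha}\ \|\alpha\|_1
\quad \text{s.t.} \quad y = DH\Psi\alpha,
\qquad \hat{x} = \Psi\hat{\alpha}.
```

The CNN stage then acts on the reconstructed estimate to suppress residual noise and restore high-frequency detail.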

41 citations

Journal ArticleDOI
TL;DR: The Wiener path integral technique for determining the stochastic response of diverse dynamical systems is enhanced by exploiting recent developments in the area of sparse representations; an appropriate basis for expanding the system joint response probability density function is utilized.

41 citations

Journal ArticleDOI
TL;DR: A novel algorithm based on compressive sensing is proposed for detecting forgeries in which the moving foreground has been removed from the background; it has higher detection accuracy and better robustness than previous algorithms.
Abstract: Video processing software is often used to delete moving objects and fill the forged regions with information from the surrounding areas. However, few algorithms have been suggested for detecting this form of tampering. In this paper, a novel algorithm based on compressive sensing is proposed for detecting forgeries in which the moving foreground has been removed from the background. Firstly, features of the differences between frames are obtained through K-SVD (k-singular value decomposition); then random projection is used to project the features into a lower-dimensional subspace, which is clustered by k-means; finally, the detection results are combined for output. The experimental results show that our algorithm has higher detection accuracy and better robustness than previous algorithms.
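The pipeline's shape (frame differences -> sparse codes -> random projection -> k-means) can be sketched as follows. MiniBatchDictionaryLearning stands in for K-SVD here, and every size is a toy value rather than anything from the paper.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
frames = rng.random((20, 32, 32))                  # toy grayscale video
diffs = np.diff(frames, axis=0).reshape(19, -1)    # differences between frames

# Sparse feature codes over a learned dictionary (the paper uses K-SVD).
dl = MiniBatchDictionaryLearning(n_components=16,
                                 transform_algorithm="omp",
                                 transform_n_nonzero_coefs=4,
                                 random_state=0)
codes = dl.fit(diffs).transform(diffs)

# Random projection to a lower-dimensional subspace, then k-means clustering.
P = rng.standard_normal((codes.shape[1], 8)) / np.sqrt(8)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(codes @ P)
print(labels)                                      # cluster ids per frame pair
```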

41 citations

References
Book
01 Jan 1983

34,729 citations

Book
D.L. Donoho
01 Jan 2004
TL;DR: It is possible to design n = O(N log(m)) nonadaptive measurements allowing reconstruction with accuracy comparable to that attainable with direct knowledge of the N most important coefficients, and a good approximation to those N important coefficients is extracted from the n measurements by solving a linear program (Basis Pursuit in signal processing).
Abstract: Suppose x is an unknown vector in R^m (a digital image or signal); we plan to measure n general linear functionals of x and then reconstruct. If x is known to be compressible by transform coding with a known transform, and we reconstruct via the nonlinear procedure defined here, the number of measurements n can be dramatically smaller than the size m. Thus, certain natural classes of images with m pixels need only n = O(m^{1/4} log^{5/2}(m)) nonadaptive nonpixel samples for faithful recovery, as opposed to the usual m pixel samples. More specifically, suppose x has a sparse representation in some orthonormal basis (e.g., wavelet, Fourier) or tight frame (e.g., curvelet, Gabor), so the coefficients belong to an l^p ball for 0 < p <= 1.
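One step of the reasoning is worth making explicit, under the standard reading of the l^p-ball condition (symbols are our notation, not verbatim from the paper): power-law coefficient decay controls the best-N-term approximation error, which is the benchmark the n = O(N log(m)) measurements are compared against.

```latex
|\theta|_{(k)} \le R\,k^{-1/p},\quad 0 < p \le 1
\quad\Longrightarrow\quad
\|\theta - \theta_N\|_2 \le C_p\,R\,N^{1/2 - 1/p},
```

so recovering a good approximation to the N most important coefficients from O(N log m) nonadaptive measurements matches this nonlinear approximation benchmark up to constants.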

18,609 citations

Journal ArticleDOI
TL;DR: Basis Pursuit (BP) is a principle for decomposing a signal into an "optimal" superposition of dictionary elements, where optimal means having the smallest l1 norm of coefficients among all such decompositions.
Abstract: The time-frequency and time-scale communities have recently developed a large number of overcomplete waveform dictionaries: stationary wavelets, wavelet packets, cosine packets, chirplets, and warplets, to name a few. Decomposition into overcomplete systems is not unique, and several methods for decomposition have been proposed, including the method of frames (MOF), matching pursuit (MP), and, for special dictionaries, the best orthogonal basis (BOB). Basis Pursuit (BP) is a principle for decomposing a signal into an "optimal" superposition of dictionary elements, where optimal means having the smallest l1 norm of coefficients among all such decompositions. We give examples exhibiting several advantages over MOF, MP, and BOB, including better sparsity and superresolution. BP has interesting relations to ideas in areas as diverse as ill-posed problems, abstract harmonic analysis, total variation denoising, and multiscale edge denoising. BP in highly overcomplete dictionaries leads to large-scale optimization problems. With signals of length 8192 and a wavelet packet dictionary, one gets an equivalent linear program of size 8192 by 212,992. Such problems can be attacked successfully only because of recent advances in linear programming by interior-point methods. We obtain reasonable success with a primal-dual logarithmic barrier method and conjugate-gradient solver.
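Since BP is an l1-minimization, it can be posed as the linear program the abstract mentions by splitting the coefficient vector into positive and negative parts, which doubles the number of variables (one reason the LP for large dictionaries grows to sizes like 8192 by 212,992). A minimal sketch with a toy Gaussian dictionary, using SciPy's LP solver rather than the interior-point code from the paper:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
m_dim, n_meas, k = 120, 60, 5          # ambient dim, measurements, sparsity
A = rng.standard_normal((n_meas, m_dim)) / np.sqrt(n_meas)
x = np.zeros(m_dim)
x[rng.choice(m_dim, size=k, replace=False)] = rng.standard_normal(k)
y = A @ x

# min 1^T (u + v)  s.t.  A(u - v) = y,  u, v >= 0   (so that x = u - v)
c = np.ones(2 * m_dim)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=[(0, None)] * (2 * m_dim))
x_hat = res.x[:m_dim] - res.x[m_dim:]
print(np.max(np.abs(x_hat - x)))       # near zero: exact recovery is typical here
```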

9,950 citations

Journal ArticleDOI
TL;DR: The authors introduce an algorithm, called matching pursuit, that decomposes any signal into a linear expansion of waveforms that are selected from a redundant dictionary of functions, chosen in order to best match the signal structures.
Abstract: The authors introduce an algorithm, called matching pursuit, that decomposes any signal into a linear expansion of waveforms selected from a redundant dictionary of functions. These waveforms are chosen in order to best match the signal structures. Matching pursuits are general procedures for computing adaptive signal representations. With a dictionary of Gabor functions, a matching pursuit defines an adaptive time-frequency transform. The authors derive a signal energy distribution in the time-frequency plane which does not include interference terms, unlike Wigner and Cohen class distributions. A matching pursuit isolates the signal structures that are coherent with respect to a given dictionary. An application to pattern extraction from noisy signals is described. The authors compare a matching pursuit decomposition with a signal expansion over an optimized wavepacket orthonormal basis, selected with the algorithm of Coifman and Wickerhauser (see IEEE Trans. Inform. Theory, vol. 38, Mar. 1992).
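Plain matching pursuit differs from the OMP sketch earlier in this page in that it never re-fits on the selected support; it just subtracts one atom's contribution per iteration. A minimal sketch over a generic unit-norm dictionary (the paper's Gabor dictionaries are far richer; everything here is illustrative):

```python
import numpy as np

def matching_pursuit(D, y, n_iter):
    """Greedily expand y over the columns of D (assumed unit-norm)."""
    r = y.astype(float).copy()
    coefs = np.zeros(D.shape[1])
    for _ in range(n_iter):
        corr = D.T @ r
        j = int(np.argmax(np.abs(corr)))
        coefs[j] += corr[j]            # accumulate the best single-atom update
        r = r - corr[j] * D[:, j]      # subtract it from the residual
    return coefs, r

rng = np.random.default_rng(0)
D = rng.standard_normal((64, 256))
D /= np.linalg.norm(D, axis=0)         # normalize atoms to unit norm
y = 2.0 * D[:, 5] - 1.0 * D[:, 100]
coefs, r = matching_pursuit(D, y, 20)
print(np.linalg.norm(r))               # residual energy decays with iterations
```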

9,380 citations

Journal ArticleDOI
TL;DR: A publicly available algorithm that requires only the same order of magnitude of computational effort as ordinary least squares applied to the full set of covariates is described.
Abstract: The purpose of model selection algorithms such as All Subsets, Forward Selection and Backward Elimination is to choose a linear model on the basis of the same set of data to which the model will be applied. Typically we have available a large collection of possible covariates from which we hope to select a parsimonious set for the efficient prediction of a response variable. Least Angle Regression (LARS), a new model selection algorithm, is a useful and less greedy version of traditional forward selection methods. Three main properties are derived: (1) A simple modification of the LARS algorithm implements the Lasso, an attractive version of ordinary least squares that constrains the sum of the absolute regression coefficients; the LARS modification calculates all possible Lasso estimates for a given problem, using an order of magnitude less computer time than previous methods. (2) A different LARS modification efficiently implements Forward Stagewise linear regression, another promising new model selection method; this connection explains the similar numerical results previously observed for the Lasso and Stagewise, and helps us understand the properties of both methods, which are seen as constrained versions of the simpler LARS algorithm. (3) A simple approximation for the degrees of freedom of a LARS estimate is available, from which we derive a Cp estimate of prediction error; this allows a principled choice among the range of possible LARS estimates. LARS and its variants are computationally efficient: the paper describes a publicly available algorithm that requires only the same order of magnitude of computational effort as ordinary least squares applied to the full set of covariates.
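The Lasso-via-LARS connection in property (1) is easy to see in practice. A quick synthetic example with scikit-learn's lars_path, whose method="lasso" option runs the Lasso-modified LARS described above and computes the entire piecewise-linear coefficient path in one pass (data and sizes are made up):

```python
import numpy as np
from sklearn.linear_model import lars_path

rng = np.random.default_rng(0)
n, p = 100, 20
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:3] = [4.0, -2.0, 1.5]            # sparse ground-truth coefficients
y = X @ beta + 0.1 * rng.standard_normal(n)

# One pass computes all Lasso estimates along the regularization path.
alphas, active, coefs = lars_path(X, y, method="lasso")
print(active)                          # order in which covariates enter the model
print(coefs[:, -1])                    # least-squares end of the path, close to beta
```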

7,828 citations