
Signal Recovery from Random Measurements Via Orthogonal Matching Pursuit: The Gaussian Case

TL;DR: In this paper, a greedy algorithm called Orthogonal Matching Pursuit (OMP) was proposed to recover a signal with m nonzero entries in dimension d given O(m ln d) random linear measurements of that signal.
Abstract: This report demonstrates theoretically and empirically that a greedy algorithm called Orthogonal Matching Pursuit (OMP) can reliably recover a signal with m nonzero entries in dimension d given O(m ln d) random linear measurements of that signal. This is a massive improvement over previous results, which require O(m^2) measurements. The new results for OMP are comparable with recent results for another approach called Basis Pursuit (BP). In some settings, the OMP algorithm is faster and easier to implement, so it is an attractive alternative to BP for signal recovery problems.
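
As a concrete illustration of the recovery setup, here is a minimal NumPy sketch of OMP. This is not the authors' code; the problem sizes and the constant multiplying m ln d are arbitrary choices for the demo.

```python
import numpy as np

def omp(Phi, y, m, tol=1e-8):
    """Orthogonal Matching Pursuit: recover an m-sparse x from y = Phi @ x.

    Each iteration greedily selects the column of Phi most correlated
    with the residual, then re-fits all selected coefficients by
    least squares (the "orthogonal" step).
    """
    residual = y.copy()
    support = []
    for _ in range(m):
        j = int(np.argmax(np.abs(Phi.T @ residual)))
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
        if np.linalg.norm(residual) < tol:
            break
    x_hat = np.zeros(Phi.shape[1])
    x_hat[support] = coef
    return x_hat

# Demo: m = 4 nonzeros in dimension d = 256,
# with a few times m ln d Gaussian measurements.
rng = np.random.default_rng(0)
d, m = 256, 4
n = int(4 * m * np.log(d))
Phi = rng.standard_normal((n, d)) / np.sqrt(n)
x = np.zeros(d)
x[rng.choice(d, size=m, replace=False)] = rng.standard_normal(m)
x_hat = omp(Phi, Phi @ x, m)
```

With this many Gaussian measurements, exact recovery of the support and coefficients is the overwhelmingly likely outcome, matching the paper's claim.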


Citations
Journal ArticleDOI
TL;DR: A novel, non-adaptive algorithm is proposed that takes advantage of the compressibility of the light transport signal in a transform domain to capture it with fewer acquisitions than standard approaches, leveraging recent work in the area of compressed sensing.
Abstract: The accurate measurement of the light transport characteristics of a complex scene is an important goal in computer graphics and has applications in relighting and dual photography. However, since the light transport data sets are typically very large, much of the previous research has focused on adaptive algorithms that capture them efficiently. In this work, we propose a novel, non-adaptive algorithm that takes advantage of the compressibility of the light transport signal in a transform domain to capture it with fewer acquisitions than standard approaches. To do this, we leverage recent work in the area of compressed sensing, where a signal is reconstructed from a few samples assuming that it is sparse in a transform domain. We demonstrate our approach by performing dual photography and relighting using a much smaller number of acquisitions than would normally be needed. Because our algorithm is not adaptive, it is also simpler to implement than many of the current approaches.

107 citations

Journal ArticleDOI
TL;DR: In this article, model reduction and compressive sensing strategies can be combined to great advantage for classifying, projecting, and reconstructing the relevant low-dimensional dynamics of complex nonlinear systems.
Abstract: We show that for complex nonlinear systems, model reduction and compressive sensing strategies can be combined to great advantage for classifying, projecting, and reconstructing the relevant low-dimensional dynamics. $\ell_2$-based dimensionality reduction methods such as the proper orthogonal decomposition are used to construct separate modal libraries and Galerkin models based on data from a number of bifurcation regimes. These libraries are then concatenated into an over-complete library, and $\ell_1$ sparse representation in this library from a few noisy measurements results in correct identification of the bifurcation regime. This technique provides an objective and general framework for classifying the bifurcation parameters, and therefore, the underlying dynamics and stability. After classifying the bifurcation regime, it is possible to employ a low-dimensional Galerkin model, only on modes relevant to that bifurcation value. These methods are demonstrated on the complex Ginzburg-Landau equation using sparse, noisy measurements. In particular, three noisy measurements are used to accurately classify and reconstruct the dynamics associated with six distinct bifurcation regimes; in contrast, classification based on least-squares fitting ($\ell_2$) fails consistently.
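
The classification step in this approach can be caricatured in a few lines of NumPy: concatenate per-regime libraries, sparse-code the measurement (here via plain ISTA rather than the authors' solver), and assign it to the regime whose block best explains it. All names and sizes below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def classify_by_sparse_residual(libraries, y, lam=0.05, n_iter=2000):
    """Concatenate per-regime libraries, sparse-code y with ISTA,
    then pick the regime whose own block best reconstructs y."""
    A = np.hstack(libraries)
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant for ISTA
    c = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = c - A.T @ (A @ c - y) / L      # gradient step
        c = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    residuals, start = [], 0
    for lib in libraries:
        k = lib.shape[1]
        residuals.append(np.linalg.norm(y - lib @ c[start:start + k]))
        start += k
    return int(np.argmin(residuals))

# Toy demo: two orthogonal "modal libraries" built from identity blocks.
libs = [np.eye(4)[:, :2], np.eye(4)[:, 2:]]
regime = classify_by_sparse_residual(libs, np.array([1.0, 0.5, 0.0, 0.0]))
```

A measurement generated from the first block's columns is assigned to regime 0 because the sparse coefficients concentrate in that block.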

107 citations

Journal ArticleDOI
TL;DR: This work proposes models that contain weighted sparsity constraints in two different frames that can be sparsely approximated in expansions of suitable frames as wavelets, curvelets, wave atoms and others.
Abstract: Compressed sensing is a new concept in signal processing. Assuming that a signal can be represented or approximated by only a few suitably chosen terms in a frame expansion, compressed sensing allows one to recover this signal from far fewer samples than the Shannon–Nyquist theory requires. Many images can be sparsely approximated in expansions of suitable frames such as wavelets, curvelets, wave atoms and others. Generally, wavelets represent point-like features well while curvelets represent line-like features well. For a suitable recovery of images, we propose models that contain weighted sparsity constraints in two different frames. Given the incomplete measurements f = Φu + ϵ with the measurement matrix Φ ∈ ℝK × N, K ≪ N, we consider a jointly sparsity-constrained optimization problem of the form ${{\rm argmin}}_{u} \{ \|\Lambda_{c} \Psi_c u \|_{1} + \|\Lambda_{w} \Psi_w u \|_{1} + \frac{1}{2} \|f - \Phi u \|_{2}^{2}\}$. Here Ψc and Ψw are the transform matrices corresponding to the two frames, and Λc and Λw are diagonal weight matrices.

107 citations

Journal ArticleDOI
TL;DR: It is possible to define a general threshold that separates signal components from spectral noise, even when some components are masked by noise; this threshold can be iteratively updated, yielding an iterative version of a simple, blind compressive sensing reconstruction algorithm.

106 citations

Journal ArticleDOI
TL;DR: A novel wideband DOA estimation algorithm is proposed to simultaneously infer the band occupation and estimate high-resolution DOAs by leveraging the sparsity in the angular domain.
Abstract: Direction of arrival (DOA) estimation methods based on joint sparsity are attractive because they achieve high resolution from a limited number of snapshots. However, the common assumption that signals from different directions share the spectral band is inappropriate when they occupy different bands. To flexibly deal with this situation, a novel wideband DOA estimation algorithm is proposed to simultaneously infer the band occupation and estimate high-resolution DOAs by leveraging the sparsity in the angular domain. The band occupation is exploited by exerting a Dirichlet process (DP) prior over the latent parametric space. Moreover, the proposed method is extended to deal with the off-grid problem by two schemes. One applies a linear approximation to the true dictionary and infers the hidden variables and parameters by the variational Bayesian expectation-maximization (VBEM) in an integrated manner. The other is the separated scheme where DOA is refined by a postsearching procedure based on the reconstructed results. Since the proposed schemes can automatically partition the sub-bands into clusters according to their underlying occupation, more accurate DOA estimation can be achieved by using the measurements within one cluster. Results of comprehensive simulations demonstrate that the proposed schemes outperform other reported ones.

105 citations

References

Book
D.L. Donoho
01 Jan 2004
TL;DR: It is possible to design n = O(N log(m)) nonadaptive measurements allowing reconstruction with accuracy comparable to that attainable with direct knowledge of the N most important coefficients, and a good approximation to those N important coefficients is extracted from the n measurements by solving a linear program — Basis Pursuit in signal processing.
Abstract: Suppose x is an unknown vector in ℝ^m (a digital image or signal); we plan to measure n general linear functionals of x and then reconstruct. If x is known to be compressible by transform coding with a known transform, and we reconstruct via the nonlinear procedure defined here, the number of measurements n can be dramatically smaller than the size m. Thus, certain natural classes of images with m pixels need only n = O(m^{1/4} log^{5/2}(m)) nonadaptive nonpixel samples for faithful recovery, as opposed to the usual m pixel samples. More specifically, suppose x has a sparse representation in some orthonormal basis (e.g., wavelet, Fourier) or tight frame (e.g., curvelet, Gabor), so the coefficients belong to an ℓ^p ball for 0 < p ≤ 1.

18,609 citations

Journal ArticleDOI
TL;DR: Basis Pursuit (BP) is a principle for decomposing a signal into an "optimal" superposition of dictionary elements, where optimal means having the smallest l1 norm of coefficients among all such decompositions.
Abstract: The time-frequency and time-scale communities have recently developed a large number of overcomplete waveform dictionaries --- stationary wavelets, wavelet packets, cosine packets, chirplets, and warplets, to name a few. Decomposition into overcomplete systems is not unique, and several methods for decomposition have been proposed, including the method of frames (MOF), matching pursuit (MP), and, for special dictionaries, the best orthogonal basis (BOB). Basis Pursuit (BP) is a principle for decomposing a signal into an "optimal" superposition of dictionary elements, where optimal means having the smallest l1 norm of coefficients among all such decompositions. We give examples exhibiting several advantages over MOF, MP, and BOB, including better sparsity and superresolution. BP has interesting relations to ideas in areas as diverse as ill-posed problems, abstract harmonic analysis, total variation denoising, and multiscale edge denoising. BP in highly overcomplete dictionaries leads to large-scale optimization problems. With signals of length 8192 and a wavelet packet dictionary, one gets an equivalent linear program of size 8192 by 212,992. Such problems can be attacked successfully only because of recent advances in linear programming by interior-point methods. We obtain reasonable success with a primal-dual logarithmic barrier method and conjugate-gradient solver.
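
The reduction of Basis Pursuit to a linear program can be sketched in a few lines with SciPy's HiGHS backend (an illustrative sketch with toy sizes, not the paper's interior-point solver): split x = u − v with u, v ≥ 0, so minimizing the l1 norm becomes minimizing the sum of u + v under linear equality constraints.

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, y):
    """Solve min ||x||_1 subject to A x = y as a linear program.

    With x = u - v and u, v >= 0, ||x||_1 equals sum(u + v)."""
    d = A.shape[1]
    res = linprog(c=np.ones(2 * d),
                  A_eq=np.hstack([A, -A]), b_eq=y,
                  bounds=(0, None), method="highs")
    u, v = res.x[:d], res.x[d:]
    return u - v

# Demo: a 3-sparse vector in dimension 60 from 30 Gaussian measurements.
rng = np.random.default_rng(1)
A = rng.standard_normal((30, 60))
x = np.zeros(60)
x[[5, 17, 42]] = [1.0, -2.0, 0.5]
x_hat = basis_pursuit(A, A @ x)
```

With this many random measurements relative to the sparsity, the LP's minimum-l1 solution coincides with the true sparse vector.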

9,950 citations

Journal ArticleDOI
TL;DR: The authors introduce an algorithm, called matching pursuit, that decomposes any signal into a linear expansion of waveforms that are selected from a redundant dictionary of functions, chosen in order to best match the signal structures.
Abstract: The authors introduce an algorithm, called matching pursuit, that decomposes any signal into a linear expansion of waveforms that are selected from a redundant dictionary of functions. These waveforms are chosen in order to best match the signal structures. Matching pursuits are general procedures to compute adaptive signal representations. With a dictionary of Gabor functions a matching pursuit defines an adaptive time-frequency transform. They derive a signal energy distribution in the time-frequency plane, which does not include interference terms, unlike Wigner and Cohen class distributions. A matching pursuit isolates the signal structures that are coherent with respect to a given dictionary. An application to pattern extraction from noisy signals is described. They compare a matching pursuit decomposition with a signal expansion over an optimized wavelet packet orthonormal basis, selected with the algorithm of Coifman and Wickerhauser (see IEEE Trans. Informat. Theory, vol. 38, Mar. 1992).
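
A bare-bones matching pursuit fits in a dozen lines (a sketch assuming a unit-norm dictionary; a real application would use Gabor or wavelet-packet atoms rather than the toy dictionary below):

```python
import numpy as np

def matching_pursuit(D, signal, n_iter=100, tol=1e-10):
    """Greedily expand `signal` over the columns (atoms) of D.

    D must have unit-norm columns; each step peels off the atom
    best correlated with the current residual."""
    residual = signal.astype(float).copy()
    coeffs = np.zeros(D.shape[1])
    for _ in range(n_iter):
        corr = D.T @ residual
        j = int(np.argmax(np.abs(corr)))
        coeffs[j] += corr[j]                # accumulate the projection
        residual = residual - corr[j] * D[:, j]
        if np.linalg.norm(residual) < tol:
            break
    return coeffs, residual

# Demo: with an orthonormal dictionary, MP recovers the expansion exactly.
rng = np.random.default_rng(2)
D, _ = np.linalg.qr(rng.standard_normal((8, 8)))
c_true = np.zeros(8)
c_true[[1, 4, 6]] = [2.0, -1.0, 0.5]
coeffs, residual = matching_pursuit(D, D @ c_true)
```

For a genuinely redundant (non-orthogonal) dictionary the same loop still converges, but typically only asymptotically; that non-uniqueness is exactly what motivates the MOF/MP/BOB/BP comparison in the abstract above.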

9,380 citations

Journal ArticleDOI
TL;DR: A publicly available algorithm that requires only the same order of magnitude of computational effort as ordinary least squares applied to the full set of covariates is described.
Abstract: The purpose of model selection algorithms such as All Subsets, Forward Selection and Backward Elimination is to choose a linear model on the basis of the same set of data to which the model will be applied. Typically we have available a large collection of possible covariates from which we hope to select a parsimonious set for the efficient prediction of a response variable. Least Angle Regression (LARS), a new model selection algorithm, is a useful and less greedy version of traditional forward selection methods. Three main properties are derived: (1) A simple modification of the LARS algorithm implements the Lasso, an attractive version of ordinary least squares that constrains the sum of the absolute regression coefficients; the LARS modification calculates all possible Lasso estimates for a given problem, using an order of magnitude less computer time than previous methods. (2) A different LARS modification efficiently implements Forward Stagewise linear regression, another promising new model selection method; this connection explains the similar numerical results previously observed for the Lasso and Stagewise, and helps us understand the properties of both methods, which are seen as constrained versions of the simpler LARS algorithm. (3) A simple approximation for the degrees of freedom of a LARS estimate is available, from which we derive a Cp estimate of prediction error; this allows a principled choice among the range of possible LARS estimates. LARS and its variants are computationally efficient: the paper describes a publicly available algorithm that requires only the same order of magnitude of computational effort as ordinary least squares applied to the full set of covariates.
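
The Lasso objective whose full solution path LARS traces can be solved for a single penalty value with a few lines of proximal gradient descent (ISTA). This is a sketch of the same objective, not of the LARS path algorithm itself:

```python
import numpy as np

def ista_lasso(X, y, lam, n_iter=500):
    """Minimize 0.5*||y - X w||^2 + lam*||w||_1 by proximal gradient."""
    L = np.linalg.norm(X, 2) ** 2          # Lipschitz constant of the gradient
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        z = w - X.T @ (X @ w - y) / L      # gradient step
        w = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    return w

# With an orthonormal design, the Lasso solution is exactly
# soft-thresholding of y: entries shrink toward zero by lam.
w = ista_lasso(np.eye(3), np.array([3.0, -1.0, 0.5]), lam=1.0)
```

The orthonormal-design case makes the constraint on the sum of absolute coefficients visible: the penalty zeroes out the two small entries and shrinks the large one by exactly lam.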

7,828 citations