
Signal Recovery from Random Measurements Via Orthogonal Matching Pursuit: The Gaussian Case

TL;DR: This paper shows that the greedy algorithm Orthogonal Matching Pursuit (OMP) can reliably recover a signal with m nonzero entries in dimension d given O(m ln d) random linear measurements of that signal.
Abstract: This report demonstrates theoretically and empirically that a greedy algorithm called Orthogonal Matching Pursuit (OMP) can reliably recover a signal with m nonzero entries in dimension d given O(m ln d) random linear measurements of that signal. This is a massive improvement over previous results, which require O(m^2) measurements. The new results for OMP are comparable with recent results for another approach called Basis Pursuit (BP). In some settings, the OMP algorithm is faster and easier to implement, so it is an attractive alternative to BP for signal recovery problems.
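
As a concrete illustration of the procedure, here is a minimal sketch of OMP in Python, assuming the abstract's setting of a Gaussian measurement matrix: at each of m steps, the algorithm picks the column most correlated with the current residual and then re-fits all selected coefficients by least squares (the "orthogonal" step). The names (omp, Phi) and sizes are illustrative placeholders, not taken from the paper.

```python
import numpy as np

def omp(Phi, y, m):
    """Greedy recovery of an m-sparse x from y = Phi @ x (illustrative sketch)."""
    n, d = Phi.shape
    residual = y.copy()
    support = []
    x_hat = np.zeros(d)
    for _ in range(m):
        # Select the column most correlated with the current residual.
        j = int(np.argmax(np.abs(Phi.T @ residual)))
        support.append(j)
        # Orthogonal step: re-solve least squares over all selected columns.
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat[support] = coef
    return x_hat

# Toy run: n = 80 Gaussian measurements of an 8-sparse signal in dimension 256.
rng = np.random.default_rng(0)
n, d, m = 80, 256, 8
Phi = rng.standard_normal((n, d)) / np.sqrt(n)
x = np.zeros(d)
x[rng.choice(d, size=m, replace=False)] = rng.standard_normal(m)
x_rec = omp(Phi, Phi @ x, m)
print(np.allclose(x, x_rec))  # typically True in this measurement regime
```

In this toy run, n = 80 comfortably exceeds m ln d ≈ 44, the scaling regime the paper analyzes.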


Citations
Journal ArticleDOI
TL;DR: An ATR approach with joint sparse representation over a locally adaptive dictionary is investigated; it achieves high recognition accuracy by combining more target information with adjusted inter-correlation information.

39 citations

02 Feb 2012
TL;DR: The authors apply the idea of a matching pursuit (see Mallat and Zhang 1993) to recover a solution stepwise: at every step, the next expansion function and the corresponding weight are selected to best match the data structure.
Abstract: To recover the density of the Earth, we invert Newton's gravitational potential. It is a well-known fact that this problem is ill-posed, so we need to develop a regularization method to solve it appropriately. We apply the idea of a matching pursuit (see Mallat and Zhang 1993) to recover a solution stepwise. At every step, the next expansion function and the corresponding weight are selected to best match the data structure. One big advantage of this method is that all kinds of different functions may be taken into account to improve the solution stepwise, and thus the sparsity of the solution may be controlled directly. Moreover, this approach generates models with a resolution that is adapted to the data density as well as to the detail density of the solution. In the numerical part of this work, we reconstruct the density distribution of the Earth, including its interior. For the area of South America, we perform an extensive case study to investigate the performance and behavior of the new algorithm. Furthermore, we study the mass transport in the area of the Amazon, where the proposed method shows great potential for further ecological studies, e.g., reconstructing the mass loss of Greenland or Antarctica. However, from gravitational data alone it is only possible to recover the harmonic part of the density. To obtain information about the anharmonic part as well, we need to be able to include other data types, e.g., seismic data in the form of normal mode anomalies. In this work, we perform such an inversion and present a new model of the density distribution of the whole Earth.

39 citations

Journal ArticleDOI
TL;DR: This paper introduces a new family of deformable models inspired by compressed sensing, a technique for accurate signal reconstruction that harnesses sparseness priors; the models employ sparsity constraints to handle outliers and gross errors.

39 citations

Journal ArticleDOI
TL;DR: The proposed method outperforms conventional colorization-based coding methods as well as the JPEG standard and is comparable with the JPEG2000 compression standard, both in terms of the compression rate and the quality of the reconstructed color image.
Abstract: In this paper, we formulate the colorization-based coding problem as an optimization problem, i.e., an L1 minimization problem. In colorization-based coding, the encoder chooses a few representative pixels (RP), for which the chrominance values and the positions are sent to the decoder, whereas in the decoder, the chrominance values for all the pixels are reconstructed by colorization methods. The main issue in colorization-based coding is how to select the RP so that the compression rate and the quality of the reconstructed color image are both good. By formulating colorization-based coding as an L1 minimization problem, it is guaranteed that, given the colorization matrix, the chosen set of RP is optimal in the sense that it minimizes the error between the original and the reconstructed color image. In other words, for a fixed error value and a given colorization matrix, the chosen set of RP is the smallest set possible. We also propose a method to construct the colorization matrix that colorizes the image in a multiscale manner. This, combined with the proposed RP extraction method, allows us to choose a very small set of RP. It is shown experimentally that the proposed method outperforms conventional colorization-based coding methods as well as the JPEG standard and is comparable with the JPEG2000 compression standard, both in terms of the compression rate and the quality of the reconstructed color image.
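
To make the selection step concrete, the following sketch solves the constrained form of the L1 problem with CVXPY: the nonzero entries of the minimizer u mark the representative pixels. The colorization matrix C, the chrominance vector y, and the error budget eps are synthetic placeholders; the paper's multiscale construction of C is not reproduced here.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n_pixels, n_candidates = 200, 500                    # illustrative sizes
C = rng.standard_normal((n_pixels, n_candidates))    # stand-in colorization matrix
y = rng.standard_normal(n_pixels)                    # stand-in chrominance channel

u = cp.Variable(n_candidates)
eps = 0.1 * np.linalg.norm(y)                        # assumed error budget
prob = cp.Problem(cp.Minimize(cp.norm(u, 1)),
                  [cp.norm(C @ u - y, 2) <= eps])
prob.solve()
rp = np.flatnonzero(np.abs(u.value) > 1e-6)          # indices of the chosen RP
print(len(rp), "representative pixels selected")
```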

39 citations

Journal ArticleDOI
TL;DR: Experimental validations demonstrate the effectiveness and superiority of the proposed DDL-SRC framework over state-of-the-art dictionary-learning-based SRC and deep convolutional neural network methods for intelligent planet bearing fault identification.

39 citations

References
Book
01 Jan 1983

34,729 citations

Book
D.L. Donoho
01 Jan 2004
TL;DR: It is possible to design n = O(N log(m)) nonadaptive measurements allowing reconstruction with accuracy comparable to that attainable with direct knowledge of the N most important coefficients; a good approximation to those N coefficients is extracted from the n measurements by solving a linear program, known in signal processing as Basis Pursuit.
Abstract: Suppose x is an unknown vector in R^m (a digital image or signal); we plan to measure n general linear functionals of x and then reconstruct. If x is known to be compressible by transform coding with a known transform, and we reconstruct via the nonlinear procedure defined here, the number of measurements n can be dramatically smaller than the size m. Thus, certain natural classes of images with m pixels need only n = O(m^{1/4} log^{5/2}(m)) nonadaptive nonpixel samples for faithful recovery, as opposed to the usual m pixel samples. More specifically, suppose x has a sparse representation in some orthonormal basis (e.g., wavelet, Fourier) or tight frame (e.g., curvelet, Gabor), so the coefficients belong to an lp ball for 0 < p <= 1.
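
For a sense of scale (treating the unspecified constant in the O(.) bound as 1 and reading log as the natural logarithm): an image with m = 10^6 pixels has log(m) ≈ 13.8, so m^{1/4} log^{5/2}(m) ≈ 31.6 × 13.8^{5/2} ≈ 31.6 × 710 ≈ 2.2 × 10^4 nonadaptive samples, versus the usual 10^6 pixel samples.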

18,609 citations

Journal ArticleDOI
TL;DR: Basis Pursuit (BP) is a principle for decomposing a signal into an "optimal" superposition of dictionary elements, where optimal means having the smallest l1 norm of coefficients among all such decompositions.
Abstract: The time-frequency and time-scale communities have recently developed a large number of overcomplete waveform dictionaries: stationary wavelets, wavelet packets, cosine packets, chirplets, and warplets, to name a few. Decomposition into overcomplete systems is not unique, and several methods for decomposition have been proposed, including the method of frames (MOF), matching pursuit (MP), and, for special dictionaries, the best orthogonal basis (BOB). Basis Pursuit (BP) is a principle for decomposing a signal into an "optimal" superposition of dictionary elements, where optimal means having the smallest l1 norm of coefficients among all such decompositions. We give examples exhibiting several advantages over MOF, MP, and BOB, including better sparsity and superresolution. BP has interesting relations to ideas in areas as diverse as ill-posed problems, abstract harmonic analysis, total variation denoising, and multiscale edge denoising. BP in highly overcomplete dictionaries leads to large-scale optimization problems. With signals of length 8192 and a wavelet packet dictionary, one gets an equivalent linear program of size 8192 by 212,992. Such problems can be attacked successfully only because of recent advances in linear programming by interior-point methods. We obtain reasonable success with a primal-dual logarithmic barrier method and conjugate-gradient solver.
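
The equivalent linear program mentioned in the abstract can be sketched in a few lines: splitting the coefficients as alpha = p - q with p, q >= 0 turns min ||alpha||_1 subject to D alpha = s into a standard-form LP. The random dictionary below is a stand-in, not one of the waveform dictionaries from the paper, and an off-the-shelf solver replaces the interior-point code described above.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n_samples, n_atoms = 64, 256                  # illustrative sizes
D = rng.standard_normal((n_samples, n_atoms)) # stand-in overcomplete dictionary
alpha_true = np.zeros(n_atoms)
alpha_true[rng.choice(n_atoms, size=5, replace=False)] = rng.standard_normal(5)
s = D @ alpha_true

c = np.ones(2 * n_atoms)                      # objective: sum(p + q) = ||alpha||_1
A_eq = np.hstack([D, -D])                     # constraint: D @ (p - q) = s
res = linprog(c, A_eq=A_eq, b_eq=s, bounds=(0, None), method="highs")
alpha_hat = res.x[:n_atoms] - res.x[n_atoms:]
print(np.max(np.abs(alpha_hat - alpha_true))) # small when BP recovers alpha
```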

9,950 citations

Journal ArticleDOI
TL;DR: The authors introduce an algorithm, called matching pursuit, that decomposes any signal into a linear expansion of waveforms that are selected from a redundant dictionary of functions, chosen in order to best match the signal structures.
Abstract: The authors introduce an algorithm, called matching pursuit, that decomposes any signal into a linear expansion of waveforms that are selected from a redundant dictionary of functions. These waveforms are chosen in order to best match the signal structures. Matching pursuits are general procedures to compute adaptive signal representations. With a dictionary of Gabor functions, a matching pursuit defines an adaptive time-frequency transform. The authors derive a signal energy distribution in the time-frequency plane which, unlike Wigner and Cohen class distributions, does not include interference terms. A matching pursuit isolates the signal structures that are coherent with respect to a given dictionary. An application to pattern extraction from noisy signals is described. The authors compare a matching pursuit decomposition with a signal expansion over an optimized wavepacket orthonormal basis selected with the algorithm of Coifman and Wickerhauser (see IEEE Trans. Inform. Theory, vol. 38, Mar. 1992).
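
A minimal version of the greedy loop, assuming a generic redundant dictionary D with unit-norm columns (the Gabor dictionaries of the paper are not constructed here). Note the contrast with OMP earlier on this page: plain matching pursuit never re-fits previously chosen coefficients; it only subtracts each matched atom's projection from the residual.

```python
import numpy as np

def matching_pursuit(D, f, n_iter):
    """Greedy expansion f ~ sum_k a_k * D[:, j_k]; returns atoms, weights, residual."""
    residual = f.astype(float)
    atoms, weights = [], []
    for _ in range(n_iter):
        corr = D.T @ residual
        j = int(np.argmax(np.abs(corr)))   # atom best matching the residual
        a = corr[j]                        # projection coefficient (unit-norm atom)
        residual = residual - a * D[:, j]  # subtract the matched structure
        atoms.append(j)
        weights.append(a)
    return atoms, weights, residual
```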

9,380 citations

Journal ArticleDOI
TL;DR: The paper describes a publicly available algorithm that requires only the same order of magnitude of computational effort as ordinary least squares applied to the full set of covariates.
Abstract: The purpose of model selection algorithms such as All Subsets, Forward Selection and Backward Elimination is to choose a linear model on the basis of the same set of data to which the model will be applied. Typically we have available a large collection of possible covariates from which we hope to select a parsimonious set for the efficient prediction of a response variable. Least Angle Regression (LARS), a new model selection algorithm, is a useful and less greedy version of traditional forward selection methods. Three main properties are derived: (1) A simple modification of the LARS algorithm implements the Lasso, an attractive version of ordinary least squares that constrains the sum of the absolute regression coefficients; the LARS modification calculates all possible Lasso estimates for a given problem, using an order of magnitude less computer time than previous methods. (2) A different LARS modification efficiently implements Forward Stagewise linear regression, another promising new model selection method; this connection explains the similar numerical results previously observed for the Lasso and Stagewise, and helps us understand the properties of both methods, which are seen as constrained versions of the simpler LARS algorithm. (3) A simple approximation for the degrees of freedom of a LARS estimate is available, from which we derive a Cp estimate of prediction error; this allows a principled choice among the range of possible LARS estimates. LARS and its variants are computationally efficient: the paper describes a publicly available algorithm that requires only the same order of magnitude of computational effort as ordinary least squares applied to the full set of covariates.
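
As a quick, hedged illustration of property (1), scikit-learn's lars_path traces the full Lasso path via the LARS modification; the design matrix and response below are synthetic placeholders.

```python
import numpy as np
from sklearn.linear_model import lars_path

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 20))             # synthetic covariates
beta = np.zeros(20)
beta[:3] = [2.0, -1.5, 1.0]                    # three true predictors
y = X @ beta + 0.1 * rng.standard_normal(100)

# method="lasso" applies the LARS modification that computes all Lasso estimates.
alphas, active, coefs = lars_path(X, y, method="lasso")
print(active)       # active covariates at the end of the path
print(coefs.shape)  # one coefficient vector per breakpoint of the path
```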

7,828 citations