
Signal Recovery from Random Measurements Via Orthogonal Matching Pursuit: The Gaussian Case

TL;DR: In this paper, a greedy algorithm called Orthogonal Matching Pursuit (OMP) was proposed to recover a signal with m nonzero entries in dimension d given O(m ln d) random linear measurements of that signal.
Abstract: This report demonstrates theoretically and empirically that a greedy algorithm called Orthogonal Matching Pursuit (OMP) can reliably recover a signal with m nonzero entries in dimension d given O(m ln d) random linear measurements of that signal. This is a massive improvement over previous results, which require O(m^2) measurements. The new results for OMP are comparable with recent results for another approach called Basis Pursuit (BP). In some settings, the OMP algorithm is faster and easier to implement, so it is an attractive alternative to BP for signal recovery problems.
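The greedy loop behind OMP is short enough to sketch directly. The following is a minimal NumPy illustration, not the authors' reference implementation: at each step it selects the dictionary column most correlated with the current residual, then re-solves a least-squares problem over all selected columns, the "orthogonal" step that distinguishes OMP from plain matching pursuit.

```python
import numpy as np

def omp(Phi, y, m, tol=1e-10):
    """Recover an m-sparse vector x from y = Phi @ x by greedy selection.

    Each iteration picks the column of Phi most correlated with the
    residual, then refits by least squares over the whole selected
    support before updating the residual.
    """
    n, d = Phi.shape
    residual = y.copy()
    support = []
    x_hat = np.zeros(d)
    for _ in range(m):
        # Column with the largest absolute correlation with the residual.
        j = int(np.argmax(np.abs(Phi.T @ residual)))
        if j not in support:
            support.append(j)
        # Orthogonal projection of y onto the span of the chosen columns.
        coeffs, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coeffs
        if np.linalg.norm(residual) < tol:
            break
    x_hat[support] = coeffs
    return x_hat
```

With a random Gaussian measurement matrix whose row count is a modest multiple of m ln d (the regime the abstract describes), this sketch typically recovers the sparse vector exactly.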


Citations
Journal ArticleDOI
TL;DR: This paper considers the problem of learning an unknown (overcomplete) basis from data that are generated from unknown and sparse linear combinations and employs a combination of the original Neural Gas algorithm and Oja's rule to learn a simple sparse code that represents each training sample by only one scaled basis vector.

59 citations

Proceedings ArticleDOI
10 Jun 2009
TL;DR: A new framework is proposed that allows the nodes to build a map of the parameter of interest with a small number of measurements and enables a novel non-invasive approach to mapping obstacles by using wireless channel measurements.
Abstract: In this paper we consider a mobile cooperative network that is tasked with building a map of the spatial variations of a parameter of interest, such as an obstacle map or an aerial map. We propose a new framework that allows the nodes to build a map of the parameter of interest with a small number of measurements. By using the recent results in the area of compressive sensing, we show how the nodes can exploit the sparse representation of the parameter of interest in the transform domain in order to build a map with minimal sensing. The proposed work allows the nodes to efficiently map the areas that are not sensed directly. To illustrate the performance of the proposed framework, we show how the nodes can build an aerial map or a map of obstacles with sparse sensing. We furthermore show how our proposed framework enables a novel non-invasive approach to mapping obstacles by using wireless channel measurements.

59 citations

Journal ArticleDOI
TL;DR: This paper assesses the extant literature that has aimed to incorporate CS in IoT applications, highlights emerging trends and identifies several avenues for future CS-based IoT research.
Abstract: The Internet of Things (IoT) holds great promise as a cutting-edge technology that enables numerous innovative services related to healthcare, manufacturing, smart cities and various human daily activities. In a typical IoT scenario, a large number of self-powered smart devices collect real-world data and communicate with each other and with the cloud through a wireless link in order to exchange information and to provide specific services. However, the high energy consumption associated with the wireless transmission limits the performance of these IoT self-powered devices in terms of computation abilities and battery lifetime. Thus, to optimize data transmission, different approaches have to be explored such as cooperative transmission, multi-hop network architectures and sophisticated compression techniques. For the latter, compressive sensing (CS) is a very attractive paradigm to be incorporated in the design of IoT platforms. CS is a novel signal acquisition and compression theory that exploits the sparsity behavior of most natural signals and IoT architectures to achieve power-efficient, real-time platforms that can support efficient IoT applications. This paper assesses the extant literature that has aimed to incorporate CS in IoT applications. Moreover, the paper highlights emerging trends and identifies several avenues for future CS-based IoT research.

59 citations

Journal ArticleDOI
TL;DR: A mathematical model for lapped block reconstructions in CASSI with O(K B^4 L) complexity per GPSR iteration, where B ≪ N is the block size, is presented, allowing the independent recovery of smaller overlapping blocks spanning the measurement set.
Abstract: The coded aperture snapshot spectral imager (CASSI) senses the spatial and spectral information of a scene using a set of K random projections of the scene onto focal plane array measurements. The reconstruction of the underlying three-dimensional (3D) scene is then obtained by ℓ1 norm-based inverse optimization algorithms such as the gradient projections for sparse reconstruction (GPSR). The computational complexity of the inverse problem in this case grows with order O(K N^4 L) per iteration, where N^2 and L are the spatial and spectral dimensions of the scene, respectively. In some applications the computational complexity becomes overwhelming since reconstructions can take up to several hours in desktop architectures. This paper presents a mathematical model for lapped block reconstructions in CASSI with O(K B^4 L) complexity per GPSR iteration, where B ≪ N is the block size. The approach takes advantage of the structure of the sensing matrix, thus allowing the independent recovery of smaller overlapping blocks spanning the measurement set. The reconstructed 3D lapped parallelepipeds are then merged to reduce the block-artifacts in the reconstructed scenes. The full data cube is reconstructed with complexity O(K (N^4/(N′)^2) L) per iteration, where N′ = ⌊N/B⌋. Simulations show the benefits of the new model as data cube reconstruction can be accelerated by an order of magnitude. Furthermore, the lapped block reconstructions lead to comparable or higher image reconstruction quality.

59 citations

01 Jul 2013
TL;DR: Publication attributed to the Information Processing Techniques Office of the U.S. Defense Advanced Research Projects Agency.
Abstract: United States. Defense Advanced Research Projects Agency. Information Processing Techniques Office

59 citations

References
Book
01 Jan 1983

34,729 citations

Book
D.L. Donoho1
01 Jan 2004
TL;DR: It is possible to design n = O(N log(m)) nonadaptive measurements allowing reconstruction with accuracy comparable to that attainable with direct knowledge of the N most important coefficients, and a good approximation to those N important coefficients is extracted from the n measurements by solving a linear program (Basis Pursuit in signal processing).
Abstract: Suppose x is an unknown vector in ℝ^m (a digital image or signal); we plan to measure n general linear functionals of x and then reconstruct. If x is known to be compressible by transform coding with a known transform, and we reconstruct via the nonlinear procedure defined here, the number of measurements n can be dramatically smaller than the size m. Thus, certain natural classes of images with m pixels need only n = O(m^(1/4) log^(5/2)(m)) nonadaptive nonpixel samples for faithful recovery, as opposed to the usual m pixel samples. More specifically, suppose x has a sparse representation in some orthonormal basis (e.g., wavelet, Fourier) or tight frame (e.g., curvelet, Gabor), so the coefficients belong to an ℓ_p ball for 0 < p ≤ 1.

18,609 citations

Journal ArticleDOI
TL;DR: Basis Pursuit (BP) is a principle for decomposing a signal into an "optimal" superposition of dictionary elements, where optimal means having the smallest ℓ1 norm of coefficients among all such decompositions.
Abstract: The time-frequency and time-scale communities have recently developed a large number of overcomplete waveform dictionaries --- stationary wavelets, wavelet packets, cosine packets, chirplets, and warplets, to name a few. Decomposition into overcomplete systems is not unique, and several methods for decomposition have been proposed, including the method of frames (MOF), matching pursuit (MP), and, for special dictionaries, the best orthogonal basis (BOB). Basis Pursuit (BP) is a principle for decomposing a signal into an "optimal" superposition of dictionary elements, where optimal means having the smallest ℓ1 norm of coefficients among all such decompositions. We give examples exhibiting several advantages over MOF, MP, and BOB, including better sparsity and superresolution. BP has interesting relations to ideas in areas as diverse as ill-posed problems, abstract harmonic analysis, total variation denoising, and multiscale edge denoising. BP in highly overcomplete dictionaries leads to large-scale optimization problems. With signals of length 8192 and a wavelet packet dictionary, one gets an equivalent linear program of size 8192 by 212,992. Such problems can be attacked successfully only because of recent advances in linear programming by interior-point methods. We obtain reasonable success with a primal-dual logarithmic barrier method and conjugate-gradient solver.

9,950 citations
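The ℓ1 decomposition principle above reduces to a linear program: write the coefficients as a difference of nonnegative parts, c = u − v, and minimize the total mass of the parts subject to the synthesis constraint. The sketch below uses SciPy's general-purpose LP solver for illustration; the paper itself relies on a specialized primal-dual interior-point method, which scales far better for the large problems it describes.

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(D, y):
    """Solve min ||c||_1 subject to D @ c = y.

    Standard LP reformulation: c = u - v with u, v >= 0, so
    ||c||_1 = sum(u + v) and the constraint becomes [D, -D] @ [u; v] = y.
    """
    n, d = D.shape
    cost = np.ones(2 * d)                 # minimize sum(u) + sum(v)
    A_eq = np.hstack([D, -D])             # synthesis constraint
    res = linprog(cost, A_eq=A_eq, b_eq=y,
                  bounds=(0, None), method="highs")
    u, v = res.x[:d], res.x[d:]
    return u - v
```

On a tiny overcomplete dictionary the ℓ1 objective picks the one-atom decomposition over an equivalent two-atom one, which is the sparsity behavior the abstract highlights.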

Journal ArticleDOI
TL;DR: The authors introduce an algorithm, called matching pursuit, that decomposes any signal into a linear expansion of waveforms that are selected from a redundant dictionary of functions, chosen in order to best match the signal structures.
Abstract: The authors introduce an algorithm, called matching pursuit, that decomposes any signal into a linear expansion of waveforms that are selected from a redundant dictionary of functions. These waveforms are chosen in order to best match the signal structures. Matching pursuits are general procedures to compute adaptive signal representations. With a dictionary of Gabor functions a matching pursuit defines an adaptive time-frequency transform. They derive a signal energy distribution in the time-frequency plane, which does not include interference terms, unlike Wigner and Cohen class distributions. A matching pursuit isolates the signal structures that are coherent with respect to a given dictionary. An application to pattern extraction from noisy signals is described. They compare a matching pursuit decomposition with a signal expansion over an optimized wavelet packet orthonormal basis, selected with the algorithm of Coifman and Wickerhauser (see IEEE Trans. Inform. Theory, vol. 38, Mar. 1992).

9,380 citations
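A minimal sketch of the matching pursuit iteration described above, assuming a dictionary D with unit-norm columns (atoms): each pass projects the residual onto every atom, keeps the largest projection, and subtracts that component from the residual.

```python
import numpy as np

def matching_pursuit(D, signal, n_iter):
    """Greedily expand `signal` over the columns (atoms) of D.

    Assumes the atoms have unit norm. Unlike OMP there is no
    least-squares refit, so the same atom may be selected more
    than once; its coefficient simply accumulates.
    """
    residual = signal.copy()
    coeffs = np.zeros(D.shape[1])
    for _ in range(n_iter):
        correlations = D.T @ residual
        j = int(np.argmax(np.abs(correlations)))   # best-matching atom
        coeffs[j] += correlations[j]               # accumulate its weight
        residual = residual - correlations[j] * D[:, j]
    return coeffs, residual
```

For a general redundant dictionary the residual only decays asymptotically; in the special case of an orthonormal dictionary each atom is handled exactly once, so a k-sparse signal is recovered in k iterations.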

Journal ArticleDOI
TL;DR: A publicly available algorithm that requires only the same order of magnitude of computational effort as ordinary least squares applied to the full set of covariates is described.
Abstract: The purpose of model selection algorithms such as All Subsets, Forward Selection and Backward Elimination is to choose a linear model on the basis of the same set of data to which the model will be applied. Typically we have available a large collection of possible covariates from which we hope to select a parsimonious set for the efficient prediction of a response variable. Least Angle Regression (LARS), a new model selection algorithm, is a useful and less greedy version of traditional forward selection methods. Three main properties are derived: (1) A simple modification of the LARS algorithm implements the Lasso, an attractive version of ordinary least squares that constrains the sum of the absolute regression coefficients; the LARS modification calculates all possible Lasso estimates for a given problem, using an order of magnitude less computer time than previous methods. (2) A different LARS modification efficiently implements Forward Stagewise linear regression, another promising new model selection method; this connection explains the similar numerical results previously observed for the Lasso and Stagewise, and helps us understand the properties of both methods, which are seen as constrained versions of the simpler LARS algorithm. (3) A simple approximation for the degrees of freedom of a LARS estimate is available, from which we derive a Cp estimate of prediction error; this allows a principled choice among the range of possible LARS estimates. LARS and its variants are computationally efficient: the paper describes a publicly available algorithm that requires only the same order of magnitude of computational effort as ordinary least squares applied to the full set of covariates.

7,828 citations
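The LARS path computation itself requires careful bookkeeping of equiangular directions, but the Lasso estimates it traces solve a penalized least-squares problem that a short cyclic coordinate-descent loop can also reach. The sketch below is that different, simpler algorithm (plain coordinate descent, not LARS), shown only to make the ℓ1-penalized objective from the abstract concrete; it assumes the columns of X are standardized.

```python
import numpy as np

def soft_threshold(z, t):
    """Shrink z toward zero by t; the proximal map of the l1 penalty."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_cd(X, y, lam, n_iter=200):
    """Cyclic coordinate descent for
        (1/2) * ||y - X @ b||_2^2 + lam * ||b||_1.

    Each sweep updates one coefficient at a time against the partial
    residual that excludes its own contribution.
    """
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            r_j = y - X @ b + X[:, j] * b[j]       # partial residual
            b[j] = soft_threshold(X[:, j] @ r_j, lam) / col_sq[j]
    return b
```

In the orthonormal-design special case each coefficient decouples and the solution is just the soft-thresholded least-squares fit, which makes the shrinkage behavior of the constrained estimator easy to see.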