
Signal Recovery from Random Measurements Via Orthogonal Matching Pursuit: The Gaussian Case

TL;DR: In this paper, a greedy algorithm called Orthogonal Matching Pursuit (OMP) was proposed to recover a signal with m nonzero entries in dimension d given O(m ln d) random linear measurements of that signal.
Abstract: This report demonstrates theoretically and empirically that a greedy algorithm called Orthogonal Matching Pursuit (OMP) can reliably recover a signal with m nonzero entries in dimension d given O(m ln d) random linear measurements of that signal. This is a massive improvement over previous results, which require O(m^2) measurements. The new results for OMP are comparable with recent results for another approach called Basis Pursuit (BP). In some settings, the OMP algorithm is faster and easier to implement, so it is an attractive alternative to BP for signal recovery problems.
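The greedy scheme the abstract describes can be sketched in a few lines of NumPy (a sketch under the paper's setup, not the authors' code; the function name and tolerance are ours):

```python
import numpy as np

def omp(Phi, y, m, tol=1e-10):
    """Recover an m-sparse signal from y = Phi @ x via Orthogonal Matching Pursuit.

    At each step, pick the column of Phi most correlated with the residual,
    then re-solve least squares over all columns chosen so far.
    """
    residual = y.copy()
    support = []
    coef = np.zeros(0)
    for _ in range(m):
        # Column whose inner product with the residual is largest in magnitude.
        k = int(np.argmax(np.abs(Phi.T @ residual)))
        support.append(k)
        # Orthogonal projection of y onto the span of the chosen columns.
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
        if np.linalg.norm(residual) < tol:
            break
    x_hat = np.zeros(Phi.shape[1])
    x_hat[support] = coef
    return x_hat
```

With d = 256 and m = 4, roughly n on the order of m ln d Gaussian measurements suffice for exact recovery with high probability, matching the scaling claimed above.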


Citations
Proceedings ArticleDOI
Rahul Garg1, Rohit Khandekar1
14 Jun 2009
TL;DR: The Matlab implementation of GraDeS (Gradient Descent with Sparsification) outperforms previously proposed algorithms like Subspace Pursuit, StOMP, OMP, and Lasso by an order of magnitude and uncovered cases where L1-regularized regression (Lasso) fails but GraDeS finds the correct solution.
Abstract: We present an algorithm for finding an s-sparse vector x that minimizes the square-error ∥y − Φx∥2 where Φ satisfies the restricted isometry property (RIP) with isometry constant δ2s < 1/3. The algorithm, GraDeS (Gradient Descent with Sparsification), iteratively updates x as x ← Hs(x + (1/γ)·ΦT(y − Φx)), where γ > 1 is a constant and Hs sets all but the s largest-magnitude coordinates to zero. GraDeS converges to the correct solution in a constant number of iterations.
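The update rule above can be sketched directly in NumPy (an illustrative sketch; the step size gamma and the iteration count are our assumptions, not values from the paper):

```python
import numpy as np

def hard_threshold(x, s):
    # H_s: keep the s largest-magnitude coordinates, zero the rest.
    out = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-s:]
    out[idx] = x[idx]
    return out

def grades(Phi, y, s, gamma=1.5, iters=200):
    # GraDeS iteration: a gradient step on ||y - Phi x||^2
    # followed by hard thresholding back to s coordinates.
    x = np.zeros(Phi.shape[1])
    for _ in range(iters):
        x = hard_threshold(x + (1.0 / gamma) * Phi.T @ (y - Phi @ x), s)
    return x
```

When Φ satisfies the RIP, the gradient step contracts the error on every sparse support, so the iterates converge geometrically to the true sparse vector.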

207 citations

Journal ArticleDOI
TL;DR: A K-sparse autoencoder was used for unsupervised feature learning; a manifold was learned from normal-dose images, and the distance between the reconstructed image and the manifold was minimized along with data fidelity during reconstruction.
Abstract: Dose reduction in computed tomography (CT) is essential for decreasing radiation risk in clinical applications. Iterative reconstruction algorithms are among the most promising ways to compensate for the increased noise due to the reduction of photon flux. Most iterative reconstruction algorithms incorporate manually designed prior functions of the reconstructed image to suppress noise while preserving image structures. These priors basically rely on smoothness constraints and cannot exploit more complex features of the image. The recent development of artificial neural networks and machine learning has enabled learning of more complex image features, which has the potential to improve reconstruction quality. In this letter, a K-sparse autoencoder was used for unsupervised feature learning. A manifold was learned from normal-dose images, and the distance between the reconstructed image and the manifold was minimized along with data fidelity during reconstruction. Experiments on the 2016 Low-Dose CT Grand Challenge were used to verify the method, and results demonstrated the noise reduction and detail preservation abilities of the proposed method.
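The k-sparse activation at the heart of such an autoencoder is simple to state: per sample, keep only the k largest-magnitude hidden activations. A minimal NumPy sketch (the helper name is ours, not from the letter):

```python
import numpy as np

def k_sparse_activation(h, k):
    """Per-row top-k sparsification used by a k-sparse autoencoder's
    hidden layer: keep each sample's k largest-magnitude activations,
    zero the rest."""
    out = np.zeros_like(h)
    idx = np.argsort(np.abs(h), axis=1)[:, -k:]   # indices of top-k per row
    rows = np.arange(h.shape[0])[:, None]
    out[rows, idx] = h[rows, idx]
    return out
```

For example, with k = 2 the hidden vector [3, -1, 0.5, 2] becomes [3, 0, 0, 2]; the surviving units form the learned sparse code.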

206 citations

Posted Content
TL;DR: This paper proposes a novel channel estimation protocol for the RIS aided multi-user multi-input multi-output (MIMO) system to estimate the cascade channel, which consists of the channels from the base station to the RIS and from the RIS to the user.
Abstract: Channel acquisition is one of the main challenges for the deployment of reconfigurable intelligent surface (RIS) aided communication systems. This is because an RIS has a large number of reflective elements, which are passive devices without active transmitting/receiving and signal processing abilities. In this paper, we study uplink channel estimation for the RIS aided multi-user multi-input multi-output (MIMO) system. Specifically, we propose a novel channel estimation protocol for the above system to estimate the cascaded channel, which consists of the channels from the base station (BS) to the RIS and from the RIS to the user. Further, we recognize that the cascaded channels are typically sparse, which allows us to formulate the channel estimation problem as a sparse channel matrix recovery problem using the compressive sensing (CS) technique, with which we can achieve robust channel estimation with limited training overhead. In particular, the sparse channel matrices of the cascaded channels of all users share a common row-column-block sparsity structure due to the common channel between the BS and the RIS. By exploiting this common sparsity, we propose a two-step multi-user joint channel estimator. In the first step, exploiting common column-block sparsity, we project the signal into the common column subspace to reduce complexity, quantization error, and noise level. In the second step, exploiting common row-block sparsity, we use all the projected signals to formulate a multi-user joint sparse matrix recovery problem, and we propose an iterative approach to solve this non-convex problem efficiently. Moreover, the optimization of the training reflection sequences at the RIS is studied to improve the estimation performance.

206 citations

Journal ArticleDOI
TL;DR: The efficacy of the proposed Gaussian mixture model (GMM)-based inversion method is demonstrated with videos reconstructed from simulated compressive video measurements, and from a real compressive video camera.
Abstract: A Gaussian mixture model (GMM)-based algorithm is proposed for video reconstruction from temporally compressed video measurements. The GMM is used to model spatio-temporal video patches, and the reconstruction can be efficiently computed based on analytic expressions. The GMM-based inversion method benefits from online adaptive learning and parallel computation. We demonstrate the efficacy of the proposed inversion method with videos reconstructed from simulated compressive video measurements, and from a real compressive video camera. We also use the GMM as a tool to investigate adaptive video compressive sensing, i.e., adaptive rate of temporal compression.

205 citations

Journal ArticleDOI
TL;DR: This survey paper provides a detailed review of the state of the art related to the application of CS in CR communications and provides a classification of the main usage areas based on the radio parameter to be acquired by a wideband CR.
Abstract: Compressive sensing (CS) has received much attention in several fields such as digital image processing, wireless channel estimation, radar imaging, and cognitive radio (CR) communications. Out of these areas, this survey paper focuses on the application of CS in CR communications. Due to the under-utilization of the allocated radio spectrum, spectrum occupancy is usually sparse in different domains such as time, frequency, and space. Such a sparse nature of the spectrum occupancy has inspired the application of CS in CR communications. In this regard, several researchers have already applied the CS theory in various settings considering the sparsity in different domains. In this direction, this survey paper provides a detailed review of the state of the art related to the application of CS in CR communications. Starting with the basic principles and the main features of CS, it provides a classification of the main usage areas based on the radio parameter to be acquired by a wideband CR. Subsequently, we review the existing CS-related works applied to different categories such as wideband sensing, signal parameter estimation and radio environment map (REM) construction, highlighting the main benefits and the related issues. Furthermore, we present a generalized framework for constructing the REM in compressive settings. Finally, we conclude this survey paper with some suggested open research challenges and future directions.

204 citations

References
Book
01 Jan 1983

34,729 citations

Book
D.L. Donoho1
01 Jan 2004
TL;DR: It is possible to design n = O(N log(m)) nonadaptive measurements allowing reconstruction with accuracy comparable to that attainable with direct knowledge of the N most important coefficients, and a good approximation to those N important coefficients is extracted from the n measurements by solving a linear program-Basis Pursuit in signal processing.
Abstract: Suppose x is an unknown vector in R^m (a digital image or signal); we plan to measure n general linear functionals of x and then reconstruct. If x is known to be compressible by transform coding with a known transform, and we reconstruct via the nonlinear procedure defined here, the number of measurements n can be dramatically smaller than the size m. Thus, certain natural classes of images with m pixels need only n = O(m^(1/4) log^(5/2)(m)) nonadaptive nonpixel samples for faithful recovery, as opposed to the usual m pixel samples. More specifically, suppose x has a sparse representation in some orthonormal basis (e.g., wavelet, Fourier) or tight frame (e.g., curvelet, Gabor), so the coefficients belong to an ℓp ball for 0 < p ≤ 1.

18,609 citations

Journal ArticleDOI
TL;DR: Basis Pursuit (BP) is a principle for decomposing a signal into an "optimal" superposition of dictionary elements, where optimal means having the smallest l1 norm of coefficients among all such decompositions.
Abstract: The time-frequency and time-scale communities have recently developed a large number of overcomplete waveform dictionaries --- stationary wavelets, wavelet packets, cosine packets, chirplets, and warplets, to name a few. Decomposition into overcomplete systems is not unique, and several methods for decomposition have been proposed, including the method of frames (MOF), matching pursuit (MP), and, for special dictionaries, the best orthogonal basis (BOB). Basis Pursuit (BP) is a principle for decomposing a signal into an "optimal" superposition of dictionary elements, where optimal means having the smallest l1 norm of coefficients among all such decompositions. We give examples exhibiting several advantages over MOF, MP, and BOB, including better sparsity and superresolution. BP has interesting relations to ideas in areas as diverse as ill-posed problems, abstract harmonic analysis, total variation denoising, and multiscale edge denoising. BP in highly overcomplete dictionaries leads to large-scale optimization problems. With signals of length 8192 and a wavelet packet dictionary, one gets an equivalent linear program of size 8192 by 212,992. Such problems can be attacked successfully only because of recent advances in linear programming by interior-point methods. We obtain reasonable success with a primal-dual logarithmic barrier method and conjugate-gradient solver.
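The ℓ1 principle above reduces to a linear program by splitting the coefficients into positive and negative parts, c = u − v with u, v ≥ 0 (a standard reduction). A minimal sketch assuming SciPy's linprog is available (the function name is ours):

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(Phi, y):
    """Solve  min ||c||_1  subject to  Phi @ c = y  as a linear program.

    Splitting c = u - v with u, v >= 0 makes the l1 objective linear:
    minimize 1^T (u + v) subject to [Phi, -Phi] [u; v] = y.
    """
    n, d = Phi.shape
    obj = np.ones(2 * d)                 # 1^T (u + v)
    A_eq = np.hstack([Phi, -Phi])        # Phi u - Phi v = y
    res = linprog(obj, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
    u, v = res.x[:d], res.x[d:]
    return u - v
```

This doubles the variable count, which is why the 8192-sample example above yields a program with 212,992 columns; modern LP solvers make such sizes tractable.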

9,950 citations

Journal ArticleDOI
TL;DR: The authors introduce an algorithm, called matching pursuit, that decomposes any signal into a linear expansion of waveforms that are selected from a redundant dictionary of functions, chosen in order to best match the signal structures.
Abstract: The authors introduce an algorithm, called matching pursuit, that decomposes any signal into a linear expansion of waveforms that are selected from a redundant dictionary of functions. These waveforms are chosen in order to best match the signal structures. Matching pursuits are general procedures to compute adaptive signal representations. With a dictionary of Gabor functions, a matching pursuit defines an adaptive time-frequency transform. They derive a signal energy distribution in the time-frequency plane, which does not include interference terms, unlike Wigner and Cohen class distributions. A matching pursuit isolates the signal structures that are coherent with respect to a given dictionary. An application to pattern extraction from noisy signals is described. They compare a matching pursuit decomposition with a signal expansion over an optimized wavepacket orthonormal basis, selected with the algorithm of Coifman and Wickerhauser (see IEEE Trans. Inform. Theory, vol. 38, Mar. 1992).
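A minimal sketch of the pursuit iteration for a unit-norm dictionary (names and stopping rule are ours); unlike the orthogonal variant in the main paper, earlier coefficients are never re-optimized:

```python
import numpy as np

def matching_pursuit(D, y, iters=100, tol=1e-8):
    """Plain matching pursuit: repeatedly subtract the projection of the
    residual onto the single best-matching dictionary atom.

    D's columns are assumed to have unit norm; coefficients accumulate,
    since the same atom may be selected more than once.
    """
    residual = y.copy()
    coefs = np.zeros(D.shape[1])
    for _ in range(iters):
        corr = D.T @ residual
        k = int(np.argmax(np.abs(corr)))   # best-matching atom
        coefs[k] += corr[k]
        residual = residual - corr[k] * D[:, k]
        if np.linalg.norm(residual) < tol:
            break
    return coefs
```

For an orthonormal dictionary the expansion terminates exactly; for redundant dictionaries the residual energy still decays, which is what makes the procedure a general adaptive representation.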

9,380 citations

Journal ArticleDOI
TL;DR: A publicly available algorithm that requires only the same order of magnitude of computational effort as ordinary least squares applied to the full set of covariates is described.
Abstract: The purpose of model selection algorithms such as All Subsets, Forward Selection and Backward Elimination is to choose a linear model on the basis of the same set of data to which the model will be applied. Typically we have available a large collection of possible covariates from which we hope to select a parsimonious set for the efficient prediction of a response variable. Least Angle Regression (LARS), a new model selection algorithm, is a useful and less greedy version of traditional forward selection methods. Three main properties are derived: (1) A simple modification of the LARS algorithm implements the Lasso, an attractive version of ordinary least squares that constrains the sum of the absolute regression coefficients; the LARS modification calculates all possible Lasso estimates for a given problem, using an order of magnitude less computer time than previous methods. (2) A different LARS modification efficiently implements Forward Stagewise linear regression, another promising new model selection method; this connection explains the similar numerical results previously observed for the Lasso and Stagewise, and helps us understand the properties of both methods, which are seen as constrained versions of the simpler LARS algorithm. (3) A simple approximation for the degrees of freedom of a LARS estimate is available, from which we derive a Cp estimate of prediction error; this allows a principled choice among the range of possible LARS estimates. LARS and its variants are computationally efficient: the paper describes a publicly available algorithm that requires only the same order of magnitude of computational effort as ordinary least squares applied to the full set of covariates.
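The Lasso problem that LARS traces can be illustrated with a simple coordinate-descent sketch (this is not the LARS path algorithm itself, which moves along equiangular directions; function names and the iteration count are ours):

```python
import numpy as np

def soft_threshold(z, t):
    # Shrink z toward zero by t; the scalar Lasso solution.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_cd(X, y, lam, iters=200):
    """Coordinate descent for  min_b 0.5*||y - X b||^2 + lam*||b||_1.

    Each pass solves the one-dimensional Lasso for coordinate j with
    the others held fixed, which is a soft-thresholding step.
    """
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(iters):
        for j in range(p):
            r_j = y - X @ b + X[:, j] * b[j]   # partial residual excluding j
            b[j] = soft_threshold(X[:, j] @ r_j, lam) / col_sq[j]
    return b
```

The ℓ1 penalty drives small coefficients exactly to zero, which is the constrained-shrinkage behavior the abstract attributes to the Lasso and Stagewise variants of LARS.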

7,828 citations