
Signal Recovery from Random Measurements Via Orthogonal Matching Pursuit: The Gaussian Case

TL;DR: In this paper, it is shown that a greedy algorithm called Orthogonal Matching Pursuit (OMP) can reliably recover a signal with m nonzero entries in dimension d given O(m ln d) random linear measurements of that signal.
Abstract: This report demonstrates theoretically and empirically that a greedy algorithm called Orthogonal Matching Pursuit (OMP) can reliably recover a signal with m nonzero entries in dimension d given O(m ln d) random linear measurements of that signal. This is a massive improvement over previous results, which require O(m^2) measurements. The new results for OMP are comparable with recent results for another approach called Basis Pursuit (BP). In some settings, the OMP algorithm is faster and easier to implement, so it is an attractive alternative to BP for signal recovery problems.
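Since the abstract's claim centers on a concrete greedy procedure, a minimal sketch may help fix ideas. The NumPy implementation below follows the textbook form of OMP in the abstract's regime (Gaussian measurement matrix, m-sparse signal in dimension d, n on the order of m ln d measurements); the function name `omp` and the demo constants are illustrative choices, not from the paper.

```python
import numpy as np

def omp(Phi, y, m):
    """Minimal OMP: at each step, pick the column of Phi most correlated
    with the residual, then re-fit all selected coefficients by least
    squares (the orthogonal projection step that distinguishes OMP from
    plain matching pursuit)."""
    residual, support = y.copy(), []
    for _ in range(m):
        support.append(int(np.argmax(np.abs(Phi.T @ residual))))
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[support] = coef
    return x_hat

# Demo in the abstract's regime: d-dimensional signal, m nonzeros,
# n on the order of m ln d Gaussian measurements (constant 4 is arbitrary).
rng = np.random.default_rng(0)
d, m = 1024, 10
n = int(4 * m * np.log(d))
Phi = rng.standard_normal((n, d)) / np.sqrt(n)
x = np.zeros(d)
x[rng.choice(d, m, replace=False)] = rng.standard_normal(m)
print(np.linalg.norm(omp(Phi, Phi @ x, m) - x))   # ~0 on successful recovery
```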


Citations
Journal ArticleDOI
TL;DR: This letter investigates secure transmission for an intelligent reflecting surface (IRS)-assisted millimeter-wave (mmWave) and terahertz (THz) system, in which a base station (BS) communicates with its destination via an IRS in the presence of a passive eavesdropper.
Abstract: This letter focuses on the secure transmission for an intelligent reflecting surface (IRS)-assisted millimeter-wave (mmWave) and terahertz (THz) system, in which a base station (BS) communicates with its destination via an IRS, in the presence of a passive eavesdropper. To maximize the system secrecy rate, the transmit beamforming at the BS and the reflecting matrix at the IRS are jointly optimized with transmit power and discrete phase-shift constraints. It is first proved that the beamforming design is independent of the phase-shift design under the rank-one channel assumption. The formulated non-convex problem is then converted into two subproblems, which are solved alternately. Specifically, the closed-form solution of the transmit beamforming at the BS is derived, and a semidefinite programming (SDP)-based method and an element-wise block coordinate descent (BCD)-based method are proposed to design the reflecting matrix. The complexity of the proposed methods is analyzed theoretically. Simulation results reveal that the proposed IRS-assisted secure strategy can significantly boost the secrecy-rate performance, regardless of the eavesdropper's location (near or blocking the confidential beam).

91 citations
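The element-wise BCD step described in the abstract above can be isolated in a small sketch. The toy below only maximizes the cascaded BS-IRS-destination channel gain over a discrete phase alphabet, ignoring the BS beamformer and the eavesdropper entirely; the names and the objective are assumptions for illustration, not the letter's actual secrecy-rate formulation.

```python
import numpy as np

def bcd_discrete_phases(h_r, g, bits=2, sweeps=10):
    """Element-wise BCD over discrete IRS phase shifts.

    Toy objective: maximize |sum_i h_r[i]*g[i]*exp(j*theta_i)| with each
    theta_i drawn from a 2**bits-point phase alphabet; one element is
    updated at a time with all others held fixed."""
    alphabet = np.exp(1j * 2 * np.pi * np.arange(2 ** bits) / 2 ** bits)
    cascade = h_r * g                     # per-element cascaded channel
    phases = np.ones(len(cascade), dtype=complex)
    for _ in range(sweeps):
        for i in range(len(cascade)):
            rest = cascade @ phases - cascade[i] * phases[i]
            # Best discrete phase for element i given the rest.
            phases[i] = alphabet[np.argmax(np.abs(rest + cascade[i] * alphabet))]
    return phases

rng = np.random.default_rng(1)
N = 64
h_r, g = rng.standard_normal((2, N)) + 1j * rng.standard_normal((2, N))
theta = bcd_discrete_phases(h_r, g)
print(abs((h_r * g) @ theta))             # cascaded gain after BCD
```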

Posted Content
TL;DR: The statistical analysis shows that the algorithms are guaranteed to achieve exact (perfect) clustering under certain conditions on the number of points and the affinity between subspaces, conditions that are weaker than those considered in the standard statistical literature.
Abstract: We consider the problem of subspace clustering: given points that lie on or near the union of many low-dimensional linear subspaces, recover the subspaces. To this end, one first identifies sets of points close to the same subspace and uses the sets to estimate the subspaces. As the geometric structure of the clusters (linear subspaces) forbids proper performance of general distance based approaches such as K-means, many model-specific methods have been proposed. In this paper, we provide new simple and efficient algorithms for this problem. Our statistical analysis shows that the algorithms are guaranteed exact (perfect) clustering performance under certain conditions on the number of points and the affinity between subspaces. These conditions are weaker than those considered in the standard statistical literature. Experimental results on synthetic data generated from the standard unions of subspaces model demonstrate our theory. We also show that our algorithm performs competitively against state-of-the-art algorithms on real-world applications such as motion segmentation and face clustering, with much simpler implementation and lower computational cost.

91 citations
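A minimal sketch of this family of algorithms, assuming a thresholded-correlation affinity followed by off-the-shelf spectral clustering; the paper's exact algorithm, thresholds, and guarantees differ, and the helper `subspace_cluster` is an illustrative name.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def subspace_cluster(X, n_clusters, q=5):
    """Cluster columns of X lying near a union of low-dim subspaces.

    Points on the same subspace tend to have large |inner products|,
    so keep each point's q strongest correlations as graph edges and
    spectrally cluster the resulting affinity graph."""
    Xn = X / np.linalg.norm(X, axis=0)           # unit-norm columns
    C = np.abs(Xn.T @ Xn)
    np.fill_diagonal(C, 0.0)
    thresh = np.sort(C, axis=1)[:, -q][:, None]  # q-th largest per row
    A = np.where(C >= thresh, C, 0.0)            # sparse affinity graph
    A = np.maximum(A, A.T)                       # symmetrize
    return SpectralClustering(n_clusters, affinity="precomputed",
                              random_state=0).fit_predict(A)

# Demo: 3 random 4-dimensional subspaces in R^50, 40 points each.
rng = np.random.default_rng(2)
blocks = [rng.standard_normal((50, 4)) @ rng.standard_normal((4, 40))
          for _ in range(3)]
print(subspace_cluster(np.hstack(blocks), n_clusters=3))
```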

Journal ArticleDOI
TL;DR: Simulation results on recovery of a known dictionary and dictionary learning for natural image patches show that the new problem considerably improves performance with a little additional computational load.
Abstract: A dictionary learning problem is a matrix factorization in which the goal is to factorize a training data matrix, Y, as the product of a dictionary, D, and a sparse coefficient matrix, X, as follows, Y ≃ DX. Current dictionary learning algorithms minimize the representation error subject to a constraint on D (usually having unit column-norms) and sparseness of X. The resulting problem is not convex with respect to the pair (D,X). In this letter, we derive a first order series expansion formula for the factorization, DX. The resulting objective function is jointly convex with respect to D and X. We simply solve the resulting problem using alternating minimization and apply some of the previously suggested algorithms onto our new problem. Simulation results on recovery of a known dictionary and dictionary learning for natural image patches show that our new problem considerably improves performance with a little additional computational load.

91 citations
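For context, a plain alternating-minimization baseline of the kind the letter builds on is sketched below, with an ISTA-style sparse-coding step and a least-squares dictionary update; this is the generic (non-convex) scheme, not the letter's convexified first-order-expansion objective, and all names are illustrative.

```python
import numpy as np

def dict_learn(Y, n_atoms, lam=0.1, iters=30):
    """Alternating minimization for Y ~ D X.

    X-step: a few ISTA iterations (gradient step + soft threshold).
    D-step: least squares, then renormalize columns to unit norm."""
    rng = np.random.default_rng(0)
    D = rng.standard_normal((Y.shape[0], n_atoms))
    D /= np.linalg.norm(D, axis=0)
    X = np.zeros((n_atoms, Y.shape[1]))
    for _ in range(iters):
        step = 1.0 / np.linalg.norm(D.T @ D, 2)   # Lipschitz step size
        for _ in range(10):                        # sparse coding (ISTA)
            Z = X - step * (D.T @ (D @ X - Y))
            X = np.sign(Z) * np.maximum(np.abs(Z) - step * lam, 0.0)
        D = Y @ np.linalg.pinv(X)                  # dictionary update
        D /= np.linalg.norm(D, axis=0) + 1e-12     # unit column norms
    return D, X

Y = np.random.default_rng(1).standard_normal((20, 200))
D, X = dict_learn(Y, n_atoms=40)
print(np.linalg.norm(Y - D @ X) / np.linalg.norm(Y))  # relative fit error
```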

Journal ArticleDOI
TL;DR: A fast matching pursuit method using a Bayesian approach is introduced for sparse signal recovery and utilizes a priori statistics of additive noise and the sparsity rate of the signal, which are shown to be easily estimated from data if not available.
Abstract: A fast matching pursuit method using a Bayesian approach is introduced for sparse signal recovery. This method performs Bayesian estimates of sparse signals even when the signal prior is non-Gaussian or unknown. It is agnostic on signal statistics and utilizes a priori statistics of additive noise and the sparsity rate of the signal, which are shown to be easily estimated from data if not available. The method utilizes a greedy approach and order-recursive updates of its metrics to find the most dominant sparse supports to determine the approximate minimum mean-square error (MMSE) estimate of the sparse signal. Simulation results demonstrate the power and robustness of our proposed estimator.

90 citations
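A much-simplified sketch of the idea, assuming Gaussian nonzeros with variance sig_x^2 and known noise variance sig_n^2: grow a support greedily, then compute the closed-form MMSE estimate on it. The paper's method is order-recursive and averages over several dominant supports, both of which this sketch omits.

```python
import numpy as np

def greedy_bayes_mmse(A, y, k, sig_x=1.0, sig_n=0.1):
    """Greedy Bayesian sketch: grow a support by residual correlation,
    then return the MMSE estimate of x on that support under a Gaussian
    prior on the nonzeros and Gaussian noise."""
    support, residual = [], y.copy()
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        As = A[:, support]
        # Closed-form MMSE on the support for the jointly Gaussian model:
        # x_S = sig_x^2 A_S^T (sig_x^2 A_S A_S^T + sig_n^2 I)^{-1} y
        G = sig_x**2 * As.T @ np.linalg.solve(
            sig_x**2 * As @ As.T + sig_n**2 * np.eye(len(y)), y)
        residual = y - As @ G
    x_hat = np.zeros(A.shape[1])
    x_hat[support] = G
    return x_hat

rng = np.random.default_rng(3)
A = rng.standard_normal((50, 200)) / np.sqrt(50)
x = np.zeros(200); x[rng.choice(200, 5, replace=False)] = rng.standard_normal(5)
y = A @ x + 0.05 * rng.standard_normal(50)
print(np.linalg.norm(greedy_bayes_mmse(A, y, 5, sig_n=0.05) - x))
```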

Journal ArticleDOI
TL;DR: This paper addresses joint activity and data detection for frame-based multi-user uplink scenarios in which users are (in)active for the duration of a frame, namely the frame-wise joint sparsity model.
Abstract: Grant-free non-orthogonal multiple access has recently gained significant attention for reducing signaling overhead in machine-type communications. In this context, compressed sensing (CS) has been identified as a good candidate for joint activity and data detection due to the inherently sparse nature of user activity. This paper addresses joint activity and data detection for frame-based multi-user uplink scenarios where users are (in)active for the duration of a frame, namely the frame-wise joint sparsity model. First, we formulate the block CS (BCS)-based sparse signal recovery framework, fully extracting and exploiting the underlying frame-wise joint sparsity of the user activity. Then, to make explicit use of the block sparsity inherent in the equivalent block-sparse model, and considering the user sparsity level to be unknown for multiuser detection, two enhanced BCS-based greedy algorithms are developed: threshold-aided block sparsity adaptive subspace pursuit (TA-BSASP) and cross-validation-aided block sparsity adaptive subspace pursuit (CVA-BSASP). Specifically, the proposed TA-BSASP algorithm can approach the oracle least squares (LS) performance by setting the threshold based on the additive white Gaussian noise floor. Moreover, the proposed CVA-BSASP algorithm is a highly practical design that adopts cross-validation, a standard statistical and machine-learning technique, to determine the stopping condition of the algorithm without requiring prior knowledge of the sparsity level. Furthermore, the convergence and the computational complexity of the proposed algorithms are derived, and their superior performance is demonstrated by numerical experiments.

90 citations
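To illustrate how block structure enters the greedy step, here is a simplified block-greedy recovery (block-OMP flavour) under the frame-wise joint sparsity model: whole blocks of columns (one active user's frame) are selected at once. The paper's TA-BSASP and CVA-BSASP add subspace-pursuit refinement and threshold/cross-validation stopping rules that this sketch omits.

```python
import numpy as np

def block_greedy(A, y, block_len, max_blocks, tol=1e-6):
    """Select whole blocks by the energy of their correlation with the
    residual, then least-squares re-fit on all selected blocks."""
    n_blocks = A.shape[1] // block_len
    chosen, residual = [], y.copy()
    cols, coef = np.array([], dtype=int), np.array([])
    for _ in range(max_blocks):
        scores = np.array([
            np.linalg.norm(A[:, b*block_len:(b+1)*block_len].T @ residual)
            for b in range(n_blocks)])
        scores[chosen] = -1.0                    # never re-select a block
        chosen.append(int(np.argmax(scores)))
        cols = np.concatenate([np.arange(b*block_len, (b+1)*block_len)
                               for b in chosen])
        coef, *_ = np.linalg.lstsq(A[:, cols], y, rcond=None)
        residual = y - A[:, cols] @ coef
        if np.linalg.norm(residual) < tol:
            break
    x_hat = np.zeros(A.shape[1])
    x_hat[cols] = coef
    return x_hat, sorted(chosen)

rng = np.random.default_rng(7)
n, L, B = 60, 4, 30                 # 30 blocks (users) of length 4
A = rng.standard_normal((n, L * B))
x = np.zeros(L * B)
for b in rng.choice(B, 3, replace=False):        # 3 active users
    x[b*L:(b+1)*L] = rng.standard_normal(L)
x_hat, active = block_greedy(A, A @ x, L, max_blocks=3)
print(active, np.linalg.norm(x_hat - x))
```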

References
Book
01 Jan 1983

34,729 citations

Book
D. L. Donoho
01 Jan 2004
TL;DR: It is possible to design n = O(N log(m)) nonadaptive measurements allowing reconstruction with accuracy comparable to that attainable with direct knowledge of the N most important coefficients, and a good approximation to those N important coefficients is extracted from the n measurements by solving a linear program (Basis Pursuit in signal processing).
Abstract: Suppose x is an unknown vector in ℝ^m (a digital image or signal); we plan to measure n general linear functionals of x and then reconstruct. If x is known to be compressible by transform coding with a known transform, and we reconstruct via the nonlinear procedure defined here, the number of measurements n can be dramatically smaller than the size m. Thus, certain natural classes of images with m pixels need only n = O(m^(1/4) log^(5/2)(m)) nonadaptive nonpixel samples for faithful recovery, as opposed to the usual m pixel samples. More specifically, suppose x has a sparse representation in some orthonormal basis (e.g., wavelet, Fourier) or tight frame (e.g., curvelet, Gabor), so the coefficients belong to an ℓ_p ball for 0 < p ≤ 1.

18,609 citations
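The benchmark in this abstract, the best-N-term approximation error, is easy to compute directly. Below is a short sketch assuming a power-law (ℓ_p-ball) coefficient decay; actual reconstruction from the n measurements would use the Basis Pursuit linear program sketched after the next reference.

```python
import numpy as np

# Best-N-term benchmark: for a compressible x whose sorted coefficient
# magnitudes decay like k^(-1/p), n = O(N log m) nonadaptive measurements
# aim to match the error of keeping only the N largest coefficients.
rng = np.random.default_rng(3)
m, N, p = 10_000, 100, 0.5
mag = np.arange(1, m + 1) ** (-1.0 / p)          # decaying magnitudes
x = rng.choice([-1.0, 1.0], m) * mag
best_N_err = np.linalg.norm(mag[N:])             # l2 tail beyond N largest
n = int(N * np.log(m))                           # measurement budget
Phi = rng.standard_normal((n, m)) / np.sqrt(n)   # nonadaptive Gaussian functionals
y = Phi @ x                                      # the n measurements
print(f"best-{N}-term error {best_N_err:.3e} with n = {n} measurements")
```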

Journal ArticleDOI
TL;DR: Basis Pursuit (BP) is a principle for decomposing a signal into an "optimal" superposition of dictionary elements, where optimal means having the smallest l1 norm of coefficients among all such decompositions.
Abstract: The time-frequency and time-scale communities have recently developed a large number of overcomplete waveform dictionaries --- stationary wavelets, wavelet packets, cosine packets, chirplets, and warplets, to name a few. Decomposition into overcomplete systems is not unique, and several methods for decomposition have been proposed, including the method of frames (MOF), matching pursuit (MP), and, for special dictionaries, the best orthogonal basis (BOB). Basis Pursuit (BP) is a principle for decomposing a signal into an "optimal" superposition of dictionary elements, where optimal means having the smallest l1 norm of coefficients among all such decompositions. We give examples exhibiting several advantages over MOF, MP, and BOB, including better sparsity and superresolution. BP has interesting relations to ideas in areas as diverse as ill-posed problems, abstract harmonic analysis, total variation denoising, and multiscale edge denoising. BP in highly overcomplete dictionaries leads to large-scale optimization problems. With signals of length 8192 and a wavelet packet dictionary, one gets an equivalent linear program of size 8192 by 212,992. Such problems can be attacked successfully only because of recent advances in linear programming by interior-point methods. We obtain reasonable success with a primal-dual logarithmic barrier method and conjugate-gradient solver.

9,950 citations
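Basis Pursuit reduces to a linear program via the standard split c = u - v with u, v ≥ 0, which is small enough to sketch. The paper relies on primal-dual interior-point methods for very large instances; the sketch below simply hands the LP to SciPy's HiGHS solver, and all names are illustrative.

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(Phi, y):
    """Basis Pursuit as an LP: min ||c||_1  s.t.  Phi c = y.

    Splitting c = u - v with u, v >= 0 makes the l1 objective linear:
    min 1^T (u + v)  s.t.  [Phi, -Phi] [u; v] = y."""
    n, d = Phi.shape
    res = linprog(c=np.ones(2 * d),
                  A_eq=np.hstack([Phi, -Phi]), b_eq=y,
                  bounds=[(0, None)] * (2 * d), method="highs")
    u, v = res.x[:d], res.x[d:]
    return u - v

# Recover a sparse coefficient vector from underdetermined measurements.
rng = np.random.default_rng(4)
n, d = 40, 120
Phi = rng.standard_normal((n, d))
c_true = np.zeros(d)
c_true[rng.choice(d, 5, replace=False)] = rng.standard_normal(5)
print(np.linalg.norm(basis_pursuit(Phi, Phi @ c_true) - c_true))
```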

Journal ArticleDOI
TL;DR: The authors introduce an algorithm, called matching pursuit, that decomposes any signal into a linear expansion of waveforms that are selected from a redundant dictionary of functions, chosen in order to best match the signal structures.
Abstract: The authors introduce an algorithm, called matching pursuit, that decomposes any signal into a linear expansion of waveforms that are selected from a redundant dictionary of functions. These waveforms are chosen in order to best match the signal structures. Matching pursuits are general procedures to compute adaptive signal representations. With a dictionary of Gabor functions, a matching pursuit defines an adaptive time-frequency transform. They derive a signal energy distribution in the time-frequency plane, which does not include interference terms, unlike Wigner and Cohen class distributions. A matching pursuit isolates the signal structures that are coherent with respect to a given dictionary. An application to pattern extraction from noisy signals is described. They compare a matching pursuit decomposition with a signal expansion over an optimized wavepacket orthonormal basis, selected with the algorithm of Coifman and Wickerhauser (see IEEE Trans. Inform. Theory, vol. 38, Mar. 1992).

9,380 citations
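Plain matching pursuit, in contrast to the OMP variant sketched earlier, never re-fits previously selected coefficients. A minimal version over a generic unit-norm dictionary (the paper's Gabor time-frequency dictionary is not reproduced here):

```python
import numpy as np

def matching_pursuit(D, s, n_iter=50, tol=1e-6):
    """Plain MP over a redundant dictionary D with unit-norm columns:
    repeatedly subtract the best-matching atom's projection from the
    residual, accumulating one coefficient per selected atom."""
    residual = s.copy()
    coefs = np.zeros(D.shape[1])
    for _ in range(n_iter):
        corr = D.T @ residual
        k = int(np.argmax(np.abs(corr)))
        coefs[k] += corr[k]               # atom is unit-norm: <r, d_k> d_k
        residual -= corr[k] * D[:, k]
        if np.linalg.norm(residual) < tol:
            break
    return coefs, residual

rng = np.random.default_rng(6)
D = rng.standard_normal((64, 256))
D /= np.linalg.norm(D, axis=0)
s = 2.0 * D[:, 7] - 0.5 * D[:, 100]       # two-atom test signal
coefs, r = matching_pursuit(D, s)
print(np.linalg.norm(r))                   # residual energy after MP
```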

Journal ArticleDOI
TL;DR: A publicly available algorithm that requires only the same order of magnitude of computational effort as ordinary least squares applied to the full set of covariates is described.
Abstract: The purpose of model selection algorithms such as All Subsets, Forward Selection and Backward Elimination is to choose a linear model on the basis of the same set of data to which the model will be applied. Typically we have available a large collection of possible covariates from which we hope to select a parsimonious set for the efficient prediction of a response variable. Least Angle Regression (LARS), a new model selection algorithm, is a useful and less greedy version of traditional forward selection methods. Three main properties are derived: (1) A simple modification of the LARS algorithm implements the Lasso, an attractive version of ordinary least squares that constrains the sum of the absolute regression coefficients; the LARS modification calculates all possible Lasso estimates for a given problem, using an order of magnitude less computer time than previous methods. (2) A different LARS modification efficiently implements Forward Stagewise linear regression, another promising new model selection method; this connection explains the similar numerical results previously observed for the Lasso and Stagewise, and helps us understand the properties of both methods, which are seen as constrained versions of the simpler LARS algorithm. (3) A simple approximation for the degrees of freedom of a LARS estimate is available, from which we derive a Cp estimate of prediction error; this allows a principled choice among the range of possible LARS estimates. LARS and its variants are computationally efficient: the paper describes a publicly available algorithm that requires only the same order of magnitude of computational effort as ordinary least squares applied to the full set of covariates.

7,828 citations
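Since a public implementation is the point of the abstract's last claim, a short usage sketch: scikit-learn's lars_path returns the full piecewise-linear coefficient path, and method="lasso" applies the modification (property 1 above) that makes the path trace out all Lasso solutions. The data here are synthetic.

```python
import numpy as np
from sklearn.linear_model import lars_path

# Synthetic regression with 3 informative covariates out of 20.
rng = np.random.default_rng(5)
X = rng.standard_normal((100, 20))
beta = np.zeros(20)
beta[:3] = [3.0, -2.0, 1.5]
y = X @ beta + 0.1 * rng.standard_normal(100)

# alphas: breakpoints of the path; active: covariates in entry order;
# coef_path: coefficients at each breakpoint (n_features x n_alphas).
alphas, active, coef_path = lars_path(X, y, method="lasso")
print("entry order of covariates:", active[:5])
print("path shape (covariates x breakpoints):", coef_path.shape)
```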