Signal Recovery from Random Measurements Via Orthogonal Matching Pursuit: The Gaussian Case
01 Aug 2007
TL;DR: In this paper, a greedy algorithm called Orthogonal Matching Pursuit (OMP) was proposed to recover a signal with m nonzero entries in dimension d given O(m ln d) random linear measurements of that signal.
Abstract: This report demonstrates theoretically and empirically that a greedy algorithm called
Orthogonal Matching Pursuit (OMP) can reliably recover a signal with m nonzero entries in dimension
d given O(m ln d) random linear measurements of that signal. This is a massive improvement
over previous results, which require O(m^2) measurements. The new results for OMP are comparable
with recent results for another approach called Basis Pursuit (BP). In some settings, the
OMP algorithm is faster and easier to implement, so it is an attractive alternative to BP for signal
recovery problems.
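To make the greedy procedure concrete, here is a minimal sketch of OMP in Python with NumPy. The dimensions n, d, m and the Gaussian measurement matrix below are illustrative choices matching the paper's setting, not taken from its experiments:

```python
import numpy as np

def omp(A, y, m, tol=1e-10):
    """Recover an m-sparse signal x from measurements y = A x.

    Each iteration picks the column of A most correlated with the
    current residual, then re-solves least squares on the chosen
    support -- the orthogonal projection step that gives OMP its name.
    """
    residual = y.copy()
    support = []
    coef = np.zeros(0)
    for _ in range(m):
        # Column whose correlation with the residual is largest.
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # Least-squares fit restricted to the current support.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
        if np.linalg.norm(residual) < tol:
            break
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

# Gaussian measurements, n on the order of m ln d:
rng = np.random.default_rng(0)
d, m, n = 256, 4, 64
A = rng.standard_normal((n, d)) / np.sqrt(n)
x_true = np.zeros(d)
x_true[rng.choice(d, m, replace=False)] = rng.standard_normal(m)
x_hat = omp(A, A @ x_true, m)
```

With these (generous) dimensions, recovery is exact with overwhelming probability, consistent with the theoretical guarantee above.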
Citations
TL;DR: This paper considers the hybrid precoder design as a constant modulus constrained matrix factorization (CMCMF) problem for the most common types of hybrid architectures, namely the fully and the partially connected ones, and proposes two lines of algorithms based on the majorization-minimization (MM) and the minorization-maximization frameworks.
Abstract: Hybrid analog-digital (A/D) transceivers are an appealing solution to reduce the transceiver hardware complexity and power consumption for millimeter wave (mmWave) communication and, more generally, large-scale antenna array (LSAA) systems. In contrast to fully digital conventional multiple-input-multiple-output (MIMO) systems, the baseband precoding operation splits into a lower-dimensional digital precoder followed by a network of analog phase shifters. In this paper, we consider the hybrid precoder design as a constant modulus constrained matrix factorization (CMCMF) problem for the most common types of hybrid architectures, namely the fully and the partially connected ones. Two lines of algorithms, based on the majorization-minimization (MM) and the minorization-maximization frameworks respectively, are proposed for these architectures. In particular, we present efficient algorithms scalable for LSAA systems with provable convergence guarantees to a stationary point. We also consider the hybrid postcoder design at the receiver end. Simulation results demonstrate that the proposed algorithms converge faster to a stationary point than the state-of-the-art solutions in the literature. Furthermore, the solution tailored for the partially connected case achieves significantly improved performance in terms of system spectral efficiency when compared to the existing solutions.
47 citations
TL;DR: By invoking a fundamental property of smooth functions, a relaxed evidence lower bound (relaxed-ELBO) is obtained that is computationally more amenable than the conventional ELBO used by sparse Bayesian learning, and leads to a computationally efficient inverse-free sparse Bayesian learning algorithm.
Abstract: Sparse Bayesian learning is a popular approach for sparse signal recovery, and has demonstrated superior performance in a series of experiments. Nevertheless, the sparse Bayesian learning algorithm involves a matrix inverse at each iteration. Its associated computational complexity grows significantly with the problem size, which hinders its application to many practical problems even with moderately large datasets. To address this issue, in this letter, we develop a fast inverse-free sparse Bayesian learning method. Specifically, by invoking a fundamental property of smooth functions, we obtain a relaxed evidence lower bound (relaxed-ELBO) that is computationally more amenable than the conventional ELBO used by sparse Bayesian learning. A variational expectation-maximization (EM) scheme is then employed to maximize the relaxed-ELBO, which leads to a computationally efficient inverse-free sparse Bayesian learning algorithm. Simulation results show that the proposed algorithm has a fast convergence rate and achieves lower reconstruction errors than other state-of-the-art fast sparse recovery methods in the presence of noise.
47 citations
01 Mar 2018
TL;DR: A scheme is presented, which enhances the data privacy by the asymmetric semi-homomorphic encryption scheme, reduces the computation cost by a sparse compressive matrix, and compensates the increased cost caused by the homomorphic encryption.
Abstract: The compressive sensing (CS) based data collection schemes can effectively reduce the transmission cost of wireless sensor networks (WSNs) by exploring the sparsity of compressible signals. Although many recent works explained CS as a symmetric cryptosystem, CS-based data collection schemes still face security threats, due to the complex deployment environment of WSNs. In this paper, we first propose two feasible attack models for specific applications. Then, we present a secure data collection scheme based on compressive sensing (SeDC), which enhances the data privacy by the asymmetric semi-homomorphic encryption scheme, and reduces the computation cost by a sparse compressive matrix. More specifically, the asymmetric mechanism reduces the difficulty of secret key distribution and management. The homomorphic encryption allows in-network aggregation in the cipher domain, and thus enhances the security and achieves network load balance. The sparse measurement matrix reduces both the computation cost and communication cost, which compensates the increased cost caused by the homomorphic encryption. We also introduce a joint recovery model to improve the recovery accuracy. Experimental evaluation based on real data shows that the proposed scheme achieves a better performance compared with the most related works.
47 citations
TL;DR: This paper presents a new compressed sensing (CS) model, as well as the corresponding parallel reconstruction algorithm, which help to reduce the image encryption/decryption time and the quantization and diffusion operations into the system to further enhance the transmission security.
Abstract: The Internet of Things (IoT) has attracted extensive attention in the information field. Its rapid development has promoted several monitoring application domains. However, the resource constraint of sensor nodes and the security of data transmission have emerged as significant issues. In this paper, an image communication system for IoT monitoring applications is exploited to solve the above-mentioned problems simultaneously. The proposed system can satisfy the requirements of sensor nodes for low computational complexity, low-energy consumption, and low storage overhead. We also present a new compressed sensing (CS) model, as well as the corresponding parallel reconstruction algorithm, which help to reduce the image encryption/decryption time. Based on chaotic systems, we integrate the quantization and diffusion operations into the system to further enhance the transmission security. The simulations are executed to demonstrate the feasibility and the effectiveness of the proposed method. Compared with the traditional CS, our numerical results indicate that the proposed model reduces computation time by 413 ms and the number of stored elements by 3.13 × 10^6 for large-scale images. Besides, we verify the flexibility and the diversity of choosing two submatrices for different-sized images. Experimental results also show that the proposed system performs well in terms of security performance. In particular, the key space reaches 2^253.
46 citations
TL;DR: This letter presents a new greedy method, called Adaptive Sparsity Matching Pursuit (ASMP), for sparse solutions of underdetermined systems with a typical/random projection matrix, which can extract information on sparsity of the target signal adaptively with a well-designed stagewise approach.
Abstract: This letter presents a new greedy method, called Adaptive Sparsity Matching Pursuit (ASMP), for sparse solutions of underdetermined systems with a typical/random projection matrix. Unlike earlier greedy algorithms, ASMP can extract information on the sparsity of the target signal adaptively with a well-designed stagewise approach. Moreover, it takes advantage of backtracking to refine the chosen supports and the current approximation in the process. With these improvements, ASMP provides even more attractive results than the state-of-the-art greedy algorithm CoSaMP, without prior knowledge of the sparsity level. Experiments validate that the proposed algorithm works well for both noiseless and noisy signals, with the recovery quality often outperforming that of l1-minimization and other greedy algorithms.
46 citations
References
TL;DR: It is possible to design n = O(N log(m)) nonadaptive measurements allowing reconstruction with accuracy comparable to that attainable with direct knowledge of the N most important coefficients; a good approximation to those N important coefficients is extracted from the n measurements by solving a linear program (Basis Pursuit in signal processing).
Abstract: Suppose x is an unknown vector in R^m (a digital image or signal); we plan to measure n general linear functionals of x and then reconstruct. If x is known to be compressible by transform coding with a known transform, and we reconstruct via the nonlinear procedure defined here, the number of measurements n can be dramatically smaller than the size m. Thus, certain natural classes of images with m pixels need only n = O(m^(1/4) log^(5/2)(m)) nonadaptive nonpixel samples for faithful recovery, as opposed to the usual m pixel samples. More specifically, suppose x has a sparse representation in some orthonormal basis (e.g., wavelet, Fourier) or tight frame (e.g., curvelet, Gabor), so the coefficients belong to an ℓp ball for 0 < p ≤ 1.
18,609 citations
TL;DR: Basis Pursuit (BP) is a principle for decomposing a signal into an "optimal" superposition of dictionary elements, where optimal means having the smallest l1 norm of coefficients among all such decompositions.
Abstract: The time-frequency and time-scale communities have recently developed a large number of overcomplete waveform dictionaries --- stationary wavelets, wavelet packets, cosine packets, chirplets, and warplets, to name a few. Decomposition into overcomplete systems is not unique, and several methods for decomposition have been proposed, including the method of frames (MOF), Matching pursuit (MP), and, for special dictionaries, the best orthogonal basis (BOB).
Basis Pursuit (BP) is a principle for decomposing a signal into an "optimal" superposition of dictionary elements, where optimal means having the smallest l1 norm of coefficients among all such decompositions. We give examples exhibiting several advantages over MOF, MP, and BOB, including better sparsity and superresolution. BP has interesting relations to ideas in areas as diverse as ill-posed problems, abstract harmonic analysis, total variation denoising, and multiscale edge denoising.
BP in highly overcomplete dictionaries leads to large-scale optimization problems. With signals of length 8192 and a wavelet packet dictionary, one gets an equivalent linear program of size 8192 by 212,992. Such problems can be attacked successfully only because of recent advances in linear programming by interior-point methods. We obtain reasonable success with a primal-dual logarithmic barrier method and conjugate-gradient solver.
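The reduction of BP to a linear program can be sketched as follows, assuming SciPy's linprog solver is available. The standard split x = u − v with u, v ≥ 0 turns min ‖x‖₁ subject to Ax = y into a standard-form LP (the dimensions below are a toy example, far smaller than the 8192-by-212,992 problems mentioned above):

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, y):
    """Solve min ||x||_1 subject to A x = y as a linear program.

    Split x = u - v with u, v >= 0, giving:
        min 1'(u + v)  subject to  [A, -A] [u; v] = y,  u, v >= 0.
    """
    n, d = A.shape
    c = np.ones(2 * d)
    res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=y,
                  bounds=(0, None), method="highs")
    u, v = res.x[:d], res.x[d:]
    return u - v

# Toy demo: a 3-sparse vector from 30 Gaussian measurements.
rng = np.random.default_rng(1)
n, d = 30, 60
A = rng.standard_normal((n, d)) / np.sqrt(n)
x_true = np.zeros(d)
x_true[[3, 17, 42]] = [1.5, -2.0, 0.7]
x_hat = basis_pursuit(A, A @ x_true)
```

For problems of realistic size, a general-purpose LP solver is exactly where the interior-point advances mentioned above become essential.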
9,950 citations
TL;DR: The authors introduce an algorithm, called matching pursuit, that decomposes any signal into a linear expansion of waveforms that are selected from a redundant dictionary of functions, chosen in order to best match the signal structures.
Abstract: The authors introduce an algorithm, called matching pursuit, that decomposes any signal into a linear expansion of waveforms that are selected from a redundant dictionary of functions. These waveforms are chosen in order to best match the signal structures. Matching pursuits are general procedures to compute adaptive signal representations. With a dictionary of Gabor functions, a matching pursuit defines an adaptive time-frequency transform. They derive a signal energy distribution in the time-frequency plane, which does not include interference terms, unlike Wigner and Cohen class distributions. A matching pursuit isolates the signal structures that are coherent with respect to a given dictionary. An application to pattern extraction from noisy signals is described. They compare a matching pursuit decomposition with a signal expansion over an optimized wavelet packet orthonormal basis, selected with the algorithm of Coifman and Wickerhauser (IEEE Trans. Inform. Theory, vol. 38, Mar. 1992).
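The greedy step described above can be illustrated with a minimal sketch; for simplicity the dictionary here is orthonormal rather than a redundant Gabor dictionary:

```python
import numpy as np

def matching_pursuit(D, signal, n_iter):
    """Matching pursuit over dictionary D (unit-norm columns).

    Each step subtracts the projection of the residual onto the single
    best-matching atom. Unlike OMP, coefficients of previously chosen
    atoms are never re-optimized.
    """
    residual = signal.astype(float).copy()
    coeffs = np.zeros(D.shape[1])
    for _ in range(n_iter):
        corr = D.T @ residual               # correlation with every atom
        j = int(np.argmax(np.abs(corr)))    # best-matching atom
        coeffs[j] += corr[j]
        residual -= corr[j] * D[:, j]
    return coeffs, residual

# With an orthonormal dictionary the expansion becomes exact once every
# active atom has been visited:
D = np.eye(8)
signal = np.array([3.0, 0.0, -1.5, 0.0, 2.0, 0.0, 0.0, 0.5])
coeffs, residual = matching_pursuit(D, signal, n_iter=8)
```

With a genuinely redundant dictionary the residual only decays geometrically instead of vanishing, which is exactly the gap that OMP's re-projection step closes.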
9,380 citations
Affiliations: Stanford University, Cleveland Clinic, University of Toronto, Centre national de la recherche scientifique, Université Paris-Saclay, University of Paris-Sud, Avaya, Rutgers University, RAND Corporation, IBM, University of Pennsylvania, University of Western Australia, University of Minnesota
TL;DR: A publicly available algorithm that requires only the same order of magnitude of computational effort as ordinary least squares applied to the full set of covariates is described.
Abstract: The purpose of model selection algorithms such as All Subsets, Forward Selection and Backward Elimination is to choose a linear model on the basis of the same set of data to which the model will be applied. Typically we have available a large collection of possible covariates from which we hope to select a parsimonious set for the efficient prediction of a response variable. Least Angle Regression (LARS), a new model selection algorithm, is a useful and less greedy version of traditional forward selection methods. Three main properties are derived: (1) A simple modification of the LARS algorithm implements the Lasso, an attractive version of ordinary least squares that constrains the sum of the absolute regression coefficients; the LARS modification calculates all possible Lasso estimates for a given problem, using an order of magnitude less computer time than previous methods. (2) A different LARS modification efficiently implements Forward Stagewise linear regression, another promising new model selection method; this connection explains the similar numerical results previously observed for the Lasso and Stagewise, and helps us understand the properties of both methods, which are seen as constrained versions of the simpler LARS algorithm. (3) A simple approximation for the degrees of freedom of a LARS estimate is available, from which we derive a Cp estimate of prediction error; this allows a principled choice among the range of possible LARS estimates. LARS and its variants are computationally efficient: the paper describes a publicly available algorithm that requires only the same order of magnitude of computational effort as ordinary least squares applied to the full set of covariates.
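Property (1) above ties LARS to the Lasso. As an illustration of the l1-penalized least-squares problem that the LARS modification traces out, here is a small cyclic coordinate-descent Lasso solver in NumPy; this is not the LARS algorithm itself, just one standard way to solve the same problem, and the data below are synthetic:

```python
import numpy as np

def lasso_cd(X, y, alpha, n_sweeps=200):
    """Cyclic coordinate descent for
        min_b 0.5 * ||y - X b||^2 + alpha * ||b||_1.
    Each coordinate update is a closed-form soft-threshold.
    """
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_sweeps):
        for j in range(p):
            # Partial residual with coordinate j left out.
            r = y - X @ beta + X[:, j] * beta[j]
            rho = X[:, j] @ r
            beta[j] = np.sign(rho) * max(abs(rho) - alpha, 0.0) / col_sq[j]
    return beta

# Synthetic demo: two active covariates out of ten, noiseless response.
rng = np.random.default_rng(2)
n, p = 50, 10
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[[1, 6]] = [2.0, -3.0]
y = X @ beta_true
beta_hat = lasso_cd(X, y, alpha=0.1)
```

Each sweep costs about the same as an ordinary-least-squares pass over the covariates, which mirrors the computational claim in the abstract; LARS achieves the stronger result of the entire regularization path at that cost.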
7,828 citations