Signal Recovery from Random Measurements Via Orthogonal Matching Pursuit: The Gaussian Case

TL;DR: This paper shows that a greedy algorithm called Orthogonal Matching Pursuit (OMP) can reliably recover a signal with m nonzero entries in dimension d given O(m ln d) random linear measurements of that signal.
Abstract: This report demonstrates theoretically and empirically that a greedy algorithm called Orthogonal Matching Pursuit (OMP) can reliably recover a signal with m nonzero entries in dimension d given O(m ln d) random linear measurements of that signal. This is a massive improvement over previous results, which require O(m^2) measurements. The new results for OMP are comparable with recent results for another approach called Basis Pursuit (BP). In some settings, the OMP algorithm is faster and easier to implement, so it is an attractive alternative to BP for signal recovery problems.
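
For concreteness, here is a minimal NumPy sketch of the algorithm the abstract describes: at each step, pick the column most correlated with the residual, then re-fit all selected coefficients by least squares. The stopping rule (exactly m iterations) and the constant in the measurement count are illustrative choices, not the paper's.

```python
import numpy as np

def omp(Phi, y, m):
    """Greedy recovery of an m-sparse vector x from y = Phi @ x."""
    d = Phi.shape[1]
    residual = y.astype(float).copy()
    support = []
    for _ in range(m):
        # Greedy step: column most correlated with the current residual.
        j = int(np.argmax(np.abs(Phi.T @ residual)))
        support.append(j)
        # Orthogonalization step: re-fit all chosen coefficients at once.
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x = np.zeros(d)
    x[support] = coef
    return x

# The regime from the abstract: m = 10 nonzeros in dimension d = 1024,
# with n on the order of m ln d Gaussian measurements.
rng = np.random.default_rng(0)
d, m = 1024, 10
n = 4 * int(m * np.log(d))                   # illustrative constant factor
Phi = rng.standard_normal((n, d)) / np.sqrt(n)
x_true = np.zeros(d)
x_true[rng.choice(d, size=m, replace=False)] = rng.standard_normal(m)
x_hat = omp(Phi, Phi @ x_true, m)
print(np.max(np.abs(x_hat - x_true)))        # near zero when recovery succeeds
```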


Citations
Journal ArticleDOI
TL;DR: This paper proposes a low-complexity framework of Privacy-Preserving Compressive Analysis (PPCA) based on subspace-based representation, demonstrated on ECG-based atrial fibrillation detection, whose decoding complexity is 341 times lower than that of traditional CS-based security.
Abstract: Compressive sensing (CS) is attractive in long-term electrocardiography (ECG) telemonitoring because it extends the lifetime of resource-constrained wireless wearable sensors. However, the availability of transmitted personal information has raised great concerns about potential privacy leakage. Moreover, traditional CS-based security frameworks focus on secured signal recovery instead of privacy-preserving data analytics; hence, they provide only computational secrecy and have impractically high decryption complexity. In this paper, to protect privacy from an information-theoretic perspective while preserving classification capability, we propose a low-complexity framework of Privacy-Preserving Compressive Analysis (PPCA) based on subspace-based representation. The subspace-based dictionary is used for both encrypting and decoding the CS measurements online, and it is built offline by dividing the signal space into discriminative and complementary subspaces. The encrypted signal is unreconstructable even if an eavesdropper cracks the measurement matrix and the dictionary. PPCA is demonstrated on ECG-based atrial fibrillation detection. It reduces the mutual information by 1.98 bits by encrypting measurements with signal-dependent noise at 1 dB, while classification accuracy remains 96.05% with the decoding matrix. Furthermore, by decoding via a matrix–vector product rather than sparse coding, the computational complexity of PPCA is 341 times lower than that of traditional CS-based security.

42 citations
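
The complexity claim above (decoding by a matrix–vector product instead of sparse coding) can be illustrated with a toy sketch. Everything below, Phi, W, and the noise model, is a random stand-in; the paper's actual subspace-based construction of the dictionary and decoding matrix is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)
d, n, k = 256, 64, 8          # signal dim, CS measurements, feature dim

Phi = rng.standard_normal((n, d)) / np.sqrt(n)   # stand-in measurement matrix
W = rng.standard_normal((k, n))                  # stand-in decoding matrix

def encrypt(x, snr_db=1.0):
    """Compress x and add signal-dependent noise at the given SNR (in dB)."""
    y = Phi @ x
    noise_var = np.mean(y ** 2) / 10 ** (snr_db / 10)
    return y + np.sqrt(noise_var) * rng.standard_normal(n)

x = rng.standard_normal(d)    # stand-in for one ECG segment
z = W @ encrypt(x)            # decoding is one matrix-vector product,
                              # not an iterative sparse-coding solve
```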

Book ChapterDOI
22 May 2016
TL;DR: This survey presents an overview of the field of compressive sensing, accentuating elements from approximation theory, but also highlighting connections with other disciplines that have enriched the theory, e.g., statistics, sampling theory, probability, optimization, metagenomics, graph theory, frame theory, and Banach space geometry.
Abstract: About a decade ago, a couple of groundbreaking articles revealed the possibility of faithfully recovering high-dimensional signals from some seemingly incomplete information about them. Perhaps more importantly, practical procedures to perform the recovery were also provided. These realizations had a tremendous impact in science and engineering. They gave rise to a field called ‘compressive sensing,’ which is now in a mature state and whose foundations rely on an elegant mathematical theory. This survey presents an overview of the field, accentuating elements from approximation theory, but also highlighting connections with other disciplines that have enriched the theory, e.g., statistics, sampling theory, probability, optimization, metagenomics, graph theory, frame theory, and Banach space geometry.

42 citations

Journal ArticleDOI
TL;DR: An iterative greedy reconstruction algorithm for Compressed Sensing called back-off and rectification of greedy pursuit (BRGP) is presented, which significantly outperforms conventional techniques for one-dimensional or two-dimensional compressible signals.

41 citations

Journal ArticleDOI
TL;DR: The empirical mode decomposition (EMD) and morphological wavelet transform (MWT) are utilized to obtain spectral-spatial features, which are integrated by sparse multitask learning (MTL) and provide more accurate classification results than state-of-the-art techniques.
Abstract: Recently, many researchers have attempted to exploit spectral-spatial features and sparsity-based classifiers for higher hyperspectral image classification accuracy. However, challenges remain in efficiently generating and combining spectral-spatial features within sparsity-based classifiers. This paper utilizes the empirical mode decomposition (EMD) and morphological wavelet transform (MWT) to obtain spectral-spatial features, which are then integrated by sparse multitask learning (MTL). In the feature extraction step, the sum of the intrinsic mode functions extracted by an optimized EMD is taken as the spectral features, whereas the spatial features are formed by the low-frequency components of a one-level MWT. In the classification step, a kernel-based sparse MTL, solved by the accelerated proximal gradient method, analyzes the spectral and spatial features simultaneously. Experiments are conducted on two benchmark data sets with different spectral and spatial resolutions. The proposed methods provide more accurate classification results than state-of-the-art techniques across various ratios of training samples.

41 citations
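
A loose sketch of the spectral-spatial idea, with admitted stand-ins: a per-pixel spectral block and a locally smoothed spatial block are concatenated and fed to a plain linear classifier. The paper's EMD-derived features and kernel-based sparse MTL solver are replaced by simple placeholders.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
H, W, B = 32, 32, 40                         # image height, width, bands
cube = rng.standard_normal((H, W, B))        # toy hyperspectral cube
labels = rng.integers(0, 3, size=(H, W))     # toy ground-truth classes

spectral = cube.reshape(-1, B)               # stand-in for summed IMFs

# Stand-in for the MWT low-frequency component: a 2x2 local mean per band,
# upsampled back to full resolution.
spatial = cube.reshape(H // 2, 2, W // 2, 2, B).mean(axis=(1, 3))
spatial = np.repeat(np.repeat(spatial, 2, axis=0), 2, axis=1).reshape(-1, B)

X = np.hstack([spectral, spatial])           # fused spectral-spatial features
clf = LogisticRegression(max_iter=1000).fit(X, labels.ravel())
```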

Journal ArticleDOI
TL;DR: The experimental results and security analysis show that the algorithm has the advantages of a large key space, no obvious statistical characteristics in the ciphertext, sensitivity to plaintext and key, and resistance to chosen-plaintext attacks.
Abstract: This paper proposes a digital image compression-encryption scheme based on compressive sensing and cyclic shift, which uses a random Gaussian matrix and a sparse transform to compress the digital image, followed by cyclic shift and diffusion operations after the compressive sensing (CS) phase. The algorithm has three advantages. First, the measurement matrix is generated by a Chebyshev map, which increases the key space and reduces the burden of key transmission. Second, a sigmoid function transforms the compressed data into the range 0–255 so that it can be stored as 8-bit binary data, further reducing the amount of data transmitted and avoiding expansion of the encrypted data. Third, the key stream generated in the encryption phase depends on the ciphertext, so different plain images yield different key streams; thus the algorithm can effectively resist chosen-plaintext and known-plaintext attacks. In addition, the cyclic shift and diffusion operations further enhance the security of the system. Each pixel of the encrypted image is output as an 8-bit integer to facilitate data storage, display, and transmission. The experimental results and security analysis show that the algorithm has a large key space, no obvious statistical characteristics in the ciphertext, sensitivity to plaintext and key, and resistance to chosen-plaintext attacks.

41 citations
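
Two of the steps above are concrete enough to sketch: generating the measurement matrix from a Chebyshev map (so only the seed, i.e., the key, must be transmitted) and squashing measurements into 8-bit integers with a sigmoid. Parameter values here are illustrative, not the paper's.

```python
import numpy as np

def chebyshev_matrix(n, d, x0=0.3, a=4.0):
    """Measurement matrix from the Chebyshev map x_{k+1} = cos(a * arccos(x_k)).

    The secret key is just (x0, a); the map keeps every iterate in [-1, 1].
    """
    seq = np.empty(n * d)
    x = x0
    for k in range(n * d):
        x = np.cos(a * np.arccos(x))
        seq[k] = x
    return seq.reshape(n, d)

def quantize(y):
    """Map real-valued measurements into 0..255 via a sigmoid, for 8-bit storage."""
    return np.round(255.0 / (1.0 + np.exp(-y))).astype(np.uint8)

Phi = chebyshev_matrix(64, 256)
y8 = quantize(Phi @ np.random.default_rng(3).standard_normal(256))
```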

References
Book
01 Jan 1983

34,729 citations

Book
D.L. Donoho
01 Jan 2004
TL;DR: It is possible to design n = O(N log(m)) nonadaptive measurements allowing reconstruction with accuracy comparable to that attainable with direct knowledge of the N most important coefficients, and a good approximation to those N important coefficients is extracted from the n measurements by solving a linear program, known in signal processing as Basis Pursuit.
Abstract: Suppose x is an unknown vector in ℝ^m (a digital image or signal); we plan to measure n general linear functionals of x and then reconstruct. If x is known to be compressible by transform coding with a known transform, and we reconstruct via the nonlinear procedure defined here, the number of measurements n can be dramatically smaller than the size m. Thus, certain natural classes of images with m pixels need only n = O(m^(1/4) log^(5/2)(m)) nonadaptive nonpixel samples for faithful recovery, as opposed to the usual m pixel samples. More specifically, suppose x has a sparse representation in some orthonormal basis (e.g., wavelet, Fourier) or tight frame (e.g., curvelet, Gabor), so that the coefficients belong to an ℓp ball for 0 < p ≤ 1.

18,609 citations
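
To get a feel for the scaling claimed in this abstract, a back-of-envelope computation, with the hidden constants ignored and the logarithm taken as natural:

```python
import math

m = 10 ** 6                            # an m-pixel image
n = m ** 0.25 * math.log(m) ** 2.5     # n = O(m^(1/4) log^(5/2)(m)), constants dropped
print(f"{n:,.0f} nonadaptive samples vs {m:,} pixel samples")  # about 22,000 vs 1,000,000
```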

Journal ArticleDOI
TL;DR: Basis Pursuit (BP) is a principle for decomposing a signal into an "optimal" superposition of dictionary elements, where optimal means having the smallest l1 norm of coefficients among all such decompositions.
Abstract: The time-frequency and time-scale communities have recently developed a large number of overcomplete waveform dictionaries: stationary wavelets, wavelet packets, cosine packets, chirplets, and warplets, to name a few. Decomposition into overcomplete systems is not unique, and several methods for decomposition have been proposed, including the method of frames (MOF), matching pursuit (MP), and, for special dictionaries, the best orthogonal basis (BOB). Basis Pursuit (BP) is a principle for decomposing a signal into an "optimal" superposition of dictionary elements, where optimal means having the smallest ℓ1 norm of coefficients among all such decompositions. We give examples exhibiting several advantages over MOF, MP, and BOB, including better sparsity and superresolution. BP has interesting relations to ideas in areas as diverse as ill-posed problems, abstract harmonic analysis, total variation denoising, and multiscale edge denoising. BP in highly overcomplete dictionaries leads to large-scale optimization problems. With signals of length 8192 and a wavelet packet dictionary, one gets an equivalent linear program of size 8192 by 212,992. Such problems can be attacked successfully only because of recent advances in linear programming by interior-point methods. We obtain reasonable success with a primal-dual logarithmic barrier method and conjugate-gradient solver.

9,950 citations
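
The "equivalent linear program" mentioned above can be written down directly: minimizing the ℓ1 norm subject to Phi x = y, with x split into nonnegative parts u and v. A sketch using SciPy's general-purpose LP solver, rather than the interior-point code the authors describe:

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(Phi, y):
    """min ||x||_1 s.t. Phi x = y, via the LP over x = u - v with u, v >= 0."""
    n, d = Phi.shape
    c = np.ones(2 * d)                 # objective: sum(u) + sum(v) = ||x||_1
    A_eq = np.hstack([Phi, -Phi])      # equality constraint: Phi (u - v) = y
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None))
    u, v = res.x[:d], res.x[d:]
    return u - v

rng = np.random.default_rng(4)
Phi = rng.standard_normal((60, 200)) / np.sqrt(60)
x = np.zeros(200)
x[rng.choice(200, size=6, replace=False)] = rng.standard_normal(6)
x_hat = basis_pursuit(Phi, Phi @ x)
print(np.max(np.abs(x_hat - x)))       # small when recovery is exact
```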

Journal ArticleDOI
TL;DR: The authors introduce an algorithm, called matching pursuit, that decomposes any signal into a linear expansion of waveforms that are selected from a redundant dictionary of functions, chosen in order to best match the signal structures.
Abstract: The authors introduce an algorithm, called matching pursuit, that decomposes any signal into a linear expansion of waveforms selected from a redundant dictionary of functions. These waveforms are chosen to best match the signal structures. Matching pursuits are general procedures for computing adaptive signal representations. With a dictionary of Gabor functions, a matching pursuit defines an adaptive time-frequency transform. The authors derive a signal energy distribution in the time-frequency plane that does not include interference terms, unlike Wigner and Cohen class distributions. A matching pursuit isolates the signal structures that are coherent with respect to a given dictionary. An application to pattern extraction from noisy signals is described. The authors compare a matching pursuit decomposition with a signal expansion over an optimized wave-packet orthonormal basis selected with the algorithm of Coifman and Wickerhauser (see IEEE Trans. Inform. Theory, vol. 38, Mar. 1992).

9,380 citations
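
A compact sketch of the pursuit loop this abstract describes; contrast it with the OMP sketch near the top of the page, since matching pursuit never re-fits previously selected coefficients. A random unit-norm dictionary stands in for the Gabor dictionary:

```python
import numpy as np

def matching_pursuit(D, s, n_iter):
    """Greedy expansion of signal s over unit-norm atoms (columns of D)."""
    residual = s.astype(float).copy()
    coeffs = np.zeros(D.shape[1])
    for _ in range(n_iter):
        corr = D.T @ residual               # inner product with every atom
        j = int(np.argmax(np.abs(corr)))    # best-matching atom
        coeffs[j] += corr[j]                # its projection coefficient
        residual -= corr[j] * D[:, j]       # peel that component off
    return coeffs, residual

rng = np.random.default_rng(6)
D = rng.standard_normal((64, 256))
D /= np.linalg.norm(D, axis=0)              # normalize atoms to unit norm
coeffs, r = matching_pursuit(D, D[:, 5] + 0.5 * D[:, 42], n_iter=10)
```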

Journal ArticleDOI
TL;DR: A publicly available algorithm that requires only the same order of magnitude of computational effort as ordinary least squares applied to the full set of covariates is described.
Abstract: The purpose of model selection algorithms such as All Subsets, Forward Selection and Backward Elimination is to choose a linear model on the basis of the same set of data to which the model will be applied. Typically we have available a large collection of possible covariates from which we hope to select a parsimonious set for the efficient prediction of a response variable. Least Angle Regression (LARS), a new model selection algorithm, is a useful and less greedy version of traditional forward selection methods. Three main properties are derived: (1) A simple modification of the LARS algorithm implements the Lasso, an attractive version of ordinary least squares that constrains the sum of the absolute regression coefficients; the LARS modification calculates all possible Lasso estimates for a given problem, using an order of magnitude less computer time than previous methods. (2) A different LARS modification efficiently implements Forward Stagewise linear regression, another promising new model selection method; this connection explains the similar numerical results previously observed for the Lasso and Stagewise, and helps us understand the properties of both methods, which are seen as constrained versions of the simpler LARS algorithm. (3) A simple approximation for the degrees of freedom of a LARS estimate is available, from which we derive a Cp estimate of prediction error; this allows a principled choice among the range of possible LARS estimates. LARS and its variants are computationally efficient: the paper describes a publicly available algorithm that requires only the same order of magnitude of computational effort as ordinary least squares applied to the full set of covariates.

7,828 citations
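
Property (1), the full Lasso path at roughly the cost of one ordinary least squares fit, is what scikit-learn's lars_path implements. A usage sketch on synthetic data (sizes and noise level are our choices):

```python
import numpy as np
from sklearn.linear_model import lars_path

rng = np.random.default_rng(5)
X = rng.standard_normal((100, 20))
beta = np.zeros(20)
beta[:3] = [2.0, -1.5, 1.0]                 # three true covariates
y = X @ beta + 0.1 * rng.standard_normal(100)

# One pass computes every Lasso estimate along the regularization path.
alphas, active, coefs = lars_path(X, y, method="lasso")
print(active)                               # covariates selected along the path
```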