
Showing papers on "Compressed sensing" published in 2009


Journal ArticleDOI
TL;DR: A new iterative recovery algorithm called CoSaMP is described that delivers the same guarantees as the best optimization-based approaches and offers rigorous bounds on computational cost and storage.

3,970 citations
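
For intuition, here is a minimal sketch of the CoSaMP iteration as commonly described: form a signal proxy from the residual, merge the 2s largest proxy coordinates with the current support, solve least squares on the merged support, and prune to the s largest entries. Function name, iteration budget, and stopping tolerance are illustrative choices, not the paper's reference implementation.

```python
import numpy as np

def cosamp(A, y, s, n_iter=50, tol=1e-10):
    """Sketch of CoSaMP for recovering an s-sparse x from y = A @ x.
    Assumes A has (approximately) unit-norm columns."""
    m, n = A.shape
    x = np.zeros(n)
    for _ in range(n_iter):
        proxy = A.T @ (y - A @ x)                    # signal proxy from residual
        omega = np.argsort(np.abs(proxy))[-2 * s:]   # 2s largest proxy entries
        T = np.union1d(omega, np.flatnonzero(x))     # merge with current support
        b = np.zeros(n)
        b[T] = np.linalg.lstsq(A[:, T], y, rcond=None)[0]  # least squares on T
        keep = np.argsort(np.abs(b))[-s:]            # prune to s largest
        x = np.zeros(n)
        x[keep] = b[keep]
        if np.linalg.norm(y - A @ x) < tol:
            break
    return x
```

On a well-conditioned random Gaussian matrix (say 128 × 512 with s = 10, columns normalized), a loop of this form typically recovers the support exactly within a handful of iterations.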


Journal ArticleDOI
TL;DR: A simple costless modification to iterative thresholding is introduced making the sparsity–undersampling tradeoff of the new algorithms equivalent to that of the corresponding convex optimization procedures, inspired by belief propagation in graphical models.
Abstract: Compressed sensing aims to undersample certain high-dimensional signals yet accurately reconstruct them by exploiting signal characteristics. Accurate reconstruction is possible when the object to be recovered is sufficiently sparse in a known basis. Currently, the best known sparsity–undersampling tradeoff is achieved when reconstructing by convex optimization, which is expensive in important large-scale applications. Fast iterative thresholding algorithms have been intensively studied as alternatives to convex optimization for large-scale problems. Unfortunately known fast algorithms offer substantially worse sparsity–undersampling tradeoffs than convex optimization. We introduce a simple costless modification to iterative thresholding making the sparsity–undersampling tradeoff of the new algorithms equivalent to that of the corresponding convex optimization procedures. The new iterative-thresholding algorithms are inspired by belief propagation in graphical models. Our empirical measurements of the sparsity–undersampling tradeoff for the new algorithms agree with theoretical calculations. We show that a state evolution formalism correctly derives the true sparsity–undersampling tradeoff. There is a surprising agreement between earlier calculations based on random convex polytopes and this apparently very different theoretical formalism.

2,412 citations
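
The key difference from plain iterative soft thresholding is the Onsager correction term added back into the residual; that term is what makes the sparsity–undersampling tradeoff match convex optimization. A minimal sketch follows, with an illustrative residual-based threshold rule (the paper derives principled threshold schedules via state evolution, which this omits):

```python
import numpy as np

def soft(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def amp(A, y, n_iter=30, alpha=1.5):
    """Approximate-message-passing-style sketch. theta = alpha * sigma_hat,
    with sigma_hat estimated from the residual, is an illustrative choice."""
    m, n = A.shape
    x = np.zeros(n)
    z = y.copy()
    for _ in range(n_iter):
        theta = alpha * np.linalg.norm(z) / np.sqrt(m)  # residual-based threshold
        x_new = soft(x + A.T @ z, theta)
        # Onsager correction: (z/m) * (number of active coordinates), since the
        # derivative of soft thresholding is 0 or 1 at each coordinate
        z = y - A @ x_new + (z / m) * np.count_nonzero(x_new)
        x = x_new
    return x
```

Dropping the correction term recovers plain iterative soft thresholding, whose tradeoff is markedly worse, per the abstract.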


Journal ArticleDOI
TL;DR: The presented analysis shows that in the noiseless setting, the proposed algorithm can exactly reconstruct arbitrary sparse signals provided that the sensing matrix satisfies the restricted isometry property with a constant parameter.
Abstract: We propose a new method for reconstruction of sparse signals with and without noisy perturbations, termed the subspace pursuit algorithm. The algorithm has two important characteristics: low computational complexity, comparable to that of orthogonal matching pursuit techniques when applied to very sparse signals, and reconstruction accuracy of the same order as that of linear programming (LP) optimization methods. The presented analysis shows that in the noiseless setting, the proposed algorithm can exactly reconstruct arbitrary sparse signals provided that the sensing matrix satisfies the restricted isometry property with a constant parameter. In the noisy setting and in the case that the signal is not exactly sparse, it can be shown that the mean-squared error of the reconstruction is upper-bounded by constant multiples of the measurement and signal perturbation energies.

2,235 citations
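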


Journal ArticleDOI
TL;DR: In this paper, the authors present a theoretical analysis of the iterative hard thresholding algorithm when applied to the compressed sensing recovery problem, and show that, under a restricted isometry condition, the algorithm offers recovery guarantees of the same order as those of convex optimization approaches while using only simple iterations.

2,017 citations
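
The iteration analyzed is simple enough to state in two lines: a gradient step on the data fit followed by hard thresholding to the s largest entries. A minimal sketch, assuming the matrix is scaled so a unit step is stable:

```python
import numpy as np

def iht(A, y, s, n_iter=100):
    """Iterative hard thresholding sketch: x <- H_s(x + A^T (y - A x)).
    Assumes ||A||_2 <= 1 (rescale A and y otherwise)."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x + A.T @ (y - A @ x)          # gradient step on 0.5*||y - A x||^2
        idx = np.argsort(np.abs(g))[-s:]   # H_s: keep the s largest magnitudes
        x = np.zeros_like(g)
        x[idx] = g[idx]
    return x
```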


Journal ArticleDOI
TL;DR: This work proposes iterative methods in which each step is obtained by solving an optimization subproblem involving a quadratic term with diagonal Hessian plus the original sparsity-inducing regularizer, and proves convergence of the proposed iterative algorithm to a minimum of the objective function.
Abstract: Finding sparse approximate solutions to large underdetermined linear systems of equations is a common problem in signal/image processing and statistics. Basis pursuit, the least absolute shrinkage and selection operator (LASSO), wavelet-based deconvolution and reconstruction, and compressed sensing (CS) are a few well-known areas in which problems of this type appear. One standard approach is to minimize an objective function that includes a quadratic (ℓ2) error term added to a sparsity-inducing (usually ℓ1) regularizer. We present an algorithmic framework for the more general problem of minimizing the sum of a smooth convex function and a nonsmooth, possibly nonconvex regularizer. We propose iterative methods in which each step is obtained by solving an optimization subproblem involving a quadratic term with diagonal Hessian (i.e., separable in the unknowns) plus the original sparsity-inducing regularizer; our approach is suitable for cases in which this subproblem can be solved much more rapidly than the original problem. Under mild conditions (namely convexity of the regularizer), we prove convergence of the proposed iterative algorithm to a minimum of the objective function. In addition to solving the standard ℓ2-ℓ1 case, our framework yields efficient solution techniques for other regularizers, such as an ℓ∞ norm and group-separable regularizers. It also generalizes immediately to the case in which the data is complex rather than real. Experiments with CS problems show that our approach is competitive with the fastest known methods for the standard ℓ2-ℓ1 problem, as well as being efficient on problems with other separable regularization terms.

1,723 citations
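
For the standard ℓ2-ℓ1 case, the separable subproblem has a closed-form solution: soft thresholding. A sketch with a fixed diagonal Hessian α (the paper's SpaRSA method adapts α across iterations, e.g. with Barzilai-Borwein rules, which this omits):

```python
import numpy as np

def ist_l2_l1(A, y, lam, n_iter=200):
    """Iterative shrinkage/thresholding sketch for
    min_x 0.5*||A x - y||^2 + lam*||x||_1. Each step solves the separable
    subproblem min_z 0.5*alpha*||z - u||^2 + lam*||z||_1 in closed form."""
    alpha = np.linalg.norm(A, 2) ** 2       # safe fixed curvature estimate
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        u = x - A.T @ (A @ x - y) / alpha   # gradient step on the smooth term
        x = np.sign(u) * np.maximum(np.abs(u) - lam / alpha, 0.0)  # shrink
    return x
```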


Journal ArticleDOI
TL;DR: This work analyzes the behavior of l1-constrained quadratic programming (QP), also referred to as the Lasso, for recovering the sparsity pattern of a vector beta* based on observations contaminated by noise, and establishes precise conditions on the problem dimension p, the number k of nonzero elements in beta*, and the number of observations n.
Abstract: The problem of consistently estimating the sparsity pattern of a vector β* ∈ R^p based on observations contaminated by noise arises in various contexts, including signal denoising, sparse approximation, compressed sensing, and model selection. We analyze the behavior of ℓ1-constrained quadratic programming (QP), also referred to as the Lasso, for recovering the sparsity pattern. Our main result is to establish precise conditions on the problem dimension p, the number k of nonzero elements in β*, and the number of observations n that are necessary and sufficient for sparsity pattern recovery using the Lasso. We first analyze the case of observations made using deterministic design matrices and sub-Gaussian additive noise, and provide sufficient conditions for support recovery and ℓ∞-error bounds, as well as results showing the necessity of incoherence and bounds on the minimum value. We then turn to the case of random designs, in which each row of the design is drawn from a N(0, Σ) ensemble. For a broad class of Gaussian ensembles satisfying mutual incoherence conditions, we compute explicit values of thresholds 0 < θ_l ≤ θ_u < +∞ with the following property: for any δ > 0, if n > 2(θ_u + δ) k log(p − k), then the Lasso succeeds in recovering the sparsity pattern with probability converging to one for large problems, whereas for n < 2(θ_l − δ) k log(p − k), the probability of successful recovery converges to zero. For the special case of the uniform Gaussian ensemble (Σ = I_{p×p}), we show that θ_l = θ_u = 1, so that the corresponding threshold is sharp.

1,438 citations


Posted Content
TL;DR: In this article, the authors show that matrix completion is provably accurate even when the few observed entries are corrupted with a small amount of noise, and they also present numerical results which show that, in practice, nuclear norm minimization accurately fills in the many missing entries of large low-rank matrices from just a few noisy samples.
Abstract: On the heels of compressed sensing, a remarkable new field has very recently emerged. This field addresses a broad range of problems of significant practical interest, namely, the recovery of a data matrix from what appears to be incomplete, and perhaps even corrupted, information. In its simplest form, the problem is to recover a matrix from a small sample of its entries, and comes up in many areas of science and engineering including collaborative filtering, machine learning, control, remote sensing, and computer vision to name a few. This paper surveys the novel literature on matrix completion, which shows that under some suitable conditions, one can recover an unknown low-rank matrix from a nearly minimal set of entries by solving a simple convex optimization problem, namely, nuclear-norm minimization subject to data constraints. Further, this paper introduces novel results showing that matrix completion is provably accurate even when the few observed entries are corrupted with a small amount of noise. A typical result is that one can recover an unknown n x n matrix of low rank r from just about nr log^2 n noisy samples with an error which is proportional to the noise level. We present numerical results which complement our quantitative analysis and show that, in practice, nuclear norm minimization accurately fills in the many missing entries of large low-rank matrices from just a few noisy samples. Some analogies between matrix completion and compressed sensing are discussed throughout.

1,292 citations
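
One common way to solve the nuclear-norm program the paper analyzes is iterative singular-value shrinkage ("soft-impute" style). The sketch below illustrates that family and is not the paper's own solver; lam and the iteration count are illustrative.

```python
import numpy as np

def complete_nuclear(M_obs, mask, lam=1.0, n_iter=100):
    """Fill missing entries by alternating (i) imputing unobserved entries
    from the current estimate and (ii) soft-thresholding singular values,
    which is the proximal operator of the nuclear norm.
    M_obs holds observed entries (zeros elsewhere); mask is boolean."""
    X = np.zeros_like(M_obs)
    for _ in range(n_iter):
        Z = np.where(mask, M_obs, X)                 # keep observed entries
        U, s, Vt = np.linalg.svd(Z, full_matrices=False)
        X = (U * np.maximum(s - lam, 0.0)) @ Vt      # shrink singular values
    return X
```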


Journal ArticleDOI
TL;DR: A stylized compressed sensing radar is proposed in which the time-frequency plane is discretized into an N × N grid and the techniques of compressed sensing are employed to reconstruct the target scene.

Abstract: A stylized compressed sensing radar is proposed in which the time-frequency plane is discretized into an N × N grid. Assuming the number of targets K is small (i.e., K ≪ N²), we can transmit a sufficiently "incoherent" pulse and employ the techniques of compressed sensing to reconstruct the target scene. A theoretical upper bound on the sparsity K is presented. Numerical simulations verify that even better performance can be achieved in practice. This novel compressed sensing approach offers great potential for better resolution over classical radar.

1,113 citations


Journal ArticleDOI
TL;DR: A fast algorithm for overcomplete sparse decomposition, called SL0, is proposed, which tries to directly minimize a smoothed version of the ℓ0 norm.

Abstract: In this paper, a fast algorithm for overcomplete sparse decomposition, called SL0, is proposed. The algorithm is essentially a method for obtaining sparse solutions of underdetermined systems of linear equations, and its applications include underdetermined sparse component analysis (SCA), atomic decomposition on overcomplete dictionaries, compressed sensing, and decoding real field codes. Contrary to previous methods, which usually solve this problem by minimizing the ℓ1 norm using linear programming (LP) techniques, our algorithm tries to directly minimize the ℓ0 norm. It is experimentally shown that the proposed algorithm is about two to three orders of magnitude faster than the state-of-the-art interior-point LP solvers, while providing the same (or better) accuracy.

1,033 citations
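
SL0 replaces the ℓ0 count with a smooth Gaussian surrogate and anneals the smoothing width. A minimal sketch following the published description; all parameter values are illustrative:

```python
import numpy as np

def sl0(A, y, sigma_min=1e-3, decay=0.5, mu=2.0, inner=3):
    """Smoothed-l0 sketch: for a decreasing width sigma, take small steps that
    increase F_sigma(x) = sum_i exp(-x_i^2 / (2 sigma^2)) (a smooth proxy for
    counting zeros), projecting back onto {x : A x = y} after each step."""
    A_pinv = np.linalg.pinv(A)
    x = A_pinv @ y                              # minimum-norm feasible start
    sigma = 2.0 * np.max(np.abs(x))
    while sigma > sigma_min:
        for _ in range(inner):
            d = x * np.exp(-x**2 / (2 * sigma**2))  # steepest-ascent direction
            x = x - mu * d                           # move toward sparser x
            x = x - A_pinv @ (A @ x - y)             # project onto A x = y
        sigma *= decay
    return x
```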


Journal ArticleDOI
TL;DR: This paper finds a simple regularized version of Orthogonal Matching Pursuit (ROMP) which has advantages of both approaches: the speed and transparency of OMP and the strong uniform guarantees of L1-minimization.
Abstract: This paper seeks to bridge the two major algorithmic approaches to sparse signal recovery from an incomplete set of linear measurements—L1-minimization methods and iterative methods (Matching Pursuits). We find a simple regularized version of Orthogonal Matching Pursuit (ROMP) which has advantages of both approaches: the speed and transparency of OMP and the strong uniform guarantees of L1-minimization. Our algorithm, ROMP, reconstructs a sparse signal in a number of iterations linear in the sparsity, and the reconstruction is exact provided the linear measurements satisfy the uniform uncertainty principle.

998 citations
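
For context, plain OMP is sketched below; ROMP modifies the selection step, picking at each iteration a set of proxy coordinates with comparable magnitudes (within a factor of two of each other) rather than a single coordinate, which is what yields the uniform guarantees. The sketch assumes unit-norm columns.

```python
import numpy as np

def omp(A, y, s):
    """Baseline orthogonal matching pursuit: greedily pick the column most
    correlated with the residual, then re-fit by least squares."""
    support, r = [], y.copy()
    for _ in range(s):
        j = int(np.argmax(np.abs(A.T @ r)))     # most correlated column
        if j not in support:
            support.append(j)
        coef = np.linalg.lstsq(A[:, support], y, rcond=None)[0]
        r = y - A[:, support] @ coef            # residual orthogonal to picks
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x
```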


Journal ArticleDOI
TL;DR: This paper develops a general framework for robust and efficient recovery of nonlinear but structured signal models, in which x lies in a union of subspaces, and presents an equivalence condition under which the proposed convex algorithm is guaranteed to recover the original signal.
Abstract: Traditional sampling theories consider the problem of reconstructing an unknown signal x from a series of samples. A prevalent assumption which often guarantees recovery from the given measurements is that x lies in a known subspace. Recently, there has been growing interest in nonlinear but structured signal models, in which x lies in a union of subspaces. In this paper, we develop a general framework for robust and efficient recovery of such signals from a given set of samples. More specifically, we treat the case in which x lies in a sum of k subspaces, chosen from a larger set of m possibilities. The samples are modeled as inner products with an arbitrary set of sampling functions. To derive an efficient and robust recovery algorithm, we show that our problem can be formulated as that of recovering a block-sparse vector whose nonzero elements appear in fixed blocks. We then propose a mixed ℓ2/ℓ1 program for block sparse recovery. Our main result is an equivalence condition under which the proposed convex algorithm is guaranteed to recover the original signal. This result relies on the notion of block restricted isometry property (RIP), which is a generalization of the standard RIP used extensively in the context of compressed sensing. Based on RIP, we also prove stability of our approach in the presence of noise and modeling errors. A special case of our framework is that of recovering multiple measurement vectors (MMV) that share a joint sparsity pattern. Adapting our results to this context leads to new MMV recovery methods as well as equivalence conditions under which the entire set can be determined efficiently.
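
A Lagrangian stand-in for the mixed ℓ2/ℓ1 program can be solved by proximal gradient with block soft thresholding. The sketch below is illustrative (the paper states a constrained program and analyzes it via the block RIP); `blocks` is a hypothetical argument listing the index arrays that partition the coordinates.

```python
import numpy as np

def block_ist(A, y, blocks, lam, n_iter=200):
    """Sketch for min_x 0.5*||A x - y||^2 + lam * sum_b ||x[b]||_2,
    a block-sparsity-promoting relaxation solved by proximal gradient."""
    L = np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        u = x - A.T @ (A @ x - y) / L
        for b in blocks:                        # block soft thresholding
            nrm = np.linalg.norm(u[b])
            scale = max(0.0, 1.0 - (lam / L) / nrm) if nrm > 0 else 0.0
            u[b] = scale * u[b]                 # shrink whole block toward 0
        x = u
    return x
```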

Journal ArticleDOI
TL;DR: In this paper, an advanced image reconstruction algorithm for pseudothermal ghost imaging, based on compressed sensing, is presented; the algorithm can also be applied to data from past pseudothermal ghost-imaging experiments, improving the reconstruction's quality.
Abstract: We describe an advanced image reconstruction algorithm for pseudothermal ghost imaging, reducing the number of measurements required for image recovery by an order of magnitude. The algorithm is based on compressed sensing, a technique that enables the reconstruction of an N-pixel image from much less than N measurements. We demonstrate the algorithm using experimental data from a pseudothermal ghost-imaging setup. The algorithm can be applied to data taken from past pseudothermal ghost-imaging experiments, improving the reconstruction’s quality.

Journal ArticleDOI
TL;DR: This paper describes how to choose the parameters of the multi-coset sampling so that a unique multiband signal matches the given samples, and develops a theoretical lower bound on the average sampling rate required for blind signal reconstruction, which is twice the minimal rate of known-spectrum recovery.
Abstract: We address the problem of reconstructing a multiband signal from its sub-Nyquist pointwise samples, when the band locations are unknown. Our approach assumes an existing multi-coset sampling. To date, recovery methods for this sampling strategy ensure perfect reconstruction either when the band locations are known, or under strict restrictions on the possible spectral supports. In this paper, only the number of bands and their widths are assumed without any other limitations on the support. We describe how to choose the parameters of the multi-coset sampling so that a unique multiband signal matches the given samples. To recover the signal, the continuous reconstruction is replaced by a single finite-dimensional problem without the need for discretization. The resulting problem is studied within the framework of compressed sensing, and thus can be solved efficiently using known tractable algorithms from this emerging area. We also develop a theoretical lower bound on the average sampling rate required for blind signal reconstruction, which is twice the minimal rate of known-spectrum recovery. Our method ensures perfect reconstruction for a wide class of signals sampled at the minimal rate, and provides a first systematic study of compressed sensing in a truly analog setting. Numerical experiments are presented demonstrating blind sampling and reconstruction with minimal sampling rate.

Journal ArticleDOI
11 Dec 2009
TL;DR: Various channel estimators that exploit the channel sparsity in a multicarrier underwater acoustic system are presented, including subspace algorithms from the array processing literature, namely Root-MUSIC and ESPRIT, and recent compressed sensing algorithms in the form of Orthogonal Matching Pursuit and Basis Pursuit.

Abstract: In this paper, we investigate various channel estimators that exploit channel sparsity in the time and/or Doppler domain for a multicarrier underwater acoustic system. We use a path-based channel model, where the channel is described by a limited number of paths, each characterized by a delay, Doppler scale, and attenuation factor, and derive the exact inter-carrier-interference (ICI) pattern. For channels that have limited Doppler spread we show that subspace algorithms from the array processing literature, namely Root-MUSIC and ESPRIT, can be applied for channel estimation. For channels with Doppler spread, we adopt a compressed sensing approach, in the form of Orthogonal Matching Pursuit (OMP) and Basis Pursuit (BP) algorithms, and utilize overcomplete dictionaries with an increased path delay resolution. Numerical simulation and experimental data of an OFDM block-by-block receiver are used to evaluate the proposed algorithms in comparison to the conventional least-squares (LS) channel estimator. We observe that subspace methods can tolerate small to moderate Doppler effects, and outperform the LS approach when the channel is indeed sparse. On the other hand, compressed sensing algorithms uniformly outperform the LS and subspace methods. Coupled with a channel equalizer mitigating ICI, the compressed sensing algorithms can effectively handle channels with significant Doppler spread.

Journal ArticleDOI
TL;DR: An extension of k-t FOCUSS to a more general framework with prediction and residual encoding, where the prediction provides an initial estimate and the residual encoding takes care of the remaining residual signals.
Abstract: A model-based dynamic MRI called k-t BLAST/SENSE has drawn significant attention from the MR imaging community because of its improved spatio-temporal resolution. Recently, we showed that the k-t BLAST/SENSE corresponds to the special case of a new dynamic MRI algorithm called k-t FOCUSS that is optimal from a compressed sensing perspective. The main contribution of this article is an extension of k-t FOCUSS to a more general framework with prediction and residual encoding, where the prediction provides an initial estimate and the residual encoding takes care of the remaining residual signals. Two prediction methods, RIGR and motion estimation/compensation scheme, are proposed, which significantly sparsify the residual signals. Then, using a more sophisticated random sampling pattern and optimized temporal transform, the residual signal can be effectively estimated from a very small number of k-t samples. Experimental results show that excellent reconstruction can be achieved even from severely limited k-t samples without aliasing artifacts. Magn Reson Med 61:103–116, 2009.

Proceedings ArticleDOI
19 Apr 2009
TL;DR: A new approach to adaptive system identification when the system model is sparse is proposed, resulting in two algorithms, the zero-attracting LMS (ZA-LMS) and the reweighted zero-attracting LMS (RZA-LMS), and it is proved that ZA-LMS can achieve lower mean square error than the standard LMS.

Abstract: We propose a new approach to adaptive system identification when the system model is sparse. The approach applies ℓ1 relaxation, common in compressive sensing, to improve the performance of LMS-type adaptive methods. This results in two new algorithms, the zero-attracting LMS (ZA-LMS) and the reweighted zero-attracting LMS (RZA-LMS). The ZA-LMS is derived by incorporating an ℓ1-norm penalty on the coefficients into the quadratic LMS cost function, which generates a zero attractor in the LMS iteration. The zero attractor promotes sparsity in the taps during the filtering process, and therefore accelerates convergence when identifying sparse systems. We prove that the ZA-LMS can achieve lower mean square error than the standard LMS. To further improve the filtering performance, the RZA-LMS is developed using a reweighted zero attractor. The performance of the RZA-LMS is numerically superior to that of the ZA-LMS. Experiments demonstrate the advantages of the proposed filters in both convergence rate and steady-state behavior under sparsity assumptions on the true coefficient vector. The RZA-LMS is also shown to be robust when the number of non-zero taps increases.
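
The zero attractor amounts to one extra term in the LMS tap update: the (sub)gradient of the ℓ1 penalty. A minimal sketch with illustrative step sizes:

```python
import numpy as np

def za_lms(x, d, n_taps, mu=0.01, rho=1e-4):
    """Zero-attracting LMS sketch: standard LMS update plus -rho*sign(w),
    which nudges inactive taps toward zero. x: input sequence, d: desired
    output sequence; mu and rho are illustrative step/attractor gains."""
    w = np.zeros(n_taps)
    for k in range(n_taps, len(x)):
        u = x[k - n_taps:k][::-1]               # current tap-input vector
        e = d[k] - w @ u                        # a-priori error
        w = w + mu * e * u - rho * np.sign(w)   # LMS step + zero attractor
    return w
```

Setting rho = 0 recovers standard LMS; RZA-LMS replaces sign(w) with a reweighted attractor of the form sign(w)/(1 + eps*|w|), which shrinks small taps more aggressively than large ones.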

Journal ArticleDOI
TL;DR: A framework for the joint design and optimization, from a set of training images, of the nonparametric dictionary and the sensing matrix is introduced and it is shown that this joint optimization outperforms both the use of random sensing matrices and those matrices that are optimized independently of the learning of the dictionary.
Abstract: Sparse signal representation, analysis, and sensing have received a lot of attention in recent years from the signal processing, optimization, and learning communities. On one hand, learning overcomplete dictionaries that facilitate a sparse representation of the data as a linear combination of a few atoms from such a dictionary leads to state-of-the-art results in image and video restoration and classification. On the other hand, the framework of compressed sensing (CS) has shown that sparse signals can be recovered from far fewer samples than required by the classical Shannon-Nyquist theorem. The samples used in CS correspond to linear projections obtained by a sensing projection matrix. It has been shown that, for example, a nonadaptive random sampling matrix satisfies the fundamental theoretical requirements of CS, enjoying the additional benefit of universality. At the same time, a projection sensing matrix that is optimally designed for a certain class of signals can further improve the reconstruction accuracy or further reduce the necessary number of samples. In this paper, we introduce a framework for the joint design and optimization, from a set of training images, of the nonparametric dictionary and the sensing matrix. We show that this joint optimization outperforms both the use of random sensing matrices and those matrices that are optimized independently of the learning of the dictionary. Particular cases of the proposed framework include the optimization of the sensing matrix for a given dictionary as well as the optimization of the dictionary for a predefined sensing environment. The presentation of the framework and its efficient numerical optimization is complemented with numerous examples on classical image datasets.

Journal ArticleDOI
28 Jun 2009
TL;DR: The idea of the proposed solution (modified-CS) is to solve a convex relaxation of the following problem: find the signal that satisfies the data constraint and is sparsest outside of T, and obtain sufficient conditions for exact reconstruction using modified-CS.
Abstract: We study the problem of reconstructing a sparse signal from a limited number of its linear projections when a part of its support is known, although the known part may contain some errors. The “known” part of the support, denoted T, may be available from prior knowledge. Alternatively, in a problem of recursively reconstructing time sequences of sparse spatial signals, one may use the support estimate from the previous time instant as the “known” part. The idea of our proposed solution (modified-CS) is to solve a convex relaxation of the following problem: find the signal that satisfies the data constraint and is sparsest outside of T. We obtain sufficient conditions for exact reconstruction using modified-CS. These are much weaker than those needed for compressive sensing (CS) when the sizes of the unknown part of the support and of errors in the known part are small compared to the support size. An important extension called regularized modified-CS (RegModCS) is developed which also uses prior signal estimate knowledge. Simulation comparisons for both sparse and compressible signals are shown.
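
A simple way to see the idea: penalize the ℓ1 norm only outside the known support T. The sketch below solves an illustrative Lagrangian version by iterative shrinkage; the paper itself analyzes the constrained form (sparsest outside T subject to the data constraint).

```python
import numpy as np

def modified_cs(A, y, T, lam, n_iter=300):
    """Sketch for min_x 0.5*||A x - y||^2 + lam*||x restricted to T^c||_1:
    ordinary iterative shrinkage, except coordinates in the known support T
    are never shrunk. T is an integer index array."""
    L = np.linalg.norm(A, 2) ** 2
    thresh = np.full(A.shape[1], lam / L)
    thresh[T] = 0.0                             # no penalty on the known part
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        u = x - A.T @ (A @ x - y) / L
        x = np.sign(u) * np.maximum(np.abs(u) - thresh, 0.0)
    return x
```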

Journal ArticleDOI
Hengyong Yu1, Ge Wang
TL;DR: It is proved that if an object under reconstruction is essentially piecewise constant, a local ROI can be exactly and stably reconstructed via the total variation minimization through an iterative interior reconstruction algorithm.
Abstract: While conventional wisdom is that the interior problem does not have a unique solution, by analytic continuation we recently showed that the interior problem can be uniquely and stably solved if we have a known sub-region inside a region of interest (ROI). However, such a known sub-region is not always readily available, and it is even impossible to find in some cases. Based on compressed sensing theory, here we prove that if an object under reconstruction is essentially piecewise constant, a local ROI can be exactly and stably reconstructed via the total variation minimization. Because many objects in computed tomography (CT) applications can be approximately modeled as piecewise constant, our approach is practically useful and suggests a new research direction for interior tomography. To illustrate the merits of our finding, we develop an iterative interior reconstruction algorithm that minimizes the total variation of a reconstructed image and evaluate the performance in numerical simulation.

Journal ArticleDOI
TL;DR: For a noisy linear observation model based on random measurement matrices drawn from general Gaussian ensembles, this paper derives both a set of sufficient conditions for exact support recovery using an exhaustive search decoder, and a set of necessary conditions that any decoder must satisfy for exact support set recovery.

Abstract: The problem of sparsity pattern or support set recovery refers to estimating the set of nonzero coefficients of an unknown vector β* ∈ R^p based on a set of n noisy observations. It arises in a variety of settings, including subset selection in regression, graphical model selection, signal denoising, compressive sensing, and constructive approximation. The sample complexity of a given method for subset recovery refers to the scaling of the required sample size n as a function of the signal dimension p, the sparsity index k (number of nonzeros in β*), the minimum value β_min of β* over its support, and other parameters of the measurement matrix. This paper studies the information-theoretic limits of sparsity recovery: in particular, for a noisy linear observation model based on measurement matrices drawn from general Gaussian ensembles, we derive both a set of sufficient conditions for exact support recovery using an exhaustive search decoder, as well as a set of necessary conditions that any decoder, regardless of its computational complexity, must satisfy for exact support recovery. This analysis of fundamental limits complements our previous work on sharp thresholds for support set recovery over the same set of random measurement ensembles using the polynomial-time Lasso method (ℓ1-constrained quadratic programming).

Journal ArticleDOI
TL;DR: It has been demonstrated that, with appropriate design of the compressive measurements used to define v, the decompressive mapping v → u may be performed with error having asymptotic properties analogous to those of the best adaptive transform-coding algorithm applied in the basis Ψ.

Abstract: Compressive sensing (CS) is a framework whereby one performs N nonadaptive measurements to constitute a vector v ∈ R^N used to recover an approximation to a desired signal u ∈ R^M, with N < M; this is performed under the assumption that u is sparse in the basis represented by a matrix Ψ. It has been demonstrated that, with appropriate design of the compressive measurements used to define v, the decompressive mapping v → u may be performed with error having asymptotic properties analogous to those of the best adaptive transform-coding algorithm applied in the basis Ψ. In most previous research, when L > 1 sets of compressive measurements {v_i}, i = 1, ..., L, are performed, each of the associated signals {u_i} is recovered one at a time, independently. In many applications the L "tasks" defined by the mappings v_i → u_i are not statistically independent, and it may be possible to improve the performance of the inversion if statistical interrelationships are exploited. In this paper, we address this problem within a multitask learning setting, wherein the mapping v → u for each task corresponds to inferring the parameters (here, wavelet coefficients) associated with the desired signal u_i, and a shared prior is placed across all of the L tasks. Under this hierarchical Bayesian modeling, data from all L tasks contribute toward inferring a posterior on the hyperparameters, and once the shared prior is thereby inferred, the data from each of the L individual tasks is then employed to estimate the task-dependent wavelet coefficients. An empirical Bayesian procedure for the estimation of hyperparameters is considered; two fast inference algorithms extending the relevance vector machine (RVM) are developed. Example results on several data sets demonstrate the effectiveness and robustness of the proposed algorithms.

Posted Content
TL;DR: This paper investigates a new learning formulation called structured sparsity, which is a natural extension of the standard sparsity concept in statistical learning and compressive sensing by allowing arbitrary structures on the feature set, which generalizes the group sparsity idea.
Abstract: This paper investigates a new learning formulation called structured sparsity, which is a natural extension of the standard sparsity concept in statistical learning and compressive sensing. By allowing arbitrary structures on the feature set, this concept generalizes the group sparsity idea that has become popular in recent years. A general theory is developed for learning with structured sparsity, based on the notion of coding complexity associated with the structure. It is shown that if the coding complexity of the target signal is small, then one can achieve improved performance by using coding complexity regularization methods, which generalize the standard sparse regularization. Moreover, a structured greedy algorithm is proposed to efficiently solve the structured sparsity problem. It is shown that the greedy algorithm approximately solves the coding complexity optimization problem under appropriate conditions. Experiments are included to demonstrate the advantage of structured sparsity over standard sparsity on some real applications.

Posted Content
TL;DR: In this paper, the authors develop a general theory for a variant of the error-correcting output code scheme, using ideas from compressed sensing to exploit output sparsity; the method can be regarded as a simple reduction from multi-label regression problems to binary regression problems.
Abstract: We consider multi-label prediction problems with large output spaces under the assumption of output sparsity -- that the target (label) vectors have small support. We develop a general theory for a variant of the popular error correcting output code scheme, using ideas from compressed sensing for exploiting this sparsity. The method can be regarded as a simple reduction from multi-label regression problems to binary regression problems. We show that the number of subproblems need only be logarithmic in the total number of possible labels, making this approach radically more efficient than others. We also state and prove robustness guarantees for this method in the form of regret transform bounds (in general), and also provide a more detailed analysis for the linear prediction setting.

Journal ArticleDOI
TL;DR: An extensive computational experiment and formal inferential analysis are conducted to test the hypothesis that the phase transitions occurring in modern high-dimensional data analysis and signal processing are universal across a range of underlying matrix ensembles; the results are consistent with asymptotic large-n universality, although finite-sample universality can be rejected.

Abstract: We review connections between phase transitions in high-dimensional combinatorial geometry and phase transitions occurring in modern high-dimensional data analysis and signal processing. In data analysis, such transitions arise as abrupt breakdown of linear model selection, robust data fitting or compressed sensing reconstructions, when the complexity of the model or the number of outliers increases beyond a threshold. In combinatorial geometry these transitions appear as abrupt changes in the properties of face counts of convex polytopes when the dimensions are varied. The thresholds in these very different problems appear in the same critical locations after appropriate calibration of variables. These thresholds are important in each subject area: for linear modelling, they place hard limits on the degree to which the now-ubiquitous high-throughput data analysis can be successful; for robustness, they place hard limits on the degree to which standard robust fitting methods can tolerate outliers before breaking down; for compressed sensing, they define the sharp boundary of the undersampling/sparsity tradeoff in undersampling theorems. Existing derivations of phase transitions in combinatorial geometry assume the underlying matrices have independent and identically distributed (iid) Gaussian elements. In applications, however, it often seems that Gaussianity is not required. We conducted an extensive computational experiment and formal inferential analysis to test the hypothesis that these phase transitions are universal across a range of underlying matrix ensembles. The experimental results are consistent with an asymptotic large-n universality across matrix ensembles; finite-sample universality can be rejected.

Proceedings ArticleDOI
07 Nov 2009
TL;DR: Block-based random image sampling is coupled with a projection-driven compressed-sensing recovery that encourages sparsity in the domain of directional transforms simultaneously with a smooth reconstructed image, yielding images with quality that matches or exceeds that produced by a popular, yet computationally expensive, technique which minimizes total variation.
Abstract: Block-based random image sampling is coupled with a projection-driven compressed-sensing recovery that encourages sparsity in the domain of directional transforms simultaneously with a smooth reconstructed image. Both contourlets as well as complex-valued dual-tree wavelets are considered for their highly directional representation, while bivariate shrinkage is adapted to their multiscale decomposition structure to provide the requisite sparsity constraint. Smoothing is achieved via a Wiener filter incorporated into iterative projected Landweber compressed-sensing recovery, yielding fast reconstruction. The proposed approach yields images with quality that matches or exceeds that produced by a popular, yet computationally expensive, technique which minimizes total variation. Additionally, reconstruction quality is substantially superior to that from several prominent pursuits-based algorithms that do not include any smoothing.

Proceedings ArticleDOI
16 Aug 2009
TL;DR: This work develops a novel spatio-temporal compressive sensing framework with two key components: a new technique called Sparsity Regularized Matrix Factorization (SRMF) that leverages the sparse or low-rank nature of real-world traffic matrices and their spatio-temporal properties, and a mechanism for combining low-rank approximations with local interpolation procedures.
Abstract: Many basic network engineering tasks (e.g., traffic engineering, capacity planning, anomaly detection) rely heavily on the availability and accuracy of traffic matrices. However, in practice it is challenging to reliably measure traffic matrices. Missing values are common. This observation brings us into the realm of compressive sensing, a generic technique for dealing with missing values that exploits the presence of structure and redundancy in many real-world systems. Despite much recent progress made in compressive sensing, existing compressive-sensing solutions often perform poorly for traffic matrix interpolation, because real traffic matrices rarely satisfy the technical conditions required for these solutions.To address this problem, we develop a novel spatio-temporal compressive sensing framework with two key components: (i) a new technique called Sparsity Regularized Matrix Factorization (SRMF) that leverages the sparse or low-rank nature of real-world traffic matrices and their spatio-temporal properties, and (ii) a mechanism for combining low-rank approximations with local interpolation procedures. We illustrate our new framework and demonstrate its superior performance in problems involving interpolation with real traffic matrices where we can successfully replace up to 98% of the values. Evaluation in applications such as network tomography, traffic prediction, and anomaly detection confirms the flexibility and effectiveness of our approach.
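
The core of the first component is a regularized low-rank factorization fit only on observed entries. The alternating-least-squares sketch below shows that core; SRMF additionally incorporates spatial and temporal smoothing constraint matrices and the local-interpolation step, both omitted here, and all parameter values are illustrative.

```python
import numpy as np

def als_complete(X, mask, rank=8, lam=0.1, n_iter=50):
    """Fit X ~ L @ R.T on observed entries (mask is boolean) with ridge
    penalties on the factors, then use L @ R.T to fill missing values."""
    m, n = X.shape
    rng = np.random.default_rng(0)
    L = rng.standard_normal((m, rank))
    R = rng.standard_normal((n, rank))
    I = lam * np.eye(rank)
    for _ in range(n_iter):
        for i in range(m):                      # row factors: one ridge solve each
            Rc = R[mask[i]]
            L[i] = np.linalg.solve(Rc.T @ Rc + I, Rc.T @ X[i, mask[i]])
        for j in range(n):                      # column factors
            Lc = L[mask[:, j]]
            R[j] = np.linalg.solve(Lc.T @ Lc + I, Lc.T @ X[mask[:, j], j])
    return L @ R.T
```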

ReportDOI
TL;DR: A new theory for distributed compressive sensing (DCS) is introduced that enables new distributed coding algorithms for multi-signal ensembles that exploit both intra- and inter-Signal correlation structures.
Abstract: Compressive sensing is a signal acquisition framework based on the revelation that a small collection of linear projections of a sparse signal contains enough information for stable recovery. In this paper we introduce a new theory for distributed compressive sensing (DCS) that enables new distributed coding algorithms for multi-signal ensembles that exploit both intra- and inter-signal correlation structures. The DCS theory rests on a new concept that we term the joint sparsity of a signal ensemble. Our theoretical contribution is to characterize the fundamental performance limits of DCS recovery for jointly sparse signal ensembles in the noiseless measurement setting; our result connects single-signal, joint, and distributed (multi-encoder) compressive sensing. To demonstrate the efficacy of our framework and to show that additional challenges such as computational tractability can be addressed, we study in detail three example models for jointly sparse signals. For these models, we develop practical algorithms for joint recovery of multiple signals from incoherent projections. In two of our three models, the results are asymptotically best-possible, meaning that both the upper and lower bounds match the performance of our practical algorithms. Moreover, simulations indicate that the asymptotics take effect with just a moderate number of signals. DCS is immediately applicable to a range of problems in sensor arrays and networks.

Journal ArticleDOI
TL;DR: This paper considers a more general signal model and assumes signals that live on or close to the union of linear subspaces of low dimension, and presents sampling theorems for this model that are in the same spirit as the Nyquist-Shannon sampling theorem in that they connect the number of required samples to certain model parameters.
Abstract: Compressed sensing is an emerging signal acquisition technique that enables signals to be sampled well below the Nyquist rate, given that the signal has a sparse representation in an orthonormal basis. In fact, sparsity in an orthonormal basis is only one possible signal model that allows for sampling strategies below the Nyquist rate. In this paper, we consider a more general signal model and assume signals that live on or close to the union of linear subspaces of low dimension. We present sampling theorems for this model that are in the same spirit as the Nyquist-Shannon sampling theorem in that they connect the number of required samples to certain model parameters. Contrary to the Nyquist-Shannon sampling theorem, which gives a necessary and sufficient condition for the number of required samples as well as a simple linear algorithm for signal reconstruction, the model studied here is more complex. We therefore concentrate on two aspects of the signal model, the existence of one to one maps to lower dimensional observation spaces and the smoothness of the inverse map. We show that almost all linear maps are one to one when the observation space is at least of the same dimension as the largest dimension of the convex hull of the union of any two subspaces in the model. However, we also show that in order for the inverse map to have certain smoothness properties such as a given finite Lipschitz constant, the required observation dimension necessarily depends logarithmically on the number of subspaces in the signal model. In other words, while unique linear sampling schemes require a small number of samples depending only on the dimension of the subspaces involved, in order to have stable sampling methods, the number of samples depends necessarily logarithmically on the number of subspaces in the model. These results are then applied to two examples, the standard compressed sensing signal model in which the signal has a sparse representation in an orthonormal basis and to a sparse signal model with additional tree structure.

Journal ArticleDOI
TL;DR: This paper analyzes the convergence of linearized Bregman iterations and derives a new algorithm that is proven to be convergent with a rate and can serve as another efficient tool in compressed sensing.

Abstract: Finding a solution of a linear equation Au = f with various minimization properties arises in many applications. One such application is compressed sensing, which requires an efficient and robust-to-noise algorithm for finding a minimal ℓ1-norm solution. This means that the algorithm should be tailored for large-scale, completely dense matrices A, while Au and A^T u can be computed by fast transforms and the solution we seek is sparse. Recently, a simple and fast algorithm based on linearized Bregman iteration was proposed in [28, 32] for this purpose. This paper analyzes the convergence of linearized Bregman iterations and the minimization properties of their limit. Based on our analysis, we also derive a new algorithm that is proven to be convergent with a rate. Furthermore, the new algorithm is simple and fast in approximating a minimal ℓ1-norm solution of Au = f, as shown by numerical simulations. Hence, it can be used as another efficient tool in compressed sensing.
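
The linearized Bregman iteration itself is two lines per step. A minimal sketch, assuming A is a dense array (in practice Au and A^T u would be applied via fast transforms, per the abstract); mu is the caller's choice, and the default step delta = 1/||A||_2^2 is a safe illustrative pick.

```python
import numpy as np

def linearized_bregman(A, f, mu, delta=None, n_iter=1000):
    """Linearized Bregman sketch for approximating a minimal-l1-norm solution
    of A u = f:   v <- v + A^T (f - A u),   u <- delta * shrink(v, mu)."""
    if delta is None:
        delta = 1.0 / np.linalg.norm(A, 2) ** 2
    shrink = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
    v = np.zeros(A.shape[1])
    u = np.zeros(A.shape[1])
    for _ in range(n_iter):
        v = v + A.T @ (f - A @ u)   # accumulate residual correlations
        u = delta * shrink(v, mu)   # soft-threshold to keep u sparse
    return u
```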

Journal ArticleDOI
TL;DR: A novel data acquisition and imaging method is presented for stepped-frequency continuous-wave ground penetrating radars (SFCW GPRs), and it is shown that if the target space is sparse, it is enough to make measurements at only a small number of random frequencies to construct an image of the target space by solving a convex optimization problem that enforces sparsity through ℓ1 minimization.

Abstract: A novel data acquisition and imaging method is presented for stepped-frequency continuous-wave ground penetrating radars (SFCW GPRs). It is shown that if the target space is sparse, i.e., contains a small number of point-like targets, it is enough to make measurements at only a small number of random frequencies to construct an image of the target space by solving a convex optimization problem which enforces sparsity through ℓ1 minimization. This measurement strategy greatly reduces the data acquisition time at the expense of higher computational costs. Imaging results for both simulated and experimental GPR data exhibit less clutter than the standard migration methods and are robust to noise and random spatial sampling. The images also have increased resolution, where closely spaced targets that cannot be resolved by standard migration methods can be resolved by the proposed method.