Journal ArticleDOI

Bayesian Compressive Sensing Using Laplace Priors

TL;DR: This paper models the components of the compressive sensing (CS) problem, i.e., the signal acquisition process, the unknown signal coefficients, and the model parameters for the signal and noise, within the Bayesian framework, and develops a constructive (greedy) algorithm designed for fast reconstruction useful in practical settings.
Abstract: In this paper, we model the components of the compressive sensing (CS) problem, i.e., the signal acquisition process, the unknown signal coefficients and the model parameters for the signal and noise using the Bayesian framework. We utilize a hierarchical form of the Laplace prior to model the sparsity of the unknown signal. We describe the relationship among a number of sparsity priors proposed in the literature, and show the advantages of the proposed model including its high degree of sparsity. Moreover, we show that some of the existing models are special cases of the proposed model. Using our model, we develop a constructive (greedy) algorithm designed for fast reconstruction useful in practical settings. Unlike most existing CS reconstruction methods, the proposed algorithm is fully automated, i.e., the unknown signal coefficients and all necessary parameters are estimated solely from the observation, and, therefore, no user intervention is needed. Additionally, the proposed algorithm provides estimates of the uncertainty of the reconstructions. We provide experimental results with synthetic 1-D signals and images, and compare with state-of-the-art CS reconstruction algorithms, demonstrating the superior performance of the proposed approach.
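
The hierarchical form of the Laplace prior mentioned above is, in the standard Gaussian scale-mixture construction (a background sketch with our notation, not a quotation from the paper), a Gaussian prior on each coefficient whose variance is itself exponentially distributed:

    p(x_i \mid \gamma_i) = \mathcal{N}(x_i; 0, \gamma_i), \qquad
    p(\gamma_i \mid \lambda) = \frac{\lambda}{2} \exp\!\left(-\frac{\lambda \gamma_i}{2}\right),

so that marginalizing over the variance recovers a Laplace density,

    p(x_i \mid \lambda) = \int_0^{\infty} p(x_i \mid \gamma_i)\, p(\gamma_i \mid \lambda)\, d\gamma_i
    = \frac{\sqrt{\lambda}}{2} \exp\!\left(-\sqrt{\lambda}\, \lvert x_i \rvert\right).

Keeping the prior in this two-stage conjugate form is what makes fully automated, evidence-based estimation of all parameters tractable.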
Citations
Journal ArticleDOI
TL;DR: An iterative algorithm is developed for the off-grid model from a Bayesian perspective, exploiting joint sparsity among different snapshots by assuming a Laplace prior for the signals at all snapshots.
Abstract: Direction of arrival (DOA) estimation is a classical problem in signal processing with many practical applications. Its research has recently been advanced owing to the development of methods based on sparse signal reconstruction. While these methods have shown advantages over conventional ones, there are still difficulties in practical situations where true DOAs are not on the discretized sampling grid. To deal with such an off-grid DOA estimation problem, this paper studies an off-grid model that takes into account effects of the off-grid DOAs and has a smaller modeling error. An iterative algorithm is developed based on the off-grid model from a Bayesian perspective while joint sparsity among different snapshots is exploited by assuming a Laplace prior for signals at all snapshots. The new approach applies to both single snapshot and multi-snapshot cases. Numerical simulations show that the proposed algorithm has improved accuracy in terms of mean squared estimation error. The algorithm can maintain high estimation accuracy even under a very coarse sampling grid.
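
For context, the off-grid model referred to above is usually built from a first-order Taylor expansion of the steering vectors around the sampling grid (a sketch under that assumption, with our notation): with grid points θ̃_k and offsets β_k = θ_k − θ̃_{n_k},

    \mathbf{a}(\theta_k) \approx \mathbf{a}(\tilde{\theta}_{n_k}) + \beta_k\, \mathbf{a}'(\tilde{\theta}_{n_k}),
    \qquad
    \mathbf{y}(t) = \left[\mathbf{A} + \mathbf{B}\,\operatorname{diag}(\boldsymbol{\beta})\right] \mathbf{x}(t) + \mathbf{e}(t),

where A and B collect the steering vectors and their derivatives on the grid. The sparse snapshots x(t), which share a common support (hence the joint Laplace prior), and the offsets β are then estimated iteratively.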

623 citations


Cites background or methods from "Bayesian Compressive Sensing Using ..."

  • ...As an example, a Laplace signal prior leads to a maximum a posteriori (MAP) optimal estimate that coincides with an optimal solution to the ℓ1 optimization [10].... (this equivalence is written out in the note after this list)

  • ...Similar approaches have been used in standard Bayesian CS methods [9], [10]....

  • ...According to [10], both are Laplace distributed and share the same PDF, which is strongly peaked at the origin....


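The MAP–ℓ1 equivalence invoked in the first excerpt above can be made explicit (standard derivation, notation ours): for y = Φx + n with n ∼ N(0, σ²I) and an i.i.d. Laplace prior p(x) ∝ exp(−λ‖x‖₁),

    \hat{\mathbf{x}}_{\mathrm{MAP}}
    = \arg\max_{\mathbf{x}}\; \log p(\mathbf{y} \mid \mathbf{x}) + \log p(\mathbf{x})
    = \arg\min_{\mathbf{x}}\; \frac{1}{2\sigma^{2}} \|\mathbf{y} - \boldsymbol{\Phi}\mathbf{x}\|_{2}^{2} + \lambda \|\mathbf{x}\|_{1},

which is exactly the ℓ1-regularized least-squares (basis pursuit denoising / LASSO) problem.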
Journal ArticleDOI
TL;DR: This article reviews the state of the art and most recent advances of compressive sensing (CS) and related methods as applied to electromagnetics, covering a wide set of application scenarios comprising the diagnosis and synthesis of antenna arrays, the estimation of directions of arrival, and the solution of inverse scattering and radar imaging problems.
Abstract: Several problems arising in electromagnetics can be directly formulated or suitably recast for an effective solution within the compressive sensing (CS) framework. This has motivated a great interest in developing and applying CS methodologies to several conventional and innovative electromagnetic scenarios. This work is aimed at presenting, to the best of the authors’ knowledge, a review of the state-of-the-art and most recent advances of CS formulations and related methods as applied to electromagnetics. Toward this end, a wide set of applicative scenarios comprising the diagnosis and synthesis of antenna arrays, the estimation of directions of arrival, and the solution of inverse scattering and radar imaging problems are reviewed. Current challenges and trends in the application of CS to the solution of traditional and new electromagnetic problems are also discussed.

318 citations


Cites background or methods from "Bayesian Compressive Sensing Using ..."

  • ...In addition, BCS approaches [9] have been investigated for solving the perturbed CS problem [68]....

  • ...Due to their effectiveness, which often outperforms deterministic CS strategies in terms of reconstruction accuracy and robustness [8, 9, 47, 48, 51], as well as the availability of standard implementations of BCS and MT-BCS techniques [50], BCS with Laplace prior algorithms [51], and fast Bayesian matching pursuit [52] techniques, Bayesian approaches have been widely adopted in electromagnetics (see Section 4)....

  • ...For instance, “hierarchical-Laplace” priors on x have been included in [8, 9] by replacing (21) with P(x) = (λ/2) exp(−λ‖x‖_{ℓ1}) (25)...

  • ...Early developments included the application of CS strategies to imaging problems linearized through the Born approximation (BA) and comprising Laplace priors [9], as envisaged in [77–79]....

  • ...For example, the off-grid DoA detection has been formulated with Laplace priors in [9] with enhanced performances with respect to [18], but at the expense of a slower speed, particularly when a dense angular grid (i....

Journal ArticleDOI
TL;DR: A robust recurrent neural network based on echo state mechanisms is presented in a Bayesian framework; it is robust in the presence of outliers and superior to existing methods.
Abstract: In this paper, a robust recurrent neural network is presented in a Bayesian framework based on echo state mechanisms. Since the new model is capable of handling outliers in the training data set, it is termed as a robust echo state network (RESN). The RESN inherits the basic idea of ESN learning in a Bayesian framework, but replaces the commonly used Gaussian distribution with a Laplace one, which is more robust to outliers, as the likelihood function of the model output. Moreover, the training of the RESN is facilitated by employing a bound optimization algorithm, based on which, a proper surrogate function is derived and the Laplace likelihood function is approximated by a Gaussian one, while remaining robust to outliers. It leads to an efficient method for estimating model parameters, which can be solved by using a Bayesian evidence procedure in a fully autonomous way. Experimental results show that the proposed method is robust in the presence of outliers and is superior to existing methods.
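
One standard route to such a Gaussian surrogate for the Laplace likelihood (a sketch of the general bound-optimization idea; the paper's exact derivation may differ) is the quadratic majorization of the absolute value: for any η > 0,

    \lvert e \rvert \le \frac{e^{2}}{2\eta} + \frac{\eta}{2},
    \qquad \text{with equality at } \eta = \lvert e \rvert,

which follows from (|e| − η)² ≥ 0. Substituting this bound into the Laplace log-likelihood term −|e|/b yields a Gaussian-form lower bound on the likelihood that is re-tightened at each iteration by setting η to the current residual magnitude, so the robust model can be fitted with Gaussian-style closed-form updates.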

294 citations


Cites background from "Bayesian Compressive Sensing Using ..."

  • ...They include providing probabilistic prediction and automatic estimation of model parameters [22], [23]....

Journal ArticleDOI
TL;DR: A sparse Bayesian method exploiting Laplace priors, termed SBLaplace, is introduced for EEG classification; it learns a sparse discriminant vector with a Laplace prior in a hierarchical fashion under a Bayesian evidence framework.
Abstract: Regularization has been one of the most popular approaches to prevent overfitting in electroencephalogram (EEG) classification of brain–computer interfaces (BCIs). The effectiveness of regularization is often highly dependent on the selection of regularization parameters that are typically determined by cross-validation (CV). However, the CV imposes two main limitations on BCIs: 1) a large amount of training data is required from the user and 2) it takes a relatively long time to calibrate the classifier. These limitations substantially deteriorate the system’s practicability and may cause a user to be reluctant to use BCIs. In this paper, we introduce a sparse Bayesian method by exploiting Laplace priors, namely, SBLaplace, for EEG classification. A sparse discriminant vector is learned with a Laplace prior in a hierarchical fashion under a Bayesian evidence framework. All required model parameters are automatically estimated from training data without the need of CV. Extensive comparisons are carried out between the SBLaplace algorithm and several other competing methods based on two EEG data sets. The experimental results demonstrate that the SBLaplace algorithm achieves better overall performance than the competing algorithms for EEG classification.
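
The CV-free parameter estimation described above rests on the Bayesian evidence framework. Below is a minimal NumPy sketch of that framework for linear regression; note that it uses a plain Gaussian prior for brevity, whereas SBLaplace places a hierarchical Laplace prior on the discriminant vector, so this illustrates only the flavor of the automatic hyperparameter updates, not the paper's algorithm.

    import numpy as np

    def evidence_framework(X, t, n_iter=50, tol=1e-6):
        """Type-II ML (evidence) updates for Bayesian linear regression."""
        n, d = X.shape
        alpha, beta = 1.0, 1.0  # prior precision, noise precision
        for _ in range(n_iter):
            # closed-form posterior over weights given current hyperparameters
            S = np.linalg.inv(alpha * np.eye(d) + beta * X.T @ X)
            m = beta * S @ X.T @ t
            # effective number of well-determined parameters
            gamma = d - alpha * np.trace(S)
            alpha_new = gamma / (m @ m)
            beta_new = (n - gamma) / np.sum((t - X @ m) ** 2)
            if abs(alpha_new - alpha) < tol and abs(beta_new - beta) < tol:
                alpha, beta = alpha_new, beta_new
                break
            alpha, beta = alpha_new, beta_new
        return m, alpha, beta

    # usage on synthetic data: all hyperparameters come from the data alone
    rng = np.random.default_rng(0)
    X = rng.standard_normal((200, 10))
    t = X @ rng.standard_normal(10) + 0.1 * rng.standard_normal(200)
    m, alpha, beta = evidence_framework(X, t)

Each iteration alternates a closed-form posterior update with re-estimation of the prior precision alpha and the noise precision beta, so no validation data or cross-validation loop is needed.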

232 citations


Cites background or methods from "Bayesian Compressive Sensing Using ..."

  • ...With Bayesian linear regression, Hoffmann et al. [16] introduced BLDA to EEG classification for P300-based BCIs....

  • ...Currently, one of the most popularly adopted EEG patterns for BCI development is event-related potential (ERP), which is time- and phase-locked to stimulus events of interest [6]–[8]....

Journal ArticleDOI
TL;DR: The experimental results show that the proposed algorithm outperforms many state-of-the-art algorithms and solves the inverse problem automatically; prior information on the number of clusters and the size of each cluster is not required.

202 citations


Cites methods from "Bayesian Compressive Sensing Using ..."

  • ...Exploiting the sparsity probabilistic model [18], many algorithms based on the LVA have been proposed to solve sparse decomposition problems [19, 11, 20, 21, 22, 14]....

References
Book
Christopher M. Bishop
17 Aug 2006
TL;DR: Probability distributions, linear models for regression, linear models for classification, neural networks, graphical models, mixture models and EM, sampling methods, continuous latent variables, and sequential data are studied.
Abstract: Probability Distributions.- Linear Models for Regression.- Linear Models for Classification.- Neural Networks.- Kernel Methods.- Sparse Kernel Machines.- Graphical Models.- Mixture Models and EM.- Approximate Inference.- Sampling Methods.- Continuous Latent Variables.- Sequential Data.- Combining Models.

22,840 citations

Journal ArticleDOI
TL;DR: A review of the book Pattern Recognition and Machine Learning, published in Technometrics.
Abstract: (2007). Pattern Recognition and Machine Learning. Technometrics: Vol. 49, No. 3, pp. 366-366.

18,802 citations


"Bayesian Compressive Sensing Using ..." refers methods in this paper

  • ...In the type-II maximum likelihood procedure, we represent [the hyperparameter posterior] by a degenerate distribution, where the distribution is replaced by a delta function at its mode, as we assume that this posterior distribution is sharply peaked around its mode [22].... (written out compactly after this quote)


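Written out compactly (standard form, notation ours), the degenerate-distribution approximation quoted above is

    p(\mathbf{x} \mid \mathbf{y})
    = \int p(\mathbf{x} \mid \mathbf{y}, \lambda)\, p(\lambda \mid \mathbf{y})\, d\lambda
    \approx p(\mathbf{x} \mid \mathbf{y}, \hat{\lambda}),
    \qquad \hat{\lambda} = \arg\max_{\lambda}\, p(\lambda \mid \mathbf{y}),

i.e., the hyperparameter posterior p(λ | y) is replaced by the delta function δ(λ − λ̂) at its mode.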
Book
D.L. Donoho
01 Jan 2004
TL;DR: It is possible to design n = O(N log(m)) nonadaptive measurements allowing reconstruction with accuracy comparable to that attainable with direct knowledge of the N most important coefficients; a good approximation to those N important coefficients is extracted from the n measurements by solving a linear program (Basis Pursuit in signal processing).
Abstract: Suppose x is an unknown vector in R^m (a digital image or signal); we plan to measure n general linear functionals of x and then reconstruct. If x is known to be compressible by transform coding with a known transform, and we reconstruct via the nonlinear procedure defined here, the number of measurements n can be dramatically smaller than the size m. Thus, certain natural classes of images with m pixels need only n = O(m^{1/4} log^{5/2}(m)) nonadaptive nonpixel samples for faithful recovery, as opposed to the usual m pixel samples. More specifically, suppose x has a sparse representation in some orthonormal basis (e.g., wavelet, Fourier) or tight frame (e.g., curvelet, Gabor), so the coefficients belong to an ℓ_p ball for 0 < p ≤ 1. …

18,609 citations

Journal ArticleDOI
TL;DR: In this paper, the authors considered the model problem of reconstructing an object from incomplete frequency samples and showed that with probability at least 1 − O(N^{−M}), f can be reconstructed exactly as the solution to the ℓ1 minimization problem.
Abstract: This paper considers the model problem of reconstructing an object from incomplete frequency samples. Consider a discrete-time signal f ∈ C^N and a randomly chosen set of frequencies Ω. Is it possible to reconstruct f from the partial knowledge of its Fourier coefficients on the set Ω? A typical result of this paper is as follows. Suppose that f is a superposition of |T| spikes, f(t) = Σ_{τ∈T} f(τ) δ(t − τ), obeying |T| ≤ C_M · (log N)^{−1} · |Ω| for some constant C_M > 0. We do not know the locations of the spikes nor their amplitudes. Then with probability at least 1 − O(N^{−M}), f can be reconstructed exactly as the solution to the ℓ1 minimization problem. In short, exact recovery may be obtained by solving a convex optimization problem. We give numerical values for C_M which depend on the desired probability of success. Our result may be interpreted as a novel kind of nonlinear sampling theorem. In effect, it says that any signal made out of |T| spikes may be recovered by convex programming from almost every set of frequencies of size O(|T| · log N). Moreover, this is nearly optimal in the sense that any method succeeding with probability 1 − O(N^{−M}) would in general require a number of frequency samples at least proportional to |T| · log N. The methodology extends to a variety of other situations and higher dimensions. For example, we show how one can reconstruct a piecewise constant (one- or two-dimensional) object from incomplete frequency samples, provided that the number of jumps (discontinuities) obeys the condition above, by minimizing other convex functionals such as the total variation of f.

14,587 citations

Journal ArticleDOI
TL;DR: Basis Pursuit (BP) is a principle for decomposing a signal into an "optimal" superposition of dictionary elements, where optimal means having the smallest ℓ1 norm of coefficients among all such decompositions.
Abstract: The time-frequency and time-scale communities have recently developed a large number of overcomplete waveform dictionaries --- stationary wavelets, wavelet packets, cosine packets, chirplets, and warplets, to name a few. Decomposition into overcomplete systems is not unique, and several methods for decomposition have been proposed, including the method of frames (MOF), matching pursuit (MP), and, for special dictionaries, the best orthogonal basis (BOB). Basis Pursuit (BP) is a principle for decomposing a signal into an "optimal" superposition of dictionary elements, where optimal means having the smallest ℓ1 norm of coefficients among all such decompositions. We give examples exhibiting several advantages over MOF, MP, and BOB, including better sparsity and superresolution. BP has interesting relations to ideas in areas as diverse as ill-posed problems, abstract harmonic analysis, total variation denoising, and multiscale edge denoising. BP in highly overcomplete dictionaries leads to large-scale optimization problems. With signals of length 8192 and a wavelet packet dictionary, one gets an equivalent linear program of size 8192 by 212,992. Such problems can be attacked successfully only because of recent advances in linear programming by interior-point methods. We obtain reasonable success with a primal-dual logarithmic barrier method and conjugate-gradient solver.
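
Because BP reduces to a linear program, it can be sketched with an off-the-shelf LP solver. The snippet below is an illustration of ours (not the authors' primal-dual interior-point code), using the standard split x = u − v with u, v ≥ 0:

    import numpy as np
    from scipy.optimize import linprog

    def basis_pursuit(A, y):
        """Solve min ||x||_1 subject to Ax = y as a linear program."""
        n = A.shape[1]
        c = np.ones(2 * n)           # objective: sum(u) + sum(v) = ||x||_1
        A_eq = np.hstack([A, -A])    # equality constraint: A(u - v) = y
        res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
        u, v = res.x[:n], res.x[n:]
        return u - v

    # usage: recover a 5-sparse vector from 40 random Gaussian measurements
    rng = np.random.default_rng(0)
    m, n, k = 40, 128, 5
    A = rng.standard_normal((m, n)) / np.sqrt(m)
    x_true = np.zeros(n)
    x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
    x_hat = basis_pursuit(A, A @ x_true)
    print(np.max(np.abs(x_hat - x_true)))  # near zero when recovery succeeds

For measurement matrices and sparsity levels satisfying the usual CS conditions, the LP solution coincides with the sparse ground truth up to solver precision.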

9,950 citations


"Bayesian Compressive Sensing Using ..." refers methods in this paper

  • ...Most of the proposed methods are examples of energy minimization methods, including linear programming algorithms [7], [8] and constructive (greedy) algorithms [9]–[11]....
