Journal ArticleDOI
Bayesian Compressive Sensing
Shihao Ji, Ya Xue, Lawrence Carin
TL;DR: The underlying theory, an associated algorithm, example results, and comparisons to other compressive-sensing inversion algorithms in the literature are presented.

Abstract:
The data of interest are assumed to be represented as N-dimensional real vectors, and these vectors are compressible in some linear basis B, implying that the signal can be reconstructed accurately using only a small number M ≪ N of basis-function coefficients associated with B. Compressive sensing is a framework whereby one does not measure one of the aforementioned N-dimensional signals directly, but rather a set of related measurements, with each new measurement a linear combination of the original underlying N-dimensional signal. The number of required compressive-sensing measurements is typically much smaller than N, offering the potential to simplify the sensing system. Let f denote the unknown underlying N-dimensional signal and g a vector of compressive-sensing measurements; one may then approximate f accurately by utilizing knowledge of the (under-determined) linear relationship between f and g, in addition to the knowledge that f is compressible in B. In this paper we employ a Bayesian formalism for estimating the underlying signal f based on compressive-sensing measurements g. The proposed framework has the following properties: i) in addition to estimating the underlying signal f, "error bars" are also estimated, giving a measure of confidence in the inverted signal; ii) using knowledge of the error bars, a principled means is provided for determining when a sufficient number of compressive-sensing measurements have been performed; iii) the setting lends itself naturally to a framework whereby the compressive-sensing measurements are optimized adaptively rather than determined randomly; and iv) the framework accounts for additive noise in the compressive-sensing measurements and provides an estimate of the noise variance. We present the underlying theory, an associated algorithm, and example results, and provide comparisons to other compressive-sensing inversion algorithms in the literature.
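As a concrete illustration of the measurement-and-inversion model described in the abstract, the sketch below forms compressive measurements g = Φf + noise and computes a Gaussian posterior over f, whose posterior standard deviations play the role of the "error bars" mentioned above. This is a simplified stand-in, not the paper's method: the paper uses a hierarchical, sparsity-promoting prior (a relevance vector machine), whereas here a plain zero-mean Gaussian prior with an assumed fixed precision `alpha` is used so that the posterior mean and covariance have closed form.

```python
import numpy as np

rng = np.random.default_rng(0)

# Dimensions: N-dimensional signal, a few active coefficients, K < N measurements.
N, K, n_active = 256, 96, 8

# Signal f is compressible in basis B (here B = identity for simplicity).
f = np.zeros(N)
f[rng.choice(N, n_active, replace=False)] = rng.normal(size=n_active)

# Compressive measurements: g = Phi f + noise, with a random projection Phi.
Phi = rng.normal(size=(K, N)) / np.sqrt(K)
sigma = 0.01                                   # assumed noise std. deviation
g = Phi @ f + sigma * rng.normal(size=K)

# Bayesian linear model with a zero-mean Gaussian prior of precision alpha:
# the posterior over f is Gaussian with closed-form mean and covariance.
alpha = 1.0                                    # hypothetical prior precision
Sigma_post = np.linalg.inv(alpha * np.eye(N) + (Phi.T @ Phi) / sigma**2)
mu_post = Sigma_post @ Phi.T @ g / sigma**2    # posterior-mean estimate of f

# "Error bars": posterior standard deviation of each coefficient.
err_bars = np.sqrt(np.diag(Sigma_post))
```

Because the posterior covariance is available in closed form, the same quantities support the adaptive-measurement idea in the abstract: one can, in principle, choose the next row of Φ to reduce the posterior uncertainty the most.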
Citations
Book
Nonnegative Matrix and Tensor Factorizations: Applications to Exploratory Multi-way Data Analysis and Blind Source Separation
TL;DR: This book provides a broad survey of models and efficient algorithms for Nonnegative Matrix Factorization (NMF), including NMF's various extensions and modifications, especially Nonnegative Tensor Factorizations (NTF) and Nonnegative Tucker Decompositions (NTD).
MonographDOI
Nonnegative Matrix and Tensor Factorizations
TL;DR: A broad survey of models and efficient algorithms for nonnegative matrix factorization (NMF) can be found in this paper, where the authors focus on the algorithms that are most useful in practice: the fastest, most robust, and best suited to large-scale models.
Journal ArticleDOI
Structured Compressed Sensing: From Theory to Applications
Marco F. Duarte, Yonina C. Eldar
TL;DR: The prime focus is bridging theory and practice, pinpointing the potential of structured CS strategies to move from the mathematics to the hardware of compressive sensing.
Journal ArticleDOI
Sparse Signal Recovery With Temporally Correlated Source Vectors Using Sparse Bayesian Learning
Zhilin Zhang, Bhaskar D. Rao
TL;DR: This paper derives two sparse Bayesian learning algorithms, which have superior recovery performance compared to existing algorithms, especially in the presence of high temporal correlation, and provides analysis of the global and local minima of their cost function.
Journal ArticleDOI
Sparsity preserving projections with applications to face recognition
TL;DR: A new unsupervised DR method called sparsity preserving projections (SPP), which aims to preserve the sparse reconstructive relationship of the data; this is achieved by minimizing an L1-regularization-related objective function.
References
Book
Elements of information theory
Thomas M. Cover, Joy A. Thomas
TL;DR: The authors examine the role of entropy, inequality, and randomness in the design and construction of codes.
Journal ArticleDOI
Regression Shrinkage and Selection via the Lasso
TL;DR: A new method for estimation in linear models called the lasso, which minimizes the residual sum of squares subject to the sum of the absolute values of the coefficients being less than a constant, is proposed.
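The lasso objective summarized above can be sketched in code. The snippet below solves the equivalent penalized form, minimizing 0.5·‖y − Xb‖² + λ·‖b‖₁, via iterative soft-thresholding (ISTA); this is one standard solver for the lasso, not the algorithm proposed in the cited paper, and the data, `lam`, and iteration count are illustrative choices.

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of t * ||.||_1 (elementwise soft-thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def lasso_ista(X, y, lam, n_iter=2000):
    """Minimize 0.5*||y - X b||^2 + lam*||b||_1 by iterative soft-thresholding.

    For a matching lam, this penalized form is equivalent to the constrained
    form in the paper: minimize RSS subject to sum(|b_j|) <= t.
    """
    L = np.linalg.norm(X, 2) ** 2            # Lipschitz constant of the gradient
    b = np.zeros(X.shape[1])
    for _ in range(n_iter):
        grad = X.T @ (X @ b - y)             # gradient of the RSS term
        b = soft_threshold(b - grad / L, lam / L)
    return b

# Illustrative sparse-regression problem: 3 true nonzero coefficients of 20.
rng = np.random.default_rng(1)
X = rng.normal(size=(50, 20))
true_b = np.zeros(20)
true_b[:3] = [2.0, -1.5, 1.0]
y = X @ true_b + 0.05 * rng.normal(size=50)
b_hat = lasso_ista(X, y, lam=1.0)
```

The soft-thresholding step is what sets small coefficients exactly to zero, which is why the lasso performs variable selection as well as shrinkage.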
Book
Compressed sensing
TL;DR: It is possible to design n = O(N log(m)) nonadaptive measurements allowing reconstruction with accuracy comparable to that attainable with direct knowledge of the N most important coefficients, and a good approximation to those N important coefficients is extracted from the n measurements by solving a linear program, known in signal processing as Basis Pursuit.
Book
A wavelet tour of signal processing
TL;DR: An introduction to the transient world of signals and an approximation tour, covering wavelet packet and local cosine bases.
Book
Ten lectures on wavelets
TL;DR: This book presents the theory of wavelet transforms and their applications to multiresolution analysis and orthonormal bases.