Open Access Journal Article (DOI)

Expectation-Maximization Gaussian-Mixture Approximate Message Passing

Jeremy Vila, +1 more
01 Oct 2013, Vol. 61, Iss. 19, pp. 4658-4672
TLDR
An empirical-Bayesian technique is proposed that simultaneously learns the signal distribution while MMSE-recovering the signal, according to the learned distribution, using AMP; the non-zero distribution is modeled as a Gaussian mixture whose parameters are learned through expectation maximization, with AMP implementing the expectation step.
Abstract
When recovering a sparse signal from noisy compressive linear measurements, the distribution of the signal's non-zero coefficients can have a profound effect on recovery mean-squared error (MSE). If this distribution were known a priori, then one could use computationally efficient approximate message passing (AMP) techniques for nearly minimum-MSE (MMSE) recovery. In practice, however, the distribution is unknown, motivating the use of robust algorithms like LASSO (which is nearly minimax optimal) at the cost of significantly larger MSE for non-least-favorable distributions. As an alternative, we propose an empirical-Bayesian technique that simultaneously learns the signal distribution while MMSE-recovering the signal, according to the learned distribution, using AMP. In particular, we model the non-zero distribution as a Gaussian mixture and learn its parameters through expectation maximization, using AMP to implement the expectation step. Numerical experiments on a wide range of signal classes confirm the state-of-the-art performance of our approach, in both reconstruction error and runtime, in the high-dimensional regime, for most (but not all) sensing operators.
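A minimal NumPy sketch of the idea (not the authors' EM-GM-AMP code): an AMP iteration whose denoiser is the MMSE estimator under a Bernoulli/Gaussian-mixture prior, with the mixture parameters re-fit by EM from AMP's posterior quantities. Variable names follow the paper's notation only loosely, and the initialization is a crude placeholder.

```python
import numpy as np

def gm_denoiser(r, tau, lam, omega, theta, phi):
    """MMSE denoiser for the prior (1-lam)*delta(x) + lam*sum_k omega[k]*N(x; theta[k], phi[k]),
    applied to the AMP pseudo-measurement r = x + N(0, tau)."""
    r = r[:, None]                                        # shape (N, 1)
    # responsibilities of the K active components and the zero (spike) component
    log_act = np.log(lam*omega) - 0.5*np.log(2*np.pi*(phi + tau)) - (r - theta)**2/(2*(phi + tau))
    log_zero = np.log(1 - lam) - 0.5*np.log(2*np.pi*tau) - r**2/(2*tau)
    m = np.maximum(log_zero, log_act.max(axis=1, keepdims=True))
    beta, beta0 = np.exp(log_act - m), np.exp(log_zero - m)
    norm = beta0 + beta.sum(axis=1, keepdims=True)
    beta = beta/norm
    # per-component posterior moments (Gaussian times Gaussian)
    nu = 1.0/(1.0/tau + 1.0/phi)                          # shape (K,)
    gamma = nu*(r/tau + theta/phi)                        # shape (N, K)
    xhat = (beta*gamma).sum(axis=1)
    xvar = (beta*(nu + gamma**2)).sum(axis=1) - xhat**2
    return xhat, xvar, beta, gamma, nu

def em_gm_amp(y, A, K=3, n_iter=50):
    """AMP recovery of x from y = A x + noise, with EM re-fitting of the GM prior."""
    M, N = A.shape
    lam, omega = min(M/N, 0.5), np.ones(K)/K              # crude initialization of the prior
    theta, phi = np.linspace(-1.0, 1.0, K), np.full(K, np.var(y)*N/M)
    xhat, z = np.zeros(N), y.copy()
    for _ in range(n_iter):
        tau = z @ z/M                                     # effective noise-variance estimate
        r = xhat + A.T @ z                                # AMP pseudo-measurement
        xhat, xvar, beta, gamma, nu = gm_denoiser(r, tau, lam, omega, theta, phi)
        z = y - A @ xhat + z*xvar.sum()/(M*tau)           # residual with Onsager correction
        # EM (M-step) updates of the prior from the AMP posterior quantities
        bsum = beta.sum(axis=0) + 1e-12
        lam = np.clip(beta.sum()/N, 1e-6, 1 - 1e-6)
        omega = bsum/bsum.sum()
        theta = (beta*gamma).sum(axis=0)/bsum
        phi = (beta*(nu + (gamma - theta)**2)).sum(axis=0)/bsum
    return xhat
```

With a sensing matrix A whose columns have roughly unit norm, `em_gm_amp(y, A)` returns an estimate of x; the published EM-GM-AMP additionally learns the noise variance by EM and uses a more careful initialization and stopping rule.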


Citations
Proceedings Article (DOI)

Hybrid approximate message passing for generalized group sparsity

TL;DR: The HyGAMP method extends the AMP framework to incorporate priors on x described by graphical models, of which generalized group sparsity is a special case; it is shown to be computationally efficient and general, and offers superior performance in certain synthetic-data test cases.
Proceedings Article (DOI)

Joint robust decoding and parameter estimation for convolutionally coded systems impaired by unknown impulse noise

TL;DR: This paper proposes the iterative MEVA (IMEVA): information regarding the indicator sequence is leveraged and passed to the σ²-estimator to refine the clipping-threshold estimate at the next iteration. Simulation results show that the IMEVA performs remarkably close to the Baum-Welch algorithm, which, at the cost of receiver complexity, relies on knowing the underlying impulse-noise model.
Journal Article (DOI)

A MACH Filter-Based Reconstruction-Free Target Detector and Tracker for Compressive Sensing Cameras

TL;DR: This paper describes a prototype automated detection and tracking system using a compressive sensing camera that does not rely on computationally costly image reconstructions; it operates on raw sensor data, yielding an approximately ten-fold improvement in computation time over a comparable reconstruct-then-track algorithm.
References
Journal Article (DOI)

Regression Shrinkage and Selection via the Lasso

TL;DR: A new method for estimation in linear models, called the lasso, is proposed; it minimizes the residual sum of squares subject to the sum of the absolute values of the coefficients being less than a constant.
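Most software solves the equivalent penalized (Lagrangian) form, min_x ||y - Ax||_2^2/(2n) + alpha*||x||_1, rather than the constrained form above. A small illustrative sketch using scikit-learn (the problem sizes, noise level, and alpha below are arbitrary choices, not values from the paper):

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 400))            # 100 measurements, 400 coefficients
x_true = np.zeros(400)
x_true[:10] = rng.standard_normal(10)          # 10-sparse ground truth
y = A @ x_true + 0.01*rng.standard_normal(100)

fit = Lasso(alpha=0.01).fit(A, y)              # minimizes ||y - A x||^2/(2*100) + 0.01*||x||_1
print(np.count_nonzero(fit.coef_))             # the l1 penalty drives most coefficients to exactly zero
```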
Journal Article (DOI)

A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems

TL;DR: A new fast iterative shrinkage-thresholding algorithm (FISTA) is proposed that preserves the computational simplicity of ISTA but has a global rate of convergence that is proven to be significantly better, both theoretically and practically.
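As a rough NumPy sketch of the recursion, for the lasso objective 0.5*||y - Ax||_2^2 + lam*||x||_1 with the standard 1/L step size (function and variable names are illustrative, not taken from the paper's pseudocode):

```python
import numpy as np

def soft_threshold(v, t):
    # proximal operator of t*||.||_1
    return np.sign(v)*np.maximum(np.abs(v) - t, 0.0)

def fista(A, y, lam, n_iter=200):
    """FISTA for min_x 0.5*||y - A x||_2^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2)**2                       # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    w, t = x.copy(), 1.0
    for _ in range(n_iter):
        x_new = soft_threshold(w - A.T @ (A @ w - y)/L, lam/L)   # ISTA step at the extrapolated point
        t_new = (1 + np.sqrt(1 + 4*t*t))/2
        w = x_new + ((t - 1)/t_new)*(x_new - x)                  # Nesterov-style momentum
        x, t = x_new, t_new
    return x
```

The only change relative to ISTA is the momentum (extrapolation) step, which is what improves the convergence rate from O(1/k) to O(1/k^2).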
Journal Article (DOI)

Atomic Decomposition by Basis Pursuit

TL;DR: Basis Pursuit (BP) is a principle for decomposing a signal into an "optimal" superposition of dictionary elements, where optimal means having the smallest ℓ1 norm of coefficients among all such decompositions.
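The BP problem, min ||x||_1 subject to Ax = y, can be recast as a linear program by splitting x into non-negative parts, x = u - v. A small illustrative sketch with SciPy (noise-free measurements assumed; the function name is ours):

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, y):
    """min ||x||_1 s.t. A x = y, solved as an LP with x = u - v and u, v >= 0."""
    M, N = A.shape
    c = np.ones(2*N)                        # objective: sum(u) + sum(v) = ||x||_1
    A_eq = np.hstack([A, -A])               # equality constraint: A u - A v = y
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None))
    return res.x[:N] - res.x[N:]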
Journal Article (DOI)

Sparse bayesian learning and the relevance vector machine

TL;DR: It is demonstrated that by exploiting a probabilistic Bayesian learning framework, the 'relevance vector machine' (RVM) can derive accurate prediction models which typically utilise dramatically fewer basis functions than a comparable SVM while offering a number of additional advantages.
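For context, a compact sketch of the evidence-maximization (type-II maximum-likelihood) updates at the core of sparse Bayesian learning, in the linear-regression case; kernel/design-matrix construction and the efficiency tricks of the full RVM are omitted, and the pruning threshold is an arbitrary choice.

```python
import numpy as np

def sbl_regression(Phi, y, n_iter=100, alpha_prune=1e6):
    """Sparse Bayesian learning for y = Phi w + noise: each weight w_i gets its own
    prior precision alpha_i; most alphas grow without bound, pruning basis functions."""
    N, M = Phi.shape
    alpha = np.ones(M)                      # per-weight prior precisions
    beta = 1.0/np.var(y)                    # noise precision
    for _ in range(n_iter):
        Sigma = np.linalg.inv(beta*Phi.T @ Phi + np.diag(alpha))   # posterior covariance
        mu = beta*Sigma @ Phi.T @ y                                 # posterior mean
        gamma = 1.0 - alpha*np.diag(Sigma)                          # "well-determinedness" factors
        alpha = gamma/(mu**2 + 1e-12)                               # evidence update of the precisions
        beta = (N - gamma.sum())/np.sum((y - Phi @ mu)**2)          # noise-precision update
    keep = alpha < alpha_prune                                      # surviving basis functions
    return mu*keep, keep
```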