Expectation-Maximization Gaussian-Mixture Approximate Message Passing
Jeremy Vila, Philip Schniter
TLDR
An empirical-Bayesian technique is proposed that simultaneously learns the signal distribution while MMSE-recovering the signal (according to the learned distribution) using AMP. The non-zero distribution is modeled as a Gaussian mixture whose parameters are learned through expectation maximization, with AMP implementing the expectation step.
Abstract
When recovering a sparse signal from noisy compressive linear measurements, the distribution of the signal's non-zero coefficients can have a profound effect on recovery mean-squared error (MSE). If this distribution were known a priori, then one could use computationally efficient approximate message passing (AMP) techniques for nearly minimum-MSE (MMSE) recovery. In practice, however, the distribution is unknown, motivating the use of robust algorithms like LASSO (which is nearly minimax optimal) at the cost of significantly larger MSE for non-least-favorable distributions. As an alternative, we propose an empirical-Bayesian technique that simultaneously learns the signal distribution while MMSE-recovering the signal, according to the learned distribution, using AMP. In particular, we model the non-zero distribution as a Gaussian mixture and learn its parameters through expectation maximization, using AMP to implement the expectation step. Numerical experiments on a wide range of signal classes confirm the state-of-the-art performance of our approach, in both reconstruction error and runtime, in the high-dimensional regime, for most (but not all) sensing operators.
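The core building block described in the abstract can be illustrated with a minimal sketch: the scalar MMSE denoiser that AMP applies at each iteration under a sparse prior. The sketch below is an illustration, not the paper's code; it simplifies the Gaussian mixture to a single component (a Bernoulli-Gaussian prior), and all names and parameter choices are assumptions. In the full method, EM would re-estimate the prior parameters (here `lam` and `phi`) from the AMP outputs across iterations.

```python
import numpy as np

def gauss(r, var):
    """Zero-mean Gaussian density N(r; 0, var)."""
    return np.exp(-r**2 / (2.0 * var)) / np.sqrt(2.0 * np.pi * var)

def bg_mmse_denoiser(r, tau, lam, phi):
    """Posterior mean E[x | r] for the scalar model r = x + N(0, tau),
    with prior x ~ (1 - lam) * delta_0 + lam * N(0, phi).

    AMP reduces the vector recovery problem to many such scalar
    denoising problems, one per coefficient, at each iteration."""
    num = lam * gauss(r, phi + tau)          # evidence for x != 0
    den = num + (1.0 - lam) * gauss(r, tau)  # total evidence
    pi = num / den                           # posterior Prob(x != 0 | r)
    return pi * (phi / (phi + tau)) * r      # activity prob x Wiener gain

# With lam = 1 the prior is purely Gaussian, so the denoiser reduces
# to the Wiener (linear MMSE) estimate phi/(phi+tau) * r:
print(bg_mmse_denoiser(2.0, tau=1.0, lam=1.0, phi=3.0))  # -> 1.5
```

For `lam < 1` the factor `pi` shrinks small observations toward zero, which is what makes the estimator well suited to sparse signals.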
Citations
Proceedings ArticleDOI
Multipath Mitigation in Global Navigation Satellite Systems Using a Bayesian Hierarchical Model With Bernoulli Laplacian Priors
TL;DR: A new Bayesian estimation method is introduced that allows the multipath (MP) biases and the unknown model parameters and hyperparameters to be estimated directly from the GNSS measurements; it is based on Bernoulli-Laplacian priors, which promote sparsity of the MP biases.
Journal ArticleDOI
Communication and computation efficiency in Federated Learning: A survey
Omair Rashed Abdulwareth Almanifi, Chee-Onn Chow, Mau-Luen Tham, Joon Huang Chuah, Jeevan Kanesan, et al.
TL;DR: This paper presents a systematic review of recent work on improving communication and/or computation efficiency in federated learning, organizing the literature according to an encompassing, easy-to-follow taxonomy.
Posted Content
Joint Frame Synchronization and Channel Estimation: Sparse Recovery Approach and USRP Implementation
TL;DR: This paper proposes several JFSCE methods using popular sparse signal recovery algorithms that exploit the sparsity of the combined channel vector; the resulting sparse channel estimate is then used to design a sparse equalizer.
Proceedings ArticleDOI
Sparse Signal Recovery Using MPDR Estimation
TL;DR: A new perspective is presented on the sparse Bayesian learning (SBL) algorithm for sparse signal recovery, based on the array-processing minimum power distortionless response (MPDR) beamformer framework; the connection between the MPDR beamformer and the LMMSE estimator is used to lower the algorithm's complexity.
Posted Content
Communication-Efficient Federated Learning via Quantized Compressed Sensing.
TL;DR: In this article, the authors propose a federated learning framework based on quantized compressed sensing, consisting of gradient compression at the wireless devices and gradient reconstruction at a parameter server (PS) via the expectation-maximization generalized-approximate-message-passing (EM-GAMP) algorithm.
References
Journal ArticleDOI
Maximum likelihood from incomplete data via the EM algorithm
Journal ArticleDOI
Regression Shrinkage and Selection via the Lasso
TL;DR: A new method for estimation in linear models called the lasso is proposed; it minimizes the residual sum of squares subject to the sum of the absolute values of the coefficients being less than a constant.
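In one special case the lasso admits a closed form that is easy to sketch: when the design matrix is orthonormal, the penalized-form solution is elementwise soft thresholding of the ordinary least-squares coefficients. The example below is a minimal illustration under that assumption (the function names are mine, not from the cited paper).

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1: shrink each entry toward zero by t,
    setting entries with magnitude below t exactly to zero."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

# For an orthonormal design (A^T A = I), the penalized lasso
#     min_x  0.5 * ||y - A x||^2 + t * ||x||_1
# is solved exactly by soft_threshold(A.T @ y, t).
A = np.eye(4)
y = np.array([2.0, -0.3, 0.8, -1.2])
x_hat = soft_threshold(A.T @ y, 0.5)
print(x_hat)  # -> [ 1.5  0.   0.3 -0.7]
```

The thresholding sets small coefficients exactly to zero, which is why the lasso performs variable selection as well as shrinkage.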
Journal ArticleDOI
A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems
Amir Beck, Marc Teboulle
TL;DR: A new fast iterative shrinkage-thresholding algorithm (FISTA) is proposed that preserves the computational simplicity of ISTA while achieving a global rate of convergence that is provably significantly better, both theoretically and practically.
Journal ArticleDOI
Atomic Decomposition by Basis Pursuit
TL;DR: Basis Pursuit (BP) is a principle for decomposing a signal into an "optimal" superposition of dictionary elements, where optimal means having the smallest ℓ1 norm of coefficients among all such decompositions.
Journal ArticleDOI
Sparse Bayesian Learning and the Relevance Vector Machine
TL;DR: It is demonstrated that by exploiting a probabilistic Bayesian learning framework, the 'relevance vector machine' (RVM) can derive accurate prediction models which typically utilise dramatically fewer basis functions than a comparable SVM while offering a number of additional advantages.