
Expectation–maximization algorithm

About: Expectation–maximization algorithm is a research topic. Over its lifetime, 11,823 publications have been published on this topic, receiving 528,693 citations. The topic is also known as: EM algorithm & Expectation Maximization.
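For readers new to the topic, below is a minimal sketch of the EM iteration for a two-component one-dimensional Gaussian mixture. It is an illustrative toy in NumPy, not code from any paper listed below; the function name em_gmm_1d and all parameter choices are ours.

```python
import numpy as np

def em_gmm_1d(x, n_iter=100, seed=0):
    """Minimal EM for a two-component 1D Gaussian mixture (illustrative)."""
    rng = np.random.default_rng(seed)
    # Initialise mixture weights, means, and variances.
    pi = np.array([0.5, 0.5])
    mu = rng.choice(x, size=2, replace=False)
    var = np.array([x.var(), x.var()])
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each point.
        dens = np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        resp = pi * dens
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: re-estimate parameters from the responsibility-weighted data.
        nk = resp.sum(axis=0)
        pi = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return pi, mu, var

# Toy data: two well-separated Gaussian clusters.
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-2, 0.5, 300), rng.normal(3, 1.0, 700)])
print(em_gmm_1d(x))
```

Each pass alternates a soft assignment (E-step) with closed-form maximum-likelihood updates (M-step); the observed-data log-likelihood is non-decreasing across iterations.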


Papers
Journal ArticleDOI
TL;DR: An iterative Bayesian reconstruction algorithm for limited view angle tomography, or ectomography, based on the three-dimensional total variation (TV) norm has been developed and has been shown to improve the perceived image quality.
Abstract: An iterative Bayesian reconstruction algorithm for limited view angle tomography, or ectomography, based on the three-dimensional total variation (TV) norm has been developed. The TV norm has been described in the literature as a method for reducing noise in two-dimensional images while preserving edges, without introducing ringing or edge artefacts. It has also been proposed as a 2D regularization function in Bayesian reconstruction, implemented in an expectation maximization algorithm (TV-EM). The TV-EM was developed for 2D single photon emission computed tomography imaging, and the algorithm is capable of smoothing noise while maintaining edges without introducing artefacts. The TV norm was extended from 2D to 3D and incorporated into an ordered subsets expectation maximization algorithm for limited view angle geometry. The algorithm, called TV3D-EM, was evaluated using a modelled point spread function and digital phantoms. Reconstructed images were compared with those reconstructed with the 2D filtered backprojection algorithm currently used in ectomography. Results show a substantial reduction in artefacts related to the limited view angle geometry, and noise levels were also improved. Perhaps most important, depth resolution was improved by at least 45%. In conclusion, the proposed algorithm has been shown to improve the perceived image quality.

168 citations
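For intuition about the regularizer described above, the isotropic 3D TV norm of a volume can be computed from finite differences along the three axes. The sketch below (NumPy) shows only this quantity; it is our simplification, not the TV3D-EM implementation, and the function name and eps smoothing are our choices.

```python
import numpy as np

def tv_norm_3d(vol, eps=1e-8):
    """Isotropic 3D total variation: sum of gradient magnitudes over voxels."""
    # Forward differences along each axis; replicating the far boundary
    # keeps the output the same shape as the input volume.
    dx = np.diff(vol, axis=0, append=vol[-1:, :, :])
    dy = np.diff(vol, axis=1, append=vol[:, -1:, :])
    dz = np.diff(vol, axis=2, append=vol[:, :, -1:])
    # eps keeps the norm differentiable in perfectly flat regions.
    return np.sum(np.sqrt(dx**2 + dy**2 + dz**2 + eps))

vol = np.random.default_rng(0).random((16, 16, 16))
print(tv_norm_3d(vol))
```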

Journal ArticleDOI
Dongbing Gu
TL;DR: It is shown that the distributed EM algorithm is a stochastic approximation to the standard EM algorithm and converges to a local maximum of the log-likelihood.
Abstract: This paper presents a distributed expectation-maximization (EM) algorithm over sensor networks. In the E-step of this algorithm, each sensor node independently calculates local sufficient statistics by using local observations. A consensus filter is used to diffuse local sufficient statistics to neighbors and estimate global sufficient statistics in each node. By using this consensus filter, each node can gradually diffuse its local information over the entire network and asymptotically the estimate of global sufficient statistics is obtained. In the M-step of this algorithm, each sensor node uses the estimated global sufficient statistics to update model parameters of the Gaussian mixtures, which can maximize the log-likelihood in the same way as in the standard EM algorithm. Because the consensus filter only requires that each node communicate with its neighbors, the distributed EM algorithm is scalable and robust. It is also shown that the distributed EM algorithm is a stochastic approximation to the standard EM algorithm. Thus, it converges to a local maximum of the log-likelihood. Several simulations of sensor networks are given to verify the proposed algorithm.

167 citations
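The key primitive in the abstract above is consensus diffusion: each node repeatedly averages its local sufficient statistics with its neighbours' so that every node converges to the network-wide mean. Below is a generic consensus iteration as a sketch, not Gu's exact consensus filter; the function name, step size, and ring topology are our illustrative choices.

```python
import numpy as np

def consensus_average(stats, adjacency, n_rounds=50, step=0.2):
    """Diffuse per-node statistics toward the network-wide average.

    stats:     (n_nodes, dim) local sufficient statistics.
    adjacency: (n_nodes, n_nodes) symmetric 0/1 matrix, no self-loops.
    """
    x = stats.astype(float).copy()
    for _ in range(n_rounds):
        # Each node moves toward the mean of its neighbours' current values.
        neighbor_sum = adjacency @ x
        degree = adjacency.sum(axis=1, keepdims=True)
        x += step * (neighbor_sum - degree * x)
    return x

# Ring of 5 nodes; after enough rounds every row approaches the global mean,
# even though each node only ever talks to its two neighbours.
A = np.roll(np.eye(5), 1, axis=1) + np.roll(np.eye(5), -1, axis=1)
local = np.random.default_rng(0).random((5, 3))
print(local.mean(axis=0))              # true global statistics
print(consensus_average(local, A)[0])  # node 0's estimate
```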

Proceedings ArticleDOI
01 Nov 2011
TL;DR: This paper embeds the BG-AMP algorithm within an expectation-maximization (EM) framework, simultaneously reconstructing the signal while learning the prior signal and noise parameters, and achieves excellent performance on a range of signal types.
Abstract: The approximate message passing (AMP) algorithm originally proposed by Donoho, Maleki, and Montanari yields a computationally attractive solution to the usual ℓ1-regularized least-squares problem faced in compressed sensing, whose solution is known to be robust to the signal distribution. When the signal is drawn iid from a marginal distribution that is not least-favorable, better performance can be attained using a Bayesian variation of AMP. The latter, however, assumes that the distribution is perfectly known. In this paper, we navigate the space between these two extremes by modeling the signal as iid Bernoulli-Gaussian (BG) with unknown prior sparsity, mean, and variance, and the noise as zero-mean Gaussian with unknown variance, and we simultaneously reconstruct the signal while learning the prior signal and noise parameters. To accomplish this task, we embed the BG-AMP algorithm within an expectation-maximization (EM) framework. Numerical experiments confirm the excellent performance of our proposed EM-BG-AMP on a range of signal types.

167 citations
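To show the EM parameter-learning idea in isolation, the sketch below strips the problem down to identity measurements y = x + w (no AMP recursion) and runs EM for the Bernoulli-Gaussian prior parameters (sparsity lam, active mean theta, active variance phi) and the noise variance psi. These are the standard Bernoulli-Gaussian EM updates under this simplification, not the paper's EM-BG-AMP; all names are our own.

```python
import numpy as np

def em_bg_denoise(y, n_iter=50):
    """EM for Bernoulli-Gaussian prior and noise parameters (denoising model).

    Model: y_i = x_i + w_i, where x_i = 0 w.p. 1-lam and x_i ~ N(theta, phi)
    w.p. lam, and w_i ~ N(0, psi). All four parameters are learned from y.
    """
    lam, theta, phi, psi = 0.5, 0.0, y.var(), y.var() / 2

    def normal_pdf(z, m, v):
        return np.exp(-0.5 * (z - m) ** 2 / v) / np.sqrt(2 * np.pi * v)

    for _ in range(n_iter):
        # E-step: posterior probability that each coefficient is active,
        # plus the conditional posterior mean/variance of an active x_i.
        p1 = lam * normal_pdf(y, theta, phi + psi)
        p0 = (1 - lam) * normal_pdf(y, 0.0, psi)
        pi = p1 / (p1 + p0)
        m = (phi * y + psi * theta) / (phi + psi)  # E[x_i | y_i, active]
        v = phi * psi / (phi + psi)                # Var[x_i | y_i, active]
        # M-step: re-estimate sparsity, active mean/variance, and noise.
        s = pi.sum()
        lam = s / len(y)
        theta = (pi * m).sum() / s
        phi = (pi * (v + (m - theta) ** 2)).sum() / s
        psi = np.mean(pi * ((y - m) ** 2 + v) + (1 - pi) * y ** 2)
    return lam, theta, phi, psi

rng = np.random.default_rng(0)
x = (rng.random(5000) < 0.1) * rng.normal(2.0, 1.0, 5000)
y = x + rng.normal(0.0, 0.5, 5000)
print(em_bg_denoise(y))  # roughly (0.1, 2.0, 1.0, 0.25) on this draw
```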

ReportDOI
01 Jan 1992
TL;DR: It is shown that, for some particular mixture situations, the SEM algorithm is almost always preferable to the EM and simulated annealing versions SAEM and MCEM, and the SEM stationary distribution provides a contrasted view of the loglikelihood by emphasizing sensible maxima.
Abstract: We compare three different stochastic versions of the EM algorithm: The SEM algorithm, the SAEM algorithm and the MCEM algorithm. We suggest that the most relevant contribution of the MCEM methodology is what we call the simulated annealing MCEM algorithm, which turns out to be very close to SAEM. We focus particularly on the mixture of distributions problem. In this context, we review the available theoretical results on the convergence of these algorithms and on the behavior of SEM as the sample size tends to infinity. The second part is devoted to intensive Monte Carlo numerical simulations and a real data study. We show that, for some particular mixture situations, the SEM algorithm is almost always preferable to the EM and simulated annealing versions SAEM and MCEM. For some very intricate mixtures, however, none of these algorithms can be confidently used. Then, SEM can be used as an efficient data exploratory tool for locating significant maxima of the likelihood function. In the real data case, we show that the SEM stationary distribution provides a contrasted view of the loglikelihood by emphasizing sensible maxima.

167 citations
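To make the SEM variant concrete: it replaces the E-step's expected responsibilities with a single sampled hard assignment, so each iteration maximizes over completed data. A sketch for the same two-component Gaussian mixture toy as above (ours, not from the report; for brevity there is no guard against a component going empty):

```python
import numpy as np

def sem_gmm_1d(x, n_iter=200, seed=0):
    """Stochastic EM (SEM) for a two-component 1D Gaussian mixture."""
    rng = np.random.default_rng(seed)
    pi = np.array([0.5, 0.5])
    mu = rng.choice(x, size=2, replace=False)
    var = np.array([x.var(), x.var()])
    for _ in range(n_iter):
        # S-step: draw a component label for every point from its posterior
        # instead of keeping the soft responsibilities.
        dens = np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        resp = pi * dens
        resp /= resp.sum(axis=1, keepdims=True)
        z = rng.random(len(x)) < resp[:, 0]  # True -> component 0
        # M-step: ordinary maximum likelihood on the completed data.
        for k, mask in enumerate([z, ~z]):
            pi[k] = mask.mean()
            mu[k] = x[mask].mean()
            var[k] = x[mask].var()
    return pi, mu, var

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-2, 0.5, 300), rng.normal(3, 1.0, 700)])
print(sem_gmm_1d(x))
```

Because the labels are resampled every pass, the parameter sequence does not converge pointwise but forms a Markov chain whose stationary distribution emphasizes sensible maxima of the likelihood, which is the property the report exploits.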

Journal ArticleDOI
TL;DR: In this article, the authors considered the estimation of the stress-strength parameter R = P(Y < X), when X and Y are independent and both follow three-parameter Weibull distributions with common shape and location parameters but different scale parameters.

167 citations
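Since the TL;DR above defines the stress-strength parameter R = P(Y < X), here is a quick Monte Carlo check (ours; function and variable names are hypothetical) for three-parameter Weibull variables sharing the shape and location parameters but not the scale. The closed form it compares against, a^k / (a^k + b^k), is a standard result for the common-shape case, not the paper's estimator.

```python
import numpy as np

def stress_strength_weibull(shape, scale_x, scale_y, loc=0.0, n=1_000_000, seed=0):
    """Monte Carlo estimate of R = P(Y < X) for three-parameter Weibulls
    sharing shape and location but with different scales."""
    rng = np.random.default_rng(seed)
    x = loc + scale_x * rng.weibull(shape, n)
    y = loc + scale_y * rng.weibull(shape, n)
    return (y < x).mean()

k, a, b = 1.8, 2.0, 1.5
print(stress_strength_weibull(k, a, b, loc=0.7))
# Common-shape closed form: the shared location shifts both variables
# equally and cancels, leaving R = a^k / (a^k + b^k).
print(a**k / (a**k + b**k))
```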


Network Information
Related Topics (5)
Estimator: 97.3K papers, 2.6M citations (91% related)
Deep learning: 79.8K papers, 2.1M citations (84% related)
Support vector machine: 73.6K papers, 1.7M citations (84% related)
Cluster analysis: 146.5K papers, 2.9M citations (84% related)
Artificial neural network: 207K papers, 4.5M citations (82% related)
Performance
Metrics
No. of papers in the topic in previous years
Year  Papers
2023  114
2022  245
2021  438
2020  410
2019  484
2018  519