Expectation–maximization algorithm

About: Expectation–maximization algorithm is a research topic. Over its lifetime, 11,823 publications have been published within this topic, receiving 528,693 citations. The topic is also known as: EM algorithm & Expectation Maximization.
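For context, the algorithm alternates two steps: an E-step that computes posterior responsibilities of the latent variables under the current parameters, and an M-step that re-estimates the parameters by maximizing the expected complete-data log-likelihood. Below is a minimal sketch for the classic use case, a two-component 1-D Gaussian mixture; the function name and initialization scheme are illustrative, not taken from any paper listed here.

```python
import numpy as np

def em_gmm_1d(x, n_iter=100, seed=0):
    """Fit a two-component 1-D Gaussian mixture with EM (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    pi = 0.5                                   # mixing weight of component 1
    mu = rng.choice(x, size=2, replace=False)  # initialize means from the data
    var = np.array([x.var(), x.var()])
    for _ in range(n_iter):
        # E-step: responsibilities. The common 1/sqrt(2*pi) factor cancels.
        p1 = pi * np.exp(-0.5 * (x - mu[0]) ** 2 / var[0]) / np.sqrt(var[0])
        p2 = (1 - pi) * np.exp(-0.5 * (x - mu[1]) ** 2 / var[1]) / np.sqrt(var[1])
        r = p1 / (p1 + p2)
        # M-step: re-estimate parameters from the responsibilities.
        pi = r.mean()
        mu = np.array([np.sum(r * x) / r.sum(),
                       np.sum((1 - r) * x) / (1 - r).sum()])
        var = np.array([np.sum(r * (x - mu[0]) ** 2) / r.sum(),
                        np.sum((1 - r) * (x - mu[1]) ** 2) / (1 - r).sum()])
    return pi, mu, var

# Usage: recover two well-separated clusters.
rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(-2.0, 1.0, 300), rng.normal(3.0, 0.5, 700)])
print(em_gmm_1d(data))
```

Each iteration is guaranteed not to decrease the observed-data likelihood, which is the defining property of the algorithm.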


Papers
Proceedings Article
09 Dec 2003
TL;DR: The resulting IM algorithm is analogous to the EM algorithm but maximises mutual information rather than likelihood, addressing the maximisation of information transmission over noisy channels.
Abstract: The maximisation of information transmission over noisy channels is a common, albeit generally computationally difficult, problem. We approach the difficulty of computing the mutual information for noisy channels by using a variational approximation. The resulting IM algorithm is analogous to the EM algorithm, yet maximises mutual information, as opposed to likelihood. We apply the method to several practical examples, including linear compression, population encoding and CDMA.

358 citations
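The identity underlying this variational approach is the bound I(X;Y) = H(X) - H(X|Y) >= H(X) + E_{p(x,y)}[log q(x|y)], which is tight when the variational decoder q(x|y) equals the true posterior p(x|y); the IM algorithm alternates improving this bound over q and over the channel parameters. A minimal numpy sketch verifying the bound on a toy discrete joint distribution (all names are illustrative, and the channel-update half of the algorithm is omitted):

```python
import numpy as np

def mi_lower_bound(p_xy, q_x_given_y):
    """Variational lower bound: I(X;Y) >= H(X) + E_{p(x,y)}[log q(x|y)]."""
    p_x = p_xy.sum(axis=1)                    # marginal p(x)
    h_x = -np.sum(p_x * np.log(p_x))          # entropy H(X)
    return h_x + np.sum(p_xy * np.log(q_x_given_y))

# A toy discrete joint distribution p(x, y); rows index x, columns index y.
p_xy = np.array([[0.3, 0.1],
                 [0.1, 0.5]])
p_x = p_xy.sum(axis=1)
p_y = p_xy.sum(axis=0)
p_x_given_y = p_xy / p_y                      # exact posterior p(x|y)

# True mutual information for reference.
mi = np.sum(p_xy * np.log(p_xy / np.outer(p_x, p_y)))

# The bound is tight at q = p(x|y) and strictly below the true MI
# for any other decoder.
print(mi, mi_lower_bound(p_xy, p_x_given_y))  # equal
q_flat = np.full_like(p_xy, 0.5)              # uninformative decoder
print(mi_lower_bound(p_xy, q_flat))           # smaller
```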

Journal ArticleDOI
TL;DR: The authors' method, based on the EM algorithm, is shown to be superior to the standard one for a priori probability estimation, and the classifier with adjusted outputs always performs better than the original one in terms of classification accuracy when the a priori probability conditions differ from the training set to the real-world data.
Abstract: It sometimes happens (for instance in case control studies) that a classifier is trained on a data set that does not reflect the true a priori probabilities of the target classes on real-world data. This may have a negative effect on the classification accuracy obtained on the real-world data set, especially when the classifier's decisions are based on the a posteriori probabilities of class membership. Indeed, in this case, the trained classifier provides estimates of the a posteriori probabilities that are not valid for this real-world data set (they rely on the a priori probabilities of the training set). Applying the classifier as is (without correcting its outputs with respect to these new conditions) on this new data set may thus be suboptimal. In this note, we present a simple iterative procedure for adjusting the outputs of the trained classifier with respect to these new a priori probabilities without having to refit the model, even when these probabilities are not known in advance. As a by-product, estimates of the new a priori probabilities are also obtained. This iterative algorithm is a straightforward instance of the expectation-maximization (EM) algorithm and is shown to maximize the likelihood of the new data. Thereafter, we discuss a statistical test that can be applied to decide if the a priori class probabilities have changed from the training set to the real-world data. The procedure is illustrated on different classification problems involving a multilayer neural network, and comparisons with a standard procedure for a priori probability estimation are provided. Our original method, based on the EM algorithm, is shown to be superior to the standard one for a priori probability estimation. Experimental results also indicate that the classifier with adjusted outputs always performs better than the original one in terms of classification accuracy, when the a priori probability conditions differ from the training set to the real-world data. The gain in classification accuracy can be significant.

356 citations
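The iterative procedure described in the abstract has a compact form: re-weight each training posterior by the ratio of the current prior estimate to the training prior, renormalize, and set the new prior estimate to the mean of the adjusted posteriors. A sketch under those assumptions (function name, inputs, and convergence tolerance are illustrative):

```python
import numpy as np

def adjust_priors_em(post_train, prior_train, n_iter=50, tol=1e-8):
    """EM re-estimation of class priors on unlabelled deployment data.

    post_train  : (N, K) posteriors p_train(c | x_i) from the trained classifier
    prior_train : (K,)   class priors of the training set
    Returns the estimated new priors and the adjusted posteriors.
    """
    prior_new = prior_train.copy()
    for _ in range(n_iter):
        # E-step: re-weight posteriors by the ratio of new to old priors.
        w = post_train * (prior_new / prior_train)
        post_new = w / w.sum(axis=1, keepdims=True)
        # M-step: new priors are the mean adjusted posteriors.
        updated = post_new.mean(axis=0)
        done = np.max(np.abs(updated - prior_new)) < tol
        prior_new = updated
        if done:
            break
    return prior_new, post_new

# Usage with made-up classifier outputs over three classes.
rng = np.random.default_rng(0)
post = rng.dirichlet(alpha=[2.0, 1.0, 0.5], size=1000)
priors, adjusted = adjust_priors_em(post, prior_train=np.full(3, 1 / 3))
print(priors)
```

As the abstract notes, each pass is an EM step, so the procedure maximizes the likelihood of the new, unlabelled data.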

Journal ArticleDOI
TL;DR: An easy and efficient sparse-representation-based iterative algorithm for image inpainting is introduced that allows a high degree of flexibility in recovering different structural components of the image (piecewise smooth, curvilinear, texture, etc.).
Abstract: Representing the image to be inpainted in an appropriate sparse representation dictionary, and combining elements from Bayesian statistics and modern harmonic analysis, we introduce an expectation maximization (EM) algorithm for image inpainting and interpolation. From a statistical point of view, the inpainting/interpolation can be viewed as an estimation problem with missing data. Toward this goal, we propose the idea of using the EM mechanism in a Bayesian framework, where a sparsity promoting prior penalty is imposed on the reconstructed coefficients. The EM framework gives a principled way to establish formally the idea that missing samples can be recovered/interpolated based on sparse representations. We first introduce an easy and efficient sparse-representation-based iterative algorithm for image inpainting. Additionally, we derive its theoretical convergence properties. Compared to its competitors, this algorithm allows a high degree of flexibility to recover different structural components in the image (piecewise smooth, curvilinear, texture, etc.). We also suggest some guidelines to automatically tune the regularization parameter.

350 citations
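For an orthogonal sparsifying transform, the EM iteration sketched in the abstract reduces to alternating two steps: impute the missing pixels from the current reconstruction (E-step), then soft-threshold the transform coefficients to promote sparsity (M-step). A minimal sketch assuming a 2-D DCT dictionary and a fixed threshold; the paper considers richer, possibly redundant dictionaries and discusses tuning the regularization parameter, so all names and constants here are illustrative:

```python
import numpy as np
from scipy.fft import dctn, idctn

def em_inpaint(y, mask, lam=0.1, n_iter=200):
    """EM-style sparse inpainting with an orthogonal 2-D DCT dictionary.

    y    : observed image (values at unobserved pixels are ignored)
    mask : boolean array, True where a pixel was observed
    lam  : soft-threshold level applied to the DCT coefficients
    """
    x = np.where(mask, y, 0.0)
    for _ in range(n_iter):
        # E-step: impute missing pixels from the current reconstruction.
        z = np.where(mask, y, x)
        # M-step: promote sparsity by soft-thresholding the coefficients.
        c = dctn(z, norm="ortho")
        c = np.sign(c) * np.maximum(np.abs(c) - lam, 0.0)
        x = idctn(c, norm="ortho")
    return x

# Usage: a smooth synthetic image with half of its pixels missing.
rng = np.random.default_rng(0)
u, v = np.meshgrid(np.linspace(0, 1, 64), np.linspace(0, 1, 64))
img = np.sin(4 * u) + np.cos(3 * v)
mask = rng.random(img.shape) < 0.5
recon = em_inpaint(np.where(mask, img, 0.0), mask)
print(np.abs(recon - img)[~mask].mean())      # error on the missing pixels
```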

Journal Article
TL;DR: A general overview of analytic and iterative reconstruction methods in SPECT is presented, with a special focus on filtered backprojection (FBP), conjugate gradient, maximum-likelihood expectation maximization, and maximum a posteriori expectation maximization algorithms.
Abstract: Images of the inside of the human body can be obtained noninvasively using tomographic acquisition and processing techniques. In particular, these techniques are commonly used to obtain images of a γ-emitter distribution after its administration in the human body. The reconstructed images are obtained from a set of their projections, acquired using rotating gamma cameras. A general overview of analytic and iterative methods of reconstruction in SPECT is presented, with a special focus on filtered backprojection (FBP), conjugate gradient, maximum-likelihood expectation maximization, and maximum a posteriori expectation maximization algorithms. The FBP algorithm is faster than iterative algorithms, with the latter providing a framework for accurately modeling the emission and detection processes.

348 citations
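The MLEM update named in the abstract has a well-known multiplicative form: each voxel estimate is scaled by the back-projection of the ratio of measured to predicted counts, normalized by the voxel's sensitivity. A small numpy sketch on a synthetic system matrix (the matrix, dimensions, and count scaling are made up for illustration):

```python
import numpy as np

def mlem(A, y, n_iter=50):
    """Maximum-likelihood EM reconstruction for emission tomography.

    A : (n_detectors, n_voxels) nonnegative system matrix
    y : (n_detectors,) measured counts
    """
    x = np.ones(A.shape[1])                   # uniform nonnegative start
    sens = A.sum(axis=0)                      # voxel sensitivities sum_i a_ij
    for _ in range(n_iter):
        proj = A @ x                          # forward projection
        ratio = y / np.maximum(proj, 1e-12)   # measured / predicted counts
        x *= (A.T @ ratio) / np.maximum(sens, 1e-12)
    return x

# Usage on a tiny synthetic problem.
rng = np.random.default_rng(0)
A = rng.random((40, 10))
x_true = rng.random(10)
y = rng.poisson(A @ x_true * 100) / 100.0     # noisy, scaled counts
print(mlem(A, y))
```

The multiplicative form keeps every voxel nonnegative as long as the starting image is, which is one reason this update is standard in emission tomography.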


Network Information
Related Topics (5)
Estimator: 97.3K papers, 2.6M citations, 91% related
Deep learning: 79.8K papers, 2.1M citations, 84% related
Support vector machine: 73.6K papers, 1.7M citations, 84% related
Cluster analysis: 146.5K papers, 2.9M citations, 84% related
Artificial neural network: 207K papers, 4.5M citations, 82% related
Performance Metrics
No. of papers in the topic in previous years:

Year    Papers
2023    114
2022    245
2021    438
2020    410
2019    484
2018    519