scispace - formally typeset

Expectation–maximization algorithm

About: Expectation–maximization algorithm is a research topic. Over the lifetime, 11,823 publications have been published within this topic, receiving 528,693 citations. The topic is also known as: EM algorithm & Expectation Maximization.
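As a concrete illustration of the algorithm this topic covers, below is a minimal EM fit of a two-component univariate Gaussian mixture. This is a sketch of the standard textbook algorithm, not code from any of the papers listed here:

```python
import numpy as np

def em_gmm_1d(x, n_iter=100):
    """Minimal EM for a two-component 1D Gaussian mixture (illustrative sketch)."""
    # Initialize: equal weights, means at the data extremes, shared variance
    w = np.array([0.5, 0.5])
    mu = np.array([x.min(), x.max()])
    var = np.array([x.var(), x.var()])
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each point
        pdf = np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
        r = w * pdf
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and variances from responsibilities
        nk = r.sum(axis=0)
        w = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return w, mu, var

# Fit a mixture sampled from N(-3, 1) and N(3, 1)
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-3, 1, 500), rng.normal(3, 1, 500)])
w, mu, var = em_gmm_1d(x)
```

Each EM iteration is guaranteed not to decrease the data log-likelihood, but the algorithm only converges to a local maximum; that limitation is exactly what several of the papers below address.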


Papers
Journal ArticleDOI
TL;DR: Monte Carlo methods, which build on recent advances in the exact simulation of diffusions, are proposed for performing maximum likelihood and Bayesian estimation for discretely observed diffusions.
Abstract: The objective of the paper is to present a novel methodology for likelihood-based inference for discretely observed diffusions. We propose Monte Carlo methods, which build on recent advances in the exact simulation of diffusions, for performing maximum likelihood and Bayesian estimation.

423 citations

Journal ArticleDOI
01 Dec 1998
TL;DR: A split-and-merge expectation-maximization algorithm is proposed to overcome the local-maxima problem in parameter estimation of finite mixture models; it is applied to the training of Gaussian mixtures and mixtures of factor analyzers, and its practical usefulness is demonstrated on image compression and pattern recognition problems.
Abstract: We present a split-and-merge expectation-maximization (SMEM) algorithm to overcome the local maxima problem in parameter estimation of finite mixture models. In the case of mixture models, local maxima often involve having too many components of a mixture model in one part of the space and too few in another, widely separated part of the space. To escape from such configurations, we repeatedly perform simultaneous split-and-merge operations using a new criterion for efficiently selecting the split-and-merge candidates. We apply the proposed algorithm to the training of Gaussian mixtures and mixtures of factor analyzers using synthetic and real data and show the effectiveness of the split-and-merge operations in improving the likelihood of both the training data and held-out test data. We also show the practical usefulness of the proposed algorithm by applying it to image compression and pattern recognition problems.

422 citations
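The merge half of the split-and-merge idea can be sketched as follows: two components whose posterior responsibilities are strongly correlated are plausibly covering the same region of the data and are good merge candidates. The function below ranks component pairs by the inner product of their responsibility columns. This is a criterion in the spirit of SMEM, written here as an illustrative assumption rather than the paper's verbatim formulation:

```python
import numpy as np

def merge_candidates(r):
    """Rank component pairs for merging by the similarity of their
    posterior-responsibility vectors (illustrative sketch, in the spirit
    of the SMEM merge criterion, not necessarily the paper's exact form).

    r : (n_samples, n_components) responsibility matrix from an E-step.
    """
    k = r.shape[1]
    scores = []
    for i in range(k):
        for j in range(i + 1, k):
            # Highly correlated responsibilities suggest two components
            # are modeling the same region of the data.
            scores.append(((i, j), float(r[:, i] @ r[:, j])))
    return sorted(scores, key=lambda s: -s[1])

# Toy responsibilities: components 0 and 1 overlap, component 2 is separate
r = np.array([[0.45, 0.45, 0.10],
              [0.40, 0.50, 0.10],
              [0.05, 0.05, 0.90]])
ranking = merge_candidates(r)  # pair (0, 1) should rank first
```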

Journal ArticleDOI
TL;DR: In experiments with synthetic noise-free and additive noisy projection data of dental phantoms, it is found that both simultaneous iterative algorithms produce superior image quality as compared to filtered backprojection after linearly fitting projection gaps.
Abstract: Iterative deblurring methods using the expectation maximization (EM) formulation and the algebraic reconstruction technique (ART), respectively, are adapted for metal artifact reduction in medical computed tomography (CT). In experiments with synthetic noise-free and additive noisy projection data of dental phantoms, it is found that both simultaneous iterative algorithms produce superior image quality as compared to filtered backprojection after linearly fitting projection gaps. Furthermore, the EM-type algorithm converges faster than the ART-type algorithm in terms of either the I-divergence or Euclidean distance between ideal and reprojected data in the authors' simulation. Also, for a given iteration number, the EM-type deblurring method produces better image clarity but stronger noise than the ART-type reconstruction. The computational complexity of EM- and ART-based iterative deblurring is essentially the same, dominated by reprojection and backprojection. Relevant practical and theoretical issues are discussed.

419 citations

Journal ArticleDOI
TL;DR: Combined with sequential model selection procedures, the online VB method provides a fully online learning method with a model selection mechanism and was able to adapt the model structure to dynamic environments.
Abstract: The Bayesian framework provides a principled way of model selection. This framework estimates a probability distribution over an ensemble of models, and the prediction is done by averaging over the ensemble of models. Accordingly, the uncertainty of the models is taken into account, and complex models with more degrees of freedom are penalized. However, integration over model parameters is often intractable, and some approximation scheme is needed. Recently, a powerful approximation scheme, called the variational Bayes (VB) method, has been proposed. This approach defines the free energy for a trial probability distribution, which approximates a joint posterior probability distribution over model parameters and hidden variables. The exact maximization of the free energy gives the true posterior distribution. The VB method uses factorized trial distributions. The integration over model parameters can be done analytically, and an iterative expectation-maximization-like algorithm, whose convergence is guaranteed, is derived. In this article, we derive an online version of the VB algorithm and prove its convergence by showing that it is a stochastic approximation for finding the maximum of the free energy. Combined with sequential model selection procedures, the online VB method provides a fully online learning method with a model selection mechanism. In preliminary experiments using synthetic data, the online VB method was able to adapt the model structure to dynamic environments.

415 citations
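The free energy referred to in the abstract above is the standard variational lower bound. For data x, hidden variables z, parameters θ, and a trial distribution q(z, θ), it can be written as (a textbook identity, not specific to this paper):

```latex
\mathcal{F}(q)
= \mathbb{E}_{q}\!\left[\log p(x, z, \theta)\right] - \mathbb{E}_{q}\!\left[\log q(z, \theta)\right]
= \log p(x) - \mathrm{KL}\!\left(q(z, \theta) \,\|\, p(z, \theta \mid x)\right).
```

Since the KL term is nonnegative, maximizing the free energy tightens a lower bound on log p(x), and the bound is attained exactly when q equals the true joint posterior, which is why exact maximization yields the true posterior distribution.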

Journal ArticleDOI
TL;DR: Experimental results show that the proposed speckle reduction algorithm outperforms standard wavelet denoising techniques in terms of the signal-to-noise ratio and the equivalent-number-of-looks measures in most cases and achieves better performance than the refined Lee filter.
Abstract: The granular appearance of speckle noise in synthetic aperture radar (SAR) imagery makes it very difficult to visually and automatically interpret SAR data. Therefore, speckle reduction is a prerequisite for many SAR image processing tasks. In this paper, we develop a speckle reduction algorithm by fusing the wavelet Bayesian denoising technique with Markov-random-field-based image regularization. Wavelet coefficients are modeled independently and identically by a two-state Gaussian mixture model, while their spatial dependence is characterized by a Markov random field imposed on the hidden state of Gaussian mixtures. The Expectation-Maximization algorithm is used to estimate hyperparameters and specify the mixture model, and the iterated-conditional-modes method is implemented to optimize the state configuration. The noise-free wavelet coefficients are finally estimated by a shrinkage function based on local weighted averaging of the Bayesian estimator. Experimental results show that the proposed method outperforms standard wavelet denoising techniques in terms of the signal-to-noise ratio and the equivalent-number-of-looks measures in most cases. It also achieves better performance than the refined Lee filter.

414 citations


Network Information
Related Topics (5)
- Estimator: 97.3K papers, 2.6M citations (91% related)
- Deep learning: 79.8K papers, 2.1M citations (84% related)
- Support vector machine: 73.6K papers, 1.7M citations (84% related)
- Cluster analysis: 146.5K papers, 2.9M citations (84% related)
- Artificial neural network: 207K papers, 4.5M citations (82% related)
Performance Metrics
No. of papers in the topic in previous years:

Year    Papers
2023    114
2022    245
2021    438
2020    410
2019    484
2018    519