Topic

Expectation–maximization algorithm

About: Expectation–maximization algorithm is a research topic. Over its lifetime, 11,823 publications on this topic have been published, receiving 528,693 citations. The topic is also known as: EM algorithm & Expectation Maximization.


Papers
Book Chapter
01 Jan 2011
TL;DR: Maximum likelihood theory for incomplete data from an exponential family is studied via the EM algorithm, together with the geometry of exponential families.
Abstract:
Barndorff-Nielsen OE, Cox DR (1994) Inference and asymptotics. Chapman & Hall, London
Brazzale AR, Davison AC, Reid N (2007) Applied asymptotics: case studies in small-sample statistics. Cambridge University Press, Cambridge
Dempster AP, Laird NM, Rubin DB (1977) Maximum likelihood from incomplete data via the EM algorithm (with discussion). J Roy Stat Soc B
Efron B (1978) The geometry of exponential families. Ann Stat
Sundberg R (1974) Maximum likelihood theory for incomplete data from an exponential family. Scand J Stat
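As background for this entry (a standard sketch, not taken from the chapter itself): for a complete-data exponential family f(x; θ) = exp{θᵀt(x) − κ(θ)} h(x) observed only through y = y(x), each EM iteration reduces to two moment computations, which is the Sundberg/Dempster–Laird–Rubin formulation the cited references develop.

```latex
\text{E-step:}\quad \bar t^{(k)} = \mathbb{E}_{\theta^{(k)}}\!\left[\,t(X)\mid y\,\right],
\qquad
\text{M-step:}\quad \theta^{(k+1)} \ \text{solves}\ \ \mathbb{E}_{\theta}\!\left[t(X)\right] = \bar t^{(k)} .
```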

105 citations

Book Chapter (DOI)
01 Jan 2008
TL;DR: A generative model based on the Gaussian mixture model and Gaussian processes allows the representation of smooth trajectories and avoids the discretization problems found in most existing methods.
Abstract: A generative model based on the Gaussian mixture model and Gaussian processes is presented in this paper. Typical motion paths are learnt and then used for motion prediction with this model. The principal novel aspect of the approach is the modelling of paths using Gaussian processes, which allows the representation of smooth trajectories and avoids the discretization problems found in most existing methods. Gaussian processes not only provide a comprehensive and formal theoretical framework to work with, they also lend themselves naturally to path clustering with Gaussian mixture models. Learning is performed using expectation maximization, where the E-step uses variational methods to maximize a lower bound before the M-step optimizes over the parameters.
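The paper's variational E-step over Gaussian-process paths is not reproduced here; the following minimal sketch shows only the underlying EM loop for a plain Gaussian mixture with diagonal covariances (illustrative code; the function name em_gmm and all details are assumptions, not the authors' implementation).

```python
import numpy as np

def em_gmm(X, K, n_iter=100, seed=0):
    """Minimal EM for a Gaussian mixture with diagonal covariances (illustration only)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    pi = np.full(K, 1.0 / K)                       # mixture weights
    mu = X[rng.choice(n, K, replace=False)]        # means initialized from random points
    var = np.full((K, d), X.var(axis=0))           # per-dimension variances
    for _ in range(n_iter):
        # E-step: responsibilities r[i, k] proportional to pi_k * N(x_i | mu_k, diag(var_k))
        log_r = -0.5 * (((X[:, None, :] - mu[None]) ** 2 / var[None]).sum(-1)
                        + np.log(2 * np.pi * var).sum(-1)[None]) + np.log(pi)[None]
        log_r -= log_r.max(axis=1, keepdims=True)
        r = np.exp(log_r)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: update weights, means, variances from weighted sufficient statistics
        nk = r.sum(axis=0)
        pi = nk / n
        mu = (r.T @ X) / nk[:, None]
        var = (r.T @ (X ** 2)) / nk[:, None] - mu ** 2 + 1e-6  # small floor for stability
    return pi, mu, var
```

Calling em_gmm(X, K=3) on an (n, d) array returns the mixture weights, component means, and per-dimension variances.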

104 citations

Proceedings Article
01 Aug 1997
TL;DR: Proposes a unified framework for parameter estimation in Bayesian networks with missing values and hidden variables, encompassing both on-line learning, where the model is continuously adapted to new data cases as they arrive, and the more traditional batch learning, where a pre-accumulated set of samples is used in a one-time model selection process.
Abstract: This paper re-examines the problem of parameter estimation in Bayesian networks with missing values and hidden variables from the perspective of recent work in on-line learning [13]. We provide a unified framework for parameter estimation that encompasses both on-line learning, where the model is continuously adapted to new data cases as they arrive, and the more traditional batch learning, where a pre-accumulated set of samples is used in a one-time model selection process. In the batch case, our framework encompasses both the gradient projection algorithm [2, 3] and the EM algorithm [15] for Bayesian networks. The framework also leads to new on-line and batch parameter update schemes, including a parameterized version of EM. We provide both empirical and theoretical results indicating that parameterized EM allows faster convergence to the maximum likelihood parameters than does standard EM.
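As an illustration of the on-line flavour (a generic stepwise-EM sketch for a 1-D Gaussian mixture, not the paper's parameterized EM for Bayesian networks), the running expected sufficient statistics can be blended with each new observation:

```python
import numpy as np

def online_em_gmm1d(stream, K, eta0=0.6, seed=0):
    """Stepwise (online) EM for a 1-D Gaussian mixture: running sufficient statistics
    are interpolated with each new observation's expected statistics.
    Illustration only; not the paper's update scheme."""
    rng = np.random.default_rng(seed)
    # sufficient statistics per component: weight, sum of x, sum of x^2
    s0 = np.full(K, 1.0 / K)
    s1 = rng.normal(size=K)
    s2 = s1 ** 2 + 1.0
    for t, x in enumerate(stream, start=1):
        pi, mu = s0 / s0.sum(), s1 / s0
        var = np.maximum(s2 / s0 - mu ** 2, 1e-6)
        # E-step for the single new point: component responsibilities
        log_r = np.log(pi) - 0.5 * (np.log(2 * np.pi * var) + (x - mu) ** 2 / var)
        r = np.exp(log_r - log_r.max())
        r /= r.sum()
        # stochastic-approximation update of the sufficient statistics
        eta = (t + 1) ** (-eta0)
        s0 = (1 - eta) * s0 + eta * r
        s1 = (1 - eta) * s1 + eta * r * x
        s2 = (1 - eta) * s2 + eta * r * x ** 2
    mu = s1 / s0
    return s0 / s0.sum(), mu, np.maximum(s2 / s0 - mu ** 2, 1e-6)
```

Feeding the full data set through this loop once per pass recovers a batch-like fit, while a single pass gives the purely on-line behaviour described above.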

104 citations

Journal Article (DOI)
TL;DR: The estimation of the parameters of a mixture of Gaussian densities is considered within the framework of maximum likelihood, and a solution to likelihood degeneracy, consisting in penalizing the likelihood function, is adopted.
Abstract: The estimation of the parameters of a mixture of Gaussian densities is considered within the framework of maximum likelihood. Due to the unboundedness of the likelihood function, the maximum likelihood estimator fails to exist. We adopt a solution to likelihood degeneracy which consists in penalizing the likelihood function. The resulting penalized likelihood function is bounded over the parameter space and the existence of the penalized maximum likelihood estimator is guaranteed. As an original contribution we provide asymptotic properties, and in particular a consistency proof, for the penalized maximum likelihood estimator. Numerical examples in the finite-data case show the performance of the penalized estimator compared to the standard one.
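One common way to realize such a penalty (shown here as a generic illustration; the paper's exact penalty term may differ) is an inverse-gamma-type term on each component variance. The E-step is unchanged and only the M-step variance update shifts:

```latex
\ell_p(\theta) \;=\; \ell(\theta) \;-\; \sum_{k=1}^{K}\left(\frac{a}{\sigma_k^2} + b\,\log\sigma_k^2\right),
\qquad a,b>0,
\qquad
\hat\sigma_k^{2} \;=\; \frac{\sum_i r_{ik}\,(x_i-\hat\mu_k)^2 \;+\; 2a}{\sum_i r_{ik} \;+\; 2b}.
```

With a, b > 0 every updated variance stays bounded away from zero, so the penalized likelihood cannot diverge along degenerate parameter sequences.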

104 citations

Journal Article (DOI)
TL;DR: Numerical results show that the sparsity regularization and expectation-maximization algorithm used in this paper provide better resolution than Tikhonov-type regularization and are also efficient in estimating two closely spaced abnormalities.
Abstract: We present an image reconstruction method for diffuse optical tomography (DOT) using sparsity regularization and the expectation-maximization (EM) algorithm. Typical image reconstruction approaches in DOT employ Tikhonov-type regularization, which imposes restrictions on the L2 norm of the optical properties (absorption/scattering coefficients). It tends to cause a blurring effect in the reconstructed image and works best when the unknown parameters follow a Gaussian distribution. In reality, the abnormality is often localized in space; therefore, the vector corresponding to the change of the optical properties compared with the background is sparse, with only a few elements being nonzero. To incorporate this information and improve performance, we propose an image reconstruction method that regularizes the L1 norm of the unknown parameters and solves the problem iteratively using the expectation-maximization algorithm. We verify our method on simulated 3D examples and compare the reconstruction performance of our approach with the level-set algorithm, Tikhonov regularization, and the simultaneous iterative reconstruction technique (SIRT). Numerical results show that our method provides better resolution than Tikhonov-type regularization and is also efficient in estimating two closely spaced abnormalities.
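The DOT forward model and the paper's EM formulation are not reproduced here; as a rough sketch of the L1-regularized reconstruction idea, iterative soft-thresholding for a generic linear model y ≈ Ax solves the same kind of sparse objective (the function name ista and all parameters are illustrative assumptions, not the authors' method):

```python
import numpy as np

def ista(A, y, lam, n_iter=200):
    """Iterative soft-thresholding for min_x 0.5*||A x - y||^2 + lam*||x||_1.
    Generic sparse-recovery sketch, not the paper's EM-based DOT reconstruction."""
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the data-fit gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)               # gradient of the smooth data-fit term
        z = x - grad / L                       # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold (prox of L1)
    return x
```

The soft-threshold step is what drives most entries of x to exactly zero, matching the assumption that the change in optical properties is localized in space.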

104 citations


Network Information
Related Topics (5)
Estimator: 97.3K papers, 2.6M citations (91% related)
Deep learning: 79.8K papers, 2.1M citations (84% related)
Support vector machine: 73.6K papers, 1.7M citations (84% related)
Cluster analysis: 146.5K papers, 2.9M citations (84% related)
Artificial neural network: 207K papers, 4.5M citations (82% related)
Performance Metrics
No. of papers in the topic in previous years:
Year    Papers
2023    114
2022    245
2021    438
2020    410
2019    484
2018    519