scispace - formally typeset
Topic

Expectation–maximization algorithm

About: Expectation–maximization algorithm is a research topic. Over its lifetime, 11,823 publications have been published within this topic, receiving 528,693 citations. The topic is also known as: EM algorithm & Expectation Maximization.
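As a concrete illustration of the E-step/M-step alternation that defines this topic (not taken from any of the papers below), here is a minimal Python sketch of EM for a two-component one-dimensional Gaussian mixture with unit variances. The function name, initial values, and data are all invented for the example.

```python
import math

def em_gmm_1d(xs, mu1, mu2, iters=50):
    """EM for a two-component 1-D Gaussian mixture with unit variances.
    Illustrative sketch only: updates the mixing weight and the two means."""
    pi = 0.5
    for _ in range(iters):
        # E-step: responsibility of component 1 for each point
        r = []
        for x in xs:
            p1 = pi * math.exp(-0.5 * (x - mu1) ** 2)
            p2 = (1 - pi) * math.exp(-0.5 * (x - mu2) ** 2)
            r.append(p1 / (p1 + p2))
        # M-step: re-estimate weight and means from the responsibilities
        n1 = sum(r)
        pi = n1 / len(xs)
        mu1 = sum(ri * x for ri, x in zip(r, xs)) / n1
        mu2 = sum((1 - ri) * x for ri, x in zip(r, xs)) / (len(xs) - n1)
    return pi, mu1, mu2

# Made-up data: two well-separated clusters near 0 and 5
data = [-0.3, 0.1, 0.2, -0.1, 4.8, 5.1, 5.2, 4.9]
pi, mu1, mu2 = em_gmm_1d(data, mu1=1.0, mu2=4.0)
```

On separated data like this, the responsibilities become nearly hard assignments and the means converge to the two cluster averages; with overlapping components or poor initialization, EM can converge to a local optimum instead.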


Papers
Journal Article
TL;DR: An extended EM algorithm is used to minimize the information divergence (maximize the relative entropy) in the density approximation case and fits to Weibull, log normal, and Erlang distributions are used as illustrations of the latter.
Abstract: Estimation from sample data and density approximation with phase-type distributions are considered. Maximum likelihood estimation via the EM algorithm is discussed and performed for some data sets. An extended EM algorithm is used to minimize the information divergence (maximize the relative entropy) in the density approximation case. Fits to Weibull, log normal, and Erlang distributions are used as illustrations of the latter.

690 citations
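The full phase-type EM in the paper above requires matrix-exponential integrals in the E-step. As a hedged minimal sketch, the code below applies EM to the simplest phase-type subclass, a two-phase hyperexponential (mixture of two exponentials), where both steps have closed forms; the data and starting rates are invented.

```python
import math

def em_hyperexp(xs, l1, l2, iters=200):
    """EM for a two-phase hyperexponential (mixture of two exponentials),
    the simplest phase-type subclass. Illustrative sketch only."""
    pi = 0.5
    for _ in range(iters):
        # E-step: posterior probability that each observation came from phase 1
        r = [pi * l1 * math.exp(-l1 * x) /
             (pi * l1 * math.exp(-l1 * x) + (1 - pi) * l2 * math.exp(-l2 * x))
             for x in xs]
        # M-step: closed-form updates for the mixing weight and the two rates
        n1 = sum(r)
        pi = n1 / len(xs)
        l1 = n1 / sum(ri * x for ri, x in zip(r, xs))
        l2 = (len(xs) - n1) / sum((1 - ri) * x for ri, x in zip(r, xs))
    return pi, l1, l2

# Made-up sample mixing long durations (mean ~10) with short ones (mean ~0.1)
data = [8.0, 10.0, 12.0, 0.05, 0.1, 0.15]
pi, l1, l2 = em_hyperexp(data, l1=0.2, l2=5.0)
```

For general phase-type distributions the M-step is still in closed form, but the expected sufficient statistics involve integrals of matrix exponentials, which is what makes the full algorithm in the paper substantially heavier than this special case.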

Journal ArticleDOI
TL;DR: A generalized expectation-maximization (GEM) algorithm is developed for Bayesian reconstruction, based on locally correlated Markov random-field priors in the form of Gibbs functions and on the Poisson data model, which reduces to the EM maximum-likelihood algorithm.
Abstract: A generalized expectation-maximization (GEM) algorithm is developed for Bayesian reconstruction, based on locally correlated Markov random-field priors in the form of Gibbs functions and on the Poisson data model. For the M-step of the algorithm, a form of coordinate gradient ascent is derived. The algorithm reduces to the EM maximum-likelihood algorithm as the Markov random-field prior tends towards a uniform distribution. Three different Gibbs function priors are examined. Reconstructions of 3-D images obtained from the Poisson model of single-photon-emission computed tomography are presented.

674 citations
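The limiting case mentioned above, the EM maximum-likelihood algorithm for the Poisson model, is the classic multiplicative MLEM update. As a hedged sketch (a toy 2-pixel, 2-detector system with a made-up system matrix, not real SPECT data):

```python
def mlem(a, y, lam, iters=200):
    """EM maximum-likelihood (MLEM) for the Poisson model
    y_i ~ Poisson(sum_j a[i][j] * lam[j]). Illustrative sketch only."""
    m, n = len(y), len(lam)
    # sensitivity of each pixel: column sums of the system matrix
    sens = [sum(a[i][j] for i in range(m)) for j in range(n)]
    for _ in range(iters):
        # forward projection of the current image estimate
        proj = [sum(a[i][j] * lam[j] for j in range(n)) for i in range(m)]
        # multiplicative EM update: back-project the data/projection ratios
        lam = [lam[j] / sens[j] *
               sum(a[i][j] * y[i] / proj[i] for i in range(m))
               for j in range(n)]
    return lam

# Made-up 2x2 system matrix and noiseless counts consistent with it
a = [[0.9, 0.1],
     [0.2, 0.8]]
y = [10.0, 20.0]
lam = mlem(a, y, [1.0, 1.0])
```

Each update multiplies the current estimate by a back-projected data/model ratio, so nonnegativity is preserved automatically; the GEM algorithm in the paper replaces this pure-likelihood update with a coordinate gradient ascent M-step that also accounts for the Gibbs prior.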

Journal ArticleDOI
TL;DR: In this paper, the existence, support size, likelihood equations, and uniqueness of the estimator are revealed to be directly related to the properties of the convex hull of the likelihood set and the support hyperplanes of that hull.
Abstract: In this paper certain fundamental properties of the maximum likelihood estimator of a mixing distribution are shown to be geometric properties of the likelihood set. The existence, support size, likelihood equations, and uniqueness of the estimator are revealed to be directly related to the properties of the convex hull of the likelihood set and the support hyperplanes of that hull. It is shown using geometric techniques that the estimator exists under quite general conditions, with a support size no larger than the number of distinct observations. Analysis of the convex dual of the likelihood set leads to a dual maximization problem. A convergent algorithm is described. The defining equations for the estimator are compared with the usual parametric likelihood equations for finite mixtures. Sufficient conditions for uniqueness are given. Part II will deal with a special theory for exponential family mixtures.

674 citations
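The geometric theory summarized above leads to a well-known directional-derivative characterization of the mixture NPMLE (stated here from standard mixture-likelihood theory, not quoted from the abstract). Writing the mixture density as $f_Q(x) = \int f(x;\theta)\, dQ(\theta)$, a candidate $\hat Q$ maximizes the log likelihood over all mixing distributions if and only if

```latex
D(\theta, \hat Q) \;=\; \sum_{i=1}^{n} \frac{f(x_i;\theta)}{f_{\hat Q}(x_i)} \;-\; n \;\le\; 0
\qquad \text{for all } \theta,
```

with equality holding at every support point of $\hat Q$. This gradient condition is what ties the estimator to the support hyperplanes of the convex hull of the likelihood set, and it underlies the bound that the support size need be no larger than the number of distinct observations.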

Journal ArticleDOI
TL;DR: FlexMix implements a general framework for fitting discrete mixtures of regression models in the R statistical computing environment and provides the E-step and all data handling, while the M-step can be supplied by the user to easily define new models.
Abstract: FlexMix implements a general framework for fitting discrete mixtures of regression models in the R statistical computing environment: three variants of the EM algorithm can be used for parameter estimation; regressors and responses may be multivariate with arbitrary dimension; data may be grouped, e.g., to account for multiple observations per individual; the usual formula interface of the S language is used for convenient model specification; and a modular concept of driver functions allows many different types of regression models to be interfaced. Existing drivers implement mixtures of standard linear models, generalized linear models and model-based clustering. FlexMix provides the E-step and all data handling, while the M-step can be supplied by the user to easily define new models.

659 citations
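FlexMix itself is an R package, but the E-step/M-step division it describes can be sketched in Python. Below is a hedged, self-contained illustration of EM for a two-component mixture of simple linear regressions with unit error variance, where the M-step is a per-component weighted least-squares fit; the function name, starting values, and data are all invented.

```python
import math

def em_mixreg(x, y, iters=50):
    """EM for a two-component mixture of simple linear regressions
    y ~ N(a_k + b_k * x, 1). Illustrative sketch only."""
    pi = [0.5, 0.5]
    params = [(0.0, 1.0), (8.0, 1.0)]  # made-up starting (intercept, slope) pairs
    n = len(x)
    for _ in range(iters):
        # E-step: responsibility of each component for each observation
        r = [[0.0] * n for _ in range(2)]
        for i in range(n):
            p = [pi[k] * math.exp(-0.5 * (y[i] - params[k][0] - params[k][1] * x[i]) ** 2)
                 for k in range(2)]
            s = p[0] + p[1]
            r[0][i], r[1][i] = p[0] / s, p[1] / s
        # M-step: weighted least squares per component, weights = responsibilities
        new_params = []
        for k in range(2):
            w = r[k]
            sw = sum(w)
            xm = sum(wi * xi for wi, xi in zip(w, x)) / sw
            ym = sum(wi * yi for wi, yi in zip(w, y)) / sw
            sxx = sum(wi * (xi - xm) ** 2 for wi, xi in zip(w, x))
            sxy = sum(wi * (xi - xm) * (yi - ym) for wi, xi, yi in zip(w, x, y))
            b = sxy / sxx
            new_params.append((ym - b * xm, b))
            pi[k] = sw / n
        params = new_params
    return pi, params

# Made-up data from two parallel lines, y = x and y = x + 10
pi, params = em_mixreg([0, 1, 2, 3, 0, 1, 2, 3],
                       [0, 1, 2, 3, 10, 11, 12, 13])
```

The design point FlexMix makes is visible even in this toy version: the E-step and responsibility bookkeeping are generic, while only the weighted M-step fit depends on the particular regression model, so swapping in a new model class means supplying a new M-step.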


Network Information
Related Topics (5)
Estimator
97.3K papers, 2.6M citations
91% related
Deep learning
79.8K papers, 2.1M citations
84% related
Support vector machine
73.6K papers, 1.7M citations
84% related
Cluster analysis
146.5K papers, 2.9M citations
84% related
Artificial neural network
207K papers, 4.5M citations
82% related
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2023    114
2022    245
2021    438
2020    410
2019    484
2018    519