scispace - formally typeset

Expectation–maximization algorithm

About: The expectation–maximization (EM) algorithm is a research topic. Over the lifetime of the topic, 11,823 publications have been published, receiving 528,693 citations. The topic is also known as: EM algorithm and Expectation Maximization.


Papers
Journal ArticleDOI
TL;DR: The bootstrap is discussed as a method to assess the uncertainty of the maximum likelihood estimate and to construct confidence intervals for functions of the transition matrix such as expected survival.
Abstract: Discrete-time Markov chains have been successfully used to investigate treatment programs and health care protocols for chronic diseases. In these situations, the transition matrix, which describes the natural progression of the disease, is often estimated from a cohort observed at common intervals. Estimation of the matrix, however, is often complicated by the complex relationship among transition probabilities. This paper summarizes methods to obtain the maximum likelihood estimate of the transition matrix when the cycle length of the model coincides with the observation interval, the cycle length does not coincide with the observation interval, and when the observation intervals are unequal in length. In addition, the bootstrap is discussed as a method to assess the uncertainty of the maximum likelihood estimate and to construct confidence intervals for functions of the transition matrix such as expected survival. Copyright © 2002 John Wiley & Sons, Ltd.

157 citations
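The simplest case the paper covers, where the model's cycle length equals the observation interval, has a closed-form ML estimate (row-normalized transition counts), and the bootstrap step resamples the observed transitions. The following is a minimal sketch of that idea, not the paper's implementation; the toy three-state chain and the one-cycle survival statistic are hypothetical illustrations.

```python
import numpy as np

def estimate_transition_matrix(transitions, n_states):
    """ML estimate when the cycle length equals the observation
    interval: row-normalized transition counts."""
    counts = np.zeros((n_states, n_states))
    for i, j in transitions:
        counts[i, j] += 1
    rows = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, rows, out=np.zeros_like(counts), where=rows > 0)

def bootstrap_ci(transitions, n_states, stat, n_boot=500, alpha=0.05, seed=0):
    """Nonparametric bootstrap over the observed transitions, giving a
    percentile confidence interval for any function of the matrix."""
    rng = np.random.default_rng(seed)
    transitions = np.asarray(transitions)
    stats = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(transitions), len(transitions))
        P = estimate_transition_matrix(transitions[idx], n_states)
        stats.append(stat(P))
    return tuple(np.quantile(stats, [alpha / 2, 1 - alpha / 2]))

# hypothetical toy 3-state chain: 0 = well, 1 = ill, 2 = dead (absorbing)
data = [(0, 0), (0, 1), (1, 1), (1, 2), (0, 0), (1, 1), (0, 1), (1, 2)]
P = estimate_transition_matrix(data, 3)
# example statistic: probability an ill patient survives one cycle
lo, hi = bootstrap_ci(data, 3, lambda P: 1 - P[1, 2], n_boot=200)
```

The unequal-interval and non-coinciding-cycle cases the paper also treats require matrix roots or EM-type estimation and are not sketched here.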

Journal ArticleDOI
TL;DR: SAEM is an adaptation of the stochastic EM algorithm that overcomes most of the well-known limitations of EM and appears to be more tractable than SEM, since it provides almost sure convergence, while SEM provides convergence in distribution.
Abstract: The EM algorithm is a widely applicable approach for computing maximum likelihood estimates for incomplete data. We present a stochastic approximation type EM algorithm: SAEM. This algorithm is an adaptation of the stochastic EM algorithm (SEM) that we have previously developed. Like SEM, SAEM overcomes most of the well-known limitations of EM. Moreover, SAEM performs better for small samples. Furthermore, SAEM appears to be more tractable than SEM, since it provides almost sure convergence, while SEM provides convergence in distribution. Here, we restrict attention to the mixture problem. We state a theorem which asserts that each SAEM sequence converges a.s. to a local maximizer of the likelihood function. We close this paper with a comparative study, based on numerical simulations, of these three algorithms.

157 citations
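The core SAEM idea can be sketched for the mixture setting the abstract restricts to. This is a minimal illustration under simplifying assumptions (two univariate Gaussian components with unit variances), not the authors' exact algorithm: labels are simulated as in SEM, but the complete-data sufficient statistics are smoothed by a stochastic-approximation step with a decreasing step size, which is what yields almost-sure rather than in-distribution convergence.

```python
import numpy as np

def saem_gmm(x, n_iter=300, seed=1):
    """SAEM sketch for a two-component univariate Gaussian mixture,
    unit variances assumed for brevity.
    S-step: simulate the missing labels from the current posterior.
    SA-step: smooth the complete-data sufficient statistics with
    step sizes gamma_k = 1/k.
    M-step: update parameters from the smoothed statistics."""
    rng = np.random.default_rng(seed)
    mu = np.array([x.min(), x.max()])          # crude spread-out init
    pi = np.array([0.5, 0.5])
    s_n = np.array([len(x) / 2, len(x) / 2])   # smoothed counts
    s_sum = s_n * mu                           # smoothed sums
    for k in range(1, n_iter + 1):
        # posterior membership probabilities under unit variance
        w = pi * np.exp(-0.5 * (x[:, None] - mu) ** 2)
        w /= w.sum(axis=1, keepdims=True)
        z = (rng.random(len(x)) < w[:, 1]).astype(int)   # simulated labels
        stat_n = np.array([(z == 0).sum(), (z == 1).sum()], dtype=float)
        stat_sum = np.array([x[z == 0].sum(), x[z == 1].sum()])
        gamma = 1.0 / k                        # decreasing SA step size
        s_n += gamma * (stat_n - s_n)
        s_sum += gamma * (stat_sum - s_sum)
        pi = s_n / s_n.sum()                   # M-step
        mu = s_sum / s_n
    return pi, mu
```

With a constant step size gamma = 1 this reduces to SEM; the decreasing schedule is the distinguishing ingredient.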

Proceedings Article
01 Dec 1998
TL;DR: This paper proposes an annealed version of the standard EM algorithm for model fitting which is empirically evaluated on a variety of data sets from different domains.
Abstract: Dyadic data refers to a domain with two finite sets of objects in which observations are made for dyads, i.e., pairs with one element from either set. This type of data arises naturally in many applications, ranging from computational linguistics and information retrieval to preference analysis and computer vision. In this paper, we present a systematic, domain-independent framework of learning from dyadic data by statistical mixture models. Our approach covers different models with flat and hierarchical latent class structures. We propose an annealed version of the standard EM algorithm for model fitting which is empirically evaluated on a variety of data sets from different domains.

157 citations
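The paper applies annealed EM to dyadic mixture models; the annealing mechanism itself is generic and can be sketched on a much simpler model. In this hedged illustration (a univariate unit-variance Gaussian mixture, not the authors' dyadic model), the E-step posterior is tempered by an inverse temperature beta < 1, which flattens responsibilities and smooths the likelihood surface, and beta is raised toward 1 on a schedule so the final stage is standard EM.

```python
import numpy as np

def annealed_em(x, k=2, betas=(0.25, 0.5, 1.0), sweeps=30):
    """Annealed EM sketch: temper the E-step posterior with beta < 1,
    then anneal beta toward 1. Unit-variance univariate Gaussian
    components are assumed for simplicity."""
    mu = np.quantile(x, (np.arange(k) + 0.5) / k)   # spread-out init
    pi = np.full(k, 1.0 / k)
    for beta in betas:                  # annealing schedule
        for _ in range(sweeps):
            log_p = np.log(pi) - 0.5 * (x[:, None] - mu) ** 2
            # tempered E-step: responsibilities proportional to p^beta
            w = np.exp(beta * (log_p - log_p.max(axis=1, keepdims=True)))
            w /= w.sum(axis=1, keepdims=True)
            pi = w.mean(axis=0)         # standard M-step
            mu = (w * x[:, None]).sum(axis=0) / w.sum(axis=0)
    return pi, mu
```

The flattened posteriors at small beta make early iterations less sensitive to initialization, which is the empirical benefit the paper evaluates.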

Journal ArticleDOI
01 Aug 1992
TL;DR: The accuracy in determining the number of image classes using AIC and MDL is compared and the MDL criterion performed better than the AIC criterion, and a modified MDL showed further improvement.
Abstract: A method for parameter estimation in image classification or segmentation is studied within the statistical frame of finite mixture distributions. The method models an image as a finite mixture. Each mixture component corresponds to an image class. Each image class is characterized by parameters, such as the intensity mean, the standard deviation, and the number of image pixels in that class. The method uses a maximum likelihood (ML) approach to estimate the parameters of each class and employs information criteria of Akaike (AIC) and/or Schwarz and Rissanen (MDL) to determine the number of classes in the image. In computing the ML solution of the mixture, the method adopts the expectation maximization (EM) algorithm. The initial estimation and convergence of the ML-EM algorithm were studied. The accuracy in determining the number of image classes using AIC and MDL is compared. The MDL criterion performed better than the AIC criterion. A modified MDL showed further improvement.

157 citations
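The ML-EM fit plus information-criterion selection described above can be sketched in a few lines. This is a minimal one-dimensional version (intensities as a univariate Gaussian mixture), not the paper's image pipeline: fit the mixture by EM for each candidate number of classes, then score each fit with AIC = -2 log L + 2p and MDL = -2 log L + p log n, and pick the minimizer.

```python
import numpy as np

def fit_gmm(x, k, n_iter=150):
    """EM for a univariate Gaussian mixture with k components.
    Returns the maximized log-likelihood and free-parameter count."""
    mu = np.quantile(x, (np.arange(k) + 0.5) / k)
    var = np.full(k, x.var())
    pi = np.full(k, 1.0 / k)

    def dens():
        return pi * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) \
               / np.sqrt(2 * np.pi * var)

    for _ in range(n_iter):
        d = dens()
        w = d / d.sum(axis=1, keepdims=True)        # E-step
        nk = w.sum(axis=0)                          # M-step
        pi = nk / len(x)
        mu = (w * x[:, None]).sum(axis=0) / nk
        var = np.maximum((w * (x[:, None] - mu) ** 2).sum(axis=0) / nk,
                         1e-6)                      # guard variance collapse
    loglik = np.log(dens().sum(axis=1)).sum()
    return loglik, 3 * k - 1       # k means, k variances, k-1 free weights

def select_k(x, k_max=4):
    """Choose the number of classes by AIC and by MDL (equivalent to BIC)."""
    n = len(x)
    fits = {k: fit_gmm(x, k) for k in range(1, k_max + 1)}
    k_aic = min(fits, key=lambda k: -2 * fits[k][0] + 2 * fits[k][1])
    k_mdl = min(fits, key=lambda k: -2 * fits[k][0] + fits[k][1] * np.log(n))
    return k_aic, k_mdl
```

MDL's heavier penalty (p log n versus 2p) is why it tends to overfit less than AIC, consistent with the paper's finding.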

Journal ArticleDOI
TL;DR: In this article, the iterative least-squares procedure for estimating the parameters in a general multilevel random coefficients linear model can be modified to produce unbiased estimates of the random parameters.
Abstract: It is shown that the iterative least-squares procedure for estimating the parameters in a general multilevel random coefficients linear model can be modified to produce unbiased estimates of the random parameters. In the multivariate normal case these are equivalent to restricted maximum likelihood estimates.

156 citations
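The full multilevel iterative generalized least squares modification is beyond a short sketch, but the bias it removes has a familiar single-level analogue, shown in this hypothetical toy example: the ML residual-variance estimate divides by n and is biased downward because it ignores the degrees of freedom spent estimating the fixed effects, while the restricted (REML-style) estimate divides by n - p and is unbiased.

```python
import numpy as np

def variance_estimates(y, X):
    """ML vs. restricted residual-variance estimates in ordinary
    regression. Dividing by n (ML) is biased downward; dividing by
    n - p removes the bias, analogous to the correction the paper
    builds into iterative least squares for random parameters."""
    n, p = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = ((y - X @ beta) ** 2).sum()
    return rss / n, rss / (n - p)   # (ML, REML-style unbiased)
```

Averaged over repeated samples, the ML estimate undershoots the true variance by the factor (n - p)/n, while the corrected estimate is centered on it.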


Network Information

Related Topics (5)
- Estimator: 97.3K papers, 2.6M citations, 91% related
- Deep learning: 79.8K papers, 2.1M citations, 84% related
- Support vector machine: 73.6K papers, 1.7M citations, 84% related
- Cluster analysis: 146.5K papers, 2.9M citations, 84% related
- Artificial neural network: 207K papers, 4.5M citations, 82% related
Performance Metrics

No. of papers in the topic in previous years:

Year  Papers
2023  114
2022  245
2021  438
2020  410
2019  484
2018  519