Topic

Expectation–maximization algorithm

About: Expectation–maximization algorithm is a research topic. Over its lifetime, 11,823 publications have been published within this topic, receiving 528,693 citations. The topic is also known as: EM algorithm & Expectation Maximization.
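Since every paper below builds on the same core iteration, a minimal sketch of EM may help orient the reader. The example fits a two-component one-dimensional Gaussian mixture, the textbook application; the synthetic data, initialization, and function name are illustrative assumptions, not taken from any paper listed here.

```python
import numpy as np

# Illustrative sketch only: EM for a two-component 1-D Gaussian mixture.
def em_gaussian_mixture(x, n_iter=100):
    # Crude initialization from the data quantiles and overall spread.
    mu = np.quantile(x, [0.25, 0.75])
    sigma = np.array([x.std(), x.std()])
    weight = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each point.
        dens = (weight / (sigma * np.sqrt(2 * np.pi))
                * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2))
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and standard deviations.
        nk = resp.sum(axis=0)
        weight = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
    return weight, mu, sigma

# Synthetic two-cluster data; EM should recover the weights, means, spreads.
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2, 1, 500), rng.normal(3, 0.5, 500)])
print(em_gaussian_mixture(x))
```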


Papers
Journal Article (DOI)
TL;DR: In this article, the authors investigate the semiparametric inference of the simple Gamma-process model and a random-effects variant, in which maximum likelihood estimates of the parameters are obtained through the EM algorithm and the bootstrap is used to construct confidence intervals.
Abstract: This article investigates the semiparametric inference of the simple Gamma-process model and a random-effects variant. Maximum likelihood estimates of the parameters are obtained through the EM algorithm. The bootstrap is used to construct confidence intervals. A simulation study reveals that estimation based on the full likelihood method is more efficient than estimation based on the pseudo-likelihood method. In addition, a score test is developed to examine the existence of random effects under the semiparametric scenario. A comparison study using a fatigue-crack growth dataset shows that the performance of the semiparametric estimation is comparable to that of its parametric counterpart. This article has supplementary material online.

85 citations
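The pairing in this abstract of EM point estimates with bootstrap confidence intervals is a common pattern. Below is a hedged sketch of the percentile bootstrap around a generic scalar estimator; `estimate` is a placeholder demonstrated with the sample mean, not the authors' code, and the percentile method is only one of several interval constructions.

```python
import numpy as np

# Illustrative sketch only: percentile bootstrap CI for a scalar estimator.
def percentile_bootstrap_ci(data, estimate, n_boot=2000, alpha=0.05, seed=0):
    rng = np.random.default_rng(seed)
    stats = np.empty(n_boot)
    for b in range(n_boot):
        # Resample the data with replacement and re-run the estimator.
        sample = rng.choice(data, size=len(data), replace=True)
        stats[b] = estimate(sample)
    # The central (1 - alpha) span of the bootstrap distribution.
    lo, hi = np.quantile(stats, [alpha / 2, 1 - alpha / 2])
    return lo, hi

# Demo with the sample mean of a skewed sample; in the paper's setting,
# `estimate` would instead be the EM fit of the model parameter.
rng = np.random.default_rng(1)
data = rng.exponential(scale=2.0, size=200)
print(percentile_bootstrap_ci(data, np.mean))
```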

Journal Article (DOI)
TL;DR: A novel method is presented for Polarimetric Synthetic Aperture Radar (PolSAR) image segmentation that requires no parameter initialization; the results demonstrate the superiority of the proposed method, which improves both segmentation performance and noise resistance.

85 citations

Journal Article (DOI)
TL;DR: In this paper, the authors investigate the use of a probabilistic model for unsupervised clustering in text collections, consisting of a mixture of multinomial distributions over the word counts, with each component corresponding to a different theme.
Abstract: In this article, we investigate the use of a probabilistic model for unsupervised clustering in text collections. Unsupervised clustering has become a basic module for many intelligent text processing applications, such as information retrieval, text classification, or information extraction. Probabilistic clustering models have recently been proposed that build "soft" theme-document associations. These models make it possible to compute, for each document, a probability vector whose values can be interpreted as the strength of the association between documents and clusters. As such, these vectors can also serve to project texts into a lower-dimensional "semantic" space. These models, however, pose non-trivial estimation problems, which are aggravated by the very high dimensionality of the parameter space. The model considered in this paper consists of a mixture of multinomial distributions over the word counts, each component corresponding to a different theme. We propose a systematic evaluation framework to contrast various estimation procedures for this model. Starting with the expectation-maximization (EM) algorithm as the basic tool for inference, we discuss the importance of initialization and the influence of other features, such as the smoothing strategy or the size of the vocabulary, thereby illustrating the difficulties incurred by the high dimensionality of the parameter space. We empirically show that, in the case of text processing, these difficulties can be alleviated by introducing the vocabulary incrementally, owing to the specific profile of the word count distributions. Using the fact that the model parameters can be analytically integrated out, we finally show that Gibbs sampling on the theme configurations is tractable and compares favorably to the basic EM approach.

85 citations
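As a concrete illustration of the model class this abstract describes, here is a hedged sketch of EM for a mixture of multinomials over word counts, computed in log space with Laplace smoothing to keep the high-dimensional estimates stable. The toy corpus, variable names, and smoothing choice are assumptions for illustration, not the authors' implementation.

```python
import numpy as np
from scipy.special import logsumexp

# Illustrative sketch only: EM for a k-component multinomial mixture
# over word-count rows, with Laplace smoothing of the word distributions.
def em_multinomial_mixture(counts, k, n_iter=50, smooth=1.0, seed=0):
    rng = np.random.default_rng(seed)
    n_docs, vocab = counts.shape
    log_weight = np.full(k, -np.log(k))            # uniform mixing weights
    theta = rng.dirichlet(np.ones(vocab), size=k)  # per-theme word probabilities
    for _ in range(n_iter):
        # E-step: log responsibility of each theme for each document.
        log_joint = counts @ np.log(theta).T + log_weight
        resp = np.exp(log_joint - logsumexp(log_joint, axis=1, keepdims=True))
        # M-step: re-estimate mixing weights and smoothed word distributions.
        nk = resp.sum(axis=0)
        log_weight = np.log(nk / n_docs)
        word_mass = resp.T @ counts + smooth       # Laplace smoothing
        theta = word_mass / word_mass.sum(axis=1, keepdims=True)
    return np.exp(log_weight), theta, resp

# Toy corpus: six documents over a four-word vocabulary with two clear themes.
counts = np.array([[5, 4, 0, 1], [6, 3, 1, 0], [4, 5, 0, 0],
                   [0, 1, 5, 6], [1, 0, 4, 5], [0, 0, 6, 4]])
weight, theta, resp = em_multinomial_mixture(counts, k=2)
print(resp.argmax(axis=1))  # hard cluster assignments per document
```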

Journal Article (DOI)
TL;DR: Modifications to her original acceleration algorithm are introduced, which extend it to truncated data and implement the search for an optimal step size in an alternative way.
Abstract: Maximum-likelihood image restoration for noncoherent imagery, which is based on the generic expectation maximization (EM) algorithm of Dempster et al. [J. R. Stat. Soc. B 39, 1 (1977)], is an iterative method whose convergence can be slow. We discuss an accelerated version of this algorithm. The EM algorithm is interpreted as a hill-climbing technique in which each iteration takes a step up the likelihood functional. The basic principle of the acceleration technique presented is to take larger steps in the same vector direction and to find an optimal step size. This basic line-search principle is adapted from the research of Kaufman [IEEE Trans. Med. Imag. MI-6, 37 (1987)]. Modifications to her original acceleration algorithm are introduced, which extend it to truncated data and implement the search for an optimal step size in an alternative way. Log-likelihood calculations and reconstructed images from simulations show that execution time is shortened by approximately a factor of 7 relative to the nonaccelerated algorithm.

85 citations
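The line-search idea in this abstract is easy to state in code: compute the ordinary EM update, treat the move as a direction, and try longer steps along it, keeping whichever raises the likelihood most. The sketch below uses a crude step-size grid in place of an optimal search, omits the projection onto parameter constraints (e.g., nonnegative intensities in image restoration), and demonstrates the step on a hypothetical toy mixture-weight problem; none of it reproduces Kaufman's algorithm or the paper's modifications.

```python
import numpy as np

# Illustrative sketch only: one line-search-accelerated EM step. A crude
# step-size grid stands in for an optimal search, and projection onto
# parameter constraints (e.g., nonnegative intensities) is omitted.
def accelerated_em_step(theta, em_update, log_lik, steps=(1.0, 1.5, 2.0)):
    direction = em_update(theta) - theta      # the ordinary EM move
    best_theta = theta + direction            # plain EM step (step size 1)
    best_ll = log_lik(best_theta)
    for a in steps[1:]:
        candidate = theta + a * direction     # longer step, same direction
        ll = log_lik(candidate)
        if ll > best_ll:                      # accept only if likelihood rises
            best_theta, best_ll = candidate, ll
    return best_theta, best_ll

# Hypothetical toy problem: mixture of N(0,1) and N(4,1) with unknown
# weight w on the second component; both component densities are known.
rng = np.random.default_rng(2)
hidden = rng.random(1000) < 0.3
x = np.where(hidden, rng.normal(4.0, 1.0, 1000), rng.normal(0.0, 1.0, 1000))

def dens(x, mu):
    return np.exp(-0.5 * (x - mu) ** 2) / np.sqrt(2.0 * np.pi)

def log_lik(w):
    return np.log((1 - w) * dens(x, 0.0) + w * dens(x, 4.0)).sum()

def em_update(w):
    # E-step responsibilities for the second component, averaged (M-step).
    r = w * dens(x, 4.0) / ((1 - w) * dens(x, 0.0) + w * dens(x, 4.0))
    return r.mean()

w = 0.5
for _ in range(10):
    w, ll = accelerated_em_step(w, em_update, log_lik)
print(w, ll)  # w should settle near the true mixing weight of 0.3
```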


Network Information
Related Topics (5)
Estimator: 97.3K papers, 2.6M citations (91% related)
Deep learning: 79.8K papers, 2.1M citations (84% related)
Support vector machine: 73.6K papers, 1.7M citations (84% related)
Cluster analysis: 146.5K papers, 2.9M citations (84% related)
Artificial neural network: 207K papers, 4.5M citations (82% related)
Performance Metrics
No. of papers in the topic in previous years:

Year    Papers
2023    114
2022    245
2021    438
2020    410
2019    484
2018    519