
Expectation–maximization algorithm

About: Expectation–maximization algorithm is a research topic. Over the lifetime, 11823 publications have been published within this topic receiving 528693 citations. The topic is also known as: EM algorithm & Expectation Maximization.


Papers
Journal ArticleDOI
TL;DR: In this article, the maximum likelihood estimators of distribution parameters subject to certain order relations are determined for simultaneous sampling from a number of populations, where the order relations are specified by regarding the parameters, one per population, as values at specified points of a function that is monotone in each variable separately.
Abstract: The maximum likelihood estimators of distribution parameters subject to certain order relations are determined for simultaneous sampling from a number of populations, when $(i)$ the order relations may be specified by regarding the distribution parameters, of which one is associated with each population, as values at specified points of a function of $n$ variables ($n$ a positive integer), monotone in each variable separately; (ii) the distributions of the populations from which sample values are taken belong to the exponential family defined below. This family includes, in particular, the binomial, the normal with fixed standard deviation and variable mean, the normal with fixed mean and variable standard deviation, and the Poisson distributions.
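For the simplest case treated by this line of work (normal means with a nondecreasing order restriction), the order-restricted MLE can be computed by the pool-adjacent-violators algorithm (PAVA). The sketch below is illustrative only and covers just this one-dimensional monotone case, not the general exponential-family setting of the paper; the function name `pava` is our own.

```python
def pava(y, w=None):
    """Pool Adjacent Violators: isotonic (nondecreasing) MLE for sample means.

    y: observed values (one per population), w: optional positive weights.
    Returns the fitted nondecreasing sequence minimizing weighted squared error.
    """
    w = [1.0] * len(y) if w is None else list(w)
    # Each block is [pooled value, total weight, number of points pooled].
    blocks = []
    for v, wt in zip(y, w):
        blocks.append([v, wt, 1])
        # Merge backwards while the monotonicity constraint is violated.
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            v2, w2, c2 = blocks.pop()
            v1, w1, c1 = blocks.pop()
            blocks.append([(v1 * w1 + v2 * w2) / (w1 + w2), w1 + w2, c1 + c2])
    # Expand pooled blocks back to one fitted value per observation.
    out = []
    for v, _, c in blocks:
        out.extend([v] * c)
    return out
```

Pooling adjacent violators is exactly the "amalgamation" step that makes the restricted MLE a weighted average over each violating run.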

317 citations

Proceedings ArticleDOI
06 Jul 2001
TL;DR: This paper presents the first algorithm that provably learns the component gaussians in time that is polynomial in the dimension.
Abstract: Mixtures of gaussian (or normal) distributions arise in a variety of application areas. Many techniques have been proposed for the task of finding the component gaussians given samples from the mixture, such as the EM algorithm, a local-search heuristic from Dempster, Laird and Rubin~(1977). However, such heuristics are known to require time exponential in the dimension (i.e., number of variables) in the worst case, even when the number of components is $2$.This paper presents the first algorithm that provably learns the component gaussians in time that is polynomial in the dimension. The gaussians may have arbitrary shape provided they satisfy a “nondegeneracy” condition, which requires their high-probability regions to be not “too close” together.
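For contrast with the provably polynomial-time algorithm above, the EM heuristic of Dempster, Laird and Rubin that the paper discusses can be sketched in a few lines. This is a minimal one-dimensional, two-component version with a crude initialization, not the paper's method; `em_gmm_1d` is our own illustrative name.

```python
import math

def em_gmm_1d(xs, iters=50):
    """Batch EM for a two-component 1-D Gaussian mixture (illustrative sketch)."""
    # Crude initialization from the data range; EM is a local search,
    # so poor initialization can lead to a bad local optimum.
    mu = [min(xs), max(xs)]
    var = [1.0, 1.0]
    pi = [0.5, 0.5]
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each point.
        resp = []
        for x in xs:
            p = [pi[k] / math.sqrt(2 * math.pi * var[k])
                 * math.exp(-(x - mu[k]) ** 2 / (2 * var[k])) for k in range(2)]
            s = sum(p)
            resp.append([pk / s for pk in p])
        # M-step: re-estimate weights, means, variances from responsibilities.
        for k in range(2):
            nk = sum(r[k] for r in resp)
            pi[k] = nk / len(xs)
            mu[k] = sum(r[k] * x for r, x in zip(resp, xs)) / nk
            var[k] = sum(r[k] * (x - mu[k]) ** 2
                         for r, x in zip(resp, xs)) / nk + 1e-6
    return pi, mu, var
```

Each iteration increases the data likelihood, but nothing guarantees reaching the global optimum; that gap is precisely what motivates algorithms with provable guarantees under nondegeneracy conditions.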

316 citations

Journal ArticleDOI
TL;DR: It is shown that the on-line EM algorithm is equivalent to the batch EM algorithm if a specific scheduling of the discount factor is employed and can be considered as a stochastic approximation method to find the maximum likelihood estimator.
Abstract: A normalized gaussian network (NGnet) (Moody & Darken, 1989) is a network of local linear regression units. The model softly partitions the input space by normalized gaussian functions, and each local unit linearly approximates the output within the partition. In this article, we propose a new on-line EM algorithm for the NGnet, which is derived from the batch EM algorithm (Xu, Jordan, & Hinton 1995), by introducing a discount factor. We show that the on-line EM algorithm is equivalent to the batch EM algorithm if a specific scheduling of the discount factor is employed. In addition, we show that the on-line EM algorithm can be considered as a stochastic approximation method to find the maximum likelihood estimator. A new regularization method is proposed in order to deal with a singular input distribution. In order to manage dynamic environments, where the input-output distribution of data changes over time, unit manipulation mechanisms such as unit production, unit deletion, and unit division are also introduced based on probabilistic interpretation. Experimental results show that our approach is suitable for function approximation problems in dynamic environments. We also apply our on-line EM algorithm to robot dynamics problems and compare our algorithm with the mixtures-of-experts family.
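The discount-factor idea in the on-line EM algorithm amounts to exponentially forgetting old sufficient statistics. The sketch below shows only that core mechanism on running first- and second-moment statistics of a scalar stream, not the NGnet or its unit-manipulation mechanisms; the class name `OnlineStats` is our own.

```python
class OnlineStats:
    """Discounted running sufficient statistics <1>, <x>, <x^2> (sketch).

    With discount factor lam = 1 the estimates equal the batch sample
    moments; lam < 1 forgets old data geometrically, which lets the
    estimator track a drifting input distribution.
    """

    def __init__(self, lam=0.99):
        self.lam = lam
        self.w = 0.0   # discounted count
        self.s1 = 0.0  # discounted sum of x
        self.s2 = 0.0  # discounted sum of x^2

    def update(self, x):
        # Decay all accumulated statistics, then add the new observation.
        self.w = self.lam * self.w + 1.0
        self.s1 = self.lam * self.s1 + x
        self.s2 = self.lam * self.s2 + x * x

    @property
    def mean(self):
        return self.s1 / self.w

    @property
    def var(self):
        return self.s2 / self.w - self.mean ** 2
```

Scheduling `lam` toward 1 over time is what makes such recursions behave like a stochastic approximation converging to the batch maximum likelihood estimate, as the article shows for the full on-line EM algorithm.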

315 citations

Journal ArticleDOI
TL;DR: The method is demonstrated using an input-state-output model of the hemodynamic coupling between experimentally designed causes or factors in fMRI studies and the ensuing BOLD response, and extends classical inference to more plausible inferences about the parameters of the model given the data.

315 citations

Journal ArticleDOI
TL;DR: In this paper, the authors present several classes of semiparametric regression models, which extend the existing models in important directions, and construct appropriate likelihood functions involving both finite dimensional and infinite dimensional parameters.
Abstract: Summary. Semiparametric regression models play a central role in formulating the effects of covariates on potentially censored failure times and in the joint modelling of incomplete repeated measures and failure times in longitudinal studies. The presence of infinite dimensional parameters poses considerable theoretical and computational challenges in the statistical analysis of such models. We present several classes of semiparametric regression models, which extend the existing models in important directions. We construct appropriate likelihood functions involving both finite dimensional and infinite dimensional parameters. The maximum likelihood estimators are consistent and asymptotically normal with efficient variances. We develop simple and stable numerical techniques to implement the corresponding inference procedures. Extensive simulation experiments demonstrate that the inferential and computational methods proposed perform well in practical settings. Applications to three medical studies yield important new insights. We conclude that there is no reason, theoretical or numerical, not to use maximum likelihood estimation for semiparametric regression models.We discuss several areas that need further research.

314 citations


Network Information
Related Topics (5)
- Estimator: 97.3K papers, 2.6M citations (91% related)
- Deep learning: 79.8K papers, 2.1M citations (84% related)
- Support vector machine: 73.6K papers, 1.7M citations (84% related)
- Cluster analysis: 146.5K papers, 2.9M citations (84% related)
- Artificial neural network: 207K papers, 4.5M citations (82% related)
Performance
Metrics
No. of papers in the topic in previous years
Year  Papers
2023  114
2022  245
2021  438
2020  410
2019  484
2018  519