Topic

Expectation–maximization algorithm

About: Expectation–maximization algorithm is a research topic. Over its lifetime, 11,823 publications have been published on this topic, receiving 528,693 citations. The topic is also known as: EM algorithm & Expectation Maximization.


Papers
Journal Article
TL;DR: In this paper, the probit-normal model for binary data is extended to allow correlated random effects; maximum likelihood estimates are obtained with an EM algorithm whose M-step is greatly simplified under the probit link and whose E-step is made feasible by Gibbs sampling.
Abstract: The probit-normal model for binary data (McCulloch, 1994, Journal of the American Statistical Association 89, 330-335) is extended to allow correlated random effects. To obtain maximum likelihood estimates, we use the EM algorithm with its M-step greatly simplified under the assumption of a probit link and its E-step made feasible by Gibbs sampling. Standard errors are calculated by inverting a Monte Carlo approximation of the information matrix rather than via the SEM algorithm. A method is also suggested that accounts for the Monte Carlo variation explicitly. As an illustration, we present a new analysis of the famous salamander mating data. Unlike previous analyses, we find it necessary to introduce different variance components for different species of animals. Finally, we consider models with correlated errors as well as correlated random effects.

91 citations
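The Monte Carlo EM recipe above can be sketched in code. The snippet below is a minimal illustration under simplifying assumptions, not the paper's model: it fits a random-intercept probit model (a single independent random effect per cluster rather than correlated effects), replaces the Gibbs sampler with a random-walk Metropolis sampler for the E-step, and all data sizes, starting values, and tuning constants are invented for the example.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)

# Simulated clustered binary data: y_ij = 1{mu + b_i + e_ij > 0},
# random intercepts b_i ~ N(0, sigma^2), standard normal errors e_ij.
# True values mu = 0.3, sigma^2 = 1.0 (invented for the toy example).
m, n = 40, 8
b_true = rng.normal(0.0, 1.0, m)
y = (rng.normal(size=(m, n)) < 0.3 + b_true[:, None]).astype(float)

def cluster_loglik(yi, mu, bi):
    """Bernoulli log-likelihood of one cluster given its random effect."""
    p = np.clip(norm.cdf(mu + bi), 1e-12, 1 - 1e-12)
    return np.sum(yi * np.log(p) + (1 - yi) * np.log(1 - p))

mu, sigma2 = 0.0, 0.5
for _ in range(25):
    # E-step: Markov chain Monte Carlo draws from p(b_i | y_i).
    S = 200
    draws = np.empty((m, S))
    for i in range(m):
        bi = 0.0
        logp = cluster_loglik(y[i], mu, bi) - bi**2 / (2 * sigma2)
        for s in range(S):
            prop = bi + rng.normal(0.0, 0.5)       # random-walk proposal
            lp = cluster_loglik(y[i], mu, prop) - prop**2 / (2 * sigma2)
            if np.log(rng.uniform()) < lp - logp:  # Metropolis accept/reject
                bi, logp = prop, lp
            draws[i, s] = bi
    # M-step: maximize the Monte Carlo estimate of Q(mu, sigma^2).
    sigma2 = np.mean(draws**2)       # closed-form update for the variance
    thin = draws[:, ::20]            # thin the chains to speed up the search

    def neg_q(mu_):
        return -sum(cluster_loglik(y[i], mu_, bij)
                    for i in range(m) for bij in thin[i])

    mu = minimize_scalar(neg_q, bounds=(-3, 3), method="bounded").x

print(f"mu = {mu:.2f}, sigma^2 = {sigma2:.2f}")  # true values: 0.3 and 1.0
```

In this toy version the variance update has a closed form, while a one-dimensional optimizer stands in for the simplified probit M-step described in the paper.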

Journal Article
TL;DR: Comparisons of the a priori uniform and nonuniform Bayesian algorithms to the maximum-likelihood algorithm are carried out using computer-generated noise-free and Poisson randomized projections.
Abstract: A method that incorporates a priori uniform or nonuniform source distribution probabilistic information and data fluctuations of a Poisson nature is presented. The source distributions are modeled in terms of a priori source probability density functions. Maximum a posteriori probability solutions, as determined by a system of equations, are given. Iterative Bayesian imaging algorithms for the solutions are derived using an expectation maximization technique. Comparisons of the a priori uniform and nonuniform Bayesian algorithms to the maximum-likelihood algorithm are carried out using computer-generated noise-free and Poisson randomized projections. Improvement in image reconstruction from projections with the Bayesian algorithm is demonstrated. Superior results are obtained using the a priori nonuniform source distribution.

91 citations
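The maximum-likelihood baseline these Bayesian algorithms are compared against is the classical EM update for Poisson projection data (the MLEM iteration). Below is a minimal sketch on an invented one-dimensional toy problem; the random system matrix A, the Gaussian-bump source, and the iteration count are all assumptions for illustration, and the paper's Bayesian variants would further modify the update with the a priori source distribution.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy forward model: A maps a 1-D "image" lam to expected detector counts.
n_pix, n_det = 32, 48
A = rng.uniform(size=(n_det, n_pix))
lam_true = 100 * np.exp(-0.5 * ((np.arange(n_pix) - 16) / 4.0) ** 2)
y = rng.poisson(A @ lam_true)            # Poisson-distributed projection data

lam = np.ones(n_pix)                     # strictly positive initial estimate
sens = A.sum(axis=0)                     # sensitivity: sum_i a_ij per pixel j
for _ in range(200):
    predicted = A @ lam                  # expected counts under current image
    ratio = y / np.maximum(predicted, 1e-12)
    lam = lam / sens * (A.T @ ratio)     # multiplicative EM (MLEM) update
```

Each update is multiplicative, so a nonnegative starting image stays nonnegative, which is one reason this EM form is standard in emission tomography.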

Proceedings Article
20 Sep 1999
TL;DR: This paper proposes a method called transformed component analysis, which incorporates a discrete, hidden variable that accounts for transformations and uses the expectation maximization algorithm to jointly extract components and normalize for transformations.
Abstract: A simple, effective way to model images is to represent each input pattern by a linear combination of "component" vectors, where the amplitudes of the vectors are modulated to match the input. This approach includes principal component analysis, independent component analysis and factor analysis. In practice, images are subjected to randomly selected transformations of a known nature, such as translation and rotation. Direct use of the above methods will lead to severely blurred components that tend to ignore the more interesting and useful structure. In previous work, we introduced a clustering algorithm that is invariant to transformations. In this paper, we propose a method called transformed component analysis, which incorporates a discrete, hidden variable that accounts for transformations and uses the expectation maximization algorithm to jointly extract components and normalize for transformations. We illustrate the algorithm using a shading problem, facial expression modeling and written digit recognition.

91 citations
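A stripped-down version of the idea can be written directly. The sketch below uses a single component and cyclic shifts of a 1-D signal as the known transformation family, so it is closer to the authors' earlier transformation-invariant clustering than to full transformed component analysis; the template, noise level, and shift set are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy data: one latent template observed under random cyclic shifts + noise.
D, N = 20, 300
template = np.zeros(D)
template[5:9] = 1.0
shifts = rng.integers(0, D, size=N)
X = np.stack([np.roll(template, s) for s in shifts])
X += 0.1 * rng.normal(size=(N, D))

mu = X.mean(axis=0)      # naive mean: badly blurred because it ignores shifts
var = 1.0
for _ in range(25):
    # E-step: posterior over the discrete hidden shift t for each example;
    # dist[n, t] = ||x_n - shift_t(mu)||^2 under an isotropic Gaussian model.
    dist = np.stack([((X - np.roll(mu, t)) ** 2).sum(axis=1)
                     for t in range(D)], axis=1)
    logp = -0.5 * dist / var
    r = np.exp(logp - logp.max(axis=1, keepdims=True))
    r /= r.sum(axis=1, keepdims=True)
    # M-step: average the examples after undoing each candidate shift,
    # weighted by the posterior responsibilities; update the noise variance.
    mu = sum(np.roll(X, -t, axis=1).T @ r[:, t] for t in range(D)) / N
    var = (r * dist).sum() / (N * D)
```

The learned mean matches the template up to one overall cyclic shift, while averaging the raw data instead (the mu initialization above) gives exactly the blurred result the paper warns about.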

Journal Article
TL;DR: In this article, the estimation of mixing proportions in the mixture model is discussed, with emphasis on the mixture of two normal components with all five parameters unknown, and simulations are presented that compare minimum distance (MD) and maximum likelihood (ML) estimation of the parameters of this mixture-of-normals model.
Abstract: The estimation of mixing proportions in the mixture model is discussed, with emphasis on the mixture of two normal components with all five parameters unknown. Simulations are presented that compare minimum distance (MD) and maximum likelihood (ML) estimation of the parameters of this mixture-of-normals model. Some practical issues of implementation of these results are also discussed. Simulation results indicate that ML techniques are superior to MD when component distributions actually are normal, but MD techniques provide better estimates than ML under symmetric departures from component normality. Interestingly, an ad hoc starting value for the iterative procedures occasionally outperformed both the ML and MD techniques. Results are presented that establish strong consistency and asymptotic normality of the MD estimator under conditions that include the mixture-of-normals model. Asymptotic variances and relative efficiencies are obtained for further comparison of the MD and ML estimators.

91 citations
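For reference, the ML side of this comparison is the textbook EM iteration for the five-parameter two-normal mixture; the sketch below runs it on simulated data with invented parameter values and omits the MD estimator, which instead minimizes a distance between the empirical and fitted distributions.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)

# Simulate the five-parameter model: mixing proportion p, means mu1, mu2,
# standard deviations s1, s2 (all true values here are invented).
n = 2000
from_first = rng.uniform(size=n) < 0.3
x = np.where(from_first, rng.normal(-2.0, 1.0, n), rng.normal(1.0, 0.5, n))

# Ad hoc starting values based on the sample quantiles and spread.
p, mu1, mu2 = 0.5, np.quantile(x, 0.25), np.quantile(x, 0.75)
s1 = s2 = x.std()
for _ in range(300):
    # E-step: posterior probability each point belongs to component 1.
    d1 = p * norm.pdf(x, mu1, s1)
    d2 = (1 - p) * norm.pdf(x, mu2, s2)
    w = d1 / (d1 + d2)
    # M-step: weighted maximum-likelihood updates of all five parameters.
    p = w.mean()
    mu1 = np.average(x, weights=w)
    mu2 = np.average(x, weights=1 - w)
    s1 = np.sqrt(np.average((x - mu1) ** 2, weights=w))
    s2 = np.sqrt(np.average((x - mu2) ** 2, weights=1 - w))
```

The likelihood of this model is unbounded at degenerate one-point components, so the iteration is sensitive to where it starts, which echoes the paper's observation about ad hoc starting values.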

Journal Article
TL;DR: This article derives the exact finite-sample distribution of the limited-information maximum likelihood estimator when the structural equation being estimated contains two endogenous variables and is identifiable in a complete system of linear stochastic equations.
Abstract: This article is concerned with the exact finite-sample distribution of the limited-information maximum likelihood (LIML) estimator when the structural equation being estimated contains two endogenous variables and is identifiable in a complete system of linear stochastic equations. The density function derived, which is represented as a doubly infinite series of a complicated form, reveals the important fact that, for arbitrary values of the parameters in the model, the LIML estimator does not possess moments of order greater than or equal to one.

91 citations


Network Information
Related Topics (5)

Estimator: 97.3K papers, 2.6M citations (91% related)
Deep learning: 79.8K papers, 2.1M citations (84% related)
Support vector machine: 73.6K papers, 1.7M citations (84% related)
Cluster analysis: 146.5K papers, 2.9M citations (84% related)
Artificial neural network: 207K papers, 4.5M citations (82% related)
Performance Metrics

No. of papers in the topic in previous years:

Year  Papers
2023  114
2022  245
2021  438
2020  410
2019  484
2018  519