
Expectation–maximization algorithm

About: Expectation–maximization algorithm is a research topic. Over its lifetime, 11,823 publications have been published within this topic, receiving 528,693 citations. The topic is also known as: EM algorithm & Expectation Maximization.


Papers
Journal ArticleDOI
TL;DR: In this paper, the problem of obtaining maximum likelihood estimates for the parameters involved in a stationary single-channel, Markovian queuing process is considered and a method of taking observations is presented which simplifies this problem to that of determining a root of a certain quadratic equation.
Abstract: The problem of obtaining maximum likelihood estimates for the parameters involved in a stationary single-channel, Markovian queuing process is considered. A method of taking observations is presented which simplifies this problem to that of determining a root of a certain quadratic equation. A useful and even simpler rational approximation is also studied.

130 citations
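For orientation, the sketch below shows maximum likelihood rate estimation for an M/M/1 queue under the simplifying assumption that interarrival and service times are fully observed; it is not the paper's special observation scheme that reduces estimation to the root of a quadratic, and the function and variable names are illustrative.

```python
# Illustrative sketch (not the paper's observation scheme): maximum likelihood
# estimation of the arrival rate (lam) and service rate (mu) of an M/M/1 queue
# when interarrival times and service times are fully observed.  For exponential
# data the MLE of a rate is simply (number of events) / (total observed time).

def mm1_mle(interarrival_times, service_times):
    """Return (lam_hat, mu_hat, rho_hat) for an M/M/1 queue."""
    lam_hat = len(interarrival_times) / sum(interarrival_times)  # arrival rate
    mu_hat = len(service_times) / sum(service_times)             # service rate
    rho_hat = lam_hat / mu_hat                                   # traffic intensity
    return lam_hat, mu_hat, rho_hat

# Example usage with made-up observations:
arrivals = [1.2, 0.7, 2.3, 0.9, 1.5]   # hypothetical interarrival times
services = [0.8, 0.6, 1.1, 0.7, 0.9]   # hypothetical service times
print(mm1_mle(arrivals, services))
```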

Journal ArticleDOI
TL;DR: Adopting the expectation-maximization (EM) algorithm for use in computing the maximum a posteriori (MAP) estimate corresponding to the model, it is found that the model permits remarkably simple, closed-form expressions for the EM update equations.
Abstract: This paper describes a statistical multiscale modeling and analysis framework for linear inverse problems involving Poisson data. The framework itself is founded upon a multiscale analysis associated with recursive partitioning of the underlying intensity, a corresponding multiscale factorization of the likelihood (induced by this analysis), and a choice of prior probability distribution made to match this factorization by modeling the "splits" in the underlying partition. The class of priors used here has the interesting feature that the "noninformative" member yields the traditional maximum-likelihood solution; other choices are made to reflect prior belief as to the smoothness of the unknown intensity. Adopting the expectation-maximization (EM) algorithm for use in computing the maximum a posteriori (MAP) estimate corresponding to our model, we find that our model permits remarkably simple, closed-form expressions for the EM update equations. The behavior of our EM algorithm is examined, and it is shown that convergence to the global MAP estimate can be guaranteed. Applications in emission computed tomography and astronomical energy spectral analysis demonstrate the potential of the new approach.

130 citations
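As a baseline for the approach above, here is a minimal sketch of the classic ML-EM (Shepp–Vardi / Richardson–Lucy) multiplicative update for Poisson linear inverse problems. It omits the paper's multiscale prior and closed-form MAP corrections, and the function names and toy data are illustrative.

```python
import numpy as np

# Minimal sketch of the standard ML-EM iteration for a Poisson linear inverse
# problem y ~ Poisson(A @ lam).  The paper above adds a multiscale prior and
# closed-form MAP updates; this shows only the maximum-likelihood baseline.

def ml_em_poisson(A, y, n_iter=100, eps=1e-12):
    """Multiplicative EM update: lam <- lam * (A.T @ (y / (A @ lam))) / (A.T @ 1)."""
    m, n = A.shape
    lam = np.ones(n)                      # nonnegative initial intensity
    sensitivity = A.sum(axis=0)           # column sums, i.e. A.T @ 1
    for _ in range(n_iter):
        forward = A @ lam + eps           # predicted mean counts
        lam *= (A.T @ (y / forward)) / (sensitivity + eps)
    return lam

# Tiny example: a 2-pixel "image" observed through a known 3x2 system matrix.
A = np.array([[0.8, 0.2], [0.5, 0.5], [0.1, 0.9]])
true_lam = np.array([10.0, 3.0])
y = np.random.poisson(A @ true_lam)
print(ml_em_poisson(A, y))
```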

Journal ArticleDOI
TL;DR: In this paper, a homogeneous spatial point pattern is regarded as one of the thermal equilibrium configurations whose points interact with each other through a certain pairwise potential, and the likelihood is defined by the Gibbs canonical ensemble.
Abstract: A homogeneous spatial point pattern is regarded as one of the thermal equilibrium configurations whose points interact with each other through a certain pairwise potential. Parameterizing the potential function, the likelihood is then defined by the Gibbs canonical ensemble. A Monte Carlo simulation method is reviewed to obtain equilibrium point patterns which correspond to a given potential function. An approximate log likelihood function for gas-like patterns is derived in order to compute the maximum likelihood estimates efficiently. Some parametric potential functions are suggested, and the Akaike Information Criterion is used for model selection. The feasibility of our procedure is demonstrated by some computer experiments. Using the procedure, some real data are investigated.

130 citations
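To make the Gibbs-ensemble setup concrete, the following sketch computes the pairwise-potential energy of a point pattern and performs one Metropolis "shift" move for simulating equilibrium configurations. The soft-core potential and the move parameters are assumptions for illustration only; they are not the paper's parametric families or its approximate likelihood.

```python
import math, random

# Illustrative sketch: total energy of a point pattern under an example pairwise
# potential, and a single Metropolis move for sampling equilibrium patterns on the
# unit torus.  The soft-core form phi(r) = (sigma / r)**4 is an assumed example.

def pair_energy(points, sigma):
    """Sum of pairwise potentials phi(|x_i - x_j|) over all point pairs."""
    total = 0.0
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            r = math.dist(points[i], points[j])
            total += (sigma / r) ** 4 if r > 0 else float("inf")
    return total

def metropolis_shift(points, sigma, step=0.05, size=1.0):
    """Propose shifting one point; accept with the Metropolis rule exp(-delta_energy)."""
    i = random.randrange(len(points))
    old = points[i]
    new = ((old[0] + random.uniform(-step, step)) % size,
           (old[1] + random.uniform(-step, step)) % size)
    delta = 0.0
    for j, p in enumerate(points):
        if j != i:
            delta += (sigma / math.dist(new, p)) ** 4 - (sigma / math.dist(old, p)) ** 4
    if delta <= 0 or random.random() < math.exp(-delta):
        points[i] = new
    return points
```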

Journal ArticleDOI
TL;DR: An online version of the expectation-maximization (EM) algorithm for hidden Markov models (HMMs) is presented, generalized to the case where the model parameters can change with time by introducing a discount factor into the recurrence relations.
Abstract: We present an online version of the expectation-maximization (EM) algorithm for hidden Markov models (HMMs). The sufficient statistics required for parameter estimation are computed recursively in time, that is, in an online way instead of using the batch forward-backward procedure. This computational scheme is generalized to the case where the model parameters can change with time by introducing a discount factor into the recurrence relations. The resulting algorithm is equivalent to the batch EM algorithm for an appropriate discount factor and scheduling of parameter updates. On the other hand, the online algorithm is able to deal with dynamic environments, i.e., when the statistics of the observed data are changing with time. The implications of the online algorithm for probabilistic modeling in neuroscience are briefly discussed.

130 citations
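A simplified sketch of the discounting idea follows: expected sufficient statistics are accumulated recursively with a forgetting factor and then renormalised into new parameter estimates. It uses only the filtered state posterior and is not the paper's exact recursions, which reproduce the batch forward-backward statistics; all names are illustrative.

```python
import numpy as np

# Simplified sketch of an online, discounted EM update for a discrete-output HMM.
# A forgetting factor gamma down-weights old observations so the estimates can
# track parameters that drift over time.

def online_em_step(obs, alpha, A, B, S_trans, S_emit, gamma=0.01):
    """One online step: update the filter alpha, the discounted statistics, and A, B."""
    # Filtering update (predict with A, then correct with the new observation).
    pred = alpha @ A                                  # predicted state probabilities
    joint = np.outer(alpha, B[:, obs]) * A            # approx. transition responsibilities
    alpha = pred * B[:, obs]
    alpha /= alpha.sum()
    # Discounted accumulation of expected sufficient statistics.
    S_trans = (1 - gamma) * S_trans + gamma * joint / joint.sum()
    emit = np.zeros_like(S_emit)
    emit[:, obs] = alpha
    S_emit = (1 - gamma) * S_emit + gamma * emit
    # M-step: renormalise the statistics into stochastic matrices.
    A = S_trans / S_trans.sum(axis=1, keepdims=True)
    B = S_emit / S_emit.sum(axis=1, keepdims=True)
    return alpha, A, B, S_trans, S_emit
```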

Journal ArticleDOI
TL;DR: A method is proposed for estimating parameters of general parametric regression models with an arbitrary number of missing covariates, by adapting a Monte Carlo version of the EM algorithm and modeling the marginal distribution of the covariates as a product of one-dimensional conditional distributions.
Abstract: We propose a method for estimating parameters for general parametric regression models with an arbitrary number of missing covariates. We allow any pattern of missing data and assume that the missing data mechanism is ignorable throughout. When the missing covariates are categorical, a useful technique for obtaining parameter estimates is the EM algorithm by the method of weights proposed in Ibrahim (1990, Journal of the American Statistical Association 85, 765–769). We extend this method to continuous or mixed categorical and continuous covariates, and to arbitrary parametric regression models, by adapting a Monte Carlo version of the EM algorithm as discussed by Wei and Tanner (1990, Journal of the American Statistical Association 85, 699–704). In addition, we discuss the Gibbs sampler for sampling from the conditional distribution of the missing covariates given the observed data and show that the appropriate complete conditionals are log-concave. The log-concavity property of the conditional distributions will facilitate a straightforward implementation of the Gibbs sampler via the adaptive rejection algorithm of Gilks and Wild (1992, Applied Statistics 41, 337–348). We assume the model for the response given the covariates is an arbitrary parametric regression model, such as a generalized linear model, a parametric survival model, or a nonlinear model. We model the marginal distribution of the covariates as a product of one-dimensional conditional distributions. This allows us a great deal of flexibility in modeling the distribution of the covariates and reduces the number of nuisance parameters that are introduced in the E-step. We present examples involving both simulated and real data.

130 citations
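Below is a minimal Monte Carlo EM sketch in the spirit of the method above, specialised to linear regression with a single Gaussian covariate missing at random. The E-step here draws from the exact Gaussian conditional rather than the paper's Gibbs sampler with adaptive rejection sampling, and all names and defaults are assumptions for illustration.

```python
import numpy as np

# Minimal Monte Carlo EM sketch (an illustration under strong assumptions, not the
# paper's general method): linear regression y = b0 + b1*x + noise with a Gaussian
# covariate x that is missing (NaN) for some subjects.  E-step: draw the missing x
# from its Gaussian conditional given y; M-step: maximise the Monte Carlo average
# of the complete-data log-likelihood.

def mcem_missing_covariate(y, x, observed, n_iter=50, n_draws=20, rng=None):
    rng = rng or np.random.default_rng(0)
    b0, b1, s2, mu_x, s2_x = 0.0, 0.0, np.var(y), np.nanmean(x), np.nanvar(x)
    miss = ~observed
    for _ in range(n_iter):
        # E-step: Monte Carlo draws of the missing covariates given y and current params.
        X_draws = np.tile(x, (n_draws, 1))
        prec = 1.0 / s2_x + b1 ** 2 / s2
        mean = (mu_x / s2_x + b1 * (y - b0) / s2) / prec
        X_draws[:, miss] = mean[miss] + rng.standard_normal((n_draws, miss.sum())) / np.sqrt(prec)
        # M-step: average the complete-data estimates over the imputations.
        Xc, Yc = X_draws.ravel(), np.tile(y, n_draws)
        b1 = np.cov(Xc, Yc, bias=True)[0, 1] / np.var(Xc)
        b0 = Yc.mean() - b1 * Xc.mean()
        s2 = np.mean((Yc - b0 - b1 * Xc) ** 2)
        mu_x, s2_x = Xc.mean(), np.var(Xc)
    return b0, b1, s2, mu_x, s2_x
```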


Network Information
Related Topics (5)
Estimator: 97.3K papers, 2.6M citations, 91% related
Deep learning: 79.8K papers, 2.1M citations, 84% related
Support vector machine: 73.6K papers, 1.7M citations, 84% related
Cluster analysis: 146.5K papers, 2.9M citations, 84% related
Artificial neural network: 207K papers, 4.5M citations, 82% related
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2023    114
2022    245
2021    438
2020    410
2019    484
2018    519