Topic

Expectation–maximization algorithm

About: Expectation–maximization algorithm is a research topic. Over its lifetime, 11,823 publications have been published on this topic, receiving 528,693 citations. The topic is also known as: EM algorithm & Expectation Maximization.


Papers
Posted Content
TL;DR: In this paper, the authors show that stochastic gradient descent with a constant learning rate (constant SGD) can serve as an approximate Bayesian posterior inference algorithm: the tuning parameters of constant SGD are adjusted so that its stationary distribution best matches a posterior, minimizing the Kullback-Leibler divergence between the two distributions.
Abstract: Stochastic Gradient Descent with a constant learning rate (constant SGD) simulates a Markov chain with a stationary distribution. With this perspective, we derive several new results. (1) We show that constant SGD can be used as an approximate Bayesian posterior inference algorithm. Specifically, we show how to adjust the tuning parameters of constant SGD to best match the stationary distribution to a posterior, minimizing the Kullback-Leibler divergence between these two distributions. (2) We demonstrate that constant SGD gives rise to a new variational EM algorithm that optimizes hyperparameters in complex probabilistic models. (3) We also propose SGD with momentum for sampling and show how to adjust the damping coefficient accordingly. (4) We analyze MCMC algorithms. For Langevin Dynamics and Stochastic Gradient Fisher Scoring, we quantify the approximation errors due to finite learning rates. Finally (5), we use the stochastic process perspective to give a short proof of why Polyak averaging is optimal. Based on this idea, we propose a scalable approximate MCMC algorithm, the Averaged Stochastic Gradient Sampler.
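The paper's core recipe is easy to sketch: run SGD with a fixed learning rate until the iterates reach their stationary distribution, then treat the post-burn-in iterates as approximate posterior samples. Below is a minimal illustration on Bayesian linear regression; the model, step size, batch size, and burn-in length are assumptions chosen for demonstration, not the paper's tuned settings.

```python
# Hedged sketch (not the paper's implementation): constant-learning-rate SGD
# iterates, after burn-in, treated as approximate posterior samples.
# Model: Bayesian linear regression with a Gaussian prior.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: y = X w* + noise
N, D = 1000, 3
X = rng.normal(size=(N, D))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + rng.normal(scale=0.5, size=N)

prior_prec = 1.0   # Gaussian prior precision on w (assumed)
noise_prec = 4.0   # observation precision, 1 / 0.5**2
lr = 1e-3          # constant learning rate: controls the stationary spread
batch = 32

def grad_neg_log_post(w, idx):
    """Stochastic gradient of the negative log posterior on a minibatch."""
    Xb, yb = X[idx], y[idx]
    # Rescale the likelihood term by N / batch so its expectation is the full gradient.
    lik = noise_prec * (N / batch) * Xb.T @ (Xb @ w - yb)
    return lik + prior_prec * w

w = np.zeros(D)
samples = []
for t in range(20000):
    idx = rng.integers(0, N, size=batch)
    w = w - lr * grad_neg_log_post(w, idx)
    if t > 5000:               # discard burn-in, keep iterates as samples
        samples.append(w.copy())

samples = np.array(samples)
print("posterior mean estimate:", samples.mean(axis=0))
print("posterior std estimate: ", samples.std(axis=0))
```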

272 citations

Journal ArticleDOI
TL;DR: In this paper, the numerical technique of the maximum likelihood method for estimating the parameters of the Gamma distribution is examined, and the bias of the estimates is investigated numerically; the empirical results indicate that the bias of both parameter estimates produced by the maximum likelihood method is positive.
Abstract: The numerical technique of the maximum likelihood method to estimate the parameters of Gamma distribution is examined. A convenient table is obtained to facilitate the maximum likelihood estimation of the parameters and the estimates of the variance-covariance matrix. The bias of the estimates is investigated numerically. The empirical result indicates that the bias of both parameter estimates produced by the maximum likelihood method is positive.
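For reference, the standard profile-likelihood route to the Gamma MLE (a common textbook procedure, not the paper's tabulated method) solves log(k) − ψ(k) = log x̄ − mean(log x) for the shape k by Newton iteration, then sets the scale θ = x̄ / k. A hedged sketch:

```python
# Illustrative sketch: maximum likelihood estimation of Gamma(shape k, scale theta)
# via Newton iteration on the shape, using the profile-likelihood equation
#   log(k) - digamma(k) = log(mean(x)) - mean(log(x)),  then theta = mean(x) / k.
import numpy as np
from scipy.special import digamma, polygamma

def gamma_mle(x, iters=50):
    s = np.log(np.mean(x)) - np.mean(np.log(x))
    k = (3 - s + np.sqrt((s - 3) ** 2 + 24 * s)) / (12 * s)  # standard initializer
    for _ in range(iters):
        # Newton step on f(k) = log(k) - digamma(k) - s
        f = np.log(k) - digamma(k) - s
        fp = 1.0 / k - polygamma(1, k)   # polygamma(1, .) is the trigamma function
        k -= f / fp
    theta = np.mean(x) / k
    return k, theta

rng = np.random.default_rng(1)
x = rng.gamma(shape=2.5, scale=1.3, size=5000)
print(gamma_mle(x))   # close to (2.5, 1.3); the bias is positive in small samples
```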

271 citations

Journal ArticleDOI
TL;DR: An online (recursive) algorithm is proposed that estimates the parameters of the mixture and simultaneously selects the number of components, searching for the maximum a posteriori (MAP) solution and discarding irrelevant components.
Abstract: There are two open problems when finite mixture densities are used to model multivariate data: the selection of the number of components and the initialization. In this paper, we propose an online (recursive) algorithm that estimates the parameters of the mixture and that simultaneously selects the number of components. The new algorithm starts with a large number of randomly initialized components. A prior is used as a bias for maximally structured models. A stochastic approximation recursive learning algorithm is proposed to search for the maximum a posteriori (MAP) solution and to discard the irrelevant components.
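A toy version of the idea, under stated assumptions: start an online EM with deliberately too many Gaussian components and prune any component whose mixing weight collapses. The recursions and the pruning threshold below are illustrative stand-ins, not the paper's prior-driven MAP updates.

```python
# Illustrative sketch: online (stochastic-approximation) EM for a 1-D Gaussian
# mixture, starting with many random components and pruning collapsed ones.
import numpy as np

rng = np.random.default_rng(2)

# Data stream: mixture of two Gaussians
def stream(n):
    for _ in range(n):
        yield rng.normal(-2.0, 0.5) if rng.random() < 0.4 else rng.normal(3.0, 1.0)

K = 10                                    # deliberately too many components
w = np.full(K, 1.0 / K)
mu = rng.uniform(-5, 5, size=K)
var = np.full(K, 1.0)

for t, x in enumerate(stream(20000), start=1):
    # E-step: responsibilities under the current parameters
    p = w * np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
    r = p / (p.sum() + 1e-300)
    # Stochastic-approximation M-step with a decaying step size
    eta = 1.0 / (t ** 0.6 + 10)
    w += eta * (r - w)
    mu += eta * r * (x - mu) / w
    var += eta * r * ((x - mu) ** 2 - var) / w
    var = np.maximum(var, 1e-3)
    # Prune components whose weight has collapsed (a crude stand-in for the
    # paper's prior-driven annihilation of irrelevant components)
    keep = w > 1e-2
    if keep.sum() < len(w):
        w, mu, var = w[keep], mu[keep], var[keep]
        w /= w.sum()

print("surviving components:", len(w))
for wi, mi, vi in sorted(zip(w, mu, var), key=lambda z: -z[0]):
    print(f"w={wi:.2f}  mu={mi:+.2f}  sd={np.sqrt(vi):.2f}")
```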

269 citations

Book
30 Mar 1993
TL;DR: A book covering observed-data techniques (normal approximation), the EM algorithm, data augmentation, and the Gibbs sampler.
Abstract: Observed-data techniques: normal approximation. Observed-data techniques. The EM algorithm. Data augmentation. The Gibbs sampler.
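Since the EM algorithm is the subject of both this book and the topic page, a minimal worked instance may help: batch EM for a two-component univariate Gaussian mixture. The data and initial values are invented for illustration.

```python
# Minimal sketch of the EM algorithm itself: batch EM for a two-component
# 1-D Gaussian mixture. All names and data are illustrative.
import numpy as np

rng = np.random.default_rng(3)
x = np.concatenate([rng.normal(0, 1, 300), rng.normal(5, 1.5, 700)])

# Initial guesses
w, mu, sd = np.array([0.5, 0.5]), np.array([1.0, 4.0]), np.array([1.0, 1.0])

for _ in range(100):
    # E-step: posterior responsibility of each component for each point
    dens = w * np.exp(-0.5 * ((x[:, None] - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))
    r = dens / dens.sum(axis=1, keepdims=True)
    # M-step: weighted ML updates; each iteration cannot decrease the likelihood
    nk = r.sum(axis=0)
    w = nk / len(x)
    mu = (r * x[:, None]).sum(axis=0) / nk
    sd = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)

print("weights:", w, "means:", mu, "sds:", sd)
```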

267 citations

Journal ArticleDOI
TL;DR: The extension to multivariate analyses, allowing for missing records, is described, a numerical example is given and simplifications for specific models are discussed.
Abstract: Summary — Restricted maximum likelihood estimates of variance and covariance components can be obtained by direct maximization of the associated likelihood using standard, derivative-free optimization procedures. In general, this requires a multi-dimensional search and numerous evaluations of the (log) likelihood function. Use of this approach for analyses under an animal model has been described for the univariate case. This model includes animals' additive genetic merit as a random effect and accounts for all relationships between animals. In addition, other random factors such as common environmental or maternal genetic effects can be fitted. This paper describes the extension to multivariate analyses, allowing for missing records. A numerical example is given and simplifications for specific models are discussed.
Keywords: variance component / restricted maximum likelihood / animal model / additional random effect / derivative-free approach / multivariate analysis
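The derivative-free idea transfers directly to simpler settings. The sketch below, an assumption-laden toy rather than the paper's animal-model implementation, estimates the two variance components of a one-way random-effects model by handing the REML log-likelihood to a Nelder-Mead search, so no derivatives are required:

```python
# Hedged sketch of derivative-free REML: variance components of a one-way
# random-effects model estimated by direct Nelder-Mead maximization of the
# REML log-likelihood. Model and dimensions are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)

# y = X b + Z u + e,  u ~ N(0, s2u I),  e ~ N(0, s2e I)
groups, per = 20, 10
n = groups * per
X = np.ones((n, 1))                            # intercept only
Z = np.kron(np.eye(groups), np.ones((per, 1)))
u = rng.normal(0, np.sqrt(2.0), groups)        # true s2u = 2.0
y = X @ np.array([1.5]) + Z @ u + rng.normal(0, 1.0, n)  # true s2e = 1.0

def neg_reml(log_s2):
    s2u, s2e = np.exp(log_s2)                  # log scale keeps variances positive
    V = s2u * (Z @ Z.T) + s2e * np.eye(n)
    Vi = np.linalg.inv(V)
    XtViX = X.T @ Vi @ X
    beta = np.linalg.solve(XtViX, X.T @ Vi @ y)
    resid = y - X @ beta
    _, logdetV = np.linalg.slogdet(V)
    _, logdetXtViX = np.linalg.slogdet(XtViX)
    return 0.5 * (logdetV + logdetXtViX + resid @ Vi @ resid)

res = minimize(neg_reml, x0=np.log([1.0, 1.0]), method="Nelder-Mead")
print("REML estimates (s2u, s2e):", np.exp(res.x))
```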

267 citations


Network Information
Related Topics (5)
- Estimator: 97.3K papers, 2.6M citations (91% related)
- Deep learning: 79.8K papers, 2.1M citations (84% related)
- Support vector machine: 73.6K papers, 1.7M citations (84% related)
- Cluster analysis: 146.5K papers, 2.9M citations (84% related)
- Artificial neural network: 207K papers, 4.5M citations (82% related)
Performance
Metrics
No. of papers in the topic in previous years:

Year    Papers
2023    114
2022    245
2021    438
2020    410
2019    484
2018    519