Topic

Expectation–maximization algorithm

About: Expectation–maximization algorithm is a research topic. Over its lifetime, 11,823 publications have been published on this topic, receiving 528,693 citations. The topic is also known as: EM algorithm & Expectation Maximization.
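For readers new to the topic, the EM idea can be made concrete with a toy example that is not drawn from any of the papers listed below: fitting a two-component Gaussian mixture, where the E-step computes component responsibilities and the M-step re-estimates weights, means, and standard deviations from them. The following is only a minimal illustrative sketch in Python/NumPy.

```python
import numpy as np

def em_gaussian_mixture(x, n_iter=100, tol=1e-8):
    """Minimal EM for a two-component 1-D Gaussian mixture (illustrative sketch)."""
    # Crude initialisation from the data quartiles.
    mu = np.percentile(x, [25, 75]).astype(float)
    sigma = np.array([x.std(), x.std()])
    pi = np.array([0.5, 0.5])
    prev_ll = -np.inf
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each point.
        dens = (pi / (np.sqrt(2 * np.pi) * sigma)
                * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2))
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: weighted ML updates of the mixture parameters.
        nk = resp.sum(axis=0)
        pi = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
        # Stop when the observed-data log-likelihood no longer improves.
        ll = np.log(dens.sum(axis=1)).sum()
        if ll - prev_ll < tol:
            break
        prev_ll = ll
    return pi, mu, sigma

# Example: recover two well-separated components from simulated data.
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 0.5, 700)])
print(em_gaussian_mixture(x))
```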


Papers
Journal ArticleDOI
TL;DR: An EM algorithm for maximum likelihood estimation in generalized linear models with overdispersion is presented; it is initially derived as a form of Gaussian quadrature assuming a normal mixing distribution, but a slight variation handles a completely unknown mixing distribution, giving a straightforward method for fully non-parametric ML estimation of that distribution.
Abstract: This paper presents an EM algorithm for maximum likelihood estimation in generalized linear models with overdispersion. The algorithm is initially derived as a form of Gaussian quadrature assuming a normal mixing distribution, but with only slight variation it can be used for a completely unknown mixing distribution, giving a straightforward method for the fully non-parametric ML estimation of this distribution. This is of value because the ML estimates of the GLM parameters may be sensitive to the specification of a parametric form for the mixing distribution. A listing of a GLIM4 algorithm for fitting the overdispersed binomial logit model is given in an appendix.
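The central idea, that the marginal likelihood under a normal mixing distribution can be handled by Gaussian quadrature, can be sketched as follows. This is not the GLIM4 listing described in the appendix; it is a hypothetical illustration that approximates the marginal log-likelihood of an overdispersed binomial logit (a logit model with a normal random intercept) by Gauss-Hermite quadrature and maximizes it numerically on made-up simulated data.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

def neg_marginal_loglik(params, y, n, x, nodes, weights):
    """Marginal log-likelihood of an overdispersed binomial logit,
    approximated by Gauss-Hermite quadrature over a normal random intercept."""
    beta0, beta1, log_sigma = params
    sigma = np.exp(log_sigma)
    # Quadrature points for u ~ N(0, sigma^2): u = sqrt(2) * sigma * node.
    u = np.sqrt(2.0) * sigma * nodes                      # (Q,)
    eta = beta0 + beta1 * x[:, None] + u[None, :]         # (N, Q)
    p = expit(eta)
    # Binomial kernel (combinatorial factor dropped) at each quadrature
    # point, then the weighted quadrature average over the random intercept.
    lik = p ** y[:, None] * (1 - p) ** (n - y)[:, None]   # (N, Q)
    marg = (weights / np.sqrt(np.pi)) @ lik.T              # (N,)
    return -np.log(marg).sum()

# Simulated overdispersed binomial data (hypothetical example).
rng = np.random.default_rng(1)
x = rng.normal(size=200)
u_true = rng.normal(scale=1.0, size=200)
y = rng.binomial(10, expit(-0.5 + 1.2 * x + u_true))
n = np.full(200, 10)

nodes, weights = np.polynomial.hermite.hermgauss(20)
fit = minimize(neg_marginal_loglik, x0=[0.0, 0.0, 0.0],
               args=(y, n, x, nodes, weights), method="BFGS")
print(fit.x)  # estimated beta0, beta1, log(sigma)
```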

202 citations

Journal ArticleDOI
TL;DR: A new generalized expectation-maximization (GEM) algorithm is proposed in which the missing variables are the scale factors of the GSM densities and the maximization step of the underlying expectation-maximization algorithm is replaced with a linear stationary second-order iterative method.
Abstract: Image deconvolution is formulated in the wavelet domain under the Bayesian framework. The well-known sparsity of the wavelet coefficients of real-world images is modeled by heavy-tailed priors belonging to the Gaussian scale mixture (GSM) class; i.e., priors given by a linear (finite or infinite) combination of Gaussian densities. This class includes, among others, the generalized Gaussian, the Jeffreys, and the Gaussian mixture priors. Necessary and sufficient conditions are stated under which the prior induced by a thresholding/shrinking denoising rule is a GSM. This result is then used to show that the prior induced by the "nonnegative garrote" thresholding/shrinking rule, herein termed the garrote prior, is a GSM. To compute the maximum a posteriori estimate, we propose a new generalized expectation maximization (GEM) algorithm, where the missing variables are the scale factors of the GSM densities. The maximization step of the underlying expectation maximization algorithm is replaced with a linear stationary second-order iterative method. The result is a GEM algorithm of O(N log N) computational complexity. In a series of benchmark tests, the proposed approach outperforms or performs similarly to state-of-the-art methods, demanding comparable (in some cases, much less) computational complexity.
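To illustrate the GEM structure described above, here is a deliberately simplified sketch: MAP estimation for a linear observation model under a Laplace prior treated as a Gaussian scale mixture, where the E-step computes expected inverse scale factors and the M-step solves a weighted ridge-type linear system. The wavelet-domain formulation, the garrote prior, and the paper's second-order iterative M-step are not reproduced; the direct solve below merely stands in for that step, and all names and data are hypothetical.

```python
import numpy as np

def gem_map_laplace(y, H, sigma2, lam, n_iter=50, eps=1e-8):
    """GEM-style MAP estimation for y = Hx + noise under a Laplace prior,
    viewed as a Gaussian scale mixture: the missing variables are the
    per-coefficient scale factors (simplified sketch, direct-solve M-step)."""
    x = H.T @ y  # crude initialisation
    HtH = H.T @ H
    Hty = H.T @ y
    for _ in range(n_iter):
        # E-step: expected inverse scales given the current estimate;
        # for the Laplace/GSM prior this is lam / |x_i|.
        w = lam / (np.abs(x) + eps)
        # M-step: the complete-data posterior is Gaussian, so the update
        # solves a ridge-like linear system with per-coefficient weights.
        x = np.linalg.solve(HtH / sigma2 + np.diag(w), Hty / sigma2)
    return x

# Tiny example: a sparse signal observed through a random blur-like matrix.
rng = np.random.default_rng(2)
H = rng.normal(size=(80, 40)) / np.sqrt(80)
x_true = np.zeros(40)
x_true[[3, 17, 30]] = [2.0, -1.5, 1.0]
y = H @ x_true + 0.05 * rng.normal(size=80)
print(np.round(gem_map_laplace(y, H, sigma2=0.05 ** 2, lam=5.0), 2))
```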

201 citations

Journal ArticleDOI
TL;DR: In this article, a unified approach to selecting a bandwidth and constructing confidence intervals in local maximum likelihood estimation is presented and then applied to least squares nonparametric regression and to nonparametric logistic regression.
Abstract: Local maximum likelihood estimation is a nonparametric counterpart of the widely used parametric maximum likelihood technique. It extends the scope of the parametric maximum likelihood method to a much wider class of parametric spaces. Associated with this nonparametric estimation scheme is the issue of bandwidth selection and bias and variance assessment. This paper provides a unified approach to selecting a bandwidth and constructing confidence intervals in local maximum likelihood estimation. The approach is then applied to least squares nonparametric regression and to nonparametric logistic regression. Our experiences in these two settings show that the general idea outlined here is powerful and encouraging.
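As a rough illustration of local maximum likelihood (not the paper's bandwidth-selection or interval-construction procedure), the sketch below fits a kernel-weighted local-linear logistic model at a target point. The bandwidth h is simply fixed here, whereas choosing it and assessing bias and variance is precisely what the paper addresses; names and data are illustrative.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

def local_logistic_fit(point, x, y, h):
    """Local maximum likelihood for logistic regression at `point`:
    a local-linear logit with Gaussian kernel weights centred at `point`."""
    w = np.exp(-0.5 * ((x - point) / h) ** 2)

    def neg_local_loglik(theta):
        a, b = theta
        eta = a + b * (x - point)
        # Kernel-weighted Bernoulli log-likelihood.
        return -np.sum(w * (y * eta - np.log1p(np.exp(eta))))

    fit = minimize(neg_local_loglik, x0=np.zeros(2), method="BFGS")
    return expit(fit.x[0])  # estimated P(Y = 1 | X = point)

# Illustrative data; the bandwidth is fixed rather than selected.
rng = np.random.default_rng(3)
x = rng.uniform(-3, 3, 500)
y = rng.binomial(1, expit(2 * np.sin(x)))
print([round(local_logistic_fit(g, x, y, h=0.5), 2)
       for g in np.linspace(-2.5, 2.5, 6)])
```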

201 citations

Journal ArticleDOI
TL;DR: In this article, an account is given of the method of extended maximum likelihood, which differs from the standard method of maximum likelihood in that the normalisation of the probability distribution function is allowed to vary.
Abstract: An account is given of the method of extended maximum likelihood. This differs from the standard method of maximum likelihood in that the normalisation of the probability distribution function is allowed to vary. It is thus applicable to problems in which the number of samples obtained is itself a relevant measurement. If the function is such that its size and shape can be independently varied, then the estimates given by the extended method are identical to the standard maximum likelihood estimators, though the errors require care in interpretation. If the function does not have this property, then extended maximum likelihood can give better results.
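The extended-likelihood idea can be written down compactly: with the normalisation free, the negative log-likelihood becomes nu - sum_i ln f(x_i), where f integrates to nu rather than to 1, so the observed event count informs the fit. The sketch below applies this to an exponential decay on simulated data; it is an illustrative example, not taken from the paper.

```python
import numpy as np
from scipy.optimize import minimize

def neg_extended_loglik(params, t):
    """Extended ML: the expected number of events nu is a free parameter,
    so -ln L = nu - sum_i ln f(t_i), with f integrating to nu (not to 1)."""
    nu, tau = params
    if nu <= 0 or tau <= 0:
        return np.inf
    f = (nu / tau) * np.exp(-t / tau)   # density scaled to integrate to nu
    return nu - np.sum(np.log(f))

# Example: exponential decay times where the observed count itself carries
# information about the rate (hypothetical illustration).
rng = np.random.default_rng(4)
t = rng.exponential(scale=2.0, size=rng.poisson(500))
fit = minimize(neg_extended_loglik, x0=[400.0, 1.0], args=(t,),
               method="Nelder-Mead")
print(fit.x)  # estimated (nu, tau); nu should be close to len(t)
```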

201 citations

Journal ArticleDOI
TL;DR: A hybrid algorithm for nonparametric maximum likelihood estimation from censored data when the log-likelihood is concave is presented; it uses a composite algorithmic mapping combining the expectation-maximization (EM) algorithm and the (modified) iterative convex minorant (ICM) algorithm.
Abstract: We present a hybrid algorithm for nonparametric maximum likelihood estimation from censored data when the log-likelihood is concave. The hybrid algorithm uses a composite algorithmic mapping combining the expectation-maximization (EM) algorithm and the (modified) iterative convex minorant (ICM) algorithm. Global convergence of the hybrid algorithm is proven; the iterates generated by the hybrid algorithm are shown to converge to the nonparametric maximum likelihood estimator (NPMLE) unambiguously. Numerical simulations demonstrate that the hybrid algorithm converges more rapidly than either the EM algorithm or the naive ICM algorithm alone for doubly censored data. The speed of the hybrid algorithm makes it possible to accompany the NPMLE with bootstrap confidence bands.
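As a partial illustration, the sketch below implements only the EM (self-consistency) half of such a scheme, for the NPMLE of an event-time distribution from interval-censored data; the (modified) ICM step and the hybrid mapping of the paper are not shown, and the doubly censored case handled in the paper requires a different E-step. Data and names are hypothetical.

```python
import numpy as np

def em_npmle_interval_censored(L, R, n_iter=500, tol=1e-10):
    """Self-consistency (EM) iteration for the NPMLE of an event-time
    distribution from interval-censored observations [L_i, R_i].
    Only the EM half of a hybrid scheme; the ICM step is not shown."""
    # Candidate support points: all observed interval endpoints.
    support = np.unique(np.concatenate([L, R]))
    # alpha[i, j] = 1 if support point j lies inside observation interval i.
    alpha = (support[None, :] >= L[:, None]) & (support[None, :] <= R[:, None])
    p = np.full(len(support), 1.0 / len(support))
    for _ in range(n_iter):
        # E-step: posterior mass each observation assigns to each support point;
        # M-step: average those posteriors to obtain the updated weights.
        num = alpha * p[None, :]
        post = num / num.sum(axis=1, keepdims=True)
        p_new = post.mean(axis=0)
        if np.max(np.abs(p_new - p)) < tol:
            p = p_new
            break
        p = p_new
    return support, p

# Example with artificial interval-censored data.
rng = np.random.default_rng(5)
event = rng.weibull(1.5, 200) * 3
L = np.floor(event)          # last inspection before the event
R = L + 1.0                  # first inspection after the event
support, p = em_npmle_interval_censored(L, R)
print(np.round(p[p > 1e-4], 3))
```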

199 citations


Network Information
Related Topics (5)
Estimator: 97.3K papers, 2.6M citations (91% related)
Deep learning: 79.8K papers, 2.1M citations (84% related)
Support vector machine: 73.6K papers, 1.7M citations (84% related)
Cluster analysis: 146.5K papers, 2.9M citations (84% related)
Artificial neural network: 207K papers, 4.5M citations (82% related)
Performance
Metrics
No. of papers in the topic in previous years
Year: Papers
2023: 114
2022: 245
2021: 438
2020: 410
2019: 484
2018: 519