scispace - formally typeset

Expectation–maximization algorithm

About: Expectation–maximization algorithm is a research topic. Over the lifetime, 11823 publications have been published within this topic receiving 528693 citations. The topic is also known as: EM algorithm & Expectation Maximization.


Papers
Book
01 Jul 2004
TL;DR: The Item Characteristic Curve: Dichotomous Response; Estimating the Parameters of an Item Characteristic Curve; Maximum Likelihood Estimation of Examinee Ability; and Maximum Likelihood Procedures for Estimating Both Ability and Item Parameters, as discussed by the authors.
Abstract: The Item Characteristic Curve: Dichotomous Response; Estimating the Parameters of an Item Characteristic Curve; Maximum Likelihood Estimation of Examinee Ability; Maximum Likelihood Procedures for Estimating Both Ability and Item Parameters; The Rasch Model; Marginal Maximum Likelihood Estimation and an EM Algorithm; Bayesian Parameter Estimation Procedures; The Graded Item Response; Nominally Scored Items; Markov Chain Monte Carlo Methods; Parameter Estimation with Multiple Groups; Parameter Estimation for a Test with Mixed Item Types

845 citations

Journal ArticleDOI
TL;DR: In this paper, a new computational method for the maximum likelihood solution in factor analysis is presented, which takes into account the fact that the likelihood function may not have a maximum at a point of the parameter space where all unique variances are positive.
Abstract: A new computational method for the maximum likelihood solution in factor analysis is presented. This method takes into account the fact that the likelihood function may not have a maximum at a point of the parameter space where all unique variances are positive. Instead, the maximum may be attained on the boundary of the parameter space where one or more of the unique variances are zero. It is demonstrated that such improper (Heywood) solutions occur more often than is usually expected. A general procedure to deal with such improper solutions is proposed. The proposed methods are illustrated using two small sets of empirical data, and results obtained from the analyses of many other sets of data are reported. These analyses verify that the new computational method converges rapidly and that the maximum likelihood solution can be determined very accurately. A by-product obtained by the method is a large sample estimate of the variance-covariance matrix of the estimated unique variances. This can be used to set up approximate confidence intervals for communalities and unique variances.

837 citations

Journal ArticleDOI
TL;DR: In this article, the authors consider maximum likelihood estimation of the parameters of a probability density which is zero for x < θ; for α > 2, the information matrix is finite and the classical asymptotic properties continue to hold.
Abstract: We consider maximum likelihood estimation of the parameters of a probability density which is zero for x < θ. For α > 2, the information matrix is finite and the classical asymptotic properties continue to hold. For α = 2 the maximum likelihood estimators are asymptotically efficient and normally distributed, but with a different rate of convergence. For 1 < α < 2, the maximum likelihood estimators exist in general, but are not asymptotically normal, while the question of asymptotic efficiency is still unsolved. For α < 1, the maximum likelihood estimators may not exist at all, but alternatives are proposed. All these results are already known for the case of a single unknown location parameter θ, but are here extended to the case in which there are additional unknown parameters. The paper concludes with a discussion of the applications in extreme value theory.

826 citations

Journal ArticleDOI
TL;DR: A computationally efficient algorithm for parameter estimation of superimposed signals based on the two-step iterative EM (estimate-and-maximize, with an E step and an M step) algorithm is developed.
Abstract: A computationally efficient algorithm for parameter estimation of superimposed signals based on the two-step iterative EM (estimate-and-maximize, with an E step and an M step) algorithm is developed. The idea is to decompose the observed data into their signal components and then to estimate the parameters of each signal component separately. The algorithm iterates back and forth, using the current parameter estimates to decompose the observed data better and thus increase the likelihood of the next parameter estimates. The application of the algorithm to the multipath time delay and multiple-source location estimation problems is considered.

814 citations


Network Information
Related Topics (5)
Estimator
97.3K papers, 2.6M citations
91% related
Deep learning
79.8K papers, 2.1M citations
84% related
Support vector machine
73.6K papers, 1.7M citations
84% related
Cluster analysis
146.5K papers, 2.9M citations
84% related
Artificial neural network
207K papers, 4.5M citations
82% related
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2023    114
2022    245
2021    438
2020    410
2019    484
2018    519