On the global and componentwise rates of convergence of the EM algorithm
Xiao-Li Meng, Donald B. Rubin
TLDR
This article presents a general description of how and when the componentwise rates of convergence of the EM algorithm differ, as well as their relationships with the global rate, and provides an example, a standard contaminated normal model, to show that such phenomena are not necessarily pathological but can occur in useful statistical models.
About:
This article was published in Linear Algebra and its Applications on 1994-03-01 and is currently open access. It has received 87 citations to date. The article focuses on the topics: rate of convergence & expectation–maximization (EM) algorithm.
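The componentwise-versus-global distinction can be seen numerically. Below is a minimal sketch, not the paper's exact setup: EM for a contaminated normal model with known contamination fraction and variance inflation, where the componentwise rates are estimated empirically as ratios of successive errors in each coordinate. All parameter values, data sizes, and starting points are illustrative assumptions.

```python
import numpy as np

# Minimal sketch (illustrative, not the paper's exact example): EM for a
# contaminated normal model y ~ (1-eps)*N(mu, s2) + eps*N(mu, lam*s2),
# with eps and lam assumed known; estimate mu and s2.
rng = np.random.default_rng(0)
eps, lam = 0.1, 9.0
n = 2000
contaminated = rng.random(n) < eps
y = rng.normal(0.0, np.where(contaminated, np.sqrt(lam), 1.0))

def em_step(mu, s2):
    # E-step: posterior probability that each point came from the
    # contaminating (inflated-variance) component
    p_good = (1 - eps) * np.exp(-(y - mu) ** 2 / (2 * s2)) / np.sqrt(s2)
    p_bad = eps * np.exp(-(y - mu) ** 2 / (2 * lam * s2)) / np.sqrt(lam * s2)
    w = p_bad / (p_good + p_bad)
    # M-step: weighted updates; a contaminated point carries relative
    # precision 1/lam in the complete-data likelihood
    prec = (1 - w) + w / lam
    mu_new = np.sum(prec * y) / np.sum(prec)
    s2_new = np.mean(prec * (y - mu_new) ** 2)
    return mu_new, s2_new

# Run EM long enough to treat the result as the MLE (mu*, s2*)
mu, s2 = np.mean(y), np.var(y)
for _ in range(500):
    mu, s2 = em_step(mu, s2)
mu_star, s2_star = mu, s2

# Componentwise rates: ratio of successive errors in each coordinate,
# measured from a perturbed starting point
mu, s2 = mu_star + 0.5, 2.0 * s2_star
for _ in range(10):
    mu_prev, s2_prev = mu, s2
    mu, s2 = em_step(mu, s2)
rate_mu = (mu - mu_star) / (mu_prev - mu_star)
rate_s2 = (s2 - s2_star) / (s2_prev - s2_star)
print(rate_mu, rate_s2)  # empirical componentwise rates of convergence
```

The two ratios need not coincide; when they differ, the slower one matches the global rate, which is the phenomenon the article analyzes.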
Citations
Journal Article
The Art of Data Augmentation
David A. van Dyk, Xiao-Li Meng
TL;DR: An effective search strategy is introduced that combines the ideas of marginal augmentation and conditional augmentation, together with a deterministic approximation method for selecting good augmentation schemes to obtain efficient Markov chain Monte Carlo algorithms for posterior sampling.
Journal Article
The EM Algorithm—an Old Folk‐song Sung to a Fast New Tune
Xiao-Li Meng, David A. van Dyk
TL;DR: A general alternating expectation–conditional maximization (AECM) algorithm is formulated that couples flexible data augmentation schemes with model-reduction schemes to achieve efficient computation, showing the potential for a dramatic reduction in computational time with little increase in human effort.
Journal Article
The ECME algorithm: A simple extension of EM and ECM with faster monotone convergence
Chuanhai Liu, Donald B. Rubin
TL;DR: ECME is a generalization of the ECM algorithm, itself an extension of the EM algorithm (Dempster, Laird & Rubin, 1977). It is obtained by replacing some CM-steps of ECM, which maximize the constrained expected complete-data log-likelihood function, with steps that maximize the correspondingly constrained actual likelihood function.
Journal Article
Statistical guarantees for the EM algorithm: From population to sample-based analysis
Sivaraman Balakrishnan, Martin J. Wainwright, Bin Yu
TL;DR: A general framework is developed for proving rigorous guarantees on the performance of the EM algorithm and a variant known as gradient EM, together with consequences of the general theory for three canonical examples of incomplete-data problems.
References
Book
Statistical Analysis with Missing Data
Roderick J. A. Little, Donald B. Rubin
TL;DR: Covers maximum likelihood for general patterns of missing data, including introductory theory under ignorable nonresponse and large-sample inference based on maximum likelihood estimates.
Journal Article
On the convergence properties of the EM algorithm
C. F. Jeff Wu
TL;DR: Shows that the EM algorithm converges to a local maximum or a stationary value of the (incomplete-data) likelihood function under conditions that are applicable to many practical situations.
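The monotonicity property underlying these convergence results is easy to check numerically. A minimal sketch, assuming a simple two-component normal mixture with known unit variances (the data, mixture parameters, and starting values below are illustrative, not taken from the paper): each EM iteration should leave the observed-data log-likelihood non-decreasing.

```python
import numpy as np

# Minimal sketch (illustrative model): EM for a two-component normal mixture
# with known unit variances; estimate the mixing weight and the two means,
# and verify that the observed-data log-likelihood never decreases.
rng = np.random.default_rng(1)
y = np.concatenate([rng.normal(-2.0, 1.0, 150), rng.normal(2.0, 1.0, 150)])

def loglik(pi, m1, m2):
    # Observed-data log-likelihood of the mixture
    f = pi * np.exp(-(y - m1) ** 2 / 2) + (1 - pi) * np.exp(-(y - m2) ** 2 / 2)
    return np.sum(np.log(f / np.sqrt(2 * np.pi)))

pi, m1, m2 = 0.3, -0.5, 0.5          # deliberately poor starting values
ll = [loglik(pi, m1, m2)]
for _ in range(50):
    # E-step: responsibility of component 1 for each observation
    a = pi * np.exp(-(y - m1) ** 2 / 2)
    b = (1 - pi) * np.exp(-(y - m2) ** 2 / 2)
    w = a / (a + b)
    # M-step: weighted maximum likelihood updates
    pi = np.mean(w)
    m1 = np.sum(w * y) / np.sum(w)
    m2 = np.sum((1 - w) * y) / np.sum(1 - w)
    ll.append(loglik(pi, m1, m2))

# Monotone ascent: the log-likelihood is non-decreasing across iterations
assert all(curr >= prev - 1e-9 for prev, curr in zip(ll, ll[1:]))
```

Monotonicity alone does not guarantee convergence to a maximizer; the cited paper supplies the additional regularity conditions under which the limit is a local maximum or stationary point.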
Journal Article
Maximum likelihood estimation via the ECM algorithm: A general framework
Xiao-Li Meng, Donald B. Rubin
TL;DR: In many cases, complete-data maximum likelihood estimation is relatively simple when conditioned on some function of the parameters being estimated; the resulting ECM algorithm converges stably, with each iteration increasing the likelihood.