Open Access Journal Article

A gradient algorithm locally equivalent to the EM algorithm

Kenneth Lange
01 Jul 1995, Vol. 57, Iss. 2, pp. 425-437
TLDR
The EM gradient algorithm approximately solves the M-step of the EM algorithm by one iteration of Newton's method, and the proof of global convergence applies and improves existing theory for the EM algorithm.
Abstract
In many problems of maximum likelihood estimation, it is impossible to carry out either the E-step or the M-step of the EM algorithm. The present paper introduces a gradient algorithm that is closely related to the EM algorithm. This EM gradient algorithm approximately solves the M-step of the EM algorithm by one iteration of Newton's method. Since Newton's method converges quickly, the local properties of the EM gradient algorithm are almost identical to those of the EM algorithm. Any strict local maximum point of the observed likelihood locally attracts the EM and EM gradient algorithms at the same rate of convergence, and near the maximum point the EM gradient algorithm always produces an increase in the likelihood. With proper modification the EM gradient algorithm also exhibits global convergence properties that are similar to those of the EM algorithm. Our proof of global convergence applies and improves existing theory for the EM algorithm. These theoretical points are reinforced by a discussion of three realistic examples illustrating how the EM gradient algorithm can succeed where the EM algorithm is intractable.
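As a concrete illustration, below is a minimal Python sketch of the update the abstract describes: a single Newton iteration applied to the E-step surrogate Q(theta | theta_k) in place of an exact M-step. This is an illustrative sketch, not code from the paper; the helpers grad_Q and hess_Q are hypothetical placeholders for model-specific functions returning the gradient and Hessian of Q.

    import numpy as np

    def em_gradient_step(theta_k, grad_Q, hess_Q):
        # One EM gradient update: a single Newton step on theta -> Q(theta | theta_k).
        # By the standard EM identity, the gradient of Q at theta = theta_k equals the
        # gradient of the observed log-likelihood there, so near a local maximum this
        # step moves uphill on the likelihood, as the abstract states.
        g = grad_Q(theta_k, theta_k)            # gradient of the surrogate at the current iterate
        H = hess_Q(theta_k, theta_k)            # Hessian of the surrogate (negative definite near a maximum)
        return theta_k - np.linalg.solve(H, g)  # Newton step: theta_k - H^{-1} g

    def em_gradient(theta0, grad_Q, hess_Q, max_iter=200, tol=1e-8):
        # Iterate the update until the parameter change is negligible.
        theta = np.asarray(theta0, dtype=float)
        for _ in range(max_iter):
            theta_next = em_gradient_step(theta, grad_Q, hess_Q)
            if np.linalg.norm(theta_next - theta) < tol:
                return theta_next
            theta = theta_next
        return theta

In a mixture model, for example, grad_Q and hess_Q would come from differentiating the complete-data log-likelihood averaged over the E-step posterior responsibilities.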


Citations
Journal Article

A Tutorial on MM Algorithms

TL;DR: The principle behind MM algorithms is explained, some methods for constructing them are suggested, and some of their attractive features are discussed.
Book

Inference in Hidden Markov Models

TL;DR: This book is a comprehensive treatment of inference for hidden Markov models, including both algorithms and statistical theory, and builds on recent developments to present a self-contained view.
Journal Article

One-step Sparse Estimates in Nonconcave Penalized Likelihood Models.

TL;DR: Proposes a new unified algorithm based on the local linear approximation (LLA) for maximizing the penalized likelihood for a broad class of concave penalty functions, and shows that if the regularization parameter is appropriately chosen, the one-step LLA estimates enjoy the oracle properties given good initial estimators.
Journal Article

Majorization-Minimization Algorithms in Signal Processing, Communications, and Machine Learning

TL;DR: An overview of the majorization-minimization (MM) algorithmic framework, which provides guidance in deriving problem-driven algorithms with low computational cost, illustrated by a wide range of applications in signal processing, communications, and machine learning.
Book

Principles of Data Mining

TL;DR: The book consists of three sections and provides a tutorial overview of the principles underlying data mining algorithms and their application, and shows how all of the preceding analysis fits together when applied to real-world data mining problems.
References
Book

Statistical Analysis with Missing Data

TL;DR: Covers maximum likelihood estimation for general patterns of missing data, including theory for ignorable nonresponse and large-sample inference based on maximum likelihood estimates.
Book

Statistical Analysis of Finite Mixture Distributions

TL;DR: This book discusses Mathematical Aspects of Mixtures, Sequential Problems and Procedures, and Applications of Finite Mixture Models.
Journal Article

On the convergence properties of the EM algorithm

C. F. Jeff Wu
01 Mar 1983
TL;DR: This paper shows that the EM algorithm converges to a local maximum or a stationary value of the (incomplete-data) likelihood function under conditions that are applicable to many practical situations.
Journal Article

Mixture densities, maximum likelihood, and the EM algorithm

Richard A. Redner, +1 more
01 Apr 1984
TL;DR: This work discusses the formulation and theoretical and practical properties of the EM algorithm, a specialization to the mixture density context of a general algorithm used to approximate maximum-likelihood estimates for incomplete data problems.
Journal Article

EM reconstruction algorithms for emission and transmission tomography.

TL;DR: Discusses the general principles behind all EM algorithms, derives in detail the specific algorithms for emission and transmission tomography, and discusses the specification of necessary physical features such as source and detector geometries.