Journal Article

Acceleration of the EM Algorithm by using Quasi‐Newton Methods

TLDR
The paper proposes approximating the inverse of the observed information matrix using auxiliary output from the new hybrid accelerator; a numerical evaluation indicates that these approximations may be useful, at least for exploratory purposes.
Abstract
The EM algorithm is a popular method for maximum likelihood estimation. Its simplicity in many applications and desirable convergence properties make it very attractive. Its sometimes slow convergence, however, has prompted researchers to propose methods to accelerate it. We review these methods, classifying them into three groups: pure, hybrid and EM-type accelerators. We propose a new pure and a new hybrid accelerator both based on quasi-Newton methods and numerically compare these and two other quasi-Newton accelerators. For this we use examples in each of three areas: Poisson mixtures, the estimation of covariance from incomplete data and multivariate normal mixtures. In these comparisons, the new hybrid accelerator was fastest on most of the examples and often dramatically so. In some cases it accelerated the EM algorithm by factors of over 100. The new pure accelerator is very simple to implement and competed well with the other accelerators. It accelerated the EM algorithm in some cases by factors of over 50. To obtain standard errors, we propose to approximate the inverse of the observed information matrix by using auxiliary output from the new hybrid accelerator. A numerical evaluation of these approximations indicates that they may be useful at least for exploratory purposes.
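The abstract describes two quasi-Newton devices; a small sketch may make the "pure accelerator" idea concrete. The code below, assuming NumPy and SciPy are available, treats EM as a fixed-point map M and solves g(theta) = M(theta) - theta = 0 with a Broyden-type secant update, using a two-component Poisson mixture (one of the paper's three example areas) as the test problem. The names em_step and qn_accelerated_em, the safeguard, and the data are illustrative choices, not the paper's exact QN1/QN2 algorithms.

import numpy as np
from scipy.stats import poisson

def em_step(theta, y):
    # One EM iteration M(theta) for a two-component Poisson mixture.
    pi, lam1, lam2 = theta
    # E-step: posterior probability that each observation came from component 1
    w1 = pi * poisson.pmf(y, lam1)
    w2 = (1.0 - pi) * poisson.pmf(y, lam2)
    r = w1 / (w1 + w2)
    # M-step: closed-form updates of the mixing weight and the two rates
    return np.array([r.mean(),
                     (r * y).sum() / r.sum(),
                     ((1.0 - r) * y).sum() / (1.0 - r).sum()])

def qn_accelerated_em(theta, M, max_iter=500, tol=1e-10):
    # Accelerate the EM fixed point theta* = M(theta*) by applying a
    # Broyden (secant) quasi-Newton update to g(theta) = M(theta) - theta.
    A = -np.eye(len(theta))   # Jacobian estimate for g; A = -I reproduces plain EM
    g = M(theta) - theta
    for _ in range(max_iter):
        step = -np.linalg.solve(A, g)
        theta_new = theta + step
        # Safeguard (a production implementation needs more careful monitoring):
        # fall back to a plain EM step if the step leaves the parameter space.
        if not (0.0 < theta_new[0] < 1.0 and theta_new[1:].min() > 0.0):
            step = g          # plain EM step: theta + (M(theta) - theta)
            theta_new = theta + step
        g_new = M(theta_new) - theta_new
        if np.linalg.norm(g_new) < tol:
            return theta_new
        dg = g_new - g
        # Rank-one secant update so that the new A maps step to dg
        A += np.outer(dg - A @ step, step) / (step @ step)
        theta, g = theta_new, g_new
    return theta

rng = np.random.default_rng(0)
y = np.concatenate([rng.poisson(2.0, 300), rng.poisson(9.0, 200)])
print(qn_accelerated_em(np.array([0.5, 1.0, 10.0]), lambda t: em_step(t, y)))

If, as the abstract suggests, the hybrid accelerator maintains a quasi-Newton approximation tied to the log-likelihood, that approximation is the natural "auxiliary output" for estimating the inverse of the observed information matrix, and hence standard errors; the sketch above implements only the fixed-point acceleration.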


Citations
Journal Article

A Tutorial on MM Algorithms

TL;DR: The principle behind MM algorithms is explained, some methods for constructing them are suggested, and some of their attractive features are discussed (a minimal worked MM example follows this citations list).
Book

Inference in Hidden Markov Models

TL;DR: This book is a comprehensive treatment of inference for hidden Markov models, including both algorithms and statistical theory, and builds on recent developments to present a self-contained view.
Journal Article

Majorization-Minimization Algorithms in Signal Processing, Communications, and Machine Learning

TL;DR: An overview of the majorization-minimization (MM) algorithmic framework, which can guide the derivation of problem-driven algorithms with low computational cost, illustrated by a wide range of applications in signal processing, communications, and machine learning.
Journal Article

Optimization Transfer Using Surrogate Objective Functions

TL;DR: Because optimization transfer algorithms often exhibit the slow convergence of EM algorithms, two methods of accelerating optimization transfer are discussed and evaluated in the context of specific problems.
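Several of the citing works above concern MM algorithms, so a concrete instance may help. The following is a minimal, textbook-style illustration, not drawn from those papers: each term |x - y_i| is majorized at the current iterate by a quadratic that touches it there, and minimizing the sum of surrogates yields an iteratively reweighted mean that descends toward the sample median. The function name mm_median and the data are hypothetical.

import numpy as np

def mm_median(y, iters=100):
    # Majorize-minimize for argmin_x sum_i |x - y_i| (the sample median).
    # At iterate x_k, |x - y_i| <= (x - y_i)**2 / (2|x_k - y_i|) + |x_k - y_i| / 2,
    # with equality at x = x_k, so each surrogate minimization drives the sum down.
    x = y.mean()                                    # any starting point works
    for _ in range(iters):
        w = 1.0 / np.maximum(np.abs(x - y), 1e-12)  # guard against division by zero
        x = (w * y).sum() / w.sum()                 # minimizer of the quadratic surrogate
    return x

y = np.array([1.0, 2.0, 3.5, 7.0, 20.0])
print(mm_median(y), np.median(y))                   # both approach the median, 3.5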
References
Book

Practical Methods of Optimization

TL;DR: The aim of this book is to provide a discussion of constrained optimization and its applications to linear programming and other optimization problems.
Book

Numerical Methods for Unconstrained Optimization and Nonlinear Equations (Classics in Applied Mathematics, 16)

TL;DR: In this book, Dennis and Schnabel present a modular system of algorithms for unconstrained minimization and nonlinear equations, building from Newton's method for solving one equation in one unknown up to full multivariate methods, including methods for nonlinear least-squares problems with special structure.