Open Access Journal Article

Online EM Algorithm for Hidden Markov Models

Olivier Cappé
01 Jan 2011 - Vol. 20, Iss. 3, pp. 728-749
TL;DR
In this article, the authors propose an online parameter estimation algorithm that combines two key ideas: reparameterizing the problem using complete-data sufficient statistics and exploiting a purely recursive form of smoothing in HMMs based on an auxiliary recursion.
Abstract
Online (also called “recursive” or “adaptive”) estimation of fixed model parameters in hidden Markov models is a topic of much interest in time series modeling. In this work, we propose an online parameter estimation algorithm that combines two key ideas. The first one, which is deeply rooted in the Expectation-Maximization (EM) methodology, consists in reparameterizing the problem using complete-data sufficient statistics. The second ingredient consists in exploiting a purely recursive form of smoothing in HMMs based on an auxiliary recursion. Although the proposed online EM algorithm resembles a classical stochastic approximation (or Robbins–Monro) algorithm, it is sufficiently different to resist conventional analysis of convergence. We thus provide limited results which identify the potential limiting points of the recursion as well as the large-sample behavior of the quantities involved in the algorithm. The performance of the proposed algorithm is numerically evaluated through simulations in the ca...
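As a rough, hypothetical sketch of the two ingredients in the abstract (complete-data sufficient statistics plus an auxiliary smoothing recursion), the following applies them to a two-state HMM with unit-variance Gaussian emissions and a known transition matrix, estimating only the state means. All constants (the step-size exponent, the burn-in length) and variable names are illustrative choices, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
K = 2
A = np.array([[0.9, 0.1], [0.1, 0.9]])   # transition matrix (assumed known here)
mu_true = np.array([-1.0, 1.0])          # true emission means (unit-variance Gaussians)
mu_hat = np.array([-0.5, 0.5])           # initial estimates

# Simulate a long observation sequence from the true model.
T = 20000
state = 0
ys = []
for _ in range(T):
    state = rng.choice(K, p=A[state])
    ys.append(rng.normal(mu_true[state], 1.0))

# Complete-data sufficient statistics per state k: (occupancy count, observation sum).
phi = np.full(K, 1.0 / K)      # filter p(x_t | y_{1:t})
rho = np.zeros((K, K, 2))      # rho[j, k] = E[(count_k, obs_sum_k) | x_t = j, y_{1:t}]

for t, y in enumerate(ys, start=1):
    gamma = t ** -0.6                        # decreasing step size (illustrative exponent)
    g = np.exp(-0.5 * (y - mu_hat) ** 2)     # emission likelihoods, up to a constant
    joint = phi[:, None] * A                 # joint[i, j] propto p(x_t=i, x_{t+1}=j | y_{1:t})
    pred = joint.sum(axis=0)                 # predictive p(x_{t+1}=j | y_{1:t})
    r = joint / pred                         # retrospective kernel r(i | j)
    # Auxiliary recursion: propagate the smoothed statistics backward-in-expectation.
    rho = np.einsum('ij,ikl->jkl', r, (1.0 - gamma) * rho)
    for j in range(K):
        rho[j, j, 0] += gamma          # occupancy statistic for state j
        rho[j, j, 1] += gamma * y      # observation-sum statistic for state j
    phi = pred * g
    phi /= phi.sum()
    if t > 500:                        # M-step only after a short burn-in
        S = np.einsum('j,jkl->kl', phi, rho)   # current estimate of the sufficient stats
        mu_hat = S[:, 1] / S[:, 0]             # reparameterized M-step: mean = sum / count
```

Run on the simulated sequence above, `mu_hat` drifts from its initial values toward the true means, with no growing storage: the whole state of the algorithm is `phi` and `rho`.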


Citations
Journal Article

On Particle Methods for Parameter Estimation in State-Space Models

TL;DR: A comprehensive review of particle methods that have been proposed to perform static parameter estimation in state-space models is presented in this article, where the advantages and limitations of these methods are discussed.
Journal Article

Online learning with hidden Markov models

TL;DR: An online version of the expectation-maximization (EM) algorithm for hidden Markov models (HMMs) is presented and generalized to the case where the model parameters can change with time, by introducing a discount factor into the recurrence relations.
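The discount-factor idea mentioned here can be illustrated in isolation. In this hypothetical toy example the "model" is just a single Gaussian mean, so the E and M steps are trivial and only the forgetting behavior of a constant discount factor is on display; the value of `eta` and the jump point are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
eta = 0.02          # constant discount factor (illustrative value)
mean_hat = 0.0      # running parameter estimate
T = 6000
for t in range(T):
    true_mean = 0.0 if t < T // 2 else 3.0   # the parameter jumps mid-stream
    y = rng.normal(true_mean, 1.0)
    # Discounted recurrence for the sufficient statistic:
    #   S_t = (1 - eta) * S_{t-1} + eta * s(y_t)
    # here s(y) = y and the M-step is the identity, so mean_hat
    # forgets old data exponentially and tracks the current mean.
    mean_hat = (1 - eta) * mean_hat + eta * y
```

With a decreasing step size the estimate would average over the whole stream and settle between the two regimes; the constant discount factor instead lets it follow the change.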
Journal Article

A survey of techniques for incremental learning of HMM parameters

TL;DR: This paper underscores the need for empirical benchmarking studies among techniques presented in the literature, and proposes several evaluation criteria based on non-parametric statistical testing to facilitate the selection of techniques given a particular application domain.
Posted Content

Forward Smoothing Using Sequential Monte Carlo

TL;DR: This work proposes a new SMC algorithm to compute expectations of additive functionals recursively, and shows how this enables recursive parameter estimation via an SMC implementation of an online version of the Expectation-Maximization algorithm that does not suffer from the particle-path degeneracy problem.
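A minimal, hypothetical sketch of the forward-smoothing idea for additive functionals: a bootstrap particle filter carries, for each particle, an estimate of the smoothed sum of the latent states in a linear-Gaussian model, updated with an O(N^2) recursion rather than by tracing particle paths backward. The model, constants, and variable names are all illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
N, T = 300, 200
a, sig_v, sig_w = 0.9, 1.0, 0.2      # AR coefficient, state noise, observation noise

# Simulate a linear-Gaussian state-space model with known x_0 = 0.
x = 0.0
xs, ys = [], []
for _ in range(T):
    x = a * x + rng.normal(0.0, sig_v)
    xs.append(x)
    ys.append(x + rng.normal(0.0, sig_w))

xi = np.zeros(N)        # particles for x_0 (known exactly here)
Tstat = np.zeros(N)     # Tstat[i] ~ E[sum_{s<=t} x_s | x_t = xi[i], y_{1:t}]
for y in ys:
    xi_new = a * xi + rng.normal(0.0, sig_v, N)      # bootstrap proposal
    logw = -0.5 * ((y - xi_new) / sig_w) ** 2        # observation weights
    w = np.exp(logw - logw.max())
    w /= w.sum()
    # O(N^2) forward-smoothing update of the additive functional; since we
    # resample every step, the ancestor weights are uniform and cancel.
    f = np.exp(-0.5 * ((xi_new[None, :] - a * xi[:, None]) / sig_v) ** 2)
    num = (f * (Tstat[:, None] + xi_new[None, :])).sum(axis=0)
    den = f.sum(axis=0)
    Tstat = num / den
    # Multinomial resampling, carrying the functional estimates along.
    idx = rng.choice(N, size=N, p=w)
    xi, Tstat = xi_new[idx], Tstat[idx]

S_hat = Tstat.mean()    # estimate of E[sum_t x_t | y_{1:T}]
```

Because the per-particle statistics are updated forward in time, memory stays O(N) regardless of T, which is what makes the online EM implementation mentioned in the summary feasible.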
Journal Article

Distributed Maximum Likelihood for Simultaneous Self-Localization and Tracking in Sensor Networks

TL;DR: It is shown that the sensor self-localization problem can be cast as a static parameter estimation problem for hidden Markov models; fully decentralized versions of the Recursive Maximum Likelihood and online Expectation-Maximization algorithms are implemented to localize the sensor network simultaneously with target tracking.
References
Journal Article

A tutorial on hidden Markov models and selected applications in speech recognition

TL;DR: In this paper, the authors provide an overview of the basic theory of hidden Markov models (HMMs) as originated by L.E. Baum and T. Petrie (1966) and give practical details on methods of implementation of the theory along with a description of selected applications of HMMs to distinct problems in speech recognition.
Book Chapter

A view of the EM algorithm that justifies incremental, sparse, and other variants

TL;DR: In this paper, an incremental variant of the EM algorithm is proposed in which the distribution for only one of the unobserved variables is recalculated in each E step; this variant is shown empirically to give faster convergence in a mixture estimation problem.
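A hypothetical sketch of that incremental variant for a two-component Gaussian mixture with unit variances: each E step refreshes the responsibility of a single data point, its old contribution is swapped out of the running sufficient statistics, and the M step follows immediately from the updated statistics. Sample sizes, initial values, and names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 400
z = rng.random(n) < 0.5
y = np.where(z, rng.normal(-2.0, 1.0, n), rng.normal(2.0, 1.0, n))

mu = np.array([-0.5, 0.5])        # initial component means
pi = np.array([0.5, 0.5])         # initial mixing weights
R = np.full((n, 2), 0.5)          # responsibilities q(z_i)
S0 = R.sum(axis=0)                # sufficient statistics implied by R:
S1 = (R * y[:, None]).sum(axis=0) # per-component counts and observation sums

for _ in range(20):               # sweeps over the data set
    for i in range(n):
        # Incremental E step: recalculate the distribution of z_i only,
        # swapping its old contribution out of the running statistics.
        S0 -= R[i]
        S1 -= R[i] * y[i]
        p = pi * np.exp(-0.5 * (y[i] - mu) ** 2)
        R[i] = p / p.sum()
        S0 += R[i]
        S1 += R[i] * y[i]
        # M step immediately from the updated sufficient statistics.
        mu = S1 / S0
        pi = S0 / n
```

Because the M step runs after every single-point update, information from each data point propagates to the parameters without waiting for a full pass, which is the source of the faster convergence the summary describes.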