Open Access Journal Article

A Gentle Tutorial of the EM Algorithm and its Application to Parameter Estimation for Gaussian Mixture and Hidden Markov Models

TL;DR
In this paper, the authors describe the EM algorithm for finding the parameters of a mixture of Gaussian densities and a hidden Markov model (HMM) for both discrete and Gaussian mixture observation models.
Abstract
We describe the maximum-likelihood parameter estimation problem and how the Expectation-Maximization (EM) algorithm can be used for its solution. We first describe the abstract form of the EM algorithm as it is often given in the literature. We then develop the EM parameter estimation procedure for two applications: 1) finding the parameters of a mixture of Gaussian densities, and 2) finding the parameters of a hidden Markov model (HMM) (i.e., the Baum-Welch algorithm) for both discrete and Gaussian mixture observation models. We derive the update equations in fairly explicit detail, but we do not prove any convergence properties. We try to emphasize intuition rather than mathematical rigor.
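Since the abstract only states the estimation problems, a minimal sketch of the EM updates for a one-dimensional Gaussian mixture is given below. It is an illustration under simplifying assumptions (one-dimensional data, a fixed iteration count, random initialization); the function name em_gmm_1d and all parameter values are invented for this example and are not the tutorial's notation, which also covers multivariate mixtures and the Baum-Welch algorithm for HMMs.

# Minimal EM sketch for a 1-D Gaussian mixture (illustrative assumptions only).
import numpy as np

def em_gmm_1d(X, K, n_iters=100, seed=0):
    rng = np.random.default_rng(seed)
    N = X.shape[0]
    # Initialize mixture weights, means, and variances.
    pi = np.full(K, 1.0 / K)
    mu = rng.choice(X, size=K, replace=False)
    var = np.full(K, X.var())

    for _ in range(n_iters):
        # E-step: posterior responsibility of each component for each point.
        resp = np.zeros((N, K))
        for k in range(K):
            resp[:, k] = pi[k] * np.exp(-0.5 * (X - mu[k]) ** 2 / var[k]) / np.sqrt(2 * np.pi * var[k])
        resp /= resp.sum(axis=1, keepdims=True)

        # M-step: re-estimate weights, means, and variances from the responsibilities.
        Nk = resp.sum(axis=0)
        pi = Nk / N
        mu = (resp * X[:, None]).sum(axis=0) / Nk
        var = (resp * (X[:, None] - mu) ** 2).sum(axis=0) / Nk

    return pi, mu, var

# Example usage on synthetic data drawn from two Gaussians.
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = np.concatenate([rng.normal(-2.0, 1.0, 500), rng.normal(3.0, 0.5, 500)])
    print(em_gmm_1d(X, K=2))

The E-step computes each component's posterior responsibility for every data point, and the M-step re-estimates the parameters from those responsibilities; this is the alternating structure whose update equations the tutorial derives in detail.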



Citations
Book Chapter

Hybrid Systems Diagnosis

TL;DR: This paper reports on an ongoing project to investigate techniques for diagnosing complex dynamical systems that are modeled as hybrid systems, and examines continuous systems with embedded supervisory controllers that experience abrupt, partial, or full failure of component devices.
Journal Article

Kalman filter mixture model for spike sorting of non-stationary data

TL;DR: This paper presents automatic methods for tracking time-varying spike shapes based on a computationally efficient Kalman filter model; the recursive nature of this model allows for on-line implementation of the method.
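For readers unfamiliar with the filtering machinery mentioned above, the following is a generic linear Kalman filter predict/update step, sketched only to show the recursive structure that makes on-line operation possible; the function kalman_step and the matrices F, H, Q, R are illustrative placeholders, not the spike-sorting model of the cited paper.

# Generic Kalman filter predict/update step (illustrative placeholder model).
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One predict/update cycle of a linear-Gaussian state-space model."""
    # Predict: propagate the state estimate and its covariance.
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update: fold in the new observation z.
    S = H @ P_pred @ H.T + R                 # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(P.shape[0]) - K @ H) @ P_pred
    return x_new, P_new

Because each step uses only the previous estimate and the current observation, the filter can run on-line as data arrive, which is the property the summary above highlights.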
Proceedings Article

Learning Scale Free Networks by Reweighted L1 regularization

TL;DR: This work replaces the ℓ1 regularization with a power-law regularization and optimizes the objective function via a sequence of iteratively reweighted ℓ1 regularization problems, in which the regularization coefficients of nodes with high degree are reduced, encouraging the appearance of hubs with high degree.
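As a rough illustration of the reweighting idea (not the cited paper's network-learning objective), the sketch below applies a generic iteratively reweighted ℓ1 scheme to a lasso-style least-squares problem: each outer pass solves a weighted ℓ1 problem by proximal gradient and then shrinks the penalty weight on coordinates that are already large. The weight rule w = 1/(|x| + eps), the solver, and all constants are assumptions made for this example.

# Generic iteratively reweighted l1 loop on a lasso-style problem (illustration only).
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def weighted_lasso(A, b, w, lam, step, n_iters=500):
    """Minimize 0.5*||Ax - b||^2 + lam * sum_i w_i |x_i| by proximal gradient."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        grad = A.T @ (A @ x - b)
        x = soft_threshold(x - step * grad, step * lam * w)
    return x

def reweighted_l1(A, b, lam=0.1, eps=1e-2, n_outer=5):
    # Start from uniform weights, then reduce the penalty on coordinates that
    # are already large; under a log/power-law penalty this is what encourages
    # a few heavily connected "hub" variables.
    n = A.shape[1]
    w = np.ones(n)
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # safe step size for the quadratic term
    x = np.zeros(n)
    for _ in range(n_outer):
        x = weighted_lasso(A, b, w, lam, step)
        w = 1.0 / (np.abs(x) + eps)
    return x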
Patent

Techniques for prediction and monitoring of respiration-manifested clinical episodes

TL;DR: This patent describes a method for predicting the onset of clinical episodes: sensing a subject's breathing, determining at least one breathing pattern from the sensed breathing, comparing that pattern with a baseline breathing pattern, and predicting episode onset at least in part according to the comparison.
References
Book

The Nature of Statistical Learning Theory

TL;DR: Setting of the learning problem; consistency of learning processes; bounds on the rate of convergence of learning processes; controlling the generalization ability of learning processes; constructing learning algorithms; what is important in learning theory?
Book

Matrix Computations

Gene H. Golub

Statistical Learning Theory

TL;DR: Presenting a method for determining the necessary and sufficient conditions for consistency of the learning process, the author covers function estimation from small data pools, the application of these estimates to real-life problems, and much more.
Book

The Fractal Geometry of Nature

TL;DR: This book is a blend of erudition, popularization, and exposition, and the illustrations include many superb examples of computer graphics that are works of art in their own right.