Open Access Journal Article

A gentle tutorial of the em algorithm and its application to parameter estimation for Gaussian mixture and hidden Markov models

TL;DR: In this paper, the authors describe the EM algorithm for finding the parameters of a mixture of Gaussian densities and of a hidden Markov model (HMM) for both discrete and Gaussian mixture observation models.
Abstract
We describe the maximum-likelihood parameter estimation problem and how the Expectation-Maximization (EM) algorithm can be used for its solution. We first describe the abstract form of the EM algorithm as it is often given in the literature. We then develop the EM parameter estimation procedure for two applications: 1) finding the parameters of a mixture of Gaussian densities, and 2) finding the parameters of a hidden Markov model (HMM) (i.e., the Baum-Welch algorithm) for both discrete and Gaussian mixture observation models. We derive the update equations in fairly explicit detail, but we do not prove any convergence properties. We try to emphasize intuition rather than mathematical rigor.
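The first application in the abstract, EM for a mixture of Gaussians, reduces to alternating two closed-form updates. The following is a minimal illustrative sketch, not the paper's own code: the function em_gmm_1d, its parameter names, and the restriction to 1-D data are our assumptions for brevity. The E-step computes posterior responsibilities; the M-step re-estimates weights, means, and variances from them.

# Minimal sketch of EM for a 1-D Gaussian mixture (illustrative names,
# not from the paper); implements the standard E-step/M-step updates.
import numpy as np

def em_gmm_1d(x, n_components=2, n_iters=50, seed=0):
    rng = np.random.default_rng(seed)
    n = x.shape[0]
    pi = np.full(n_components, 1.0 / n_components)        # mixing weights
    mu = rng.choice(x, size=n_components, replace=False)  # means, init from data
    var = np.full(n_components, x.var())                  # variances

    for _ in range(n_iters):
        # E-step: responsibility of component k for point i,
        # proportional to pi_k * N(x_i | mu_k, var_k).
        dens = np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        resp = pi * dens
        resp /= resp.sum(axis=1, keepdims=True)

        # M-step: closed-form re-estimates from weighted sufficient statistics.
        nk = resp.sum(axis=0)        # effective count per component
        pi = nk / n
        mu = resp.T @ x / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return pi, mu, var

# Example: recover two well-separated components.
rng = np.random.default_rng(42)
x = np.concatenate([rng.normal(-2.0, 1.0, 500), rng.normal(3.0, 0.5, 500)])
print(em_gmm_1d(x))

The HMM (Baum-Welch) case developed in the paper follows the same alternating pattern, with the forward-backward procedure supplying the E-step expectations.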



Citations
Journal Article

Competitive EM algorithm for finite mixture models

TL;DR: Presents a novel competitive EM algorithm for finite mixture models that overcomes two main drawbacks of the EM algorithm: it often gets trapped at local maxima and sometimes converges to the boundary of the parameter space.
Journal Article

Real-Time Feedback-Controlled Robotic Fish for Behavioral Experiments With Fish Schools

TL;DR: Introduces a new cyber-physical implementation that enables robotic fish to use real-time feedback to control their motion in response to live fish and other environmental features.
Proceedings Article

Closing the learning-planning loop with predictive state representations

TL;DR: Presents a fast and statistically consistent spectral algorithm that learns the parameters of a PSR directly from sequences of action-observation pairs, then closes the loop from observations to actions by planning in the learned model and recovering a policy that is near-optimal in the original environment.
Journal Article

Expectation-Maximization Binary Clustering for Behavioural Annotation

TL;DR: This work introduces Expectation-Maximization binary Clustering (EMbC), a general-purpose, unsupervised approach to multivariate data clustering, and focuses on the suitability of the EMbC algorithm for behavioural annotation of movement data.
Book Chapter

The Path Inference Filter: Model-Based Low-Latency Map Matching of Probe Vehicle Data

TL;DR: Introduces a new class of algorithms, collectively called the path inference filter (PIF), that maps GPS probe data in real time with high throughput, across a variety of tradeoffs and scenarios.