Open Access Journal Article

A gentle tutorial of the em algorithm and its application to parameter estimation for Gaussian mixture and hidden Markov models

TLDR
In this paper, the authors describe the EM algorithm for finding the parameters of a mixture of Gaussian densities and a hidden Markov model (HMM) for both discrete and Gaussian mixture observation models.
Abstract
We describe the maximum-likelihood parameter estimation problem and how the Expectation-Maximization (EM) algorithm can be used for its solution. We first describe the abstract form of the EM algorithm as it is often given in the literature. We then develop the EM parameter estimation procedure for two applications: 1) finding the parameters of a mixture of Gaussian densities, and 2) finding the parameters of a hidden Markov model (HMM) (i.e., the Baum-Welch algorithm) for both discrete and Gaussian mixture observation models. We derive the update equations in fairly explicit detail, but we do not prove any convergence properties. We try to emphasize intuition rather than mathematical rigor.
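The first application the abstract describes, EM for a mixture of Gaussian densities, can be sketched in a few lines. The following is a minimal illustrative implementation for a two-component 1D mixture, not the tutorial's own code; the initialization scheme (means at the data extremes, variances at the overall sample variance) is an assumption chosen for robustness:

```python
import numpy as np

def em_gmm_1d(x, n_iter=50):
    """EM for a two-component 1D Gaussian mixture (illustrative sketch)."""
    # Initialization (an assumption, not from the paper): equal weights,
    # means at the data extremes, variances at the overall sample variance.
    pi = np.array([0.5, 0.5])
    mu = np.array([x.min(), x.max()])
    var = np.array([np.var(x), np.var(x)])
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each point.
        dens = np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        resp = pi * dens
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and variances from responsibilities.
        nk = resp.sum(axis=0)
        pi = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return pi, mu, var

# Usage: two well-separated clusters; EM should recover means near 0 and 5.
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0.0, 1.0, 500), rng.normal(5.0, 1.0, 500)])
pi, mu, var = em_gmm_1d(x)
```

Each iteration alternates the E-step (computing posterior component responsibilities under the current parameters) and the M-step (maximizing the expected complete-data log-likelihood), which is exactly the structure the tutorial derives for the general case.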



Citations
Book Chapter

Speaker identification based on log area ratio and Gaussian mixture models in narrow-band speech: speech understanding/interaction

TL;DR: An F-ratio feature analysis was conducted on both the LAR and MFCC feature vectors, which showed that the lower-order LAR coefficients are superior to their MFCC counterparts, and the text-independent, closed-set speaker identification rate was improved.
Book Chapter

Validation of Tissue Modelization and Classification Techniques in T1-Weighted MR Brain Images

TL;DR: Results demonstrate that methods relying on both intensity and spatial information are generally more robust to noise and inhomogeneities than traditional tissue modelization and classification techniques.

Multi-stream Processing for Noise Robust Speech Recognition

Hemant Misra
TL;DR: Several weighting strategies are investigated in this thesis to merge the posterior outputs of multi-layer perceptrons (MLPs) trained on different feature representations, and the relationship of oracle selection to inverse-entropy weighting is studied.
Proceedings Article

A constrained Baum-Welch algorithm for improved phoneme segmentation and efficient training

TL;DR: An extension to the Baum-Welch algorithm for training Hidden Markov Models that uses explicit phoneme segmentation to constrain the forward and backward lattice is described.
Journal Article

The Stochastic Topic Block Model for the Clustering of Vertices in Networks with Textual Edges

TL;DR: In this article, the stochastic topic block model (STBM) is proposed to discover meaningful clusters of vertices that are coherent from both the network interactions and the text contents.
References
Book

The Nature of Statistical Learning Theory

TL;DR: Covers the setting of the learning problem, consistency of learning processes, bounds on the rate of convergence of learning processes, controlling the generalization ability of learning processes, constructing learning algorithms, and what is important in learning theory.
Book

Matrix computations

Gene H. Golub

Statistical learning theory

TL;DR: Presenting a method for determining the necessary and sufficient conditions for consistency of the learning process, the author covers function estimation from small data pools, the application of these estimates to real-life problems, and much more.
Book

The Fractal Geometry of Nature

TL;DR: This book is a blend of erudition, popularization, and exposition, and the illustrations include many superb examples of computer graphics that are works of art in their own right.