Open Access Journal Article

A gentle tutorial of the EM algorithm and its application to parameter estimation for Gaussian mixture and hidden Markov models

TL;DR
In this paper, the authors describe the EM algorithm for finding the parameters of a mixture of Gaussian densities and a hidden Markov model (HMM) for both discrete and Gaussian mixture observation models.
Abstract
We describe the maximum-likelihood parameter estimation problem and how the Expectation-Maximization (EM) algorithm can be used for its solution. We first describe the abstract form of the EM algorithm as it is often given in the literature. We then develop the EM parameter estimation procedure for two applications: 1) finding the parameters of a mixture of Gaussian densities, and 2) finding the parameters of a hidden Markov model (HMM) (i.e., the Baum-Welch algorithm) for both discrete and Gaussian mixture observation models. We derive the update equations in fairly explicit detail, but we do not prove any convergence properties. We try to emphasize intuition rather than mathematical rigor.
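As a quick illustration of the mixture-of-Gaussians application described in the abstract, the sketch below alternates the two EM steps (posterior responsibilities, then closed-form parameter re-estimates) for a one-dimensional Gaussian mixture. It is a minimal sketch assuming NumPy; the function name em_gmm_1d and the synthetic data are hypothetical and are not taken from the tutorial itself.

```python
# Minimal EM sketch for a 1-D Gaussian mixture (illustrative only; names such as
# em_gmm_1d and the synthetic data are hypothetical, not from the paper).
import numpy as np

def em_gmm_1d(x, n_components=2, n_iter=50, seed=0):
    rng = np.random.default_rng(seed)
    n = x.size
    # Initialization: uniform weights, random means drawn from the data, pooled variance.
    w = np.full(n_components, 1.0 / n_components)
    mu = rng.choice(x, size=n_components, replace=False)
    var = np.full(n_components, x.var())

    for _ in range(n_iter):
        # E-step: responsibility resp[i, k] proportional to w_k * N(x_i | mu_k, var_k).
        dens = np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2.0 * np.pi * var)
        resp = w * dens
        resp /= resp.sum(axis=1, keepdims=True)

        # M-step: closed-form re-estimates (weighted sample statistics).
        nk = resp.sum(axis=0)
        w = nk / n
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk

    return w, mu, var

# Usage on synthetic data drawn from two Gaussians.
rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(-2.0, 1.0, 300), rng.normal(3.0, 0.5, 200)])
print(em_gmm_1d(data))
```

The M-step formulas are the weighted sample statistics that the tutorial derives for the mixture case; each iteration does not decrease the data log-likelihood, though (as the abstract notes) convergence properties are not proved there.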


Citations

Comparing and Unifying Search-Based and Similarity-Based Approaches to Semi-Supervised Clustering

TL;DR: A unified approach based on the K-Means clustering algorithm that incorporates both search-based and similarity-based techniques is presented, and experiments demonstrate that the combined approach generally produces better clusters than either individual approach.
Journal Article

Time series cluster kernel for learning similarities between multivariate time series with missing data

TL;DR: The robust time series cluster kernel (TCK) leverages the missing-data handling properties of Gaussian mixture models (GMMs) augmented with informative prior distributions, and combines the clustering results of many GMMs to form the final kernel.
Journal Article

Automated oral cancer identification using histopathological images: a hybrid feature extraction paradigm.

TL;DR: A novel integrated index called the Oral Malignancy Index (OMI), built from HOS, LBP, and LTE features, is proposed to characterize tissue as benign or malignant with a single number and help oral onco-pathologists screen subjects rapidly.
Journal Article

A Review on Ultrasound-Based Thyroid Cancer Tissue Characterization and Automated Classification

TL;DR: This paper discusses the different types of features used to study and analyze the differences between benign and malignant thyroid nodules, and presents a brief description of the classifiers commonly used in ultrasound-based CAD systems.
Journal Article

A review of the Expectation Maximization algorithm in data-driven process identification

TL;DR: A review of applications of the EM algorithm to handle missing variables and ill-conditioned problems is provided, together with future applications of the EM algorithm and some open problems.
References
Book

The Nature of Statistical Learning Theory

TL;DR: Covers the setting of the learning problem, consistency of learning processes, bounds on the rate of convergence of learning processes, controlling the generalization ability of learning processes, constructing learning algorithms, and what is important in learning theory.
Book

Matrix Computations

Gene H. Golub

Statistical Learning Theory

TL;DR: Presenting a method for determining the necessary and sufficient conditions for consistency of the learning process, the author covers function estimation from small data pools, the application of these estimates to real-life problems, and much more.
Book

The Fractal Geometry of Nature

TL;DR: This book is a blend of erudition, popularization, and exposition, and the illustrations include many superb examples of computer graphics that are works of art in their own right.