Open Access Journal Article

A gentle tutorial of the em algorithm and its application to parameter estimation for Gaussian mixture and hidden Markov models

TL;DR
In this paper, the authors describe the EM algorithm for finding the parameters of a mixture of Gaussian densities and a hidden Markov model (HMM) for both discrete and Gaussian mixture observation models.
Abstract
We describe the maximum-likelihood parameter estimation problem and how the Expectation-Maximization (EM) algorithm can be used for its solution. We first describe the abstract form of the EM algorithm as it is often given in the literature. We then develop the EM parameter estimation procedure for two applications: 1) finding the parameters of a mixture of Gaussian densities, and 2) finding the parameters of a hidden Markov model (HMM) (i.e., the Baum-Welch algorithm) for both discrete and Gaussian mixture observation models. We derive the update equations in fairly explicit detail, but we do not prove any convergence properties. We try to emphasize intuition rather than mathematical rigor.
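As a concrete companion to the abstract's first application, here is a minimal sketch of the EM updates for a mixture of Gaussian densities, written in Python with NumPy/SciPy. It is an illustration under our own assumptions (the function name em_gmm, the initialization scheme, and the 1e-6 covariance regularization are not from the paper): the E-step computes per-point responsibilities, and the M-step re-estimates weights, means, and covariances from the resulting soft counts.

```python
import numpy as np
from scipy.stats import multivariate_normal

def em_gmm(X, K, n_iter=100, seed=0):
    """A possible EM loop for a K-component Gaussian mixture.

    X: (N, D) data matrix. Returns (weights, means, covariances).
    """
    rng = np.random.default_rng(seed)
    N, D = X.shape
    # Initialization: uniform weights, K random data points as means,
    # and the global covariance for every component.
    weights = np.full(K, 1.0 / K)
    means = X[rng.choice(N, K, replace=False)]
    covs = np.array([np.cov(X.T) + 1e-6 * np.eye(D) for _ in range(K)])

    for _ in range(n_iter):
        # E-step: responsibilities r[n, k] = P(component k | x_n).
        r = np.column_stack([
            w * multivariate_normal.pdf(X, mean=m, cov=c)
            for w, m, c in zip(weights, means, covs)
        ])
        r /= r.sum(axis=1, keepdims=True)

        # M-step: re-estimate parameters from the soft counts.
        Nk = r.sum(axis=0)                       # effective count per component
        weights = Nk / N
        means = (r.T @ X) / Nk[:, None]
        for k in range(K):
            diff = X - means[k]
            covs[k] = (r[:, k, None] * diff).T @ diff / Nk[k]
            covs[k] += 1e-6 * np.eye(D)          # keep covariances well-conditioned
    return weights, means, covs
```

In practice one would monitor the log-likelihood and stop when it plateaus rather than running a fixed number of iterations; the same E-step/M-step structure, with forward-backward probabilities in place of responsibilities, underlies the Baum-Welch algorithm the abstract mentions.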


Citations
Journal Article

Modeling of Multiple Energy Sources for Hybrid Energy Harvesting IoT Systems

TL;DR: Presents probabilistic energy models for hybrid energy harvesting (HEH) Internet of Things (IoT) nodes, providing an accurate probabilistic model for each cluster of multiple sources and taking the randomness of ambient energy in HEH into account.
Proceedings Article

A similarity measure between unordered vector sets with application to image categorization

TL;DR: A novel approach to computing the similarity between two unordered, variable-sized vector sets using the maximum a posteriori (MAP) criterion, which provides a more accurate estimate of the GMM parameters than standard maximum-likelihood estimation (MLE) in the challenging case where the cardinality of the vector set is small.
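As a rough illustration of why MAP estimation helps when few vectors are available (the details below are hypothetical and not taken from the paper): under a conjugate prior, the MAP update for a Gaussian mean interpolates between the sample mean and a prior mean, with a relevance factor tau acting as a pseudo-count.

```python
import numpy as np

def map_mean(X, prior_mean, tau=10.0):
    """MAP estimate of a Gaussian mean under a conjugate prior.

    Shrinks the sample mean toward prior_mean; tau is a hypothetical
    'relevance factor' acting as a pseudo-count, so with few samples
    the prior dominates and the estimate stays stable.
    """
    n = len(X)
    mle = X.mean(axis=0)  # plain maximum-likelihood mean
    return (n * mle + tau * prior_mean) / (n + tau)
```

With n much larger than tau the estimate approaches the MLE; with small n it stays near the prior mean, which is the stabilizing effect the TL;DR alludes to.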
Proceedings Article

Dictionary Learning from Ambiguously Labeled Data

TL;DR: This work proposes a novel dictionary-based learning method for ambiguously labeled multiclass classification, where each training sample has multiple labels and only one of them is the correct label.
Journal Article

Automated Generation of Reduced Stochastic Weather Models I: Simultaneous Dimension and Model Reduction for Time Series Analysis

TL;DR: The approach is based on the combination of hidden Markov models (HMMs) with localized principal component analysis (PCA) and fitting of multidimensional stochastic differential equations (SDEs) for simultaneous dimension reduction, model fitting, and metastability analysis of high-dimensional time series.
Journal Article

Two layered mixture Bayesian probabilistic PCA for dynamic process monitoring

TL;DR: A two-layered mixture Bayesian probabilistic principal component analyser model is developed for fault detection; it has the potential to provide a parsimonious model and is less susceptible to local optima than existing approaches that build mixture models in a single stage.
References
Book

The Nature of Statistical Learning Theory

TL;DR: Covers the setting of the learning problem, the consistency of learning processes, bounds on the rate of convergence of learning processes, controlling the generalization ability of learning processes, constructing learning algorithms, and what is important in learning theory.
Book

Matrix Computations

Gene H. Golub

Statistical Learning Theory

TL;DR: Presenting a method for determining the necessary and sufficient conditions for consistency of the learning process, the author covers function estimation from small data pools, the application of these estimates to real-life problems, and much more.
Book

The Fractal Geometry of Nature

TL;DR: This book is a blend of erudition, popularization, and exposition, and the illustrations include many superb examples of computer graphics that are works of art in their own right.