Open AccessJournal ArticleDOI

On‐line expectation–maximization algorithm for latent data models

TLDR
A generic on-line version of the expectation–maximization (EM) algorithm for latent variable models of independent observations; the approach is also suitable for conditional models, as illustrated on the mixture of linear regressions model.
Abstract
In this contribution, we propose a generic online (also sometimes called adaptive or recursive) version of the Expectation-Maximisation (EM) algorithm applicable to latent variable models of independent observations. Compared to the algorithm of Titterington (1984), this approach is more directly connected to the usual EM algorithm and does not rely on integration with respect to the complete data distribution. The resulting algorithm is usually simpler and is shown to achieve convergence to the stationary points of the Kullback-Leibler divergence between the marginal distribution of the observation and the model distribution at the optimal rate, i.e., that of the maximum likelihood estimator. In addition, the proposed approach is also suitable for conditional (or regression) models, as illustrated in the case of the mixture of linear regressions model.
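The abstract describes an online EM scheme that replaces the batch E-step with a stochastic-approximation update of the complete-data sufficient statistics, followed by the usual closed-form M-step. A minimal sketch of this idea for a one-dimensional Gaussian mixture is given below; the function name, initialization heuristic, and step-size schedule are illustrative assumptions, not the paper's reference implementation:

```python
import numpy as np

def online_em_gmm(ys, K=2):
    """One-pass online EM for a 1-D Gaussian mixture (illustrative sketch).

    Each observation triggers an E-step on that point alone, a
    stochastic-approximation update of the running sufficient
    statistics, and an exact M-step mapping the statistics back
    to parameters.
    """
    ys = np.asarray(ys, dtype=float)
    # Crude initialisation from a small warm-up batch (an assumption,
    # not part of the algorithm itself).
    warm = ys[: min(100, len(ys))]
    pi = np.full(K, 1.0 / K)
    mu = np.quantile(warm, np.linspace(0.25, 0.75, K))
    var = np.full(K, warm.var() + 1e-6)
    # Running sufficient statistics per component: E[w], E[w*y], E[w*y^2].
    s0, s1, s2 = pi.copy(), pi * mu, pi * (var + mu**2)
    for n, y in enumerate(ys):
        # E-step: posterior responsibilities for this single observation.
        logp = np.log(pi) - 0.5 * np.log(2 * np.pi * var) \
               - (y - mu) ** 2 / (2 * var)
        w = np.exp(logp - logp.max())
        w /= w.sum()
        # Stochastic-approximation step with a polynomially
        # decaying step size (schedule chosen for illustration).
        g = (n + 2) ** -0.6
        s0 = (1 - g) * s0 + g * w
        s1 = (1 - g) * s1 + g * w * y
        s2 = (1 - g) * s2 + g * w * y * y
        # M-step: closed-form re-parameterisation from the statistics.
        pi = s0 / s0.sum()
        mu = s1 / s0
        var = np.maximum(s2 / s0 - mu**2, 1e-6)
    return pi, mu, var
```

On a stream drawn from a well-separated two-component mixture, a single pass of this update recovers the component means; the decaying step size plays the role the paper analyzes when establishing convergence at the maximum-likelihood rate.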


Citations
Journal ArticleDOI

Stochastic variational inference

TL;DR: Stochastic variational inference lets us apply complex Bayesian models to massive data sets; the Bayesian nonparametric topic model is shown to outperform its parametric counterpart.
Journal ArticleDOI

Streaming fragment assignment for real-time analysis of sequencing experiments

TL;DR: eXpress is a software package for efficient probabilistic assignment of ambiguously mapping sequenced fragments that can determine abundances of sequenced molecules in real time and can be applied to ChIP-seq, metagenomics and other large-scale sequencing data.
Journal ArticleDOI

A Consolidated Perspective on Multimicrophone Speech Enhancement and Source Separation

TL;DR: This paper proposes to analyze a large number of established and recent techniques according to four transverse axes: 1) the acoustic impulse response model, 2) the spatial filter design criterion, 3) the parameter estimation algorithm, and 4) optional postfiltering.
Journal ArticleDOI

On Particle Methods for Parameter Estimation in State-Space Models

TL;DR: A comprehensive review of particle methods that have been proposed to perform static parameter estimation in state-space models is presented in this article, where the advantages and limitations of these methods are discussed.
Proceedings ArticleDOI

Online EM for Unsupervised Models

TL;DR: It is shown that online variants provide significant speedups and can even find better solutions than those found by batch EM on four unsupervised tasks: part-of-speech tagging, document classification, word segmentation, and word alignment.
References
Book

The EM algorithm and extensions

TL;DR: The EM Algorithm and Extensions describes the formulation of the EM algorithm, details its methodology, discusses its implementation, and illustrates applications in many statistical contexts, opening the door to the tremendous potential of this remarkably versatile statistical tool.
Book

Statistical analysis of finite mixture distributions

TL;DR: This book discusses mathematical aspects of mixtures, sequential problems and procedures, and applications of finite mixture models.
Journal ArticleDOI

On the convergence properties of the em algorithm

C. F. Jeff Wu
01 Mar 1983
TL;DR: In this paper, the EM algorithm converges to a local maximum or a stationary value of the (incomplete-data) likelihood function under conditions that are applicable to many practical situations.
Journal ArticleDOI

Hierarchical mixtures of experts and the EM algorithm

TL;DR: An Expectation-Maximization (EM) algorithm is presented for adjusting the parameters of the tree-structured mixture-of-experts architecture for supervised learning, along with an on-line variant in which the parameters are updated incrementally.