scispace - formally typeset

Expectation–maximization algorithm

About: Expectation–maximization algorithm is a research topic. Over its lifetime, 11,823 publications have been published within this topic, receiving 528,693 citations. The topic is also known as: EM algorithm & Expectation Maximization.
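As an illustrative sketch of the topic's core idea (our own toy example, not taken from any paper listed below): EM alternates an E-step, which computes each component's posterior responsibility for every data point under the current parameters, with an M-step, which re-estimates the parameters from the responsibility-weighted data. A minimal fit of a two-component one-dimensional Gaussian mixture:

```python
import numpy as np

# Toy data: two well-separated Gaussian components (assumed setup, for
# illustration only).
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2.0, 1.0, 500), rng.normal(3.0, 1.0, 500)])

# Initial guesses for mixture weights, means, and variances.
w = np.array([0.5, 0.5])
mu = np.array([-1.0, 1.0])
var = np.array([1.0, 1.0])

for _ in range(100):
    # E-step: posterior responsibility of each component for each point.
    dens = w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
    resp = dens / dens.sum(axis=1, keepdims=True)
    # M-step: re-estimate parameters from the responsibility-weighted data.
    n_k = resp.sum(axis=0)
    w = n_k / len(x)
    mu = (resp * x[:, None]).sum(axis=0) / n_k
    var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / n_k
```

After convergence, `mu` sits near the true component means of -2 and 3; each EM iteration is guaranteed not to decrease the observed-data likelihood.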


Papers
Journal ArticleDOI
TL;DR: In this article, the contribution of each source to all mixture channels in the time-frequency domain was modeled as a zero-mean Gaussian random variable whose covariance encodes the spatial characteristics of the source.
Abstract: This paper addresses the modeling of reverberant recording environments in the context of under-determined convolutive blind source separation. We model the contribution of each source to all mixture channels in the time-frequency domain as a zero-mean Gaussian random variable whose covariance encodes the spatial characteristics of the source. We then consider four specific covariance models, including a full-rank unconstrained model. We derive a family of iterative expectation-maximization (EM) algorithms to estimate the parameters of each model and propose suitable procedures adapted from the state-of-the-art to initialize the parameters and to align the order of the estimated sources across all frequency bins. Experimental results over reverberant synthetic mixtures and live recordings of speech data show the effectiveness of the proposed approach.

368 citations

Proceedings Article
16 Jun 2012
TL;DR: In this article, a method of moments approach is proposed for parameter estimation for a broad class of high-dimensional mixture models with many components, including multi-view mixtures of Gaussians and hidden Markov models.
Abstract: Mixture models are a fundamental tool in applied statistics and machine learning for treating data taken from multiple subpopulations. The current practice for estimating the parameters of such models relies on local search heuristics (e.g., the EM algorithm) which are prone to failure, and existing consistent methods are unfavorable due to their high computational and sample complexity which typically scale exponentially with the number of mixture components. This work develops an efficient method of moments approach to parameter estimation for a broad class of high-dimensional mixture models with many components, including multi-view mixtures of Gaussians (such as mixtures of axis-aligned Gaussians) and hidden Markov models. The new method leads to rigorous unsupervised learning results for mixture models that were not achieved by previous works; and, because of its simplicity, it offers a viable alternative to EM for practical deployment.

363 citations
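To illustrate the method-of-moments idea in its simplest form (this is our own toy example, far simpler than the paper's algorithm): for a symmetric mixture 0.5*N(-m, 1) + 0.5*N(+m, 1), the second moment satisfies E[x^2] = m^2 + 1, so m can be read off a single moment equation with no iterative search such as EM.

```python
import numpy as np

# Assumed toy model: equal-weight mixture of N(-m, 1) and N(+m, 1).
rng = np.random.default_rng(1)
m_true = 2.5
signs = rng.choice([-1.0, 1.0], size=20000)
x = rng.normal(signs * m_true, 1.0)

# Moment equation E[x^2] = m^2 + 1, solved in closed form; the max(..., 0)
# guards against a slightly negative estimate from sampling noise.
m_hat = np.sqrt(max(np.mean(x ** 2) - 1.0, 0.0))
```

Unlike EM, this estimator needs no initialization and cannot get stuck in a local optimum; the paper's contribution is making moment-based estimation work for much richer models (multi-view mixtures, HMMs) via higher-order moments.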

Journal ArticleDOI
TL;DR: Two algorithms for maximum likelihood (ML) and maximum a posteriori (MAP) estimation are described, which make use of the tractability of the complete data likelihood to maximize the observed data likelihood.
Abstract: This paper presents a new class of models for persons-by-items data. The essential new feature of this class is the representation of the persons: every person is represented by its membership to multiple latent classes, each of which belongs to one latent classification. The models can be considered as a formalization of the hypothesis that the responses come about in a process that involves the application of a number of mental operations. Two algorithms for maximum likelihood (ML) and maximum a posteriori (MAP) estimation are described. They both make use of the tractability of the complete data likelihood to maximize the observed data likelihood. Properties of the MAP estimators (i.e., uniqueness and goodness-of-recovery) and the existence of asymptotic standard errors were examined in a simulation study. Then, one of these models is applied to the responses to a set of fraction addition problems. Finally, the models are compared to some related models in the literature.

363 citations

Journal ArticleDOI
TL;DR: The simulation demonstrated that maximum likelihood estimation and multiple imputation methods produce the most efficient and least biased estimates of variances and covariances for normally distributed and slightly skewed data when data are missing completely at random (MCAR).
Abstract: Researchers often face a dilemma: Should they collect little data and emphasize quality, or much data at the expense of quality? The utility of the 3-form design coupled with maximum likelihood methods for estimation of missing values was evaluated. In 3-form design surveys, four sets of items, X, A, B, and C, are administered: Each third of the subjects receives X and one combination of two other item sets - AB, BC, or AC. Variances and covariances were estimated with pairwise deletion, mean replacement, single imputation, multiple imputation, raw data maximum likelihood, multiple-group covariance structure modeling, and Expectation-Maximization (EM) algorithm estimation. The simulation demonstrated that maximum likelihood estimation and multiple imputation methods produce the most efficient and least biased estimates of variances and covariances for normally distributed and slightly skewed data when data are missing completely at random (MCAR). Pairwise deletion provided equally unbiased estimates but was less efficient than ML procedures. Further simulation results demonstrated that non-maximum likelihood methods break down when data are not missing completely at random. Application of these methods with empirical drug use data resulted in similar covariance matrices for pairwise and EM estimation; however, ML estimation produced better and more efficient regression estimates. Maximum likelihood estimation or multiple imputation procedures, which are now becoming more readily available, are always recommended. In order to maximize the efficiency of the ML parameter estimates, it is recommended that scale items be split across forms rather than being left intact within forms.

363 citations
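The EM approach to missing data can be sketched on a toy bivariate case (our own example, not the article's simulation): when some y values are missing completely at random, the E-step replaces each missing y with its conditional expectation given x under the current parameters, and the M-step re-estimates the mean and covariance, adding the conditional variance for the imputed entries.

```python
import numpy as np

# Assumed toy data: bivariate Gaussian with y values missing at random (MCAR).
rng = np.random.default_rng(2)
n = 5000
x = rng.normal(0.0, 1.0, n)
y = 0.8 * x + rng.normal(0.0, 0.6, n)
miss = rng.random(n) < 0.3          # MCAR mask on y

mu = np.array([0.0, 0.0])
cov = np.eye(2)
y_fill = np.where(miss, 0.0, y)     # observed y kept, missing y to be imputed

for _ in range(50):
    # E-step: conditional mean and variance of missing y given x.
    beta = cov[0, 1] / cov[0, 0]
    y_fill[miss] = mu[1] + beta * (x[miss] - mu[0])
    cond_var = cov[1, 1] - beta * cov[0, 1]
    # M-step: update moments; imputed entries contribute their conditional
    # variance to var(y), which plain single imputation would omit.
    mu = np.array([x.mean(), y_fill.mean()])
    dx, dy = x - mu[0], y_fill - mu[1]
    cov = np.array([[np.mean(dx * dx), np.mean(dx * dy)],
                    [np.mean(dx * dy), np.mean(dy * dy) + miss.mean() * cond_var]])
```

The added `cond_var` term is what distinguishes EM from naive regression imputation: without it, var(y) and downstream regression estimates are biased downward, which is one reason the article finds ML/EM procedures more efficient than ad hoc alternatives.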

Journal ArticleDOI
TL;DR: An adaptation of a new expectation-maximization-based competitive mixture decomposition algorithm is introduced, and it is shown to efficiently and reliably perform mixture decomposition of t-distributions.

360 citations


Network Information
Related Topics (5)
- Estimator: 97.3K papers, 2.6M citations (91% related)
- Deep learning: 79.8K papers, 2.1M citations (84% related)
- Support vector machine: 73.6K papers, 1.7M citations (84% related)
- Cluster analysis: 146.5K papers, 2.9M citations (84% related)
- Artificial neural network: 207K papers, 4.5M citations (82% related)
Performance Metrics
No. of papers in the topic in previous years:

Year    Papers
2023    114
2022    245
2021    438
2020    410
2019    484
2018    519