
Expectation–maximization algorithm

About: Expectation–maximization algorithm is a research topic. Over the lifetime, 11,823 publications have been published within this topic, receiving 528,693 citations. The topic is also known as: EM algorithm & Expectation Maximization.


Papers
Journal ArticleDOI
TL;DR: The connections of the alternative model for mixture of experts (ME) to normalized radial basis function (NRBF) nets and extended normalized RBF (ENRBF) nets are established, and the well-known expectation-maximization (EM) algorithm for maximum likelihood learning is suggested for the two types of RBF nets.

111 citations

Journal ArticleDOI
TL;DR: In this study, slow features as temporally correlated LVs are derived using probabilistic slow feature analysis to represent nominal variations of processes, some of which are potentially correlated with quality variables and hence help improve the prediction performance of soft sensors.
Abstract: Latent variable (LV) models provide explicit representations of underlying driving forces of process variations and retain the dominant information of process data. In this study, slow features as temporally correlated LVs are derived using probabilistic slow feature analysis. Slow features evolving in a state-space form effectively represent nominal variations of processes, some of which are potentially correlated with quality variables and hence help improve the prediction performance of soft sensors. An efficient EM algorithm is proposed to estimate parameters of the probabilistic model, which turns out to be suitable for analyzing massive process data. Two criteria are also proposed to select quality-relevant slow features. The validity and advantages of the proposed method are demonstrated via two case studies.

111 citations

Journal ArticleDOI
TL;DR: In this article, the authors construct the likelihood function for the conditional maximum likelihood estimator in dynamic, unobserved effects models where not all conditioning variables are strictly exogenous, and propose a method for handling the initial conditions problem, which offers a flexible, relatively simple alternative to existing methods.

111 citations

Journal ArticleDOI
TL;DR: The authors propose a robust parameter-estimation method for the mixture model that assigns full weight to training samples but automatically gives reduced weight to unlabeled samples, and show that the robust method prevents performance deterioration due to statistical outliers in the data, as compared to the estimates obtained from the direct EM approach.
Abstract: In pattern recognition, when the ratio of the number of training samples to the dimensionality is small, parameter estimates become highly variable, causing the deterioration of classification performance. This problem has become more prevalent in remote sensing with the emergence of a new generation of sensors with as many as several hundred spectral bands. While the new sensor technology provides higher spectral and spatial resolution, enabling a greater number of spectrally separable classes to be identified, the labeled samples needed for designing the classifier remain difficult and expensive to acquire. Better parameter estimates can be obtained by exploiting a large number of unlabeled samples in addition to training samples, using the expectation maximization algorithm under the mixture model. However, the estimation method is sensitive to the presence of statistical outliers. In remote sensing data, miscellaneous classes with few samples are often difficult to identify and may constitute statistical outliers. Therefore, the authors propose to use a robust parameter-estimation method for the mixture model. The proposed method assigns full weight to training samples, but automatically gives reduced weight to unlabeled samples. Experimental results show that the robust method prevents performance deterioration due to statistical outliers in the data as compared to the estimates obtained from the direct EM approach.
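The weighting idea in the abstract above — labeled samples contribute with full weight, while unlabeled samples contribute through their EM responsibilities scaled by a robustness weight — can be sketched for a single class-mean update. This is a hypothetical illustration of the general scheme; the function name, argument layout, and the assumption that robustness weights are supplied externally are this example's choices, not the paper's actual estimator.

```python
def robust_mean(labeled, unlabeled, resp, robust_w):
    """One M-step mean update mixing labeled and unlabeled 1-D samples.

    labeled:   samples known to belong to the class (full weight 1).
    unlabeled: samples of unknown class membership.
    resp:      EM responsibilities of this class for each unlabeled sample.
    robust_w:  per-sample robustness weights in [0, 1] that shrink the
               influence of likely outliers (hypothetical scheme, supplied
               by the caller; the paper's weighting rule is not reproduced).
    """
    num = sum(labeled) + sum(
        w * r * x for w, r, x in zip(robust_w, resp, unlabeled)
    )
    den = len(labeled) + sum(w * r for w, r in zip(robust_w, resp))
    return num / den
```

With a robustness weight of 0, an outlying unlabeled point is ignored entirely and the estimate falls back to the labeled-sample mean; with weight 1, the update reduces to the ordinary semi-supervised EM mean.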

111 citations

Journal ArticleDOI
TL;DR: In this article, a modified score function is used as an estimating equation for the shape parameter and it is proved that the resulting modified maximum likelihood estimator is always finite with positive probability.

111 citations


Network Information
Related Topics (5)
- Estimator: 97.3K papers, 2.6M citations (91% related)
- Deep learning: 79.8K papers, 2.1M citations (84% related)
- Support vector machine: 73.6K papers, 1.7M citations (84% related)
- Cluster analysis: 146.5K papers, 2.9M citations (84% related)
- Artificial neural network: 207K papers, 4.5M citations (82% related)
Performance
Metrics
No. of papers in the topic in previous years
Year  Papers
2023  114
2022  245
2021  438
2020  410
2019  484
2018  519