
Expectation–maximization algorithm

About: Expectation–maximization algorithm is a research topic. Over the lifetime, 11,823 publications have been published within this topic, receiving 528,693 citations. The topic is also known as: EM algorithm & Expectation Maximization.


Papers
Journal Article
TL;DR: A novel statistical approach for automatic vehicle detection based on local features located within three significant subregions of the image; computational costs are reduced by eliminating the requirement for an ICA residual image reconstruction process and by computing the likelihood probability with a weighted Gaussian mixture model.
Abstract: This paper develops a novel statistical approach for automatic vehicle detection based on local features that are located within three significant subregions of the image. In the detection process, each subregion is projected onto its associated eigenspace and independent basis space to generate a principal components analysis (PCA) weight vector and an independent component analysis (ICA) coefficient vector, respectively. A likelihood evaluation process is then performed based on the estimated joint probability of the projection weight vectors and the coefficient vectors of the subregions with position information. The use of subregion position information minimizes the risk of false acceptances, whereas the use of PCA to model the low-frequency components of the eigenspace and ICA to model the high-frequency components of the residual space improves the tolerance of the detection process toward variations in the illumination conditions and vehicle pose. The use of local features not only renders the system more robust toward partial occlusions but also reduces the computational overhead. The computational costs are further reduced by eliminating the requirement for an ICA residual image reconstruction process and by computing the likelihood probability using a weighted Gaussian mixture model, whose parameters and weights are iteratively estimated using an expectation-maximization algorithm.

136 citations
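
The paper's likelihood is computed with a weighted Gaussian mixture whose parameters are iteratively estimated by EM. A minimal sketch of that inner loop is shown below on synthetic one-dimensional data with two components; the data, initialization, and component count here are illustrative, not the paper's actual setup.

```python
import numpy as np

# Synthetic 1-D data drawn from two Gaussians (illustrative only).
rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(-2, 0.5, 300), rng.normal(3, 1.0, 200)])

w = np.array([0.5, 0.5])    # mixture weights
mu = np.array([-1.0, 1.0])  # component means
var = np.array([1.0, 1.0])  # component variances

for _ in range(100):
    # E-step: responsibility of each component for each point.
    pdf = np.exp(-(data[:, None] - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
    r = w * pdf
    r /= r.sum(axis=1, keepdims=True)
    # M-step: re-estimate weights, means, and variances from the
    # responsibility-weighted sufficient statistics.
    nk = r.sum(axis=0)
    w = nk / len(data)
    mu = (r * data[:, None]).sum(axis=0) / nk
    var = (r * (data[:, None] - mu) ** 2).sum(axis=0) / nk
```

After convergence the estimated means land near the true component centers (-2 and 3), with weights roughly proportional to the sample sizes.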

Journal Article
Camil Fuchs
TL;DR: This article addresses the problem of obtaining maximum likelihood estimates (MLE) for the parameters of log-linear models under incomplete data, presents the appropriate systems of equations, and suggests the expectation-maximization (EM) algorithm (Dempster, Laird, and Rubin 1977) as one possible method for solving them.
Abstract: In many studies the values of one or more variables are missing for subsets of the original sample. This article focuses on the problem of obtaining maximum likelihood estimates (MLE) for the parameters of log-linear models under this type of incomplete data. The appropriate systems of equations are presented and the expectation-maximization (EM) algorithm (Dempster, Laird, and Rubin 1977) is suggested as one of the possible methods for solving them. The algorithm has certain advantages but other alternatives may be computationally more effective. Tests of fit for log-linear models in the presence of incomplete data are considered. The data from the Protective Services Project for Older Persons (Blenkner, Bloom, and Nielsen 1971; Blenkner, Bloom, and Weber 1974) are used to illustrate the procedures discussed in the article.

136 citations
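
Fuchs's setting involves log-linear models, but the EM mechanics for incomplete categorical data can be illustrated with the classic genetic-linkage example from Dempster, Laird, and Rubin (1977), which is simpler than (and not identical to) the models in this paper: observed multinomial counts (125, 18, 20, 34) have cell probabilities (1/2 + t/4, (1-t)/4, (1-t)/4, t/4), and the first cell is treated as the sum of two latent cells with probabilities 1/2 and t/4.

```python
def em_linkage(theta=0.5, iters=50):
    """EM for the DLR (1977) genetic-linkage multinomial example."""
    y1, y2, y3, y4 = 125, 18, 20, 34
    for _ in range(iters):
        # E-step: expected count of the latent sub-cell with probability t/4.
        x = y1 * (theta / 4) / (0.5 + theta / 4)
        # M-step: closed-form MLE of t given the completed counts.
        theta = (x + y4) / (x + y2 + y3 + y4)
    return theta
```

Starting from theta = 0.5, the iterates converge to the well-known MLE of about 0.6268, illustrating how each E-step fills in the missing cell split and each M-step maximizes the completed-data likelihood in closed form.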

Journal Article
TL;DR: Latentnet is a package to fit and evaluate statistical latent position and cluster models for networks and provides a Bayesian way of assessing how many groups there are, and thus whether or not there is any clustering.
Abstract: latentnet is a package to fit and evaluate statistical latent position and cluster models for networks. Hoff, Raftery, and Handcock (2002) suggested an approach to modeling networks based on positing the existence of a latent space of characteristics of the actors. Relationships form as a function of distances between these characteristics as well as functions of observed dyadic level covariates. In latentnet social distances are represented in a Euclidean space. It also includes a variant of the extension of the latent position model to allow for clustering of the positions developed in Handcock, Raftery, and Tantrum (2007). The package implements Bayesian inference for the models based on a Markov chain Monte Carlo algorithm. It can also compute maximum likelihood estimates for the latent position model and a two-stage maximum likelihood method for the latent position cluster model. For latent position cluster models, the package provides a Bayesian way of assessing how many groups there are, and thus whether or not there is any clustering (since if the preferred number of groups is 1, there is little evidence for clustering). It also estimates which cluster each actor belongs to. These estimates are probabilistic, and provide the probability of each actor belonging to each cluster. It computes four types of point estimates for the coefficients and positions: maximum likelihood estimate, posterior mean, posterior mode and the estimator which minimizes Kullback-Leibler divergence from the posterior. You can assess the goodness-of-fit of the model via posterior predictive checks. It has a function to simulate networks from a latent position or latent position cluster model.

135 citations

Journal Article
TL;DR: The method of weights is an implementation of the EM algorithm for general maximum-likelihood analysis of regression models, including generalized linear models (GLMs) with incomplete covariates.
Abstract: Missing data is a common occurrence in most medical research data collection enterprises. There is an extensive literature concerning missing data, much of which has focused on missing outcomes. Covariates in regression models are often missing, particularly if information is being collected from multiple sources. The method of weights is an implementation of the EM algorithm for general maximum-likelihood analysis of regression models, including generalized linear models (GLMs) with incomplete covariates. In this paper, we will describe the method of weights in detail, illustrate its application with several examples, discuss its advantages and limitations, and review extensions and applications of the method.

135 citations

Journal Article
TL;DR: In this article, a particle filter approach for approximating the first-order moment of a joint, or probability hypothesis density (PHD), is demonstrated as a feasible suboptimal method for tracking a time-varying number of targets in real time.
Abstract: Particle filter approaches for approximating the first-order moment of a joint, or probability hypothesis density (PHD), have demonstrated a feasible suboptimal method for tracking a time-varying number of targets in real-time. We consider two techniques for estimating the target states at each iteration, namely k-means clustering and mixture modelling via the expectation-maximization (EM) algorithm. We present novel techniques for associating the targets between frames to enable track continuity.

135 citations
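
Of the two state-extraction techniques the paper compares, k-means is the simpler to sketch. The toy example below runs Lloyd's algorithm on a simulated two-target particle cloud; the particle positions, target count, and initialization are illustrative assumptions, and the EM alternative would replace the hard nearest-center assignment with Gaussian-mixture responsibilities.

```python
import numpy as np

# Simulated PHD-style particle cloud: two well-separated "targets"
# with 200 particles each (illustrative data, not from the paper).
rng = np.random.default_rng(1)
particles = np.vstack([rng.normal([0, 0], 0.3, (200, 2)),
                       rng.normal([5, 5], 0.3, (200, 2))])

# Deterministic initialization: one particle from each end of the array.
centers = particles[[0, -1]].copy()
for _ in range(20):
    # Assignment step: each particle joins its nearest center.
    d = np.linalg.norm(particles[:, None] - centers, axis=2)
    labels = d.argmin(axis=1)
    # Update step: each center moves to the mean of its particles.
    centers = np.array([particles[labels == k].mean(axis=0) for k in range(2)])
```

The final centers serve as the per-iteration target-state estimates; they land near the true target locations (0, 0) and (5, 5).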


Network Information
Related Topics (5)
Estimator
97.3K papers, 2.6M citations
91% related
Deep learning
79.8K papers, 2.1M citations
84% related
Support vector machine
73.6K papers, 1.7M citations
84% related
Cluster analysis
146.5K papers, 2.9M citations
84% related
Artificial neural network
207K papers, 4.5M citations
82% related
Performance
Metrics
No. of papers in the topic in previous years

Year    Papers
2023    114
2022    245
2021    438
2020    410
2019    484
2018    519