Topic

Expectation–maximization algorithm

About: Expectation–maximization algorithm is a research topic. Over the lifetime, 11823 publications have been published within this topic receiving 528693 citations. The topic is also known as: EM algorithm & Expectation Maximization.
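For readers coming to the topic cold, the following is a minimal sketch of the technique itself: EM for a two-component one-dimensional Gaussian mixture. It is not drawn from any paper listed below; the data, initialization, and fixed iteration count are illustrative assumptions.

```python
import numpy as np

# Minimal EM sketch for a two-component 1-D Gaussian mixture.
# Data, initialization, and the fixed iteration count are illustrative assumptions.
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2.0, 1.0, 200), rng.normal(3.0, 0.5, 300)])

pi = 0.5                              # mixing weight of component 1
mu = np.array([-1.0, 1.0])            # component means
sigma = np.array([1.0, 1.0])          # component standard deviations

for _ in range(100):
    # E-step: posterior responsibility of component 1 for each point
    # (the common 1/sqrt(2*pi) factor cancels in the ratio)
    p0 = (1 - pi) * np.exp(-0.5 * ((x - mu[0]) / sigma[0]) ** 2) / sigma[0]
    p1 = pi * np.exp(-0.5 * ((x - mu[1]) / sigma[1]) ** 2) / sigma[1]
    r = p1 / (p0 + p1)

    # M-step: re-estimate mixing weight, means, and standard deviations
    pi = r.mean()
    mu = np.array([np.average(x, weights=1 - r), np.average(x, weights=r)])
    sigma = np.sqrt(np.array([np.average((x - mu[0]) ** 2, weights=1 - r),
                              np.average((x - mu[1]) ** 2, weights=r)]))

print(pi, mu, sigma)   # expect roughly 0.6, (-2, 3), (1, 0.5)
```

Each iteration alternates an E-step (posterior responsibilities under the current parameters) with an M-step (responsibility-weighted re-estimation of the parameters), and is guaranteed not to decrease the observed-data log-likelihood.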


Papers
Journal ArticleDOI
TL;DR: Simulation results show that the proposed learning algorithms avoid the instability problem and enable the mixture-of-experts (ME) architecture to achieve good performance in multiclass classification, and that the approximation algorithm leads to fast learning.

134 citations

Journal ArticleDOI
TL;DR: This note compares two choices of basis for models parameterized by probabilities, showing that it is possible to improve on the traditional choice, the probability simplex, by transforming to the 'softmax' basis.
Abstract: Maximum a posteriori optimization of parameters and the Laplace approximation for the marginal likelihood are both basis-dependent methods. This note compares two choices of basis for models parameterized by probabilities, showing that it is possible to improve on the traditional choice, the probability simplex, by transforming to the ‘softmax’ basis.

134 citations
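The basis dependence the note above discusses can be seen in a toy two-category version (the logit basis is the two-category case of the 'softmax' basis). The counts and Beta prior below are hypothetical, and this is only a sketch of the phenomenon, not the note's own derivation: because a probability density transforms with a Jacobian, the MAP estimate computed over the logit differs from the MAP estimate computed over the probability itself.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Hypothetical two-category example (binomial likelihood, Beta prior) showing
# that the MAP estimate depends on the chosen basis.
n1, n0 = 7.0, 3.0            # hypothetical counts for the two categories
a1, a0 = 2.0, 2.0            # hypothetical Beta prior parameters

# MAP over the probability itself: mode of the Beta posterior density over p.
map_p_basis = (n1 + a1 - 1.0) / (n1 + n0 + a1 + a0 - 2.0)

# MAP over the logit a, where p = sigmoid(a).  The density over a picks up the
# Jacobian dp/da = p(1 - p), so the objective gains an extra log p + log(1 - p).
def neg_log_post_logit(a):
    p = 1.0 / (1.0 + np.exp(-a))
    return -((n1 + a1 - 1.0) * np.log(p) + (n0 + a0 - 1.0) * np.log(1.0 - p)
             + np.log(p) + np.log(1.0 - p))

a_map = minimize_scalar(neg_log_post_logit).x
map_logit_basis = 1.0 / (1.0 + np.exp(-a_map))

print(map_p_basis)           # 8/12, about 0.667
print(map_logit_basis)       # (n1 + a1)/(n1 + n0 + a1 + a0) = 9/14, about 0.643
```

With these hypothetical numbers the two modes differ (8/12 versus 9/14); the Laplace approximation to the marginal likelihood is basis-dependent for the same reason, as the abstract notes.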

Journal ArticleDOI
TL;DR: A new approach to modeling and processing multimedia data based on graphical models that combine audio and video variables is presented, and a new algorithm for tracking a moving object in a cluttered, noisy scene using two microphones and a camera is developed.
Abstract: We present a new approach to modeling and processing multimedia data. This approach is based on graphical models that combine audio and video variables. We demonstrate it by developing a new algorithm for tracking a moving object in a cluttered, noisy scene using two microphones and a camera. Our model uses unobserved variables to describe the data in terms of the process that generates them. It is therefore able to capture and exploit the statistical structure of the audio and video data separately, as well as their mutual dependencies. Model parameters are learned from data via an EM algorithm, and automatic calibration is performed as part of this procedure. Tracking is done by Bayesian inference of the object location from data. We demonstrate successful performance on multimedia clips captured in real world scenarios using off-the-shelf equipment.

134 citations
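The last step described in the abstract above, inferring the object location from both modalities, can be illustrated with a deliberately tiny sketch: a single latent position observed through a noisy audio cue and a noisy video cue, each modeled as a Gaussian likelihood. All numbers and the Gaussian forms are illustrative assumptions, and the EM parameter learning and automatic calibration described in the paper are not shown.

```python
import numpy as np

# Toy fusion sketch (illustrative numbers, not the paper's model): posterior over
# a 1-D object position given a vague Gaussian prior, a noisy audio-derived cue,
# and a less noisy video-derived cue.
prior_mean, prior_var = 0.0, 100.0      # vague prior over horizontal position
audio_obs, audio_var = 12.0, 9.0        # e.g. position implied by an inter-microphone delay
video_obs, video_var = 10.0, 1.0        # e.g. position of a tracked image feature

# Product of Gaussians: precision-weighted combination of the prior and both cues
precisions = np.array([1 / prior_var, 1 / audio_var, 1 / video_var])
means = np.array([prior_mean, audio_obs, video_obs])
post_var = 1.0 / precisions.sum()
post_mean = post_var * (precisions * means).sum()

print(post_mean, post_var)              # the less noisy video cue dominates the estimate
```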

Journal ArticleDOI
TL;DR: In this paper, the concavity of the likelihood function is proved by means of an inequality involving the trigamma function, and the computation of maximum likelihood estimates is discussed.
Abstract: Global concavity of the likelihood function is proved by means of an inequality involving the trigamma function. The computation of maximum likelihood estimates is discussed.

134 citations

Journal ArticleDOI
TL;DR: In this paper, a class of Bayesian multiscale models (BMSM's) for one-dimensional inhomogeneous Poisson processes is introduced, where the focus is on estimating the (discretized) intensity function underlying the process.
Abstract: I introduce a class of Bayesian multiscale models (BMSM's) for one-dimensional inhomogeneous Poisson processes. The focus is on estimating the (discretized) intensity function underlying the process. Unlike the usual transform-based approach at the heart of most wavelet-based methods for Gaussian data, these BMSM's are constructed using recursive dyadic partitions (RDP's) within an entirely likelihood-based framework. Each RDP may be associated with a binary tree, and a new multiscale prior distribution is introduced for the unknown intensity through the placement of mixture distributions at each of the nodes of the tree. The concept of model mixing is then applied to a complete collection of such trees. In addition to allowing for the inclusion of full location/scale information in the model, this last step also is fundamental both in inducing stationarity in the prior distribution and in enabling a given intensity function to be approximated at the resolution of the data. Under squared-error loss...

133 citations
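A stripped-down sketch of the recursive-dyadic-partition idea in the abstract above: each node of the binary tree splits its total count between its two children using a proportion shrunk toward 1/2 by a symmetric Beta prior. This is only the skeleton; the mixture priors at the nodes and the averaging over a collection of trees described in the paper are not shown, and the simulated data and prior weight are illustrative assumptions.

```python
import numpy as np

def rdp_intensity(counts, total, prior=1.0):
    """Cascade the grand total down a recursive dyadic partition of the bins.

    At each node the split proportion is the posterior mean of a
    Beta(prior, prior) distribution given the two children's counts
    (a simplification of the paper's node-level mixture priors).
    """
    counts = np.asarray(counts, dtype=float)
    if counts.size == 1:
        return np.array([total])                       # leaf: intensity estimate for this bin
    left, right = counts[:counts.size // 2], counts[counts.size // 2:]
    p_left = (left.sum() + prior) / (counts.sum() + 2 * prior)
    return np.concatenate([rdp_intensity(left, total * p_left, prior),
                           rdp_intensity(right, total * (1 - p_left), prior)])

# Illustrative data: Poisson counts from a piecewise-constant intensity on 16 bins
rng = np.random.default_rng(1)
truth = np.repeat([2.0, 12.0], 8)
counts = rng.poisson(truth)
print(np.round(rdp_intensity(counts, counts.sum(), prior=2.0), 2))
```

Because each bin's estimate is the grand total passed through a cascade of shrunk split proportions, the fitted intensities always sum to the observed total count, and a larger prior weight smooths the estimate toward a constant intensity.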


Network Information
Related Topics (5)
Estimator: 97.3K papers, 2.6M citations, 91% related
Deep learning: 79.8K papers, 2.1M citations, 84% related
Support vector machine: 73.6K papers, 1.7M citations, 84% related
Cluster analysis: 146.5K papers, 2.9M citations, 84% related
Artificial neural network: 207K papers, 4.5M citations, 82% related
Performance
Metrics
No. of papers in the topic in previous years

Year    Papers
2023    114
2022    245
2021    438
2020    410
2019    484
2018    519