Topic
Expectation–maximization algorithm
About: Expectation–maximization algorithm is a research topic. Over the lifetime, 11,823 publications have been published within this topic, receiving 528,693 citations. The topic is also known as: EM algorithm & Expectation Maximization.
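To make the topic concrete: the EM algorithm alternates between an E-step, which computes the posterior responsibilities of latent variables given current parameters, and an M-step, which re-estimates the parameters from those responsibilities. Below is a minimal, self-contained sketch for the classic textbook case of a two-component univariate Gaussian mixture on synthetic data (the data, initial values, and component count are illustrative assumptions, not taken from any of the papers listed here).

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic data: two Gaussian clusters (illustrative only)
x = np.concatenate([rng.normal(-2.0, 1.0, 300), rng.normal(3.0, 1.0, 700)])

# Initial guesses for mixture weights, means, and standard deviations
w = np.array([0.5, 0.5])
mu = np.array([-1.0, 1.0])
sigma = np.array([1.0, 1.0])

def gaussian_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

for _ in range(100):
    # E-step: posterior responsibility of each component for each point
    dens = w * gaussian_pdf(x[:, None], mu, sigma)      # shape (n, 2)
    resp = dens / dens.sum(axis=1, keepdims=True)
    # M-step: closed-form re-estimation of weights, means, and spreads
    nk = resp.sum(axis=0)
    w = nk / len(x)
    mu = (resp * x[:, None]).sum(axis=0) / nk
    sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
```

Each iteration is guaranteed not to decrease the observed-data log-likelihood, which is why EM is so widely used for mixture models like several of the papers below.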
Papers published on a yearly basis
Papers
TL;DR: The goal of this paper is to segment atherosclerotic plaques in order to evaluate their burden and to provide boundaries for computing properties such as the plaque deformation and elasticity distribution (elastogram and modulogram).
Abstract: The goal of this paper is to perform a segmentation of atherosclerotic plaques in view of evaluating their burden and to provide boundaries for computing properties such as the plaque deformation and elasticity distribution (elastogram and modulogram). The echogenicity of a region of interest comprising the plaque, the vessel lumen, and the adventitia of the artery wall in an ultrasonic B-mode image was modeled by mixtures of three Nakagami distributions, which yielded the likelihood of a Bayesian segmentation model. The main contribution of this paper is the estimation of the motion field and its integration into the prior of the Bayesian model that included a local geometrical smoothness constraint, as well as an original spatiotemporal cohesion constraint. The Maximum A Posteriori of the proposed model was computed with a variant of the exploration/selection algorithm. The starting point is a manual segmentation of the first frame. The proposed method was quantitatively compared with manual segmentations of all frames by an expert technician. Various measures were used for this evaluation, including the mean point-to-point distance and the Hausdorff distance. Results were evaluated on 94 sequences of 33 patients (for a total of 8988 images). We report a mean point-to-point distance of 0.24 ± 0.08 mm and a Hausdorff distance of 1.24 ± 0.40 mm. Our tests showed that the algorithm was not sensitive to the degree of stenosis or calcification.
99 citations
TL;DR: In this article, an R package called bivpois is presented for maximum likelihood estimation of the parameters of bivariate and diagonal inflated bivariate Poisson regression models, and an Expectation-Maximization (EM) algorithm is implemented.
Abstract: In this paper we present an R package called bivpois for maximum likelihood estimation of the parameters of bivariate and diagonal inflated bivariate Poisson regression models. An Expectation-Maximization (EM) algorithm is implemented. Inflated models allow for modelling both over-dispersion (or under-dispersion) and negative correlation, and thus they are appropriate for a wide range of applications. Extensions of the algorithms to several other models are also discussed. Detailed guidance and implementation on simulated and real data sets using the bivpois package are provided.
99 citations
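The EM scheme behind models like the one in bivpois rests on the trivariate reduction construction: X1 = Y1 + Y3 and X2 = Y2 + Y3 with independent Poisson Y's, where the shared term Y3 induces positive correlation and is treated as latent. The sketch below is not the bivpois implementation; it is a minimal Python illustration of that EM scheme on simulated data, with true parameters and sample size chosen purely for the example.

```python
import numpy as np
from math import factorial, exp

rng = np.random.default_rng(2)
# Simulate bivariate Poisson data via trivariate reduction: X1=Y1+Y3, X2=Y2+Y3
l1, l2, l3 = 1.5, 2.0, 0.8          # true rates (illustrative assumptions)
n = 2000
y1, y2, y3 = rng.poisson(l1, n), rng.poisson(l2, n), rng.poisson(l3, n)
x1, x2 = y1 + y3, y2 + y3

def bp_pmf(a, b, lam1, lam2, lam3):
    """Bivariate Poisson pmf f(a, b); zero outside the support."""
    if a < 0 or b < 0:
        return 0.0
    s = sum(lam1 ** (a - k) * lam2 ** (b - k) * lam3 ** k /
            (factorial(a - k) * factorial(b - k) * factorial(k))
            for k in range(min(a, b) + 1))
    return exp(-(lam1 + lam2 + lam3)) * s

lam1, lam2, lam3 = 1.0, 1.0, 0.5    # initial guesses
for _ in range(50):
    # E-step: E[Y3 | x1, x2] = lam3 * f(x1-1, x2-1) / f(x1, x2)
    s = np.array([lam3 * bp_pmf(a - 1, b - 1, lam1, lam2, lam3)
                  / bp_pmf(a, b, lam1, lam2, lam3)
                  for a, b in zip(x1, x2)])
    # M-step: closed-form updates from the completed data
    lam3 = s.mean()
    lam1 = x1.mean() - lam3
    lam2 = x2.mean() - lam3
```

The latent Y3 is exactly what makes EM attractive here: given it, the complete-data likelihood factorizes into three independent Poisson likelihoods with trivial maximum likelihood updates.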
TL;DR: There are strong relationships between approaches to optimization and learning based on statistical physics or mixtures of experts, and the EM algorithm can be interpreted as converging either to a local maximum of the mixtures model or to a saddle point solution to the statistical physics system.
Abstract: We show that there are strong relationships between approaches to optimization and learning based on statistical physics or mixtures of experts. In particular, the EM algorithm can be interpreted as converging either to a local maximum of the mixtures model or to a saddle point solution to the statistical physics system. An advantage of the statistical physics approach is that it naturally gives rise to a heuristic continuation method, deterministic annealing, for finding good solutions.
99 citations
01 Dec 2010
TL;DR: In this paper, the authors apply the expectation maximization algorithm to iterate between inference in the latent state-space and learning the parameters of the underlying GP dynamics model, and propose a new general methodology for inference and learning in nonlinear state-space models that are described probabilistically by non-parametric GP models.
Abstract: State-space inference and learning with Gaussian processes (GPs) is an unsolved problem. We propose a new, general methodology for inference and learning in nonlinear state-space models that are described probabilistically by non-parametric GP models. We apply the expectation maximization algorithm to iterate between inference in the latent state-space and learning the parameters of the underlying GP dynamics model. Copyright 2010 by the authors.
99 citations
TL;DR: A blind channel estimator based on the expectation maximization algorithm is proposed to acquire the modulus values of the channel parameters; the ranges of the estimator's initial values are obtained, and the modified Bayesian Cramér–Rao bound is derived.
Abstract: The availability of perfect channel state information is assumed in current ambient-backscatter studies. However, the channel estimation problem for ambient backscatter is radically different from that for traditional wireless systems, where it is common to transmit training (pilot) symbols for this purpose. In this letter, we thus propose a blind channel estimator based on the expectation maximization algorithm to acquire the modulus values of channel parameters. We also obtain the ranges of the initial values of the suggested estimator and derive the modified Bayesian Cramér–Rao bound of the proposed estimator. Finally, simulation results are provided to corroborate our theoretical studies.
99 citations