scispace - formally typeset
Topic

Expectation–maximization algorithm

About: Expectation–maximization algorithm is a research topic. Over its lifetime, 11,823 publications have appeared within this topic, receiving 528,693 citations. The topic is also known as: EM algorithm & Expectation Maximization.
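As context for the papers below, the EM algorithm is most often introduced through mixture models: the E-step computes posterior "responsibilities" of each latent component, and the M-step re-estimates parameters by weighted maximum likelihood. A minimal sketch for a two-component 1-D Gaussian mixture (the percentile-based initialization is an illustrative heuristic, not part of any paper listed here):

```python
import numpy as np

def em_gmm_1d(x, n_iter=100):
    """Fit a two-component 1-D Gaussian mixture by EM (illustrative sketch)."""
    # Heuristic initialization: place the means at the 25th/75th percentiles.
    mu = np.array([np.percentile(x, 25), np.percentile(x, 75)])
    sigma = np.array([x.std(), x.std()])
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each point.
        dens = pi * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) \
               / (sigma * np.sqrt(2 * np.pi))
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: responsibility-weighted maximum-likelihood updates.
        nk = r.sum(axis=0)
        pi = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
    return pi, mu, sigma
```

Each iteration is guaranteed not to decrease the observed-data log-likelihood, which is the defining property of EM.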


Papers
Journal ArticleDOI
TL;DR: In this article, the authors studied the asymptotic properties of maximum likelihood estimators of parameters when observations are taken from a two-dimensional Gaussian random field with a multiplicative Ornstein-Uhlenbeck covariance function.
Abstract: We study in detail asymptotic properties of maximum likelihood estimators of parameters when observations are taken from a two-dimensional Gaussian random field with a multiplicative Ornstein-Uhlenbeck covariance function. Under the complete lattice sampling plan, it is shown that the maximum likelihood estimators are strongly consistent and asymptotically normal. The asymptotic normality here is normalized by the fourth root of the sample size and is obtained through higher order expansions of the likelihood score equations. Extensions of these results to higher-dimensional processes are also obtained, showing that the convergence rate becomes better as the dimension gets higher.
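A multiplicative (separable) Ornstein–Uhlenbeck covariance, as studied in this paper, factorizes over coordinates, so on a lattice the full covariance matrix is a Kronecker product of two 1-D OU covariance matrices. A sketch under assumed parameter names (sigma2, alpha, beta are illustrative, not taken from the paper):

```python
import numpy as np

def ou_cov_2d(grid1, grid2, sigma2=1.0, alpha=1.0, beta=1.0):
    """Covariance matrix of a Gaussian field with a multiplicative
    (separable) Ornstein-Uhlenbeck covariance on a 2-D lattice:
    C((s1,s2),(t1,t2)) = sigma2 * exp(-alpha|s1-t1|) * exp(-beta|s2-t2|).
    Parameter names are assumptions for illustration."""
    c1 = np.exp(-alpha * np.abs(grid1[:, None] - grid1[None, :]))
    c2 = np.exp(-beta * np.abs(grid2[:, None] - grid2[None, :]))
    # Separability means the full covariance is a Kronecker product.
    return sigma2 * np.kron(c1, c2)
```

The Kronecker structure is what makes exact likelihood evaluation tractable on complete lattice designs, since determinants and inverses also factorize.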

98 citations

Journal ArticleDOI
TL;DR: This paper addresses the problem of in-network distributed estimation for sparse vectors and develops several distributed sparse recursive least-squares (RLS) algorithms based on the maximum likelihood framework; the expectation-maximization algorithm, aided by thresholding operators, is used to solve the sparse estimation problem numerically.
Abstract: Distributed estimation over networks has received much attention in recent years due to its broad applicability. Many signals in nature present high level of sparsity, which contain only a few large coefficients among many negligible ones. In this paper, we address the problem of in-network distributed estimation for sparse vectors, and develop several distributed sparse recursive least-squares (RLS) algorithms. The proposed algorithms are based on the maximum likelihood framework, and the expectation-maximization algorithm, with the aid of thresholding operators, is used to numerically solve the sparse estimation problem. To improve the estimation performance, the thresholding operators related to l0- and l1-norms with real-time self-adjustable thresholds are derived. With these thresholding operators, we can exploit the underlying sparsity to implement the distributed estimation with low computational complexity and information exchange amount among neighbors. The sparsity-promoting intensity is also adaptively adjusted so that a good performance of the sparse solution can be achieved. Both theoretical analysis and numerical simulations are presented to show the effectiveness of the proposed algorithms.
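The l0- and l1-norm-related thresholding operators mentioned in this abstract are, in their standard forms, hard and soft thresholding. A minimal sketch of both (the adaptive threshold selection developed in the paper is not reproduced here):

```python
import numpy as np

def soft_threshold(v, t):
    """l1-norm-related (soft) thresholding: shrink each entry toward zero by t."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def hard_threshold(v, t):
    """l0-norm-related (hard) thresholding: zero out entries with magnitude <= t."""
    return np.where(np.abs(v) > t, v, 0.0)
```

Soft thresholding biases the surviving coefficients (they are shrunk by t), while hard thresholding keeps them intact but is discontinuous; this trade-off is why sparse estimation schemes often consider both.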

98 citations

Journal ArticleDOI
TL;DR: This work formulates the effects of potentially time-dependent covariates on the interval-censored failure time through a broad class of semiparametric transformation models that encompasses proportional hazards and proportional odds models, and devises an EM-type algorithm that converges stably, even in the presence of time-dependent covariates.
Abstract: Interval censoring arises frequently in clinical, epidemiological, financial and sociological studies, where the event or failure of interest is known only to occur within an interval induced by periodic monitoring. We formulate the effects of potentially time-dependent covariates on the interval-censored failure time through a broad class of semiparametric transformation models that encompasses proportional hazards and proportional odds models. We consider nonparametric maximum likelihood estimation for this class of models with an arbitrary number of monitoring times for each subject. We devise an EM-type algorithm that converges stably, even in the presence of time-dependent covariates, and show that the estimators for the regression parameters are consistent, asymptotically normal, and asymptotically efficient with an easily estimated covariance matrix. Finally, we demonstrate the performance of our procedures through simulation studies and application to an HIV/AIDS study conducted in Thailand.
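Classes of semiparametric transformation models that bridge proportional hazards and proportional odds are commonly indexed by a logarithmic transformation family; a standard parameterization (an assumption here, not stated in the abstract) is G_r(x) = log(1 + r*x)/r for r > 0, with the limit r -> 0 giving G_0(x) = x (proportional hazards) and r = 1 giving log(1 + x) (proportional odds):

```python
import math

def g_transform(x, r):
    """Logarithmic transformation family G_r(x) = log(1 + r*x)/r for r > 0,
    with the r -> 0 limit G_0(x) = x.  r = 0 recovers proportional hazards,
    r = 1 proportional odds.  A standard parameterization, assumed here."""
    if r == 0.0:
        return x
    return math.log1p(r * x) / r
```

Varying the single index r thus sweeps out the whole class, which is why one EM-type algorithm can cover both named models.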

98 citations

Journal ArticleDOI
TL;DR: In this article, the authors give an example of a family of distributions on [−1, 1] with a continuous one-dimensional parameterization that joins the triangular distribution (when Θ = 0) to the uniform distribution (when Θ = 1), for which the maximum likelihood estimates exist and converge strongly to Θ = 1 as the sample size tends to infinity, whatever the true value of the parameter.
Abstract: An example is given of a family of distributions on [−1, 1] with a continuous one-dimensional parameterization that joins the triangular distribution (when Θ = 0) to the uniform (when Θ = 1), for which the maximum likelihood estimates exist and converge strongly to Θ = 1 as the sample size tends to infinity, whatever be the true value of the parameter. A modification that satisfies Cramer's conditions is also given.

97 citations

Journal ArticleDOI
TL;DR: In this paper, an iterative algorithm for high-dimensional linear inverse problems, which is regularized by a differentiable discrete approximation of the total variation (TV) penalty, is presented.
Abstract: This paper describes an iterative algorithm for high-dimensional linear inverse problems, which is regularized by a differentiable discrete approximation of the total variation (TV) penalty. The algorithm is an interlaced iterative method based on optimization transfer with a separable quadratic surrogate for the TV penalty. The surrogate cost function is optimized using the block iterative regularized algebraic reconstruction technique (RSART). A proof of convergence is given and convergence is illustrated by numerical experiments with simulated parallel-beam computerized tomography (CT) data. The proposed method provides a block-iterative and convergent, hence efficient and reliable, algorithm to investigate the effects of TV regularization in applications such as CT.
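Optimization transfer with a separable quadratic surrogate for a differentiable TV approximation can be illustrated in one dimension: replace each |difference| by sqrt(diff^2 + delta^2), majorize it by a quadratic with curvature 1/sqrt(diff_old^2 + delta^2), and solve the resulting least-squares problem at each iteration. This is a toy majorize-minimize denoising sketch, not the paper's RSART-based tomography method:

```python
import numpy as np

def tv_denoise_mm(y, lam=1.0, delta=1e-3, n_iter=30):
    """1-D denoising with a differentiable TV approximation, minimized by
    optimization transfer (majorize-minimize) with a quadratic surrogate.
    Objective: 0.5*||x - y||^2 + lam * sum sqrt((Dx)_i^2 + delta^2)."""
    n = len(y)
    D = np.diff(np.eye(n), axis=0)            # first-difference operator
    x = y.copy()
    for _ in range(n_iter):
        d = D @ x
        w = 1.0 / np.sqrt(d ** 2 + delta ** 2)  # surrogate curvatures
        # Minimize the quadratic surrogate: (I + lam * D^T W D) x = y.
        A = np.eye(n) + lam * D.T @ (w[:, None] * D)
        x = np.linalg.solve(A, y)
    return x
```

Because each surrogate majorizes the objective and touches it at the current iterate, every iteration monotonically decreases the smoothed-TV cost, mirroring the convergence guarantee described in the abstract.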

97 citations


Network Information
Related Topics (5)
Estimator
97.3K papers, 2.6M citations
91% related
Deep learning
79.8K papers, 2.1M citations
84% related
Support vector machine
73.6K papers, 1.7M citations
84% related
Cluster analysis
146.5K papers, 2.9M citations
84% related
Artificial neural network
207K papers, 4.5M citations
82% related
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2023    114
2022    245
2021    438
2020    410
2019    484
2018    519