
Expectation–maximization algorithm

About: Expectation–maximization algorithm is a research topic. Over its lifetime, 11,823 publications have been published on this topic, receiving 528,693 citations. The topic is also known as: EM algorithm & Expectation Maximization.
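For reference, the iteration that gives the algorithm its name alternates two steps. Given observed data $X$, latent variables $Z$, and a current parameter estimate $\theta^{(t)}$, the standard form is:

```latex
% E-step: expected complete-data log-likelihood under the current posterior over Z
Q(\theta \mid \theta^{(t)}) = \mathbb{E}_{Z \mid X,\, \theta^{(t)}}\!\left[ \ln p(X, Z \mid \theta) \right]
% M-step: re-estimate the parameters by maximizing that expectation
\theta^{(t+1)} = \arg\max_{\theta}\, Q(\theta \mid \theta^{(t)})
```

Each iteration is guaranteed not to decrease the observed-data likelihood, which is the property the papers listed below build on.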


Papers
Journal ArticleDOI
TL;DR: In this paper, various types of finite mixtures of confirmatory factor-analysis models are proposed for handling data heterogeneity, and three different sampling schemes for these mixture models are distinguished.
Abstract: In this paper, various types of finite mixtures of confirmatory factor-analysis models are proposed for handling data heterogeneity. Under the proposed mixture approach, observations are assumed to be drawn from mixtures of distinct confirmatory factor-analysis models, but each observation need not be identified with a particular model prior to model fitting. Several classes of mixture models are proposed; these models differ in their representations of data heterogeneity. Three different sampling schemes for these mixture models are distinguished, and a mixed type of these three sampling schemes is considered throughout this article. The proposed mixture approach reduces to regular multiple-group confirmatory factor analysis under a restrictive sampling scheme in which the structural equation model for each observation is assumed to be known. By assuming a mixture of multivariate normals for the data, maximum likelihood estimation procedures based on the EM (Expectation-Maximization) algorithm and on the AS (Approximate-Scoring) method are developed. Some mixture models were fitted to a real data set to illustrate the application of the theory. Although the EM algorithm and the AS method gave similar sets of parameter estimates, the AS method was found to be computationally more efficient than the EM algorithm. Some comments on applying the mixture approach to structural equation modeling are made.

207 citations
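The paper's mixture-of-CFA estimator is considerably more elaborate, but the EM machinery it relies on is the same as in any finite mixture. Below is a minimal sketch for a two-component univariate Gaussian mixture, illustrative only: all names are ours, and the Gaussian components stand in for the paper's factor-analysis models.

```python
import numpy as np

def em_two_gaussians(x, n_iter=100):
    """Minimal EM for a two-component univariate Gaussian mixture (sketch)."""
    pi = 0.5                                        # mixing weight of component 0
    mu = np.array([x.min(), x.max()], dtype=float)  # crude initialization
    sd = np.array([x.std(), x.std()], dtype=float)

    for _ in range(n_iter):
        # E-step: responsibility of component 0 for each observation
        # (the 1/sqrt(2*pi) density constant cancels in the ratio)
        p0 = pi * np.exp(-0.5 * ((x - mu[0]) / sd[0]) ** 2) / sd[0]
        p1 = (1.0 - pi) * np.exp(-0.5 * ((x - mu[1]) / sd[1]) ** 2) / sd[1]
        r = p0 / (p0 + p1)

        # M-step: responsibility-weighted maximum likelihood updates
        pi = r.mean()
        mu[0] = (r * x).sum() / r.sum()
        mu[1] = ((1 - r) * x).sum() / (1 - r).sum()
        sd[0] = np.sqrt((r * (x - mu[0]) ** 2).sum() / r.sum())
        sd[1] = np.sqrt(((1 - r) * (x - mu[1]) ** 2).sum() / (1 - r).sum())
    return pi, mu, sd
```

In the paper's setting, the E-step analogously assigns each observation a posterior probability of having been generated by each confirmatory factor-analysis model rather than by a single Gaussian component.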

Journal ArticleDOI
TL;DR: Simulation studies in the context of channel estimation, employing multipath wireless channels, show that the SPARLS algorithm significantly outperforms the conventional, widely used recursive least squares (RLS) algorithm in terms of mean squared error (MSE).
Abstract: We develop a recursive L1-regularized least squares (SPARLS) algorithm for the estimation of a sparse tap-weight vector in the adaptive filtering setting. The SPARLS algorithm exploits noisy observations of the tap-weight vector output stream and produces its estimate using an expectation-maximization type algorithm. We prove the convergence of the SPARLS algorithm to a near-optimal estimate in a stationary environment and present analytical results for the steady-state error. Simulation studies in the context of channel estimation, employing multipath wireless channels, show that the SPARLS algorithm significantly outperforms the conventional, widely used recursive least squares (RLS) algorithm in terms of mean squared error (MSE). Moreover, these simulation studies suggest that the SPARLS algorithm (with slight modifications) can operate with lower computational requirements than the RLS algorithm when applied to tap-weight vectors with fixed support.

206 citations
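SPARLS itself combines a low-complexity recursive filter with an expectation-maximization-type update; the exact recursion is in the paper. Purely to illustrate how L1 regularization enters an adaptive update, here is a generic gradient-plus-soft-thresholding step (an ISTA-flavored sketch, not the authors' algorithm; `mu` and `lam` are hypothetical step-size and regularization parameters):

```python
import numpy as np

def soft_threshold(v, t):
    """Elementwise soft-thresholding, the proximal operator of the L1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def sparse_adaptive_step(w, x, d, mu=0.05, lam=0.01):
    """One gradient-plus-shrinkage update of a sparse tap-weight estimate.

    w : current tap-weight vector estimate
    x : current input (regressor) vector
    d : observed output sample
    """
    e = d - x @ w                       # instantaneous prediction error
    w = w + mu * e * x                  # gradient step on the squared error
    return soft_threshold(w, mu * lam)  # L1 shrinkage keeps the estimate sparse
```

The shrinkage step is what lets such filters exploit the fixed, sparse support of the true tap-weight vector that the abstract mentions.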

Journal ArticleDOI
TL;DR: An approximation to the prior/posterior distribution of the parameters in the beta distribution is introduced and an analytically tractable (closed form) Bayesian approach to the parameter estimation is proposed.
Abstract: Bayesian estimation of the parameters in beta mixture models (BMM) is analytically intractable. Numerical solutions that simulate the posterior distribution are available, but they incur high computational cost. In this paper, we introduce an approximation to the prior/posterior distribution of the parameters in the beta distribution and propose an analytically tractable (closed-form) Bayesian approach to the parameter estimation. The approach is based on the variational inference (VI) framework. Following the principles of the VI framework and utilizing the relative convexity bound, the extended factorized approximation method is applied to approximate the distribution of the parameters in BMM. In a fully Bayesian model, where all of the parameters of the BMM are considered as variables and assigned proper distributions, our approach can asymptotically find the optimal estimate of the parameters' posterior distribution. Also, the model complexity can be determined based on the data. The closed-form solution is proposed so that no iterative numerical calculation is required. Meanwhile, our approach avoids the drawback of overfitting in the conventional expectation-maximization algorithm. The good performance of this approach is verified by experiments with both synthetic and real data.

206 citations
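The factorized approximation the authors extend is the standard mean-field update: writing $q(\Theta) = \prod_j q_j(\theta_j)$ for the approximate posterior, the optimal factor satisfies

```latex
\ln q_j^{*}(\theta_j) = \mathbb{E}_{q_{i \neq j}}\!\left[ \ln p(X, \Theta) \right] + \mathrm{const.}
```

For the parameters of a beta distribution this expectation is not available in closed form, which is where the paper's relative convexity bound comes in.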

Journal ArticleDOI
TL;DR: It is shown that for some generalized linear models maximum likelihood estimates always exist, while for others they exist provided certain degeneracies in the data do not occur; conditions are also given under which the estimate, when it exists, is unique.
Abstract: Generalized linear models were defined by Nelder & Wedderburn (1972) and include a large class of useful statistical models. It is shown that for some of these models maximum likelihood estimates always exist and that for some others they exist provided certain degeneracies in the data do not occur. Similar results are obtained for the uniqueness of maximum likelihood estimates and for their being confined to the interior of the parameter space. For instance, with the familiar model of probit analysis it is shown that under certain conditions the maximum likelihood estimate exists, and that if it does exist it is unique. The models considered also include those involving the normal, Poisson and gamma distributions with power and logarithmic transformations to linearity.

206 citations
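A concrete instance of the degeneracies in question is complete separation in binary regression. For the probit model with responses $y_i \in \{0, 1\}$ and design rows $x_i$, the log-likelihood is

```latex
\ell(\beta) = \sum_{i=1}^{n} \Big[\, y_i \ln \Phi(x_i^{\top}\beta)
            + (1 - y_i)\, \ln\!\big(1 - \Phi(x_i^{\top}\beta)\big) \Big]
```

If some direction $b$ separates the data, i.e. $x_i^{\top} b > 0$ whenever $y_i = 1$ and $x_i^{\top} b < 0$ whenever $y_i = 0$, then $\ell(\beta + c\,b)$ is increasing in $c$, and no finite maximizer exists; ruling out such configurations is the kind of condition the existence results impose.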

Journal ArticleDOI
TL;DR: Conditions are given for the existence of a unique global minimum and a unique local minimum of the limiting likelihood criterion for a mixed autoregressive moving average (ARMA) model.
Abstract: Estimation of the parameters in a mixed autoregressive moving average process leads to a nonlinear optimization problem. The negative logarithm of the likelihood function, suitably normalized, converges to a deterministic function as the sample length increases. The local and global extrema of this function are investigated. Conditions for the existence of a unique global and local minimum are given.

206 citations
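In symbols, the setting is the ARMA(p, q) model and the limit of the normalized criterion (our notation, not necessarily the paper's):

```latex
% Mixed autoregressive moving average model driven by white noise \varepsilon_t:
X_t - \phi_1 X_{t-1} - \cdots - \phi_p X_{t-p}
  \;=\; \varepsilon_t + \theta_1 \varepsilon_{t-1} + \cdots + \theta_q \varepsilon_{t-q}
% Suitably normalized negative log-likelihood and its deterministic limit:
-\tfrac{1}{n} \ln L_n(\beta) \;\longrightarrow\; L(\beta)
  \quad (n \to \infty), \qquad L(\beta) \ge L(\beta_0)
```

with equality only at the true parameter $\beta_0$ under suitable identifiability conditions; the paper's conditions for unique global and local minima concern this limiting function $L$.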


Network Information
Related Topics (5)

Estimator: 97.3K papers, 2.6M citations (91% related)
Deep learning: 79.8K papers, 2.1M citations (84% related)
Support vector machine: 73.6K papers, 1.7M citations (84% related)
Cluster analysis: 146.5K papers, 2.9M citations (84% related)
Artificial neural network: 207K papers, 4.5M citations (82% related)
Performance
Metrics
No. of papers in the topic in previous years:

Year: Papers
2023: 114
2022: 245
2021: 438
2020: 410
2019: 484
2018: 519