Topic

Expectation–maximization algorithm

About: Expectation–maximization algorithm is a research topic. Over its lifetime, 11,823 publications have been published within this topic, receiving 528,693 citations. The topic is also known as: EM algorithm & Expectation Maximization.
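As a point of reference for the papers listed below, here is a minimal sketch of the algorithm itself: EM alternates an E-step, which computes posterior responsibilities under the current parameters, with an M-step, which re-estimates the parameters from those responsibilities. The example applies this to a two-component one-dimensional Gaussian mixture; it is purely illustrative, and the function name, initialisation scheme, and stopping rule are our own rather than anything taken from the papers on this page.

```python
import numpy as np

def em_gaussian_mixture(x, n_iter=200, tol=1e-8):
    """Minimal EM for a two-component 1-D Gaussian mixture (illustrative only)."""
    mu = np.percentile(x, [25.0, 75.0])          # crude initial means
    sigma = np.array([x.std(), x.std()])         # initial standard deviations
    w = np.array([0.5, 0.5])                     # mixing weights
    prev_ll = -np.inf
    for _ in range(n_iter):
        # E-step: responsibility of each component for each observation
        dens = np.stack([
            w[k] * np.exp(-0.5 * ((x - mu[k]) / sigma[k]) ** 2)
            / (sigma[k] * np.sqrt(2.0 * np.pi))
            for k in range(2)
        ])                                       # shape (2, n)
        ll = np.log(dens.sum(axis=0)).sum()      # observed-data log-likelihood
        resp = dens / dens.sum(axis=0)
        # M-step: weighted maximum-likelihood re-estimates
        nk = resp.sum(axis=1)
        w = nk / x.size
        mu = (resp * x).sum(axis=1) / nk
        sigma = np.sqrt((resp * (x - mu[:, None]) ** 2).sum(axis=1) / nk)
        if ll - prev_ll < tol:                   # EM increases the likelihood monotonically
            break
        prev_ll = ll
    return w, mu, sigma, ll
```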


Papers
Journal ArticleDOI
TL;DR: In this article, the expectation-maximization (EM) method is used to calculate adsorption energy distributions of molecular probes from their adsorption isotherms, and the results are compared with those obtained with the House and Jaycock algorithm HILDA.
Abstract: The expectation-maximization (EM) method of parameter estimation is used to calculate adsorption energy distributions of molecular probes from their adsorption isotherms. EM requires no prior knowledge of the distribution function or the isotherm, requires no smoothing of the isotherm data, and converges with high stability toward the maximum-likelihood estimate. The method is therefore robust and accurate at high iteration numbers. The EM algorithm is tested with simulated energy distributions corresponding to unimodal Gaussian, bimodal Gaussian, and Poisson distributions, and to the distributions resulting from Misra isotherms. Theoretical isotherms are generated from these distributions using the Langmuir model, and chromatographic band profiles are then computed using the ideal model of chromatography. Noise comparable to that observed experimentally is then introduced into the theoretical band profiles. The isotherm is then calculated using the elution-by-characteristic-points method. The energy distribution given by the EM method is compared with the original one. The results are contrasted with those obtained with the House and Jaycock algorithm HILDA and are shown to be superior in terms of robustness and accuracy, and from an information-theory standpoint.

87 citations
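The entry above uses EM as a non-negativity-preserving deconvolution of the adsorption energy distribution from an isotherm built on a Langmuir local model. The sketch below shows a multiplicative, EM-style update of that general kind; the grids, physical constants, and the exact form of the update are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

# Illustrative grids and constants (assumed values, not taken from the paper)
R, T, K0 = 8.314, 300.0, 1.0e-6           # gas constant (J/mol/K), temperature (K), pre-factor
E = np.linspace(5.0e3, 4.0e4, 200)        # adsorption energy grid (J/mol)
p = np.logspace(-3, 2, 80)                # pressure grid

# Langmuir local isotherm kernel: theta[i, j] = local coverage at pressure p[i], energy E[j]
K = K0 * np.exp(E / (R * T))
theta = (p[:, None] * K) / (1.0 + p[:, None] * K)

def em_energy_distribution(q_measured, theta, n_iter=5000):
    """EM-style multiplicative updates for a non-negative energy distribution f
    chosen so that theta @ f reproduces the measured isotherm q_measured."""
    f = np.full(theta.shape[1], q_measured.max() / theta.shape[1])   # flat starting distribution
    col_sum = theta.sum(axis=0)
    for _ in range(n_iter):
        q_calc = theta @ f                                  # isotherm implied by the current f
        ratio = q_measured / np.maximum(q_calc, 1e-300)
        f *= (theta.T @ ratio) / col_sum                    # multiplicative update keeps f >= 0
    return f
```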

Journal ArticleDOI
TL;DR: In this article, an item response model that incorporates response time is proposed and a parameter estimation procedure using the EM algorithm is developed; the procedure is evaluated with both real and simulated test data, and the results suggest that it estimates the model parameters well.
Abstract: This article proposes an item response model that incorporates response time. A parameter estimation procedure using the EM algorithm is developed. The procedure is evaluated with both real and simulated test data. The results suggest that the estimation procedure works well in estimating model parameters. By using response time data, estimation of person ability parameters can be improved. Potential applications of this model are discussed. Directions for further study are suggested.

87 citations
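The entry above develops EM estimation for an item response model that incorporates response time. That specific model is not reproduced here; as a generic illustration of the marginal maximum-likelihood EM machinery used in item response theory, the sketch below estimates Rasch item difficulties by quadrature over ability. The node grid, the normal ability prior, and the single Newton step per M-step are our own simplifications.

```python
import numpy as np

def rasch_em(responses, n_nodes=21, n_iter=50):
    """Marginal maximum-likelihood EM for Rasch item difficulties.
    responses: (n_persons, n_items) 0/1 matrix with no missing entries."""
    nodes = np.linspace(-4.0, 4.0, n_nodes)                # ability quadrature nodes
    w = np.exp(-0.5 * nodes ** 2)
    w /= w.sum()                                           # ~N(0,1) prior weights
    b = np.zeros(responses.shape[1])                       # item difficulties
    for _ in range(n_iter):
        P = 1.0 / (1.0 + np.exp(-(nodes[:, None] - b)))    # P(correct) per node/item, shape (Q, J)
        # E-step: posterior of each person's ability over the quadrature nodes
        logL = responses @ np.log(P).T + (1 - responses) @ np.log(1.0 - P).T   # (N, Q)
        post = np.exp(logL - logL.max(axis=1, keepdims=True)) * w
        post /= post.sum(axis=1, keepdims=True)
        n_q = post.sum(axis=0)                             # expected persons at each node
        r_qj = post.T @ responses                          # expected correct answers per node/item
        # M-step: one Newton step per item on the expected complete-data log-likelihood
        grad = (n_q[:, None] * P - r_qj).sum(axis=0)
        hess = -(n_q[:, None] * P * (1.0 - P)).sum(axis=0)
        b -= grad / hess
    return b
```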

Journal ArticleDOI
TL;DR: In this paper, a robust maximum-likelihood method for estimating the unbiased mean inclination from inclination-only data is developed; the method can evaluate the log-likelihood anywhere in the parameter space and for any inclination-only data set.
Abstract: We have developed a new robust maximum likelihood method for estimating the unbiased mean inclination from inclination-only data. In paleomagnetic analysis, the arithmetic mean of inclination-only data is known to introduce a shallowing bias. Several methods have been introduced to estimate the unbiased mean inclination of inclination-only data together with measures of the dispersion. Some inclination-only methods were designed to maximize the likelihood function of the marginal Fisher distribution. However, the exact analytical form of the maximum likelihood function is fairly complicated, and all the methods require various assumptions and approximations that are often inappropriate. For some steep and dispersed data sets, these methods provide estimates that are significantly displaced from the peak of the likelihood function toward systematically shallower inclinations. The problem of locating the maximum of the likelihood function is partly due to difficulties in accurately evaluating the function for all values of interest, because some elements of the likelihood function increase exponentially as precision parameters increase, leading to numerical instabilities. In this study, we succeeded in analytically cancelling the exponential elements from the log-likelihood function, and we are now able to calculate its value anywhere in the parameter space and for any inclination-only data set. Furthermore, we can now calculate the partial derivatives of the log-likelihood function to the desired accuracy, and locate the maximum likelihood without the assumptions required by previous methods. To assess the reliability and accuracy of our method, we generated large numbers of random Fisher-distributed data sets, for which we calculated mean inclinations and precision parameters. The comparisons show that our new robust Arason–Levi maximum likelihood method is the most reliable, and the mean inclination estimates are the least biased towards shallow values.

87 citations
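The entry above traces the numerical difficulty to exponential terms in the marginal Fisher likelihood that overflow as the precision parameter grows, and notes that these can be cancelled analytically. The sketch below shows one way to evaluate such a log-likelihood stably, assuming the standard marginal Fisher density for inclination-only data; the parametrization, the exponentially scaled Bessel function, and the choice of optimizer are our own assumptions, not the published Arason–Levi code.

```python
import numpy as np
from scipy.special import i0e            # exp(-|x|) * I0(x): avoids overflow of the Bessel term
from scipy.optimize import minimize

def neg_log_likelihood(params, inc):
    """Negative log-likelihood of the marginal Fisher (inclination-only) density,
    written so that the large exponential terms cancel analytically.
    params = (mean inclination in radians, log of the precision parameter kappa)."""
    inc0, log_kappa = params
    kappa = np.exp(log_kappa)
    x = kappa * np.cos(inc) * np.cos(inc0)                  # Bessel-function argument
    ll = (np.log(kappa)
          - np.log1p(-np.exp(-2.0 * kappa))                 # log(2 sinh k) = k + log(1 - e^(-2k))
          + kappa * (np.sin(inc) * np.sin(inc0) - 1.0)      # the leading k cancelled into this term
          + np.abs(x) + np.log(i0e(x))                      # log I0(x), written overflow-free
          + np.log(np.cos(inc)))
    return -ll.sum()

# Usage sketch with made-up inclinations (radians); kappa is optimized on the log scale
inc = np.radians(np.array([62.0, 71.0, 55.0, 80.0, 68.0]))
fit = minimize(neg_log_likelihood, x0=np.array([inc.mean(), np.log(10.0)]), args=(inc,))
mean_inclination_deg, kappa_hat = np.degrees(fit.x[0]), np.exp(fit.x[1])
```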

Journal ArticleDOI
TL;DR: In this article, the authors proposed a new approach to the treatment of item non-response in attitude scales, which combines the ideas of latent variable identification with the issues of nonresponse adjustment in sample surveys.
Abstract: This paper proposes a new approach to the treatment of item non-response in attitude scales. It combines the ideas of latent variable identification with the issues of non-response adjustment in sample surveys. The latent variable approach allows missing values to be included in the analysis and, equally importantly, allows information about attitude to be inferred from non-response. We present a symmetric pattern methodology for handling item non-response in attitude scales. The methodology is symmetric in that all the variables are given equivalent status in the analysis (none is designated a ‘dependent’ variable) and is pattern-based in that the pattern of responses and non-responses across individuals is a key element in the analysis. Our approach to the problem is through a latent variable model with two latent dimensions: one to summarize response propensity and the other to summarize attitude, ability or belief. The methodology presented here can handle binary, metric and mixed (binary and metric) manifest items with missing values. Examples using both artificial data sets and two real data sets illustrate the mechanism and the advantages of the proposed methodology.

87 citations
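The entry above describes a model with two latent dimensions, one for response propensity and one for attitude, so that non-response itself carries information about attitude. The sketch below evaluates the marginal likelihood of a single respondent under a schematic model of that shape for binary items, using a simple quadrature grid; the link functions, parameter names, and the way non-response loads on both dimensions are illustrative assumptions rather than the paper's specification.

```python
import numpy as np

def person_log_likelihood(y, observed, a, b, c, d, n_nodes=15):
    """Marginal log-likelihood of one respondent under a schematic two-dimension
    latent variable model: u (attitude) drives the item responses, while z
    (response propensity) and u together drive whether an item is answered.
    y: 0/1 responses (ignored where observed is False); a: (2, J) loadings for the
    response-indicator part; b, c, d: (J,) item parameters. Illustrative only."""
    nodes = np.linspace(-4.0, 4.0, n_nodes)
    w = np.exp(-0.5 * nodes ** 2)
    w /= w.sum()                                        # ~N(0,1) quadrature weights
    log_terms = np.empty((n_nodes, n_nodes))            # grid over (z, u)
    for iz, z in enumerate(nodes):
        for iu, u in enumerate(nodes):
            p_resp = 1.0 / (1.0 + np.exp(-(c + a[0] * z + a[1] * u)))   # P(item answered)
            p_yes = 1.0 / (1.0 + np.exp(-(d + b * u)))                  # P(y = 1 | answered)
            lik = np.where(observed,
                           p_resp * np.where(y == 1, p_yes, 1.0 - p_yes),
                           1.0 - p_resp)
            log_terms[iz, iu] = np.log(lik).sum() + np.log(w[iz]) + np.log(w[iu])
    m = log_terms.max()
    return m + np.log(np.exp(log_terms - m).sum())      # log-sum-exp over the grid
```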

Journal ArticleDOI
TL;DR: A nonparametric multiplicative random effects model is proposed for the longitudinal process; it has many applications, leads to a flexible yet parsimonious nonparametric random effects specification, and compares well with the competing parametric longitudinal approaches.
Abstract: In clinical studies, longitudinal biomarkers are often used to monitor disease progression and failure time. Joint modeling of longitudinal and survival data has certain advantages and has emerged as an effective way to mutually enhance information. Typically, a parametric longitudinal model is assumed to facilitate the likelihood approach. However, the choice of a proper parametric model turns out to be more elusive than in standard longitudinal studies in which no survival endpoint occurs. In this article, we propose a nonparametric multiplicative random effects model for the longitudinal process, which has many applications and leads to a flexible yet parsimonious nonparametric random effects model. A proportional hazards model is then used to link the biomarkers and event time. We use B-splines to represent the nonparametric longitudinal process and select the number of knots and the degree based on a version of the Akaike information criterion (AIC). Unknown model parameters are estimated by maximizing the observed joint likelihood, which is done iteratively via the Monte Carlo Expectation Maximization (MCEM) algorithm. Due to the simplicity of the model structure, the proposed approach has good numerical stability and compares well with the competing parametric longitudinal approaches. The new approach is illustrated with primary biliary cirrhosis (PBC) data, aiming to capture nonlinear patterns of serum bilirubin time courses and their relationship with the survival time of PBC patients.

87 citations
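The entry above represents the longitudinal trajectory with B-splines, chooses the number of knots by AIC, and then fits the joint model by MCEM. The sketch below shows only the first ingredient, selecting the interior knots of a least-squares cubic B-spline by AIC; the paper fits the spline inside the joint likelihood rather than by ordinary least squares, so this is a simplification under stated assumptions (quantile-placed knots, fixed degree).

```python
import numpy as np
from scipy.interpolate import make_lsq_spline

def select_knots_by_aic(t_obs, y_obs, max_interior_knots=8, degree=3):
    """Fit least-squares B-splines with 0..max_interior_knots interior knots
    (placed at quantiles of the observation times) and keep the AIC-best fit."""
    order = np.argsort(t_obs)
    t_obs, y_obs = np.asarray(t_obs)[order], np.asarray(y_obs)[order]
    n = y_obs.size
    best = None
    for m in range(max_interior_knots + 1):
        interior = (np.quantile(t_obs, np.linspace(0, 1, m + 2)[1:-1])
                    if m > 0 else np.array([]))
        # full knot vector: boundary knots repeated degree+1 times
        knots = np.r_[[t_obs[0]] * (degree + 1), interior, [t_obs[-1]] * (degree + 1)]
        spline = make_lsq_spline(t_obs, y_obs, knots, k=degree)
        rss = np.sum((y_obs - spline(t_obs)) ** 2)
        n_coef = len(knots) - degree - 1                 # number of B-spline coefficients
        aic = n * np.log(rss / n) + 2 * n_coef
        if best is None or aic < best[0]:
            best = (aic, m, spline)
    return best                                          # (aic, interior knot count, spline)
```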


Network Information
Related Topics (5)
Estimator: 97.3K papers, 2.6M citations (91% related)
Deep learning: 79.8K papers, 2.1M citations (84% related)
Support vector machine: 73.6K papers, 1.7M citations (84% related)
Cluster analysis: 146.5K papers, 2.9M citations (84% related)
Artificial neural network: 207K papers, 4.5M citations (82% related)
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2023    114
2022    245
2021    438
2020    410
2019    484
2018    519