
Showing papers on "Expectation–maximization algorithm published in 1975"


Journal ArticleDOI
TL;DR: An algorithm is described for generating fuzzy partitions which extremize a fuzzy extension of the k-means squared-error criterion function on finite data sets X, and the behavior of the algorithm is compared with that of the ordinary ISODATA clustering process and the maximum likelihood method.
Abstract: An algorithm is described for generating fuzzy partitions which extremize a fuzzy extension of the k-means squared-error criterion function on finite data sets X. It is shown how this algorithm may be applied to the problem of estimating the parameters (a priori probabilities, means, and covariances) of a mixture of multivariate normal densities, given a finite sample X drawn from the mixture. The behavior of the algorithm is compared with that of the ordinary ISODATA clustering process and the maximum likelihood method for a specific bivariate mixture.

236 citations
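
For orientation, here is a minimal sketch of a fuzzy k-means (fuzzy c-means) iteration that extremizes a criterion of this kind. The weighting exponent m, the helper name fuzzy_kmeans, and the random initialization are illustrative assumptions rather than the paper's exact formulation.

```python
import numpy as np

def fuzzy_kmeans(X, c, m=2.0, n_iter=100, tol=1e-6, seed=0):
    """Alternating updates for a fuzzy k-means (fuzzy c-means) style criterion."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=c, replace=False)]
    u = None
    for _ in range(n_iter):
        # Squared distances from every point to every center, shape (n, c)
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2) + 1e-12
        # Membership update: u_ik proportional to d_ik^(-2/(m-1)); each row sums to 1
        u = d2 ** (-1.0 / (m - 1.0))
        u /= u.sum(axis=1, keepdims=True)
        # Center update: memberships raised to the power m act as weights in a weighted mean
        w = u ** m
        new_centers = (w.T @ X) / w.sum(axis=0)[:, None]
        if np.max(np.abs(new_centers - centers)) < tol:
            centers = new_centers
            break
        centers = new_centers
    return centers, u
```

As m approaches 1 the memberships harden and the update approaches the ordinary k-means / ISODATA-style iteration that the abstract uses as a point of comparison.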


Journal ArticleDOI
TL;DR: In this article, the authors investigate the large-sample behavior of maximum likelihood estimates (MLE's) of the parameters of a diffusion process observed throughout continuous time. The results correspond exactly to classical asymptotic likelihood results and follow easily from a central limit theorem for stochastic integrals.
Abstract: We investigate the large-sample behaviour of maximum likelihood estimates (MLE's) of the parameters of a diffusion process, which is observed throughout continuous time. The results (limit normal distribution for the MLE and an asymptotic chi-squared likelihood ratio test) correspond exactly to classical asymptotic likelihood results, and follow easily from a central limit theorem for stochastic integrals.
Keywords: diffusion processes; maximum likelihood estimates; Brownian motion; stationarity; ergodicity; stochastic integrals; Radon-Nikodym derivatives; martingale central limit theorem; likelihood ratio test

97 citations
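
As a concrete illustration of the kind of result involved (our special case, not the paper's general setting), take a drift that is linear in a scalar parameter, dX_t = θ b(X_t) dt + dW_t, observed on [0, T]. The log-likelihood relative to the driftless process and the resulting MLE are

```latex
\[
\ell_T(\theta) \;=\; \theta \int_0^T b(X_t)\,dX_t \;-\; \frac{\theta^2}{2}\int_0^T b(X_t)^2\,dt,
\qquad
\hat\theta_T \;=\; \frac{\int_0^T b(X_t)\,dX_t}{\int_0^T b(X_t)^2\,dt}
\;=\; \theta \;+\; \frac{\int_0^T b(X_t)\,dW_t}{\int_0^T b(X_t)^2\,dt}.
\]
```

Under stationarity and ergodicity the denominator grows linearly in T, and the martingale central limit theorem applied to the stochastic integral in the numerator gives the limiting normal distribution for the MLE described in the abstract.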


Journal ArticleDOI
TL;DR: In this paper, the Gauss-Newton algorithm for nonlinear least squares is used as the Fisher scoring algorithm for maximum likelihood estimation in Poisson, quantal response, multinomial, and log-linear models; in some cases it is also the Newton-Raphson algorithm.
Abstract: Methods are given for using readily available nonlinear regression programs to produce maximum likelihood estimates in a rather natural way. Used as suggested the common Gauss-Newton algorithm for nonlinear least squares becomes the Fisher scoring algorithm for maximum likelihood estimation. In some cases it is also the Newton-Raphson algorithm. The standard errors produced are the information theory standard errors up to a possible common multiple. This means that much of the auxiliary output produced by a nonlinear least squares analysis is directly applicable to a maximum likelihood analysis. Illustrative applications to Poisson, quantal response, multinomial, and log-linear models are given.

48 citations
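
Below is a minimal sketch of the Fisher scoring iteration for one of the illustrative cases, Poisson regression with a log link (our own code with an assumed helper name, poisson_fisher_scoring; the paper instead obtains the same iteration by feeding a suitably weighted problem to a nonlinear least squares program).

```python
import numpy as np

def poisson_fisher_scoring(X, y, n_iter=25, tol=1e-8):
    """Fisher scoring for Poisson regression with a log link, mu_i = exp(x_i' beta)."""
    beta = np.zeros(X.shape[1])
    info = np.eye(X.shape[1])
    for _ in range(n_iter):
        mu = np.exp(X @ beta)
        score = X.T @ (y - mu)               # gradient of the Poisson log-likelihood
        info = X.T @ (mu[:, None] * X)       # Fisher information (equals observed info for the log link)
        step = np.linalg.solve(info, score)
        beta = beta + step
        if np.max(np.abs(step)) < tol:
            break
    return beta, np.linalg.inv(info)         # estimates and information-based covariance
```

Because the log link is canonical for the Poisson model, Fisher scoring and Newton-Raphson coincide here, and the inverse information returned at convergence supplies the information-theory standard errors mentioned in the abstract.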


Journal ArticleDOI
TL;DR: In this paper, the authors give a proof of a basic theorem in the theory of maximum likelihood estimation of vector-valued parameters, and an alternate statement and proof of the theorem are given.
Abstract: Two examples are given from the literature in which results concerning functions of a single variable are applied incorrectly to multivariate functions. One of the examples concerns the proof of a basic theorem in the theory of maximum likelihood estimation of vector-valued parameters. An alternate statement and proof of the theorem are given.

23 citations
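
The multivariate pitfall at issue can be illustrated with a standard counterexample (ours, not necessarily the one used in the paper): a smooth function of two variables whose unique critical point is a strict local maximum but not a global maximum, something that cannot happen for a differentiable function of one variable.

```latex
\[
f(x,y) \;=\; 3x e^{y} - x^{3} - e^{3y},
\qquad
\nabla f(x,y) = 0 \;\Longleftrightarrow\; (x,y) = (1,0),
\qquad
f(x,0) = 3x - x^{3} - 1 \;\to\; +\infty \ \text{as}\ x \to -\infty .
\]
```

The unique critical point (1, 0) is a strict local maximum with f(1, 0) = 1, yet f is unbounded above, so identifying the MLE with the unique root of the score equations requires additional conditions in the vector-parameter case.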


Journal ArticleDOI
TL;DR: In this article, the authors discuss Krutchkoff's suggestion of using an inverse estimator for the unknown x, in place of the classical maximum likelihood estimator, from the point of view of the likelihood approach, and find that the question of which estimator is better turns out to be of little importance.
Abstract: In the linear calibration problem, in which the regression variable y is related to the independent variable x by a linear equation, likelihood methods are used to make inferences about an unknown value of x given the corresponding observed values of y and previously observed pairs (y, x). Krutchkoff's suggestion of using an inverse estimator for the unknown x, in place of the classical maximum likelihood estimator, is discussed from the point of view of the likelihood approach. From a likelihood point of view the question of which estimator is better turns out to be of little importance. It is also found that a good number of the cases considered by Krutchkoff give non-informative likelihood functions, but cases which the authors feel are more common in practice tend to give likelihood functions which are informative and approximately normal in shape.

21 citations
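
For reference, in the simple linear calibration model y_i = α + β x_i + ε_i with a new observation y_0, the two estimators under discussion take the familiar textbook forms below (notation ours, possibly differing from the paper's):

```latex
\[
\hat{x}_{\text{classical}} \;=\; \bar{x} \;+\; \frac{y_0 - \bar{y}}{\hat{\beta}},
\qquad\qquad
\hat{x}_{\text{inverse}} \;=\; \bar{x} \;+\; \hat{\gamma}\,(y_0 - \bar{y}),
\]
```

where β̂ is the least-squares slope from regressing y on x, and γ̂ is the slope from regressing x on y, as in Krutchkoff's inverse approach.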



Journal ArticleDOI
TL;DR: In this article, a procedure for overcoming these estimation anomalies, which employs interior penalty function techniques, is presented; the numerical examples indicate that the procedure is robust, and extensive practical experience with it supports this conclusion.
Abstract: The maximum likelihood statistical modelling problems for the three- and four-parameter lognormal distributions possess the unusual feature that the theoretical maximum likelihood parameter estimates are inadmissible values for which the likelihood function is positively infinite. Accordingly, “local-maximum” likelihood parameter estimates corresponding to the primary relative maximum of the likelihood function must be derived. Although these estimates often possess many of the properties associated with ordinary maximum likelihood estimates, their numerical computation is not altogether straightforward, since solutions corresponding to the global maximum of the likelihood function must be avoided. A procedure for overcoming these estimation anomalies is presented. The numerical examples presented and discussed indicate that the procedure, which employs interior penalty function techniques, is robust. Extensive practical experience with the procedure supports this conclusion.

18 citations
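
A minimal sketch of the idea, under our own simplifying assumptions (a three-parameter lognormal, a crude logarithmic interior barrier on the gap min(x) - γ, and a general-purpose optimizer); the paper's actual interior penalty function procedure is more elaborate.

```python
import numpy as np
from scipy.optimize import minimize

def penalized_nll(params, x, barrier=1e-3):
    # params = (gamma, mu, log_sigma); the threshold gamma must satisfy gamma < min(x)
    gamma, mu, log_sigma = params
    sigma = np.exp(log_sigma)
    gap = x.min() - gamma
    if gap <= 0:
        return np.inf                            # outside the admissible region
    z = np.log(x - gamma)
    # Negative log-likelihood of the three-parameter lognormal (additive constant dropped)
    nll = np.sum(np.log(sigma) + np.log(x - gamma) + (z - mu) ** 2 / (2.0 * sigma ** 2))
    return nll - barrier * np.log(gap)           # interior (log-barrier) penalty term

rng = np.random.default_rng(0)
x = 2.0 + np.exp(rng.normal(0.5, 0.4, size=200))     # synthetic sample, true gamma = 2
start = np.array([x.min() - 1.0, 0.0, 0.0])          # start well inside gamma < min(x)
res = minimize(penalized_nll, start, args=(x,), method="Nelder-Mead")
gamma_hat, mu_hat, sigma_hat = res.x[0], res.x[1], np.exp(res.x[2])
```

The barrier term blows up as γ approaches min(x), steering the search toward the primary relative maximum rather than the inadmissible global maximum on the boundary.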


01 Jan 1975
TL;DR: In this article, a maximum likelihood estimation technique is used for the analysis of agricultural remote sensor data and the m-class probability of misclassification is estimated using unlabeled test samples and labeled training samples.
Abstract: A maximum likelihood estimation technique is used for the analysis of agricultural remote sensor data. The m-class probability of misclassification is estimated using unlabeled test samples and labeled training samples. A bound on the variance of a proposed unbiased estimator of the m-class probability of error is derived. The particular case in which each class density is assumed to be a mixture of multivariate normal densities is considered. The extension of spectral signatures in space and time is discussed.

4 citations
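
A minimal sketch of one posterior-probability style error estimator of the kind described, under our own assumptions: a single Gaussian class-conditional density fitted to the labeled training samples for each class, and the estimate mean(1 - max_k P(class k | x)) over the unlabeled test samples. The paper's estimator and its variance bound may differ in detail.

```python
import numpy as np
from scipy.stats import multivariate_normal

def fit_classes(X_train, y_train):
    """Estimate prior, mean, and covariance for each labeled class."""
    models = []
    for k in np.unique(y_train):
        Xk = X_train[y_train == k]
        models.append((len(Xk) / len(X_train), Xk.mean(axis=0), np.cov(Xk.T)))
    return models

def error_estimate(models, X_unlabeled):
    """Posterior-based estimate of the m-class probability of misclassification."""
    joint = np.column_stack([
        prior * multivariate_normal.pdf(X_unlabeled, mean, cov)
        for prior, mean, cov in models
    ])
    posteriors = joint / joint.sum(axis=1, keepdims=True)
    return np.mean(1.0 - posteriors.max(axis=1))
```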


Journal ArticleDOI
TL;DR: A computing technique is described for the maximum likelihood estimation of multiple logistic coefficients that makes use of the marginal iterative proportional fitting system commonly applied in the analysis of multidimensional contingency tables.

3 citations
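
Since no abstract is available, the sketch below only illustrates the underlying machinery: plain iterative proportional fitting of a three-way contingency table to its two-way margins (the no-three-factor-interaction log-linear model). The function name and model choice are our own; the paper adapts this kind of marginal fitting to obtain maximum likelihood estimates of logistic coefficients.

```python
import numpy as np

def ipf_no_three_way(table, n_iter=100, tol=1e-10):
    """Fit a 3-way table to all three observed two-way margins by iterative proportional fitting."""
    fit = np.ones_like(table, dtype=float)
    fit *= table.sum() / fit.sum()
    for _ in range(n_iter):
        old = fit.copy()
        for axis in range(3):                           # cycle over the three two-way margins
            obs = table.sum(axis=axis)
            cur = fit.sum(axis=axis)
            fit *= np.expand_dims(obs / cur, axis=axis)  # rescale to match that margin
        if np.max(np.abs(fit - old)) < tol:
            break
    return fit
```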


Journal ArticleDOI
TL;DR: In this article, the maximum entropy and maximum likelihood methods are tested with geomagnetic data, and the resulting spectral estimates are compared with those obtained by conventional methods of spectral analysis.
Abstract: The maximum entropy and maximum likelihood methods are tested with geomagnetic data and the spectral estimates are compared with those obtained by conventional methods of spectral analysis.

2 citations
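
A minimal sketch of the maximum entropy side of such a comparison, under our own assumptions: an order-p autoregressive model fitted by the Yule-Walker equations (Burg's recursion, common in the geophysical literature, would be an alternative).

```python
import numpy as np

def maxent_spectrum(x, order, freqs):
    """AR-based (maximum entropy) power spectrum at the given frequencies in cycles per sample."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    # Biased sample autocovariances r(0), ..., r(order)
    r = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(order + 1)])
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    a = np.linalg.solve(R, r[1:order + 1])       # Yule-Walker AR coefficients
    sigma2 = r[0] - a @ r[1:order + 1]           # one-step prediction-error variance
    k = np.arange(1, order + 1)
    transfer = 1.0 - np.exp(-2j * np.pi * np.outer(freqs, k)) @ a
    return sigma2 / np.abs(transfer) ** 2        # spectral density, up to a constant factor
```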


ReportDOI
TL;DR: In this article, the consistency and asymptotic normality of the maximum likelihood estimator in the general nonlinear simultaneous equation model are proved; unlike in the linear model, the proof depends on the assumption of normality, and although the maximum likelihood estimator is asymptotically more efficient than the nonlinear three-stage least squares estimator when the specification is correct, the latter remains consistent when the normality assumption is removed.
Abstract: The consistency and the asymptotic normality of the maximum likelihood estimator in the general nonlinear simultaneous equation model are proved. It is shown that the proof depends on the assumption of normality, unlike in the linear simultaneous equation model. It is proved that the maximum likelihood estimator is asymptotically more efficient than the nonlinear three-stage least squares estimator if the specification is correct. However, the latter has the advantage of being consistent even when the normality assumption is removed. Hausman's instrumental-variable interpretation of the maximum likelihood estimator is extended to the general nonlinear simultaneous equation model.

Journal ArticleDOI
TL;DR: In this paper, an extension of the modified Muller method for solving the maximum likelihood equations for the Poisson-Pascal distribution is proposed, which is stable and convergence is guaranteed if certain conditions are satisfied.
Abstract: An extension of the ‘Modified Muller’ method for solving the maximum likelihood equations for the Poisson-Pascal distribution is proposed. This method, although sometimes slower than a three-dimensional Newton-Raphson approach, is stable, and convergence is guaranteed if certain conditions are satisfied.
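
For context, here is a minimal sketch of the basic (unmodified) Muller iteration for a single nonlinear equation f(t) = 0; the paper's modified, multi-equation version for the Poisson-Pascal likelihood equations is not reproduced here.

```python
import cmath

def muller(f, x0, x1, x2, tol=1e-10, max_iter=100):
    """Muller's method: derivative-free root finding via quadratic interpolation through three iterates."""
    for _ in range(max_iter):
        h1, h2 = x1 - x0, x2 - x1
        d1, d2 = (f(x1) - f(x0)) / h1, (f(x2) - f(x1)) / h2
        a = (d2 - d1) / (h2 + h1)            # leading coefficient of the interpolating quadratic
        b = a * h2 + d2
        c = f(x2)
        disc = cmath.sqrt(b * b - 4 * a * c)
        denom = b + disc if abs(b + disc) > abs(b - disc) else b - disc
        dx = -2 * c / denom                  # step to the root of the quadratic closest to x2
        x0, x1, x2 = x1, x2, x2 + dx
        if abs(dx) < tol:
            return x2
    return x2
```

Each step fits a quadratic through the three most recent iterates and moves to the nearer of its roots; no derivatives of f are required. For example, muller(lambda t: t**3 - 2.0, 0.0, 1.0, 2.0) converges to the real cube root of 2 (possibly with a negligible imaginary part, since the iteration works in complex arithmetic).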