
Showing papers on "Expectation–maximization algorithm published in 1976"


Journal ArticleDOI
TL;DR: This algorithm is shown to yield an image which is unbiased, which has the minimum variance of any estimator using the same measurements, and which will perform better than any current reconstruction technique, where the performance measures are the bias and variance.
Abstract: The stochastic nature of the measurements used for image reconstruction from projections has largely been ignored in the past. If taken into account, the stochastic nature has been used to calculate the performance of algorithms which were developed independently of probabilistic considerations. This paper utilizes the knowledge of the probability density function of the measurements from the outset, and derives a reconstruction scheme which is optimal in the maximum likelihood sense. This algorithm is shown to yield an image which is unbiased -- that is, on the average it equals the object being reconstructed -- and which has the minimum variance of any estimator using the same measurements. As such, when operated in a stochastic environment, it will perform better than any current reconstruction technique, where the performance measures are the bias and variance.
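Under the simplest stochastic measurement model (projections corrupted by i.i.d. Gaussian noise), the maximum likelihood reconstruction reduces to ordinary least squares, which is exactly the unbiased, minimum-variance estimator the abstract describes. A minimal toy sketch; the two-"pixel" object, the projection matrix A, and the measurements y are invented for illustration, and the paper itself works with the actual measurement density rather than this generic Gaussian case:

```python
# Toy ML reconstruction: 2-"pixel" object f, 3 noisy projections y = A f + e,
# with unit-variance Gaussian noise, so ML = least squares.
A = [[1.0, 0.0],
     [0.0, 1.0],
     [1.0, 1.0]]
y = [1.1, 1.9, 3.1]  # noisy measurements of f = (1, 2)

# Normal equations A^T A f = A^T y (the ML condition under i.i.d. Gaussian noise)
AtA = [[sum(A[k][i] * A[k][j] for k in range(3)) for j in range(2)]
       for i in range(2)]
Aty = [sum(A[k][i] * y[k] for k in range(3)) for i in range(2)]

# Solve the 2x2 system by Cramer's rule
det = AtA[0][0] * AtA[1][1] - AtA[0][1] * AtA[1][0]
f = [(AtA[1][1] * Aty[0] - AtA[0][1] * Aty[1]) / det,
     (AtA[0][0] * Aty[1] - AtA[1][0] * Aty[0]) / det]
```

With these numbers the reconstruction comes out close to the noiseless object (1, 2); any standard linear solver could replace the hand-rolled 2x2 solve.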

224 citations


Journal ArticleDOI
TL;DR: In this paper, it is shown that for some generalized linear models maximum likelihood estimates always exist, while for others they exist provided certain degeneracies in the data do not occur; for the probit model in particular, the estimate, when it exists, is unique.
Abstract: SUMMARY Generalized linear models were defined by Nelder & Wedderburn (1972) and include a large class of useful statistical models. It is shown that for some of these models maximum likelihood estimates always exist and that for some others they exist provided certain degeneracies in the data do not occur. Similar results are obtained for uniqueness of maximum likelihood estimates and for their being confined to the interior of the parameter space. For instance, with the familiar model of probit analysis it is shown that under certain conditions the maximum likelihood estimate exists, and that if it does exist it is unique. The models considered also include those involving the normal, Poisson and gamma distributions with power and logarithmic transformations to linearity.
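For the probit case the likelihood equations have no closed form, and the existence question is visible numerically: with perfectly separated responses the iterates diverge, while overlapping responses yield a finite maximum. A minimal sketch of Fisher scoring for a one-parameter probit model; the data and the single-covariate parameterization are invented for illustration and are not from the paper:

```python
import math

def norm_pdf(z):
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def norm_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def probit_mle(x, y, beta=0.0, iters=25):
    """Fisher scoring for the one-parameter probit model P(y=1 | x) = Phi(beta * x)."""
    for _ in range(iters):
        score, info = 0.0, 0.0
        for xi, yi in zip(x, y):
            p = min(max(norm_cdf(beta * xi), 1e-12), 1.0 - 1e-12)
            w = norm_pdf(beta * xi)
            score += xi * w * (yi - p) / (p * (1.0 - p))  # gradient of log-likelihood
            info += xi * xi * w * w / (p * (1.0 - p))     # expected (Fisher) information
        beta += score / info
    return beta

# overlapping responses, so the MLE exists and is finite
x = [-2.0, -1.0, -0.5, 0.5, 1.0, 2.0]
y = [0, 0, 1, 0, 1, 1]
beta_hat = probit_mle(x, y)
```

Replacing y with perfectly separated labels (all zeros for negative x, all ones for positive x) makes the iterates grow without bound, which is exactly the degeneracy the paper's existence conditions rule out.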

206 citations


Journal ArticleDOI
TL;DR: In this paper, the role of martingale limit theory in the theory of maximum likelihood estimation for continuous-time stochastic processes is investigated and analogues of classical statistical concepts and quantities are also suggested.
Abstract: This paper is mainly concerned with the asymptotic theory of maximum likelihood estimation for continuous-time stochastic processes. The role of martingale limit theory in this theory is developed. Some analogues of classical statistical concepts and quantities are also suggested. Various examples that illustrate parts of the theory are worked through, producing new results in some cases. The role of diffusion approximations in estimation is also explored.

Keywords: maximum likelihood estimation; continuous-time stochastic processes; asymptotic theory; martingale limit theory; diffusion approximations

139 citations


Journal ArticleDOI
TL;DR: In this article, a probabilistic model for the validation of behavioral hierarchies is presented, which is by means of iterative convergence to maximum likelihood estimates, and two approaches to assess the fit of the model to sample data are discussed.
Abstract: A probabilistic model for the validation of behavioral hierarchies is presented. Estimation is by means of iterative convergence to maximum likelihood estimates, and two approaches to assessing the fit of the model to sample data are discussed. The relation of this general probabilistic model to other more restricted models which have been presented previously is explored and three cases of the general model are applied to exemplary data.

121 citations


Journal ArticleDOI
TL;DR: In this article, a simple iterative method of solution is proposed and studied, and it is shown that the sequence of iterates converges to a relative maximum of the likelihood function, and that the convergence is geometric with a factor of convergence which for large samples equals the maximal relative loss of Fisher information due to the incompleteness of data.
Abstract: The paper deals with the numerical solution of the likelihood equations for incomplete data from exponential families, that is for data being a function of exponential family data. Illustrative examples especially studied in this paper concern grouped and censored normal samples and normal mixtures. A simple iterative method of solution is proposed and studied. It is shown that the sequence of iterates converges to a relative maximum of the likelihood function, and that the convergence is geometric with a factor of convergence which for large samples equals the maximal relative loss of Fisher information due to the incompleteness of data. This large-sample factor of convergence is illustrated diagrammatically for the examples mentioned above. Experiences of practical application are mentioned.
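For the normal-mixture example, the iteration alternates between computing each observation's component membership probabilities under the current parameters and re-estimating the parameters from those weights, the scheme later popularized as the EM algorithm. A minimal sketch for a two-component mixture with unit variances; the data and starting values are invented for illustration:

```python
import math

def em_mixture(data, mu1, mu2, w=0.5, iters=50):
    """Two-component normal mixture in one dimension (unit variances):
    E-step computes responsibilities, M-step re-estimates means and weight."""
    for _ in range(iters):
        r = []  # responsibility of component 1 for each point
        for x in data:
            p1 = w * math.exp(-0.5 * (x - mu1) ** 2)
            p2 = (1.0 - w) * math.exp(-0.5 * (x - mu2) ** 2)
            r.append(p1 / (p1 + p2))
        n1 = sum(r)
        mu1 = sum(ri * x for ri, x in zip(r, data)) / n1
        mu2 = sum((1.0 - ri) * x for ri, x in zip(r, data)) / (len(data) - n1)
        w = n1 / len(data)
    return mu1, mu2, w

data = [-2.2, -1.9, -2.1, -1.8, 2.0, 2.3, 1.7, 2.1]
m1, m2, w = em_mixture(data, mu1=-1.0, mu2=1.0)
```

On well-separated clusters like these the iteration converges quickly; for nearly overlapping components it exhibits the slow geometric convergence the paper quantifies, with rate governed by the relative loss of Fisher information.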

102 citations


Journal ArticleDOI
TL;DR: In this article, a (k + 1)-parameter version of the k-dimensional multivariate exponential distribution (MVE) of Marshall and Olkin is investigated and a simple estimator (INT) is derived as the first iterate in solving the likelihood equations iteratively.
Abstract: Parameter estimation for a (k + 1)-parameter version of the k-dimensional multivariate exponential distribution (MVE) of Marshall and Olkin is investigated. Although not absolutely continuous with respect to Lebesgue measure, a density with respect to a dominating measure is specified, enabling derivation of a likelihood function and likelihood equations. In general, the likelihood equations, not solvable explicitly, have a unique root which is the maximum likelihood estimator (MLE). A simple estimator (INT) is derived as the first iterate in solving the likelihood equations iteratively. The resulting sequence of estimators converges to the MLE for sufficiently large samples. These results can be extended to the more general (2k − 1)-parameter MVE.

73 citations


Journal ArticleDOI
TL;DR: In this paper, rapidly converging iterative procedures for obtaining exact maximum likelihood estimates of the two-parameter gamma distribution scale and shape parameters are presented; the resulting estimates support a likelihood ratio test based on the two-parameter gamma distribution for investigating possible treatment-induced scale differences.
Abstract: Rapidly converging iterative procedures for obtaining exact maximum likelihood estimates of the two-parameter gamma distribution scale and shape parameters are presented. These procedures yield estimates of parameters associated with a likelihood ratio test based on the two-parameter gamma distribution for investigating possible treatment-induced scale differences.
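A standard route to these estimates: eliminate the scale parameter from the likelihood equations, leaving a single equation log(alpha) − psi(alpha) = log(x̄) − mean(log x) for the shape, then iterate Newton's method. A hedged sketch of the textbook iteration, not necessarily the paper's exact procedure; the digamma and trigamma functions are approximated by finite differences of math.lgamma to stay within the standard library:

```python
import math

def digamma(a, h=1e-5):
    # psi(a): central difference of log-gamma (stdlib-only approximation)
    return (math.lgamma(a + h) - math.lgamma(a - h)) / (2.0 * h)

def trigamma(a, h=1e-4):
    # psi'(a): second central difference of log-gamma
    return (math.lgamma(a + h) - 2.0 * math.lgamma(a) + math.lgamma(a - h)) / (h * h)

def gamma_mle(data, iters=20):
    """Newton iteration for the shape of a two-parameter gamma distribution:
    solve log(alpha) - psi(alpha) = log(mean) - mean(log x); then scale = mean/alpha."""
    n = len(data)
    mean = sum(data) / n
    s = math.log(mean) - sum(math.log(x) for x in data) / n
    # common closed-form starting approximation for the shape
    alpha = (3.0 - s + math.sqrt((s - 3.0) ** 2 + 24.0 * s)) / (12.0 * s)
    for _ in range(iters):
        f = math.log(alpha) - digamma(alpha) - s
        fprime = 1.0 / alpha - trigamma(alpha)
        alpha -= f / fprime
    return alpha, mean / alpha
```

The left-hand side log(alpha) − psi(alpha) is strictly decreasing in alpha, so the equation has a unique root and Newton's method converges rapidly from the closed-form start, consistent with the fast convergence the abstract reports.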

27 citations



Journal ArticleDOI
TL;DR: In this article, it is argued that finite populations do fall within the scope of this axiom, and that the likelihood function, when properly defined and interpreted, can play the same fundamental role in finite population theory as it does elsewhere in statistical inference.
Abstract: SUMMARY Both conventional randomization models and prediction, i.e. superpopulation, models for finite populations are discussed from the viewpoint of the likelihood axiom. It is argued that finite populations do fall within the scope of this axiom, and that the likelihood function, when properly defined and interpreted, can play the same fundamental role in finite population theory as it does elsewhere in statistical inference. Under a multivariate normal regression model, the relationship between the likelihood function for the population total and the probability distribution of the minimum variance unbiased estimator is studied. The anticipated result, that this estimator maximizes the likelihood function, is shown to hold under the most familiar models, but an example shows it to be false in general. The role of balanced samples in providing robust inferences is discussed briefly. Conditions are described under which the likelihood function is unchanged by the addition of a new regressor to the model.

20 citations


Journal ArticleDOI
TL;DR: In this paper, a method of centres algorithm for maximum likelihood estimation in the three-parameter lognormal model is presented and discussed; the algorithm is a member of the class of moving truncations algorithms for solving nonlinear programming problems and is able to move the numerical search out of the region of the infinite maximum of the conditional likelihood function.
Abstract: A method of centres algorithm for maximum likelihood estimation in the three-parameter lognormal model is presented and discussed. The algorithm is a member of the class of moving truncations algorithms for solving nonlinear programming problems and is able to move the numerical search out of the region of the infinite maximum of the conditional likelihood function, thereby permitting convergence to an interior relative maximum of this function. The algorithm also includes an optimality test to locate the primary relative maximum of the likelihood function.

17 citations



Journal ArticleDOI
TL;DR: In this paper, the authors investigated the asymptotic properties of the maximum likelihood estimate (MLE) of parameters of a stochastic process, which is related to martingale limit theory by recognizing the (known) fact that, under certain regularity conditions, the derivative of the logarithm of the likelihood function is a martingale.
Abstract: This thesis is primarily concerned with the investigation of asymptotic properties of the maximum likelihood estimate (MLE) of parameters of a stochastic process. These asymptotic properties are related to martingale limit theory by recognizing the (known) fact that, under certain regularity conditions, the derivative of the logarithm of the likelihood function is a martingale. To this end, part of the thesis is devoted to using or developing martingale limit theory to provide conditions for the consistency and/or asymptotic normality of the MLE. Thus, Chapter 1 is concerned with the martingale limit theory, while the remaining chapters look at its application to three broad types of stochastic processes. Chapter 2 extends the classical development of asymptotic theory of MLEs (à la Cramér) to stochastic processes which, basically, behave in a non-explosive way and for which non-random norming sequences can be used. In this chapter we also introduce a generalization of Fisher's measure of information to the stochastic process situation. Chapter 3 deals with the theory for general processes and develops the notion of "conditional" exponential families of processes, as well as establishing the importance of using random norming sequences. In Chapter 4 we consider the asymptotic theory of maximum likelihood estimation for continuous-time processes and establish results which are analogous to those for discrete-time processes. In each of these chapters many applications are considered in an attempt to show how known and new results fit into the general framework.

Journal ArticleDOI
TL;DR: In this paper, an efficient Cholesky-type algorithm is developed for forming the matrix W, which effective optimization methods for maximizing the likelihood must compute at each iteration.
Abstract: The W transformation greatly reduces the computational burden in obtaining maximum likelihood estimates for the mixed A.O.V. model. However, effective optimization methods for maximizing the likelihood must compute the matrix W at each iteration. This paper develops an efficient Cholesky-type algorithm for forming W.
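The paper's algorithm is specific to the mixed-model matrix W, but its Cholesky building block is standard: factor a symmetric positive definite matrix as L Lᵀ with lower-triangular L, filled in one entry at a time from entries already computed. A generic sketch; the 3x3 test matrix is invented for illustration, and this is plain Cholesky, not the paper's W construction:

```python
def cholesky(A):
    """Lower-triangular L with A = L L^T, for symmetric positive definite A."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = (A[i][i] - s) ** 0.5   # diagonal entry
            else:
                L[i][j] = (A[i][j] - s) / L[j][j]  # below-diagonal entry
    return L

A = [[4.0, 2.0, 0.0],
     [2.0, 5.0, 1.0],
     [0.0, 1.0, 3.0]]
L = cholesky(A)
```

Each entry of L depends only on previously finished entries, so the factor is formed in a single sweep, which is what makes Cholesky-style constructions cheap inside an iterative likelihood maximization.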

01 Jul 1976
TL;DR: In this paper, an estimation procedure for the vector autoregressive moving average model is derived from the maximum likelihood approach, based on Newton-Raphson techniques applied to the likelihood equations; the resulting two-step procedure involves only generalized least squares estimation in the second step.
Abstract: A method is presented for the estimation of the parameters in the vector autoregressive moving average time series model. The estimation procedure is derived from the maximum likelihood approach and is based on Newton-Raphson techniques applied to the likelihood equations. The resulting two-step Newton-Raphson procedure is computationally simple, involving only generalized least squares estimation in the second step. This Newton-Raphson estimator is shown to be asymptotically efficient and to possess a limiting multivariate normal distribution.
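The principle behind such two-step procedures, that a single Newton-Raphson step on the likelihood equations from a consistent starting value is already asymptotically efficient, is general. A minimal scalar illustration with a Cauchy location model, where the sample median supplies the consistent start; the model and data are invented for illustration and are unrelated to the paper's time series setting:

```python
import statistics

def cauchy_one_step(data):
    """One Newton-Raphson step on the Cauchy(theta, 1) location score,
    started from the sample median (a consistent estimator of theta)."""
    theta = statistics.median(data)
    # score: first derivative of the log-likelihood in theta
    score = sum(2.0 * (x - theta) / (1.0 + (x - theta) ** 2) for x in data)
    # observed information: negative second derivative of the log-likelihood
    info = sum(2.0 * (1.0 - (x - theta) ** 2) / (1.0 + (x - theta) ** 2) ** 2
               for x in data)
    return theta + score / info

# small sample with one heavy-tailed outlier, harmless to the median start
data = [2.1, 2.6, 2.9, 3.0, 3.2, 3.4, 8.8]
estimate = cauchy_one_step(data)
```

The single correction step pulls the median toward the likelihood maximum without any further iteration, mirroring the role of the generalized least squares second step in the paper's procedure.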

Journal ArticleDOI
TL;DR: In this article, an iterative procedure for calculating the maximum likelihood estimates of the unknown parameters is presented to aid its application in biological research; comparisons between the linear learning model and the Markov model are also included.
Abstract: An extension of the Markov model with only a few additional parameters is the linear learning model, as introduced by Bush & Mosteller (1955). To help its application in biological research an iterative procedure for calculating the maximum likelihood estimates of the unknown parameters is presented. Numerical examples, some of which have already been treated by Bush & Mosteller, are included, as well as comparisons between the linear learning model and the Markov model.


Journal ArticleDOI
Bo Leden
TL;DR: In this paper, parametric models of a one-dimensional heat diffusion process, a linear, infinite-dimensional system, are determined using the maximum likelihood method; successive terms in the modal expansion of the transfer function having gain factors of the same sign are found to be identified as a single term.
Abstract: Parametric models of a one-dimensional heat diffusion process are determined using the maximum likelihood method. The process is a linear, infinite-dimensional system. Statistical tests indicate that the appropriate orders of the models obtained are relatively low. It is found empirically that successive terms in the modal expansion of the transfer function of the process, having gain factors of the same sign, are identified as a single term.