
Showing papers on "Mixture model" published in 1983


Journal ArticleDOI
TL;DR: In this article, lower bounds for estimation of the parameters of models with both parametric and nonparametric components are given in the form of representation theorems (for regular estimates) and asymptotic minimax bounds.
Abstract: Asymptotic lower bounds for estimation of the parameters of models with both parametric and nonparametric components are given in the form of representation theorems (for regular estimates) and asymptotic minimax bounds. The methods used involve: (i) the notion of a "Hellinger-differentiable (root-) density", where part of the differentiation is with respect to the nonparametric part of the model, to obtain appropriate scores; and (ii) calculation of the "effective score" for the real or vector (finite-dimensional) parameter of interest as that component of the score function orthogonal to all nuisance parameter "scores" (perhaps infinite-dimensional). The resulting asymptotic information for estimation of the parametric component of the model is just (4 times) the squared $L^2$-norm of the "effective score". A corollary of these results is a simple necessary condition for "adaptive estimation": adaptation is possible only if the scores for the parameter of interest are orthogonal to the scores for the nuisance function or nonparametric part of the model. Examples considered include the one-sample location model with and without symmetry, mixture models, the two-sample shift model, and Cox's proportional hazards model.

406 citations
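
In standard semiparametric notation (the symbols below are conventional shorthand, not taken from the paper: $\dot\ell_\theta$ for the parametric score, $\dot\ell_\eta$ for nuisance scores, $\Pi$ for $L^2$ projection), the abstract's construction can be sketched as follows; the factor 4 appears because the scores are defined through root-densities:

```latex
% Effective score: the parametric score minus its projection onto the
% closed linear span of the nuisance scores (conventional notation,
% assumed here rather than quoted from the paper).
\[
  \ell^{*}_{\theta} \;=\; \dot\ell_{\theta}
    \;-\; \Pi\!\left(\dot\ell_{\theta} \,\middle|\, \overline{\operatorname{lin}}\{\dot\ell_{\eta}\}\right),
  \qquad
  I^{*}(\theta) \;=\; 4\,\bigl\|\ell^{*}_{\theta}\bigr\\|_{L^2}^{2}.
\]
% Adaptive estimation is possible only when the projection vanishes,
% i.e. the parametric scores are orthogonal to all nuisance scores:
\[
  \Pi\!\left(\dot\ell_{\theta} \,\middle|\, \overline{\operatorname{lin}}\{\dot\ell_{\eta}\}\right) = 0 .
\]
```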


Journal ArticleDOI
TL;DR: In this paper, the problem of image segmentation is considered in the context of a mixture of probability distributions, where segments fall into classes and a probability distribution is associated with each class of segment.
Abstract: The problem of image segmentation is considered in the context of a mixture of probability distributions. The segments fall into classes. A probability distribution is associated with each class of segment. Parametric families of distributions are considered, a set of parameter values being associated with each class. With each observation is associated an unobservable label, indicating from which class the observation arose. Segmentation algorithms are obtained by applying a method of iterated maximum likelihood to the resulting likelihood function. A numerical example is given. Choice of the number of classes, using Akaike's information criterion (AIC) for model identification, is illustrated.

89 citations
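
The abstract does not spell out the iterated maximum-likelihood algorithm; as an illustration only, here is a minimal sketch using the familiar EM-style updates for a univariate Gaussian mixture over pixel values (function names and toy data are hypothetical), with AIC used to choose the number of classes:

```python
import numpy as np

def em_gmm(x, k, iters=200, seed=0):
    """EM for a k-component univariate Gaussian mixture on pixel values
    (a stand-in for the paper's iterated maximum-likelihood scheme)."""
    rng = np.random.default_rng(seed)
    pi = np.full(k, 1.0 / k)                    # mixing proportions
    mu = rng.choice(x, size=k, replace=False)   # initial class means
    var = np.full(k, x.var())                   # initial class variances
    for _ in range(iters):
        # E-step: posterior probability of each unobservable class label
        dens = np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        resp = pi * dens
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: re-estimate proportions and class parameters
        nk = resp.sum(axis=0)
        pi, mu = nk / len(x), resp.T @ x / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    dens = np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
    loglik = np.log((pi * dens).sum(axis=1)).sum()
    return resp.argmax(axis=1), loglik          # hard segmentation, log-likelihood

def aic(loglik, k):
    # free parameters: (k - 1) weights + k means + k variances
    return -2 * loglik + 2 * (3 * k - 1)

# Toy "image": two intensity populations; choose the number of classes by AIC.
rng = np.random.default_rng(1)
pixels = np.concatenate([rng.normal(60, 5, 500), rng.normal(130, 10, 500)])
best_k = min(range(1, 5), key=lambda k: aic(em_gmm(pixels, k)[1], k))
```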


16 Jun 1983
TL;DR: In this paper, the problem of clustering individuals is considered within the context of a multivariate normal mixture using model-selection criteria, and two well-known criteria, Akaike's Information Criterion (AIC) and Schwarz's Criterion, are proposed for the first time as approaches to choosing the appropriate number k of components in the multivariate normal mixture model.
Abstract: The problem of clustering individuals is considered within the context of a multivariate normal mixture using model-selection criteria. Often, the number k of components in the mixture is not known. In practical problems, the question arises as to the appropriate choice of k: deciding how many components are in the mixture is a difficult multiple decision problem, and the null distribution of the criterion when the data actually contain k clusters is not known and remains largely unresolved. Two well-known model-selection criteria, Akaike's Information Criterion (AIC) and Schwarz's Criterion, are proposed for the first time as two new approaches to choosing k in the multivariate normal mixture model. The forms of both criteria are derived for the standard multivariate normal mixture model. Results are obtained when the data are initially partitioned into equal-sized groups; when the data are initially reordered; when the data are initialized by the k-means algorithm; when the data are initialized by a special initialization scheme; and when that special initialization scheme is applied to reordered data.

43 citations
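
A minimal modern illustration of applying the two criteria to a multivariate normal mixture, using scikit-learn as a stand-in implementation rather than the paper's own code (its `init_params="kmeans"` option mirrors one of the initialization schemes studied):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Toy data: two well-separated bivariate normal clusters
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(5, 1, (100, 2))])

# AIC = -2 log L + 2m and Schwarz's criterion (BIC) = -2 log L + m log n,
# where m counts the free mixture parameters; pick the k minimizing each.
for k in range(1, 6):
    gm = GaussianMixture(n_components=k, init_params="kmeans", random_state=0).fit(X)
    print(k, round(gm.aic(X), 1), round(gm.bic(X), 1))
```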


Journal ArticleDOI
TL;DR: A simple alternative model has been developed especially for the prediction of the intensity of equiratio mixtures, and it is shown that a psychophysical equiratio mixture function can be constructed in the same way as a power function for a single compound.
Abstract: Psychophysical taste mixture models describe the relationship between the perceived intensities of the unmixed components and the intensity of the mixture. Three of these models are discussed. As all of them appear either to be internally inconsistent or to lack sufficient generality, a simple alternative model has been developed, especially for the prediction of the intensity of equiratio mixtures. This model was experimentally tested with glucose-fructose mixtures. On the basis of the data obtained, it is shown that a psychophysical equiratio mixture function can be constructed in the same way as a power function for a single compound. The results show that the new mixture model can predict the functions for equiratio mixtures with great precision. Implications for mixture interaction phenomena are discussed.

37 citations
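
As an illustration of constructing such a power function for an equiratio mixture (the concentrations and intensity ratings below are made-up values, not the paper's data):

```python
import numpy as np

# Hypothetical total concentrations of an equiratio glucose-fructose
# mixture and hypothetical perceived-intensity ratings.
conc = np.array([0.125, 0.25, 0.5, 1.0, 2.0])
psi = np.array([3.1, 5.0, 8.2, 13.5, 21.9])

# Fit the Stevens-type power function psi = a * C**n by least squares on
# the log-log scale, exactly as one would for a single compound.
n, log_a = np.polyfit(np.log(conc), np.log(psi), 1)
a = np.exp(log_a)
print(f"psi = {a:.2f} * C^{n:.2f}")
```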


Journal ArticleDOI
TL;DR: A new algorithm for obtaining extreme vertices designs for linear mixture models is proposed and generally produces designs that are as efficient as those produced by the XVERT algorithm of Snee and Marquardt (1974) but with less computational effort.
Abstract: A new algorithm for obtaining extreme vertices designs for linear mixture models is proposed. The algorithm generally produces designs that are as efficient as those produced by the XVERT algorithm of Snee and Marquardt (1974) but with less computational effort. Use of the algorithm in obtaining designs is also described.

23 citations
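
The abstract does not describe the new algorithm itself; the sketch below shows the classic extreme-vertices construction that XVERT-style methods refine (naive enumeration over bound combinations, for illustration only):

```python
from itertools import product

def extreme_vertices(lower, upper):
    """Candidate extreme vertices of the constrained mixture region
    {x : sum(x) = 1, lower[i] <= x[i] <= upper[i]}.
    Fix every component but one at a bound, solve the mixture constraint
    for the free component, and keep the point if it stays in bounds."""
    q = len(lower)
    vertices = set()
    for free in range(q):
        others = [i for i in range(q) if i != free]
        for bounds in product(*[(lower[i], upper[i]) for i in others]):
            x = [0.0] * q
            for i, b in zip(others, bounds):
                x[i] = b
            x[free] = 1.0 - sum(bounds)
            if lower[free] - 1e-12 <= x[free] <= upper[free] + 1e-12:
                vertices.add(tuple(round(v, 10) for v in x))
    return sorted(vertices)

# Three-component mixture with individual lower/upper bounds:
print(extreme_vertices([0.1, 0.1, 0.0], [0.7, 0.6, 0.5]))
```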


01 Jun 1983
TL;DR: In this paper, the estimation of mixing proportions in the mixture model is discussed with emphasis on the mixture of two normal components with all five parameters unknown, and simulations are presented which compare minimum distance (MD) and maximum likelihood (ML) estimation of the parameters of this mixture-of-normals model.
Abstract: The estimation of mixing proportions in the mixture model is discussed with emphasis on the mixture of two normal components with all five parameters unknown. Simulations are presented which compare minimum distance (MD) and maximum likelihood (ML) estimation of the parameters of this mixture-of-normals model. Some practical issues of implementation of these results are also discussed. Simulation results indicate that ML techniques are superior to MD when the component distributions actually are normal, while MD techniques provide better estimates than ML under symmetric departures from component normality. Results are presented which establish strong consistency and asymptotic normality of the MD estimator under conditions which include the mixture-of-normals model. Asymptotic variances and relative efficiencies are obtained for further comparison of the MDE and MLE.

2 citations
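
The abstract does not say which distance the MD estimator minimizes; the sketch below uses the Cramér-von Mises criterion, one common choice, to fit all five parameters of the two-component normal mixture (starting values and bounds are ad hoc):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def mixture_cdf(x, p, m1, s1, m2, s2):
    return p * norm.cdf(x, m1, s1) + (1 - p) * norm.cdf(x, m2, s2)

def cvm_distance(theta, x_sorted):
    """Cramér-von Mises distance between the empirical CDF and the
    two-component normal mixture CDF (one common MD criterion; the
    abstract does not specify which distance the paper uses)."""
    p, m1, s1, m2, s2 = theta
    n = len(x_sorted)
    F = mixture_cdf(x_sorted, p, m1, s1, m2, s2)
    i = np.arange(1, n + 1)
    return np.sum((F - (2 * i - 1) / (2 * n)) ** 2) + 1 / (12 * n)

rng = np.random.default_rng(0)
x = np.sort(np.concatenate([rng.normal(0, 1, 300), rng.normal(3, 1.5, 200)]))

res = minimize(cvm_distance, x0=[0.5, x.min(), 1.0, x.max(), 1.0], args=(x,),
               bounds=[(0.01, 0.99), (None, None), (0.1, None),
                       (None, None), (0.1, None)], method="L-BFGS-B")
print(res.x)  # (p, mu1, sigma1, mu2, sigma2)
```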