
Showing papers on "Mixture model" published in 1989


Journal ArticleDOI
D. N. Geary1
TL;DR: A review of Mixture Models: Inference and Applications to Clustering (Statistics: Textbooks and Monographs Series, Vol. 84), by G. J. McLachlan and K. E. Basford.
Abstract: 17. Mixture Models: Inference and Applications to Clustering (Statistics: Textbooks and Monographs Series, Vol. 84). By G. J. McLachlan and K. E. Basford. Dekker, New York, 1988. xii + 254 pp. $83.50.

624 citations



Journal ArticleDOI
TL;DR: In this article, the authors use moment matrices and their determinants to elucidate the structure of mixture estimation as carried out using the method of moments and derive an asymptotically normal statistic for testing the true number of points in the mixing distribution.
Abstract: The use of moment matrices and their determinants is shown to elucidate the structure of mixture estimation as carried out using the method of moments. The setting is the estimation of a discrete mixing distribution with finite support. In the important class of quadratic variance exponential families, it is shown that for any sample there is an integer $\hat{\nu}$, depending on the data, which represents the maximal number of support points that one can put in the estimated mixing distribution. From this analysis one can derive an asymptotically normal statistic for testing the true number of points in the mixing distribution. In addition, one can construct consistent nonparametric estimates of the mixing distribution for the case when the number of points is unknown or even infinite. The normal model is then examined in more detail; in particular, the case when $\sigma^2$ is unknown is given a comprehensive solution. It is shown how to estimate the parameters in a direct way for every hypothesized number of support points in the mixing distribution, and how the structure of the problem yields a decomposition of variance into model and error components very similar to the traditional analysis of variance.

124 citations
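The moment-matrix idea lends itself to a short illustration. The sketch below is a hedged simplification, not the paper's exact procedure: for a normal location mixture with unit component variance, it recovers the moments of the mixing distribution from the sample moments of the data, then takes the support-size estimate to be the first order at which a Hankel moment determinant fails to be positive. All names are illustrative; the paper's asymptotically normal test statistic is a more careful version of this determinant check.

```python
# Hedged sketch, not the paper's exact procedure: estimate the maximal
# number of support points nu-hat for a normal location mixture
# X = theta + Z, Z ~ N(0,1), theta ~ discrete mixing distribution Q.
import numpy as np
from math import comb

def normal_moment(k):
    # E[Z^k] for Z ~ N(0,1): zero for odd k, (k-1)!! for even k
    if k % 2 == 1:
        return 0.0
    return float(np.prod(np.arange(k - 1, 0, -2))) if k > 0 else 1.0

def mixing_moments(x, order):
    # Solve E[X^n] = sum_j C(n,j) m_j E[Z^(n-j)] recursively for the
    # moments m_n of the mixing distribution Q.
    ex = [np.mean(x ** n) for n in range(order + 1)]
    m = [1.0]
    for n in range(1, order + 1):
        tail = sum(comb(n, j) * m[j] * normal_moment(n - j) for j in range(n))
        m.append(ex[n] - tail)
    return m

def nu_hat(x, max_points=6):
    # First j whose Hankel moment determinant det[m_{a+b}] is nonpositive:
    # a simplified reading of the paper's support-size bound.
    m = mixing_moments(x, 2 * max_points)
    for j in range(1, max_points + 1):
        H = np.array([[m[a + b] for b in range(j + 1)] for a in range(j + 1)])
        if np.linalg.det(H) <= 0:
            return j
    return max_points

rng = np.random.default_rng(0)
theta = rng.choice([-2.0, 2.0], size=5000)     # true Q has two support points
x = theta + rng.standard_normal(5000)
print(nu_hat(x))   # usually 2; sampling noise near the boundary can add one
```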


14 Dec 1989
TL;DR: In this article, the authors developed a tracking filter based on the assumption that the number of mixture components should be minimized without modifying the "structure" of the distribution beyond a specified limit.
Abstract: The paper is concerned with the development of practical filters for tracking a target when the origin of sensor measurements is uncertain. The full Bayesian solution to this problem gives rise to mixture distributions. From knowledge of the mixture distribution, in principle, an optimal estimate of the state vector for any criteria may be obtained. Also, if the problem is linear and Gaussian, the distribution becomes a Gaussian mixture in which each component probability density function is given by a Kalman filter. The author only considers this case. The methods presented are based on the premise that the number of mixture components should be minimized without modifying the 'structure' of the distribution beyond a specified limit. The techniques operate by merging similar components in such a way that the approximation preserves the mean and covariance of the original mixture. Also to allow the tracking filter to be implemented as a bank of Kalman filters, it is required that the approximated distribution is itself a Gaussian mixture.

109 citations
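The merging step that preserves the mean and covariance of the original mixture can be written down directly. The sketch below is a hedged illustration: the pairwise moment-preserving merge follows the description in the abstract, while the similarity measure and the greedy reduction loop are simplified stand-ins for the paper's actual criteria.

```python
# Minimal sketch of moment-preserving pairwise merging for Gaussian
# mixture reduction, in the spirit of the merging step described above.
import numpy as np

def merge_pair(w1, m1, P1, w2, m2, P2):
    # Replace two components by one with the same overall mean/covariance.
    w = w1 + w2
    m = (w1 * m1 + w2 * m2) / w
    d = (m1 - m2).reshape(-1, 1)
    P = (w1 * P1 + w2 * P2) / w + (w1 * w2 / w**2) * (d @ d.T)
    return w, m, P

def reduce_mixture(ws, ms, Ps, max_components):
    # Greedily merge the closest pair (weighted mean distance is an
    # assumed, simplified criterion) until the component budget is met.
    ws, ms, Ps = list(ws), list(ms), list(Ps)
    while len(ws) > max_components:
        best, pair = np.inf, None
        for i in range(len(ws)):
            for j in range(i + 1, len(ws)):
                cost = ws[i] * ws[j] / (ws[i] + ws[j]) * np.sum((ms[i] - ms[j])**2)
                if cost < best:
                    best, pair = cost, (i, j)
        i, j = pair
        w, m, P = merge_pair(ws[i], ms[i], Ps[i], ws[j], ms[j], Ps[j])
        for k in sorted(pair, reverse=True):
            del ws[k]; del ms[k]; del Ps[k]
        ws.append(w); ms.append(m); Ps.append(P)
    return ws, ms, Ps

ws = [0.4, 0.3, 0.3]
ms = [np.array([0.0, 0.0]), np.array([0.1, 0.0]), np.array([4.0, 4.0])]
Ps = [np.eye(2)] * 3
print(reduce_mixture(ws, ms, Ps, 2)[0])   # the two nearby components merge
```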


Journal ArticleDOI
TL;DR: This study illustrates the merit of the simple mixture model in adaptive processing for signal detection purposes by showing that the adaptive receiver performs better than the linear one which, in turn, performs slightly better than the robust correlator-limiter.
Abstract: Three receivers are compared for the detection of a known signal in additive ambient underwater noise of seagoing merchant vessels. These receivers are: the matched filter, which is the classical linear receiver based on a Gaussian assumption; the correlator-limiter, which is the Neyman-Pearson minimax robust receiver when the noise uncertainty is modeled as a mixture process with a Gaussian nominal; and the Gaussian-Gaussian mixture likelihood ratio receiver. This last receiver is adaptive in the sense that it is based on a parametric model whose parameters are computed from the actual data. The principal results of this study are that, in terms of the receiver operating curves, the adaptive receiver performs better than the linear one which, in turn, performs slightly better than the robust correlator-limiter. This study illustrates, for one particular noise sample, the merit of the simple mixture model in adaptive processing for signal detection purposes.

93 citations
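As a rough illustration of the adaptive receiver's structure, the sketch below computes the log-likelihood ratio for a known signal in two-component Gaussian-mixture noise and, for comparison, the matched-filter statistic. The mixture parameters are arbitrary illustrative values, not estimates from merchant-vessel noise as in the paper.

```python
# Hedged illustration: likelihood-ratio detection of a known signal in
# eps-contaminated Gaussian-mixture noise vs. the classical matched filter.
import numpy as np
from scipy.stats import norm

def mixture_logpdf(v, eps=0.1, s0=1.0, s1=5.0):
    # log density of two-component (nominal + contaminating) Gaussian noise
    return np.log((1 - eps) * norm.pdf(v, scale=s0) + eps * norm.pdf(v, scale=s1))

def lr_statistic(x, s):
    # log-likelihood ratio: "signal present" vs. "noise only"
    return np.sum(mixture_logpdf(x - s) - mixture_logpdf(x))

def matched_filter(x, s):
    # classical linear receiver: correlate the data with the known signal
    return float(np.dot(x, s))

rng = np.random.default_rng(1)
n = 200
s = 0.5 * np.sin(2 * np.pi * np.arange(n) / 20)     # known signal
heavy = rng.random(n) < 0.1                         # contaminated samples
x = s + np.where(heavy, 5.0, 1.0) * rng.standard_normal(n)
print(lr_statistic(x, s), matched_filter(x, s))
```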


Journal ArticleDOI
David J. Hand1
TL;DR: Mixture Models: Inference and Applications to Clustering, 1988, by G. McLachlan and K. Basford.
Abstract: Mixture Models: Inference and Applications to Clustering. By G. J. McLachlan and K. E. Basford. ISBN 0 8247 7691 7. Dekker, New York, 1988. xii + 254 pp. $83.50.

78 citations


Journal ArticleDOI
TL;DR: In this article, it was shown that the Kullback-Leibler distance between the mixture posterior and a single normal distribution is minimized when the mean and variance of the single normal distribution are chosen to be the mean and variance of the mixture posterior.
Abstract: Several authors have discussed Kalman filtering procedures using a mixture of normals as a model for the distributions of the noise in the observation and/or the state space equations. Under this model, resulting posteriors involve a mixture of normal distributions, and a “collapsing method” must be found in order to keep the recursive procedure simple. We prove that the Kullback-Leibler distance between the mixture posterior and that of a single normal distribution is minimized when we choose the mean and variance of the single normal distribution to be the mean and variance of the mixture posterior. Hence, “collapsing by moments” is optimal in this sense. We then develop the resulting optimal algorithm for “Kalman filtering” for this situation, and illustrate its performance with an example.

21 citations
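The "collapsing by moments" rule is simple enough to state as code. The sketch below replaces a mixture of univariate normals with the single normal matching its mean and variance, via the law of total variance; the function name is an assumption.

```python
# "Collapsing by moments": the single normal closest to a normal mixture
# in Kullback-Leibler distance matches the mixture's mean and variance.
import numpy as np

def collapse(weights, means, variances):
    w = np.asarray(weights, dtype=float)
    w /= w.sum()
    m = np.sum(w * np.asarray(means))
    # law of total variance: within-component plus between-component spread
    v = np.sum(w * (np.asarray(variances) + (np.asarray(means) - m) ** 2))
    return m, v

print(collapse([0.7, 0.3], [0.0, 4.0], [1.0, 2.0]))   # -> (1.2, 4.66)
```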


Journal ArticleDOI
TL;DR: Individual differences in children's performance on a classification task (Waxman et al.) are modeled by a two-component binomial mixture distribution; parameter estimates for the model are provided and the model is fitted to the data.

14 citations
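A two-component binomial mixture of this kind can be fitted by EM. The sketch below is hypothetical (the paper's estimation method is not described in the TL;DR): each child contributes k successes out of n trials, and the E-step computes the posterior probability of membership in each component.

```python
# Hypothetical sketch: EM for a two-component binomial mixture, the kind
# of model described above (k_i successes out of n_i trials per child).
# Starting values and iteration count are arbitrary choices.
import numpy as np
from scipy.stats import binom

def em_binomial_mixture(k, n, iters=200):
    pi, p1, p2 = 0.5, 0.3, 0.8
    for _ in range(iters):
        # E-step: posterior probability that each child is in component 1
        f1 = pi * binom.pmf(k, n, p1)
        f2 = (1 - pi) * binom.pmf(k, n, p2)
        g = f1 / (f1 + f2)
        # M-step: mixing proportion and weighted success rates
        pi = g.mean()
        p1 = np.sum(g * k) / np.sum(g * n)
        p2 = np.sum((1 - g) * k) / np.sum((1 - g) * n)
    return pi, p1, p2

rng = np.random.default_rng(4)
n = np.full(60, 12)                                 # 12 trials per child
group = rng.random(60) < 0.4
k = rng.binomial(n, np.where(group, 0.25, 0.85))
print(em_binomial_mixture(k, n))
```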


Journal ArticleDOI
TL;DR: In this paper, a minimum distance procedure, analogous to maximum likelihood for multinomial data, is employed to fit mixture models to mass-size relative frequencies recorded for some clay soils of southeastern Australia.
Abstract: A minimum distance procedure, analogous to maximum likelihood for multinomial data, is employed to fit mixture models to mass-size relative frequencies recorded for some clay soils of southeastern Australia. Log hyperbolic component distributions are considered initially and it is shown how they can be fitted satisfactorily at least to ungrouped data using a generalized EM algorithm. A computationally more convenient model with log skew Laplace components is subsequently shown to suffice. It is demonstrated how it can be fitted to the data in their original grouped form. Consideration is given also to the provision of standard errors using the idea of a quasi-sample size.

13 citations
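To give a flavour of the grouped-data criterion, the sketch below chooses mixture parameters to maximize the multinomial likelihood of binned counts, which is the sense in which the minimum distance procedure is "analogous to maximum likelihood for multinomial data". For simplicity it uses two normal components on the log scale rather than the paper's log hyperbolic or log skew Laplace components, and all names are illustrative.

```python
# Illustrative sketch of the grouped-data criterion: choose mixture
# parameters maximizing the multinomial likelihood of binned counts.
# Two normal components on the log scale stand in for the paper's
# log hyperbolic / log skew Laplace components.
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

def bin_probs(theta, edges):
    w, m1, s1, m2, s2 = theta
    w = np.clip(w, 0.0, 1.0)
    s1, s2 = abs(s1), abs(s2)
    cdf = w * norm.cdf(edges, m1, s1) + (1 - w) * norm.cdf(edges, m2, s2)
    p = np.clip(np.diff(cdf), 1e-12, None)
    return p / p.sum()                # condition on falling inside the bins

def neg_loglik(theta, counts, edges):
    return -np.sum(counts * np.log(bin_probs(theta, edges)))

rng = np.random.default_rng(2)
z = np.where(rng.random(2000) < 0.4,
             rng.normal(-1.0, 0.5, 2000), rng.normal(1.0, 0.8, 2000))
edges = np.linspace(-4.0, 4.0, 21)    # bin edges for the grouped data
counts, _ = np.histogram(z, bins=edges)
fit = minimize(neg_loglik, x0=[0.5, -1.0, 1.0, 1.0, 1.0],
               args=(counts, edges), method="Nelder-Mead")
print(fit.x)
```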


Book
18 Apr 1989
TL;DR: In this book, the authors present a validity generalization random model for the correlation coefficient R, grounded in classical test theory, and develop a conditional mixture model for correlation coefficients.
Abstract (table of contents):
1. Introduction: 1.1 Motivation and Background. 1.2 Conceptual Problems of Validity Generalization. 1.3 An Alternative Formulation.
2. A Validity Generalization Model: 2.1 Introduction. 2.2 Classical Test Theory. 2.3 A Validity Generalization Random Model. 2.4 The Joint Space of P and E. 2.5 P and E Are Dependent Variables. 2.6 The Distribution of R, c(r). 2.7 Are P and E Correlated? 2.8 Identifiability.
3. Estimation: 3.1 Introduction. 3.2 Lehmann's Classification. 3.3 Sample Data. 3.4 The Basic Sample Estimates. 3.5 The Estimator SP2 = SR2 - S*2. 3.6 Interpreting Estimators. 3.7 The Expectation of SR2. 3.8 The Expectation of S*2. 3.9 The Expectation of SP2. 3.10 Numerical Evaluation of E(SP2). 3.11 The Distribution of SP and the Power Problem. 3.12 The Consistency of SP2. 3.13 The Limiting Behavior of SR2. 3.14 Multifactor Estimation Procedures. 3.15 A Representative Multifactor Estimator. 3.16 A Comment on Z Transformations.
4. Summary and Discussion of Validity Generalization: 4.1 Summary of Model Properties. 4.2 Validity Generalization and Classical Test Theory. 4.3 Summary of Estimation Procedures. 4.4 Consistency and Identifiability. 4.5 The Bayesian Connection. 4.6 Computer Simulation Studies.
5. A Conditional Mixture Model for Correlation Coefficients: 5.1 Introduction. 5.2 Finite Mixture Distributions. 5.3 A Modeling Distribution for R. 5.4 A Mixture Model Distribution for R. 5.5 A Parent Distribution for Histograms of R. 5.6 Comment.
6. Parameter Estimation: 6.1 Introduction. 6.2 Estimation Equations.
7. Examples and Applications: 7.1 Introduction. 7.2 Artificial Data, Example 1. 7.3 How Many Components in the Mixture? 7.4 Electrical Workers, Example 2. 7.5 Army Jobs, Example 3. 7.6 Army Jobs, Example 4. 7.7 College Grades, Example 5. 7.8 Law School Test Scores and Grades, Example 6.
8. Artifact Corrections and Model Assumptions: 8.1 Artifact Corrections. 8.2 Identifiability of Mixtures. 8.3 Failure of Model Assumptions. 8.4 Properties of the Maximum Likelihood Estimates. 8.5 Miscellaneous Comments.
Notes. References. Author Index.

12 citations
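Chapter 5's conditional mixture model for correlation coefficients suggests a simple illustration. The sketch below is a loose stand-in, not the book's estimation equations: it Fisher-z transforms observed validity coefficients and fits a two-component normal mixture by EM, treating each study's sampling variance 1/(n_i - 3) as known. All names are assumptions.

```python
# Loose sketch, not the book's estimation equations: Fisher-z transform
# observed validity coefficients r_i (sample sizes n_i) and fit a
# two-component normal mixture by EM, with sampling variance 1/(n_i - 3)
# treated as known.
import numpy as np
from scipy.stats import norm

def em_validity_mixture(r, n, iters=200):
    z = np.arctanh(r)                      # Fisher z transform
    se = 1.0 / np.sqrt(n - 3.0)            # approximate sampling s.e.
    pi, mu1, mu2 = 0.5, z.min(), z.max()   # crude starting values
    for _ in range(iters):
        f1 = pi * norm.pdf(z, mu1, se)
        f2 = (1 - pi) * norm.pdf(z, mu2, se)
        g = f1 / (f1 + f2)                 # E-step: component memberships
        pi = g.mean()                      # M-step: precision-weighted means
        w = 1.0 / se ** 2
        mu1 = np.sum(g * w * z) / np.sum(g * w)
        mu2 = np.sum((1 - g) * w * z) / np.sum((1 - g) * w)
    return pi, np.tanh(mu1), np.tanh(mu2)  # back on the correlation scale
```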




Book ChapterDOI
01 Jan 1989
TL;DR: Fitting mixture models with the EM algorithm is discussed, motivated by the analysis of birthweight, where a predominantly Normal distribution with additional births in the tail is an obvious application of some form of mixture distribution.
Abstract: Mixture models can arise in a variety of situations. For example, in [3], a two-component mixture model was fitted to grouped, truncated data using the EM algorithm when analysing the volume of red blood cells. Aitkin [1] has considered the analysis of mixture distributions using the EM algorithm in GLIM. The motivation for the present work is the analysis of birthweight, which various studies have analysed by assuming a predominantly Normal distribution but with additional births in the tail: an obvious application of some form of mixture distribution; see, for example, Ashford et al. [5].
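For the ungrouped case, the two-component normal mixture described above has a standard EM fit. The sketch below is a minimal version; handling grouped and truncated data as in [3] would replace the E-step with expectations over the grouping cells.

```python
# Minimal EM sketch for the two-component normal mixture described above:
# a dominant Normal plus a smaller tail component. This version is for
# ungrouped data; grouping and truncation as in [3] would modify the E-step.
import numpy as np
from scipy.stats import norm

def em_two_normals(x, iters=200):
    # seed one component at the bulk and one in the lower tail
    pi, m1, s1, m2, s2 = 0.9, np.mean(x), np.std(x), np.min(x), np.std(x)
    for _ in range(iters):
        f1 = pi * norm.pdf(x, m1, s1)
        f2 = (1 - pi) * norm.pdf(x, m2, s2)
        g = f1 / (f1 + f2)                         # E-step responsibilities
        pi = g.mean()                              # M-step updates
        m1 = np.sum(g * x) / np.sum(g)
        s1 = np.sqrt(np.sum(g * (x - m1) ** 2) / np.sum(g))
        m2 = np.sum((1 - g) * x) / np.sum(1 - g)
        s2 = np.sqrt(np.sum((1 - g) * (x - m2) ** 2) / np.sum(1 - g))
    return pi, m1, s1, m2, s2
```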

Journal ArticleDOI
TL;DR: In this paper, the acceptance-rejection algorithm is slow to simulate from mixture distributions when the acceptance probability is small, and reformulation as a mixture of general Erlang distributions may increase efficiency.
Abstract: The acceptance-rejection algorithm is slow to simulate from mixture distributions when the acceptance probability is small. For a mixture of exponential distributions, reformulation as a mixture of general Erlang distributions may increase efficiency. The reformulation which maximises acceptance probability can be found by linear programming. An example is given in which reformulation reduces the average simulation time by a factor of 15.
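The inefficiency the paper addresses is easy to reproduce. The sketch below runs plain acceptance-rejection for a density written as a generalized mixture of exponentials with a negative coefficient, using one exponential component as the envelope; the specific rates are illustrative choices, not the paper's example.

```python
# Plain acceptance-rejection for a generalized mixture of exponentials
# with a negative coefficient; rates are illustrative. The acceptance
# rate is 1/M, which the Erlang reformulation aims to raise.
import numpy as np

rng = np.random.default_rng(3)
l1, l2 = 1.0, 3.0
c = l2 / (l2 - l1)                        # f = c*Exp(l1) - (c-1)*Exp(l2)
f = lambda x: c * l1 * np.exp(-l1 * x) - (c - 1) * l2 * np.exp(-l2 * x)
g = lambda x: l1 * np.exp(-l1 * x)        # envelope: the Exp(l1) component
M = c                                     # f(x) <= M * g(x) on x >= 0

def sample(size):
    out, tries = [], 0
    while len(out) < size:
        x = rng.exponential(1.0 / l1)
        tries += 1
        if rng.random() <= f(x) / (M * g(x)):
            out.append(x)
    return np.array(out), size / tries

xs, acc = sample(10000)
print(acc)                                # close to 1/M = 2/3
```

In this example f happens to be the convolution of an Exp(1) and an Exp(3) variable, i.e. a single generalized Erlang component, so the reformulated representation samples with no rejection at all; that is the kind of gain the paper's linear-programming reformulation seeks.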

01 Jan 1989
TL;DR: In this article, the Kalman filter, itself a simple consequence of an elementary property of the multivariate normal distribution, is generalized to more complex situations described by normal mixture models.
Abstract: The present paper illustrates how simple properties of normal random vectors, which have immediate applications to statistical problems involving normal data, can be generalized to more complex models when these are described by means of normal mixture models. In particular, we shall see how the Kalman filter, which is but a simple consequence of an elementary property of the multivariate normal distribution, is generalized to more complex situations or problems which may be described by models based on mixtures of normal distributions.
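The "elementary property" in question is the conditional distribution of a jointly normal vector, which yields the Kalman update in one step. The sketch below writes that update; a Gaussian-sum (mixture) generalization in the spirit of this paper runs one such update per component and reweights the components by their predictive likelihoods. Names and example values are illustrative.

```python
# The elementary normal property behind the Kalman filter: the posterior
# of x ~ N(m, P) given y = H x + v, v ~ N(0, R), is again normal. A
# Gaussian-sum filter runs one such update per mixture component and
# reweights components by their predictive likelihoods.
import numpy as np

def kalman_update(m, P, y, H, R):
    S = H @ P @ H.T + R                   # predictive covariance of y
    K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
    return m + K @ (y - H @ m), P - K @ H @ P

m, P = np.zeros(2), np.eye(2)             # prior on a 2-D state
H = np.array([[1.0, 0.0]])                # observe the first coordinate
R = np.array([[0.5]])                     # observation noise variance
print(kalman_update(m, P, np.array([1.2]), H, R))
```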