
Showing papers on "Mixture model published in 1991"


Journal ArticleDOI
TL;DR: In this paper, the authors show that the finite-mixture model for J dichotomous items having T latent classes gives the same estimates of item parameters as conditional likelihood on a set whose probability approaches one if T ≥ (J + 1)/2.
Abstract: The Rasch model for item analysis is an important member of the class of exponential response models in which the number of nuisance parameters increases with the number of subjects, leading to the failure of the usual likelihood methodology. Both conditional-likelihood methods and mixture-model techniques have been used to circumvent these problems. In this article, we show that these seemingly unrelated analyses are in fact closely linked to each other, despite dramatic structural differences between the classes of models implied by each approach. We show that the finite-mixture model for J dichotomous items having T latent classes gives the same estimates of item parameters as conditional likelihood on a set whose probability approaches one if T ≥ (J + 1)/2. Unconditional maximum likelihood estimators for the finite-mixture model can be viewed as Kiefer-Wolfowitz estimators for the random-effects version of the Rasch model. Latent-class versions of the model are especially attractive when T is...

279 citations


Journal ArticleDOI
TL;DR: A model for a time series of epileptic seizure counts in which the mean of a Poisson distribution changes according to an underlying two-state Markov chain is discussed.
Abstract: This paper discusses a model for a time series of epileptic seizure counts in which the mean of a Poisson distribution changes according to an underlying two-state Markov chain. The EM algorithm (Dempster, Laird, and Rubin, 1977, Journal of the Royal Statistical Society, Series B 39, 1-38) is used to compute maximum likelihood estimators for the parameters of this two-state mixture model and extensions are made allowing for nonstationarity. The model is illustrated using daily seizure counts for patients with intractable epilepsy and results are compared with a simple Poisson distribution and Poisson regressions. Some simulation results are also presented to demonstrate the feasibility of this model.
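The data-generating process described above can be sketched with a short simulation; the transition probability and the two state means below are made-up illustrative values, not estimates from the seizure data.

```python
import math
import random

def poisson_draw(lam, rng):
    """Knuth's multiplication method for a single Poisson variate."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def simulate_counts(n_days, stay_prob=0.9, means=(1.0, 6.0), seed=0):
    """Daily counts from a two-state Markov chain modulating a Poisson mean.

    stay_prob and the per-state means are hypothetical values chosen
    only to make the regime switching visible.
    """
    rng = random.Random(seed)
    state, states, counts = 0, [], []
    for _ in range(n_days):
        if rng.random() > stay_prob:  # symmetric chain: switch with prob 1 - stay_prob
            state = 1 - state
        states.append(state)
        counts.append(poisson_draw(means[state], rng))
    return states, counts
```

Fitting the model then amounts to recovering the transition probabilities and the two Poisson means from the counts alone, with the states treated as missing data in the EM algorithm.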

110 citations


Journal ArticleDOI
TL;DR: A new multinomial maximum likelihood mixture (MMLM) analysis is discussed for estimating the mixing probabilities α_j and the basis distributions f_j(x) of a hypothesized mixture distribution and generates a maximum likelihood goodness-of-fit statistic for testing various mixture hypotheses.
Abstract: Mixture distributions are formed from a weighted linear combination of 2 or more underlying basis distributions [g(x) = Σ_j α_j f_j(x); Σ_j α_j = 1]. They arise frequently in stochastic models of perception, cognition, and action in which a finite number of discrete internal states are entered probabilistically over a series of trials. This article reviews various distributional properties that have been examined to test for the presence of mixture distributions. A new multinomial maximum likelihood mixture (MMLM) analysis is discussed for estimating the mixing probabilities α_j and the basis distributions f_j(x) of a hypothesized mixture distribution. The analysis also generates a maximum likelihood goodness-of-fit statistic for testing various mixture hypotheses. Stochastic computer simulations characterize the statistical power of such tests under representative conditions. Two empirical studies of mental processes hypothesized to involve mixture distributions are summarized to illustrate applications of the MMLM analysis.
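The EM update for the mixing probabilities can be sketched for binned (multinomial) data; as a simplification of the full MMLM analysis, the basis pmfs over the bins are taken as known here rather than estimated.

```python
def fit_mixing_probs(counts, bases, n_iter=200):
    """EM estimate of mixing probabilities for a multinomial mixture.

    counts: observed count in each response bin.
    bases:  known pmf over the bins for each basis distribution -- a
            simplification of the full MMLM analysis, which estimates
            the basis distributions as well.
    """
    K = len(bases)
    n = sum(counts)
    alpha = [1.0 / K] * K
    for _ in range(n_iter):
        new = [0.0] * K
        for b, c in enumerate(counts):
            denom = sum(alpha[k] * bases[k][b] for k in range(K))
            if denom == 0.0:
                continue
            for k in range(K):
                # Expected count attributable to component k in bin b.
                new[k] += c * alpha[k] * bases[k][b] / denom
        alpha = [v / n for v in new]
    return alpha
```

The fitted α_j values can then be plugged into the multinomial likelihood to form the goodness-of-fit statistic the abstract mentions.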

92 citations


Journal ArticleDOI
TL;DR: The mixture model predicts that using both algorithm and retrieval on a single trial will be slower than using the algorithm alone, whereas the race model predicts the reverse.
Abstract: Two memory-based theories of automaticity were compared. The mixture model and the race model both describe automatization as a transition from algorithmic processing to memory retrieval. The mixture model predicts that, with training, the variability of reaction time will initially increase, and later decrease in a concave downward manner, whereas the race model predicts the variability will decrease only in a concave upward manner. The mixture model predicts that using both algorithm and retrieval on a single trial will be slower than using the algorithm alone, whereas the race model predicts the reverse. The experiments used an alphabet arithmetic task, in which subjects verified equations of the form H + 3 = K and made subjective reports of their strategies on individual trials. Both the variability of reaction times and the pattern of reaction times associated with the strategy reports supported the race model.
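The contrasting single-trial predictions can be checked with a toy simulation; the exponential finishing-time distributions and their means are illustrative assumptions, not estimates from the experiments.

```python
import random

def simulate_trial(rng, algo_mean=1.2, retr_mean=0.6):
    """One trial's finishing times under illustrative assumptions."""
    algo = rng.expovariate(1.0 / algo_mean)  # algorithm finishing time
    retr = rng.expovariate(1.0 / retr_mean)  # retrieval finishing time
    race = min(algo, retr)  # race model: the first process to finish responds
    both = algo + retr      # mixture reading: running both serially on one trial
    return algo, race, both

rng = random.Random(42)
trials = [simulate_trial(rng) for _ in range(20000)]
mean_algo = sum(t[0] for t in trials) / len(trials)
mean_race = sum(t[1] for t in trials) / len(trials)
mean_both = sum(t[2] for t in trials) / len(trials)
# Race model: using both processes is faster than the algorithm alone.
# Serial-mixture reading: using both is slower than the algorithm alone.
```

Because min(algo, retr) ≤ algo ≤ algo + retr holds on every trial, the ordering of the three means follows directly, which is the opposition between the two accounts that the experiments exploit.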

79 citations


PatentDOI
TL;DR: In this article, a method for preprocessing noisy speech to minimize the likelihood of error in estimation for use in a recognizer is presented, which is based on the Minimum-Mean-Log-Spectral-Distance (MMLSD) estimator.
Abstract: A method is disclosed for use in preprocessing noisy speech to minimize likelihood of error in estimation for use in a recognizer. The computationally-feasible technique, herein called Minimum-Mean-Log-Spectral-Distance (MMLSD) estimation using mixture models and Markov models, comprises the steps of calculating for each vector of speech in the presence of noise corresponding to a single time frame, an estimate of clean speech, where the basic assumptions of the method of the estimator are that the probability distribution of clean speech can be modeled by a mixture of components each representing a different speech class assuming different frequency channels are uncorrelated within each class and that noise at different frequency channels is uncorrelated. In a further embodiment of the invention, the method comprises the steps of calculating for each sequence of vectors of speech in the presence of noise corresponding to a sequence of time frames, an estimate of clean speech, where the basic assumptions of the method of the estimator are that the probability distribution of clean speech can be modeled by a Markov process assuming different frequency channels are uncorrelated within each state of the Markov process and that noise at different frequency channels is uncorrelated.
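The per-frame, per-channel estimate has the flavor of a posterior-weighted combination over speech classes. The sketch below is a strong simplification: it assumes additive Gaussian noise on the channel value and minimizes squared error rather than the log-spectral distance of the actual MMLSD estimator, and all parameter names are hypothetical.

```python
import math

def mmse_estimate(y, weights, means, variances, noise_var):
    """Posterior-weighted clean-value estimate in one frequency channel.

    Assumes a Gaussian-mixture prior over the clean value and additive
    Gaussian observation noise -- a simplification of the MMLSD setup.
    """
    posts, ests = [], []
    for w, m, v in zip(weights, means, variances):
        s = v + noise_var
        # Likelihood of the noisy observation under component k.
        lik = w * math.exp(-0.5 * (y - m) ** 2 / s) / math.sqrt(2 * math.pi * s)
        posts.append(lik)
        # Component-conditional estimate from standard Gaussian conditioning.
        ests.append(m + v / s * (y - m))
    total = sum(posts)
    return sum(p * e for p, e in zip(posts, ests)) / total
```

Each speech class contributes its own conditional estimate, weighted by how plausible that class is given the noisy observation, which is the role the mixture model plays in the patented method.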

72 citations


Journal ArticleDOI
TL;DR: A method of performing pattern recognition (discrimination and classification) using a recursive technique derived from mixture models, kernel estimation and stochastic approximation is developed.

42 citations


Journal ArticleDOI
TL;DR: A recursive algorithm is proposed for estimation of parameters in mixture models, where the observations are governed by a hidden Markov chain, and the often badly conditioned information matrix is estimated, and its inverse is incorporated into the algorithm.
Abstract: A recursive algorithm is proposed for estimation of parameters in mixture models, where the observations are governed by a hidden Markov chain. The often badly conditioned information matrix is estimated, and its inverse is incorporated into the algorithm. The performance of the algorithm is studied by simulations of a symmetric normal mixture. The algorithm seems to be stable and produce approximately normally distributed estimates, provided the adaptive matrix is kept well conditioned. Some numerical examples are included.
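The recursive idea can be sketched for the symmetric normal mixture used in the paper's simulations. This bare-bones version updates the parameter one observation at a time with a 1/t gain; it deliberately omits the estimated inverse information matrix that the paper uses to condition the updates.

```python
import math
import random

def recursive_symmetric_mixture(samples, theta0=1.0):
    """Recursive (stochastic-approximation) estimate of theta in the
    symmetric mixture 0.5*N(-theta, 1) + 0.5*N(theta, 1).

    A minimal sketch only: no information-matrix scaling, i.i.d. data
    rather than a hidden Markov chain.
    """
    theta = theta0
    for t, x in enumerate(samples, start=1):
        # Posterior weight of the +theta component is sigmoid(2*theta*x),
        # so the per-sample complete-data target for theta is tanh(theta*x)*x.
        target = math.tanh(theta * x) * x
        theta += (target - theta) / t  # decreasing-gain recursive update
    return theta

rng = random.Random(1)
true_theta = 2.0
samples = [rng.gauss(true_theta if rng.random() < 0.5 else -true_theta, 1.0)
           for _ in range(20000)]
estimate = recursive_symmetric_mixture(samples)
```

The paper's point is that replacing the scalar 1/t gain with the inverse of an estimated information matrix keeps such updates well scaled when the information matrix is badly conditioned.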

39 citations


Journal ArticleDOI
TL;DR: A mixture model is described for dose-response studies where measurements on a continuous variable suggest that some animals are not affected by treatment and maximum likelihood estimation via the EM algorithm is described.
Abstract: A mixture model is described for dose-response studies where measurements on a continuous variable suggest that some animals are not affected by treatment. The model combines a logistic regression on dose for the probability an animal will "respond" to treatment with a linear regression on dose for the mean of the responders. Maximum likelihood estimation via the EM algorithm is described and likelihood ratio tests are used to distinguish between the full model and meaningful reduced-parameter versions. Use of the model is illustrated with three real-data examples.
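The E-step of the EM fit reduces to computing, for each animal, the posterior probability that it is a responder. The sketch below assumes a normal distribution for both responders and non-responders; the parameter names are illustrative, not the paper's exact parameterization.

```python
import math

def norm_pdf(x, mean, sd):
    return math.exp(-0.5 * ((x - mean) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def responder_posterior(y, dose, a, b, c, d, sigma, nonresp_mean, nonresp_sd):
    """Posterior probability that an animal with measurement y at this dose
    is a responder, under the logistic-probability / linear-mean mixture."""
    p = 1.0 / (1.0 + math.exp(-(a + b * dose)))  # logistic response probability
    mu = c + d * dose                            # responder mean at this dose
    num = p * norm_pdf(y, mu, sigma)
    den = num + (1.0 - p) * norm_pdf(y, nonresp_mean, nonresp_sd)
    return num / den
```

The M-step then refits the logistic and linear regressions with these posteriors as weights, and likelihood ratio tests compare the full model against reduced versions (for example, b = 0 or d = 0).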

32 citations


Journal ArticleDOI
TL;DR: A review of the analysis methods that are available, with emphasis on methods for estimating the parameters that describe the underlying stellar populations and for determining the number of discrete stellar populations is presented in this paper.
Abstract: A unified approach to the analysis of stellar populations through the application of finite mixture models is presented, and the statistical properties of univariate finite mixture models are examined. A review is presented of the analysis methods that are available, with emphasis on methods for estimating the parameters that describe the underlying stellar populations and for determining the number of discrete stellar populations. Attention is restricted to five variables: U, V, W, Fe/H, and age. Parameter and error estimation is demonstrated in two simulation experiments designed to assess the detectability of a thick disk in samples of solar neighborhood stars.

22 citations


Proceedings ArticleDOI
13 Oct 1991
TL;DR: The author describes a classification approach and associated algorithms designed for use with continuous but non-Gaussian data modeled as a mixture of Gaussian distributions.
Abstract: The author describes a classification approach and associated algorithms designed for use with continuous but non-Gaussian data. The probability density function for each class is modeled as a mixture of Gaussian distributions. The clustering algorithm estimates the means and covariances of the component Gaussian distributions for each class. A classification rule based on the mixture model is presented.
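The classification rule can be sketched once the per-class mixtures have been fitted: score each class by its prior times its mixture density and pick the maximum. The example is one-dimensional for brevity, and the component parameters below are made-up stand-ins for clustered fits.

```python
import math

def norm_pdf(x, mean, var):
    return math.exp(-0.5 * (x - mean) ** 2 / var) / math.sqrt(2 * math.pi * var)

def mixture_density(x, components):
    """components: list of (weight, mean, variance) for one class's mixture."""
    return sum(w * norm_pdf(x, m, v) for w, m, v in components)

def classify(x, class_models, priors):
    """Assign x to the class maximizing prior * class-conditional mixture density."""
    scores = [p * mixture_density(x, comps)
              for p, comps in zip(priors, class_models)]
    return max(range(len(scores)), key=lambda k: scores[k])

class_models = [
    [(0.5, -2.0, 1.0), (0.5, 0.0, 1.0)],  # class 0: bimodal around -2 and 0
    [(0.7, 4.0, 1.0), (0.3, 6.0, 2.0)],   # class 1: mass around 4 and 6
]
```

Modeling each class density as a mixture is what lets the rule handle non-Gaussian, multimodal classes while keeping the Bayes-classifier structure.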

10 citations


01 Sep 1991
TL;DR: A scheme of speaker-independent isolated word recognition in which Hidden Markov Modelling is used with Vector Quantization codebooks constructed using the Expectation Maximization (EM) algorithm for Gaussian mixture models results in greater recognition accuracy.
Abstract: This paper presents a scheme of speaker-independent isolated word recognition in which Hidden Markov Modelling is used with Vector Quantization codebooks constructed using the Expectation Maximization (EM) algorithm for Gaussian mixture models. In comparison with conventional vector quantization, the EM algorithm results in greater recognition accuracy.

Journal ArticleDOI
TL;DR: In this paper, an asymptotic mixture theory of fiber-reinforced composites with periodic microstructure is presented for rate-independent inelastic responses, such as elastoplastic deformation.
Abstract: An asymptotic mixture theory of fiber-reinforced composites with periodic microstructure is presented for rate-independent inelastic responses, such as elastoplastic deformation. Key elements are the capability of modeling critical interaction across material interfaces and the inclusion of the kinetic energy of microdisplacement. The construction of the proposed mixture model, which is deterministic rather than phenomenological, is accomplished by resorting to a variational approach. The principle of virtual work is used for total quantities to derive mixture equations of motion and boundary conditions, while Reissner's mixed variational principle (1984, 1986), applied to the incremental boundary value problem, yields consistent mixture constitutive relations. In order to assess the model accuracy, numerical experiments were conducted for static and dynamic loads. The prediction of the model in the time domain was obtained by an explicit finite element code. DYNA2D is used to furnish numerically exact data for the problems by discretizing the details of the microstructure. On the other hand, the model capability of predicting effective tangent moduli was tested by comparing results with NIKE2D. In all cases, good agreement was observed between the predicted and exact data for plastic as well as elastic responses. Keywords: Fiber reinforced composites; Mixture theory; Structural properties.

Journal ArticleDOI
TL;DR: In this paper, a technique for fitting mixture distributions to discrete and continuous instrument count data is presented, and the use of the EM algorithm for performing maximum likelihood estimation is introduced and advantages over other methods are discussed.
Abstract: A technique for fitting mixture distributions to discrete and continuous instrument count data is presented. The use of the EM algorithm for performing maximum likelihood estimation is introduced and advantages over other methods are discussed. Equations are presented for fitting mixtures of Poisson distributions and mixtures of normal distributions. Examples of the fitting of these two types of mixture distribution are given, and it is shown how standard errors of the parameter estimates can be obtained from within the framework of the fitting process.
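For the Poisson case described, the EM equations are short enough to sketch in full; the two starting rates below are arbitrary initial values.

```python
import math

def poisson_pmf(k, lam):
    return math.exp(-lam) * lam ** k / math.factorial(k)

def em_poisson_mixture(data, lam=(1.0, 5.0), weight=0.5, n_iter=100):
    """EM fit of a two-component Poisson mixture to integer count data.

    lam holds arbitrary starting rates; data is a list of counts.
    """
    l1, l2 = lam
    w = weight
    for _ in range(n_iter):
        # E-step: posterior responsibility of component 1 for each count.
        r = []
        for k in data:
            p1 = w * poisson_pmf(k, l1)
            p2 = (1 - w) * poisson_pmf(k, l2)
            r.append(p1 / (p1 + p2))
        # M-step: responsibility-weighted means update rates and the weight.
        s = sum(r)
        w = s / len(data)
        l1 = sum(ri * k for ri, k in zip(r, data)) / s
        l2 = sum((1 - ri) * k for ri, k in zip(r, data)) / (len(data) - s)
    return w, l1, l2
```

The standard errors the abstract mentions come from the observed information at the converged estimates, which can be accumulated inside the same fitting loop.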

01 Feb 1991
TL;DR: In this paper, the authors show that the bootstrap confidence region based on the likelihood ratio statistic is superior to the traditional likelihood-based confidence region in all three cases: regular case, simple boundary case, and nonidentifiable boundary case.
Abstract: Statistical inference on the likelihood ratio statistic for the number of components in a mixture model is complicated when the true number of components is less than that of the proposed model, since this represents a non-regular problem: the true parameter is on the boundary of the parameter space and in some cases the true parameter is in a nonidentifiable subset of the parameter space. Bootstrap confidence regions based on the likelihood ratio statistic are shown by analysis and Monte Carlo simulation to be superior to the traditional likelihood-based confidence region in all three cases: regular case, simple boundary case, and nonidentifiable boundary case.
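A minimal parametric-bootstrap version of this procedure can be sketched for one versus two Poisson components; the distributional family, starting values, and replication count are illustrative choices, not the paper's setup.

```python
import math
import random

def pois_pmf(k, lam):
    return math.exp(-lam) * lam ** k / math.factorial(k)

def loglik_single(data, lam):
    return sum(math.log(pois_pmf(k, lam)) for k in data)

def loglik_two(data, n_iter=50):
    """Log-likelihood of a two-component Poisson mixture fitted by a short EM."""
    m = sum(data) / len(data)
    l1, l2, w = 0.5 * m + 0.1, 1.5 * m + 0.1, 0.5
    for _ in range(n_iter):
        r = []
        for k in data:
            p1 = w * pois_pmf(k, l1)
            p2 = (1 - w) * pois_pmf(k, l2)
            r.append(p1 / (p1 + p2))
        s = sum(r)
        if s == 0.0 or s == len(data):
            break
        w = s / len(data)
        l1 = sum(ri * k for ri, k in zip(r, data)) / s
        l2 = sum((1 - ri) * k for ri, k in zip(r, data)) / (len(data) - s)
    return sum(math.log(w * pois_pmf(k, l1) + (1 - w) * pois_pmf(k, l2))
               for k in data)

def lrt_stat(data):
    return 2 * (loglik_two(data) - loglik_single(data, sum(data) / len(data)))

def bootstrap_pvalue(data, n_boot=50, seed=0):
    """Parametric bootstrap of the LRT under the one-component null.

    The null distribution of the LRT is nonstandard here (boundary /
    nonidentifiable case), which is why it is simulated rather than
    taken from a chi-squared table.
    """
    rng = random.Random(seed)
    lam0 = sum(data) / len(data)
    observed = lrt_stat(data)

    def draw():  # Poisson(lam0) variate via Knuth's method
        thresh, k, p = math.exp(-lam0), 0, 1.0
        while True:
            p *= rng.random()
            if p <= thresh:
                return k
            k += 1

    exceed = sum(lrt_stat([draw() for _ in range(len(data))]) >= observed
                 for _ in range(n_boot))
    return (1 + exceed) / (n_boot + 1)
```

Inverting such bootstrap tests over candidate component counts yields the bootstrap confidence regions whose coverage the paper studies.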