
Showing papers on "Mixture model published in 1990"


Journal ArticleDOI
TL;DR: This paper surveys some recent research in the analysis of mixture distributions.
Abstract: (1990). Some recent research in the analysis of mixture distributions. Statistics: Vol. 21, No. 4, pp. 619-641.

125 citations


Proceedings ArticleDOI
03 Apr 1990
TL;DR: An acoustic-class-dependent technique for text-independent speaker identification on very short utterances is described, based on maximum-likelihood estimation of a Gaussian mixture model representation of speaker identity.
Abstract: An acoustic-class-dependent technique for text-independent speaker identification on very short utterances is described. The technique is based on maximum-likelihood estimation of a Gaussian mixture model representation of speaker identity. Gaussian mixtures are noted for their robustness as a parametric model and their ability to form smooth estimates of rather arbitrary underlying densities. Speaker model parameters are estimated using a special case of the iterative expectation-maximization (EM) algorithm, and a number of techniques are investigated for improving model robustness. The system is evaluated using a 12 reference speaker population from a conversational speech database. It achieves 80% average text-independent speaker identification performance for a 1-s test utterance length.
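
As context for the technique described above, here is a minimal sketch of GMM-based speaker identification, assuming scikit-learn and hypothetical per-speaker feature matrices (rows are acoustic feature vectors); the paper's exact features, robustness techniques, and EM variant are not reproduced.

```python
# Minimal sketch of GMM-based, text-independent speaker identification.
# Assumes scikit-learn and hypothetical per-speaker feature matrices;
# illustrative only, not the paper's exact system.
from sklearn.mixture import GaussianMixture

def train_speaker_models(train_features, n_components=8, seed=0):
    """Fit one Gaussian mixture per speaker by EM (train_features: dict of arrays)."""
    models = {}
    for speaker, X in train_features.items():
        gmm = GaussianMixture(n_components=n_components,
                              covariance_type="diag", random_state=seed)
        gmm.fit(X)  # EM estimation of mixture weights, means, covariances
        models[speaker] = gmm
    return models

def identify_speaker(models, X_test):
    """Pick the speaker whose model gives the highest average log-likelihood."""
    return max(models, key=lambda spk: models[spk].score(X_test))
```

Identifying a short test utterance then reduces to scoring its feature vectors against every reference model and choosing the highest average log-likelihood.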

122 citations


Proceedings ArticleDOI
03 Apr 1990
TL;DR: A simple method is investigated to re-estimate the vector quantization codebook without continuous probability density function assumptions; preliminary experiments show that such re-estimation methods are as effective as the semicontinuous model, especially when the continuous probability density function assumption is inappropriate.
Abstract: The semicontinuous hidden Markov model is used in a 1000-word speaker-independent continuous speech recognition system and compared with the continuous mixture model and the discrete model. When the acoustic parameter is not well modeled by the continuous probability density, it is observed that the model assumption problems may cause the recognition accuracy of the semicontinuous model to be inferior to the discrete model. A simple method based on the semicontinuous model is investigated, to re-estimate the vector quantization codebook without continuous probability density function assumptions. Preliminary experiments show that such reestimation methods are as effective as the semicontinuous model, especially when the continuous probability density function assumption is inappropriate.
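
As a rough illustration of what a soft (posterior-weighted) codebook re-estimation pass can look like, the sketch below is a plain soft k-means update with NumPy; it is a generic illustration, not the paper's exact re-estimation rule for the semicontinuous model.

```python
# Generic sketch of one soft codebook re-estimation pass (soft k-means style).
# Illustrative only, not the paper's method.
import numpy as np

def soft_codebook_update(codebook, frames, beta=1.0):
    """codebook: (K, D) codewords; frames: (N, D) acoustic vectors."""
    # Squared distances between every frame and every codeword.
    d2 = ((frames[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    # Soft assignment weights; larger beta approaches hard vector quantization.
    w = np.exp(-beta * (d2 - d2.min(axis=1, keepdims=True)))
    w /= w.sum(axis=1, keepdims=True)
    # Each codeword becomes the weighted mean of all frames.
    return (w.T @ frames) / w.sum(axis=0)[:, None]
```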

57 citations


Journal ArticleDOI
TL;DR: The EM algorithm is used to fit a mixture of a specified number of normal distributions to frequency data that are grouped and possibly truncated.
Abstract: Description and Purpose Data are often collected in the form of frequencies of observations falling in fixed class intervals. A further feature that is often encountered is truncation in the data; observations below and above certain readings are often not available. Subroutine MGT fits a mixture of a specified number of normal distributions to data collected in this manner. The subroutine can also be used where the data are grouped but not truncated. The fitting procedure uses the EM algorithm (Dempster et al., 1977).
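
A minimal EM sketch for grouped data of this kind is given below, assuming NumPy/SciPy, a vector of bin edges, and a vector of bin counts; within-bin moments are approximated by bin midpoints rather than the exact truncated-normal expectations, and truncation would additionally require renormalizing each component's mass over the observed range. The MGT routine itself is not reproduced.

```python
# Sketch of EM for a two-component normal mixture fitted to grouped data.
# Bin-midpoint approximation; illustrative only.
import numpy as np
from scipy.stats import norm

def em_grouped_normal_mixture(edges, counts, n_iter=200):
    mids = 0.5 * (edges[:-1] + edges[1:])
    p = np.array([0.5, 0.5])                          # mixing proportions
    mu = np.array([mids[0], mids[-1]], dtype=float)   # crude starting means
    sd = np.full(2, mids.std() + 1e-3)                # crude starting std devs
    for _ in range(n_iter):
        # E-step: probability mass of each bin under each component.
        mass = np.stack([norm.cdf(edges[1:], m, s) - norm.cdf(edges[:-1], m, s)
                         for m, s in zip(mu, sd)], axis=1)
        resp = p * mass + 1e-300
        resp /= resp.sum(axis=1, keepdims=True)       # per-bin responsibilities
        w = resp * counts[:, None]                    # responsibility-weighted counts
        # M-step, using bin midpoints in place of exact within-bin expectations.
        n_k = w.sum(axis=0)
        p = n_k / n_k.sum()
        mu = (w * mids[:, None]).sum(axis=0) / n_k
        sd = np.sqrt((w * (mids[:, None] - mu) ** 2).sum(axis=0) / n_k)
    return p, mu, sd
```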

40 citations


Journal ArticleDOI
TL;DR: In this article, a mixture model with Laplace and normal components is fitted to wind shear data available in grouped form, and a set of equations is presented for iteratively estimating the parameters of the model using an application of the EM algorithm.
Abstract: A mixture model with Laplace and normal components is fitted to wind shear data available in grouped form. A set of equations is presented for iteratively estimating the parameters of the model using an application of the EM algorithm. Twenty-four sets of data are examined with this technique, and the model is found to give a good fit to the data. Some hypotheses about the parameters in the model are discussed in light of the estimates obtained.
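
A bare-bones EM sketch for a Laplace-plus-normal mixture is shown below, written for raw (ungrouped) observations and therefore ignoring the grouping the paper handles; starting values and names are illustrative.

```python
# EM sketch for a two-component Laplace + normal mixture on a NumPy array x.
# Ignores grouping; illustrative only.
import numpy as np
from scipy.stats import laplace, norm

def weighted_median(x, w):
    order = np.argsort(x)
    cw = np.cumsum(w[order])
    return x[order][np.searchsorted(cw, 0.5 * cw[-1])]

def em_laplace_normal(x, n_iter=200):
    p, m1, b1 = 0.5, np.median(x), x.std()       # Laplace weight, location, scale
    m2, s2 = x.mean(), x.std()                   # normal mean, std dev
    for _ in range(n_iter):
        # E-step: posterior probability that each point came from the Laplace part.
        f1 = p * laplace.pdf(x, m1, b1)
        f2 = (1 - p) * norm.pdf(x, m2, s2)
        r = f1 / (f1 + f2)
        # M-step: weighted median / mean absolute deviation for the Laplace part,
        # weighted mean / standard deviation for the normal part.
        p = r.mean()
        m1 = weighted_median(x, r)
        b1 = np.sum(r * np.abs(x - m1)) / r.sum()
        m2 = np.sum((1 - r) * x) / (1 - r).sum()
        s2 = np.sqrt(np.sum((1 - r) * (x - m2) ** 2) / (1 - r).sum())
    return p, m1, b1, m2, s2
```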

36 citations


Journal ArticleDOI
TL;DR: In this paper, the authors compared various methods for clustering mixed-mode data and found that a method based on a finite mixture model in which the observed categorical variables are generated from underlying continuous variables out-performed more conventional methods when applied to artificially generated data.
Abstract: Various methods for clustering mixed-mode data are compared. It is found that a method based on a finite mixture model in which the observed categorical variables are generated from underlying continuous variables out-performs more conventional methods when applied to artificially generated data. This method also performs best when applied to Fisher's iris data in which two of the variables are categorized by applying thresholds.

35 citations


Journal ArticleDOI
TL;DR: In this article, various mixture models are proposed and fitted to a set of time-to-response quantal assay data, which provide a natural extension to the time-dependent case of the classical threshold model for quantal assays.
Abstract: Various mixture models are proposed and fitted to a set of time-to-response quantal assay data. The models provide a natural extension to the time-dependent case of the classical threshold model for quantal assay data. The approach deals simply with interval censoring, and provides flexible families of distributions, for both the mixing probability and the cumulative distribution function for time to response. For data of the kind considered in this paper, mixture models may provide a more suitable description than other models from survival analysis, such as the proportional hazards model.
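
One way to write down such a mixture (a "responder" fraction p combined with a time-to-response distribution) for interval-censored data is sketched below, assuming a Weibull response-time distribution and SciPy for optimization; the parameterization and variable names are illustrative, not the paper's.

```python
# Sketch of the mixture likelihood for interval-censored quantal assay data:
# with probability p a subject eventually responds, here with Weibull response
# time; non-responders by the end of follow-up contribute 1 - p*F(followup).
import numpy as np
from scipy.optimize import minimize
from scipy.stats import weibull_min

def negloglik(theta, intervals, followup):
    """intervals: (a, b] pairs for subjects seen to respond;
    followup: last observation times for subjects that never responded."""
    logit_p, log_shape, log_scale = theta
    p = 1.0 / (1.0 + np.exp(-logit_p))
    F = lambda t: weibull_min.cdf(t, np.exp(log_shape), scale=np.exp(log_scale))
    ll = np.sum(np.log(p * (F(intervals[:, 1]) - F(intervals[:, 0])) + 1e-300))
    ll += np.sum(np.log(1.0 - p * F(followup) + 1e-300))
    return -ll

# Hypothetical usage:
# fit = minimize(negloglik, x0=np.zeros(3),
#                args=(np.array([[1.0, 2.0], [0.0, 1.0]]), np.array([5.0, 5.0])))
```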

27 citations



Journal ArticleDOI
TL;DR: In this paper, the authors use the maximum likelihood method (MLM) and the weighted maximum likelihood method (WMLM), both under the sampling schemes suggested by Hosmer (1973), to estimate the 5-dimensional vector of parameters (p,μ,λ,α,c) of the mixture of an Inverse Gaussian IG(μ,λ) and a Weibull W(α,c) distribution with mixing proportion p.
Abstract: The main objective of this work is to estimate the 5-dimensional vector of parameters (p,μ,λ,α,c) of the mixture of an Inverse Gaussian IG(μ,λ) and a Weibull W(α,c) distribution with mixing proportion p. We use the maximum likelihood method (MLM) and the weighted maximum likelihood method (WMLM), both under the sampling schemes suggested by Hosmer (1973). A simulation study shows that the WMLM performs best, when Hosmer's model 2 is used, in the sense of minimizing the mean square error.
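
A direct (unweighted) maximum-likelihood sketch of this mixture is given below, using SciPy's parameterizations of the inverse Gaussian and Weibull densities; Hosmer's sampling schemes and the weighted variant are not reproduced.

```python
# MLE sketch for the IG(mu, lambda) / W(alpha, c) mixture with proportion p.
# SciPy parameterizations: invgauss(mu/lambda, scale=lambda), weibull_min(c, scale=alpha).
import numpy as np
from scipy.optimize import minimize
from scipy.stats import invgauss, weibull_min

def mixture_negloglik(theta, x):
    logit_p, log_mu, log_lam, log_alpha, log_c = theta
    p = 1.0 / (1.0 + np.exp(-logit_p))
    mu, lam = np.exp(log_mu), np.exp(log_lam)
    alpha, c = np.exp(log_alpha), np.exp(log_c)
    f = (p * invgauss.pdf(x, mu / lam, scale=lam)
         + (1 - p) * weibull_min.pdf(x, c, scale=alpha))
    return -np.sum(np.log(f + 1e-300))

# Hypothetical usage on a sample x (1-D NumPy array of positive values):
# fit = minimize(mixture_negloglik, x0=np.zeros(5), args=(x,))
```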

6 citations


Journal ArticleDOI
TL;DR: The potential application of characterization theory to the mixture model is indicated, and a particular example shows that this theory can be applied successfully to choose an appropriate mixture model and mixing parameter.
Abstract: This paper indicates the potential application of characterization theory to the mixture model. It discusses a particular example of a mixture model and concludes that this theory can be applied successfully to choose an appropriate mixture model and mixing parameter.

5 citations


Journal ArticleDOI
TL;DR: A new ASI technique and various channel compensation methods will be applied to a telephone database and results will be presented at the conference, allowing for high recognition rates using short utterances in a text‐independent ASI system.
Abstract: Automatic speaker identification (ASI) systems generally fall into two classes: text dependent and text independent. High recognition rates for short utterances (<1 s) are more common for text‐dependent systems since the process benefits from the a priori knowledge of the underlying acoustic‐phonetic stream. Without this added information, text‐independent ASI usually requires long test utterances for averaging out the unknown phonetic variations. In this paper a new technique is introduced to bridge the gap between text‐dependent and text‐independent ASI, which allows for high recognition rates using short utterances in a text‐independent ASI system. Speakers are parametrically represented by a Gaussian mixture probability density function, where the parameters are maximum likelihood estimates obtained from a form of the iterative estimate‐maximize (EM) algorithm [G. J. McLachlan, Mixture Models (Dekker, New York, 1988)]. The components in the mixture model can be considered to represent “hidden” acousti...

Journal ArticleDOI
TL;DR: The new test recently proposed by Fujii is introduced, and its characteristics are compared with those of the test proposed by Nayak and the sign test in a simulation study.
Abstract: This paper reviews dependence models for bivariate survival data, classifying them into four groups: the shock model, the Freund model, the Clayton model, and the mixture model. The paper then concentrates on the mixture model, discussing the testing problem for the equality of marginal distributions under a Weibull-type baseline hazard assumption. The new test recently proposed by Fujii is introduced, and its characteristics are compared with those of the test proposed by Nayak and the sign test in a simulation study.
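
For reference, the ordinary sign test on paired failure times can be sketched as below, assuming fully observed (uncensored) pairs, which is a simplification relative to the tests compared in the paper; it requires scipy.stats.binomtest (SciPy >= 1.7).

```python
# Sign test for equality of the two marginal distributions of paired failure
# times; censoring is ignored in this sketch.
import numpy as np
from scipy.stats import binomtest

def sign_test(t1, t2):
    diff = np.asarray(t1) - np.asarray(t2)
    diff = diff[diff != 0]                        # drop tied pairs
    n_pos = int(np.sum(diff > 0))
    return binomtest(n_pos, n=diff.size, p=0.5)   # two-sided by default
```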

01 Nov 1990
TL;DR: This work applies statistical pattern recognition concepts to the problem of recursive nonparametric pattern recognition in dynamic environments and uses density estimation to develop decision functions for supervised and unsupervised learning.
Abstract: A large number of pattern recognition tasks require the ability to recognize patterns within data when the character of the patterns may change with time. Examples of such tasks are remote sensing, autonomous control, and automatic target recognition in a changing environment. (Titterington et al. give a list of tasks to which mixture models have been applied; many of these tasks, and their variants, fall into the above categories.) These tasks have a common requirement: the need to recognize new entities as they enter the environment. A pattern recognition system must be able to recognize and develop a representation of a new pattern in the environment as well as to change its representation of the statistics of the pattern dynamically. The adaptive mixtures approach presented here uses density estimation to develop decision functions for supervised and unsupervised learning. Much work on performing density estimation in supervised situations has been done. For the most part, this research has centered on approaches that use a great deal of a priori information about the structure of the data. In this work, we apply statistical pattern recognition concepts to the problem of recursive nonparametric pattern recognition in dynamic environments.
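
A minimal sketch of a recursive, adaptive-mixtures-style density estimator in one dimension is given below; the spawn threshold and stochastic-approximation updates are illustrative, not those of the cited work.

```python
# Recursive Gaussian mixture density estimator: each new observation either
# updates existing components by stochastic approximation or spawns a new
# component when it is poorly explained. Illustrative only.
import numpy as np

class AdaptiveMixture1D:
    def __init__(self, new_component_threshold=3.0):
        self.w, self.mu, self.var = [], [], []
        self.n = 0
        self.thr = new_component_threshold

    def update(self, x):
        self.n += 1
        far = (not self.w or
               min(abs(x - m) / np.sqrt(v)
                   for m, v in zip(self.mu, self.var)) > self.thr)
        if far:
            # Observation is far (in std-dev units) from every component: add one.
            self.w.append(1.0 / self.n)
            self.mu.append(float(x))
            self.var.append(1.0)
        else:
            # Responsibilities of the existing components for x.
            dens = np.array([w * np.exp(-0.5 * (x - m) ** 2 / v) / np.sqrt(v)
                             for w, m, v in zip(self.w, self.mu, self.var)])
            r = dens / dens.sum()
            lr = 1.0 / self.n                      # stochastic-approximation step size
            for k in range(len(self.w)):
                self.mu[k] += lr * r[k] * (x - self.mu[k])
                self.var[k] += lr * r[k] * ((x - self.mu[k]) ** 2 - self.var[k])
                self.w[k] += lr * (r[k] - self.w[k])
        total = sum(self.w)
        self.w = [w / total for w in self.w]       # keep mixture weights normalized

# Hypothetical streaming usage:
# est = AdaptiveMixture1D()
# for x in data_stream: est.update(x)
```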