
Showing papers in "Communications in Statistics - Simulation and Computation in 1990"


Journal ArticleDOI
TL;DR: In this paper, three modified exponentially weighted moving average (EWMA) control charts are developed for monitoring the Poisson counts of a production process; the average run length (ARL) and the probability function of the run length of these charts can be computed exactly using results from Markov chain theory.
Abstract: In certain production processes, it is necessary or more convenient to use counts of defects or conformances per unit of measurement to indicate whether a production process is in control. Counts of this kind are often well fitted by a Poisson distribution. Three modified exponentially weighted moving average (EWMA) control charts are developed in this paper for monitoring Poisson counts. The average run length (ARL) and the probability function of the run length of these modified control charts can be computed exactly using results from Markov chain theory. These modified control charts are shown to be generally superior to the Shewhart control chart in terms of ARL. Tables of in-control ARLs are given to assist implementation, the design of these EWMA charts is discussed, and their use is illustrated with an example.
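The EWMA recursion behind charts of this kind is simple to state. The following sketch is illustrative only: it uses the standard asymptotic control limits rather than the paper's exactly computed Markov-chain limits, and the function name and default smoothing constant are assumptions.

```python
import math

def ewma_poisson(counts, mu0, lam=0.2, L=3.0):
    """Flag out-of-control points in a stream of Poisson counts.

    mu0 : in-control Poisson mean
    lam : EWMA smoothing constant in (0, 1]
    L   : width multiplier for the asymptotic control limits
    """
    sigma = math.sqrt(lam / (2.0 - lam) * mu0)   # asymptotic EWMA std. dev.
    lcl = max(0.0, mu0 - L * sigma)
    ucl = mu0 + L * sigma
    z = float(mu0)                               # conventional starting value
    signals = []
    for t, c in enumerate(counts, start=1):
        z = lam * c + (1.0 - lam) * z            # the EWMA recursion
        if not lcl <= z <= ucl:
            signals.append((t, z))
    return signals, (lcl, ucl)
```

With mu0 = 4 and lam = 0.2 the limits work out to 4 ± 3·(2/3) = (2, 6), and a sustained shift of the mean upward signals within a few observations.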

91 citations


Journal ArticleDOI
TL;DR: In this article, a method is developed to directly obtain maximum likelihood estimates of symmetric stable distribution parameters, which is a difficult estimation problem since the likelihood function is expressed as an integral.
Abstract: A method is developed to directly obtain maximum likelihood estimates of symmetric stable distribution parameters. This is a difficult estimation problem since the likelihood function is expressed as an integral. The estimation routine is tested on a Monte Carlo sample and produces reasonable estimates.

80 citations


Journal ArticleDOI
TL;DR: There are two basic approaches for examining the contribution of individual variables to the separation of groups after rejection of a multivariate hypothesis: (1) a multivariate approach showing the contribution of each variable in the presence of the other variables, and (2) a univariate approach showing the contribution of each variable by itself, ignoring the other variables.
Abstract: There are two basic approaches for examining the contribution of individual variables to separation of groups after rejection of a multivariate hypothesis: (1) a multivariate approach showing the contribution of each variable in the presence of the other variables, and (2) a univariate approach showing the contribution of each variable by itself, ignoring the other variables. For the multivariate approach, we express the standardized (canonical) discriminant function coefficients in a form showing that the contribution of each variable depends on its multiple correlation with the other variables and on how well its separation of groups can be predicted from the other variables. For the univariate approach, we present the results of a Monte Carlo study comparing four methods of testing individual variables. A “protected” procedure that performs F tests only if the overall Wilks' Λ test rejects appears to be preferred.

51 citations


Journal ArticleDOI
TL;DR: In this article, the feasibility of maximum likelihood estimation of a mixture of two Gompertz distributions when censoring occurs is examined through simulation of follow-up data, and the relationship of the estimates' variances with sample size, proportion censored, mixing proportion and population parameters is considered.
Abstract: In estimating the proportion ‘cured’ after adjuvant treatment, a population of cancer patients can be assumed to be a mixture of two Gompertz subpopulations, those who will die of other causes with no evidence of disease relapse and those who will die of their primary cancer. Estimates of the parameters of the component dying of other causes can be obtained from census data, whereas maximum likelihood estimates for the proportion cured and for the parameters of the component of patients dying of cancer can be obtained from follow-up data. This paper examines, through simulation of follow-up data, the feasibility of maximum likelihood estimation of a mixture of two Gompertz distributions when censoring occurs. Means, variances and mean squared errors of the maximum likelihood estimates and the estimated asymptotic variance-covariance matrix are obtained from the simulated samples. The relationship of these variances with sample size, proportion censored, mixing proportion and population parameters is considered.

48 citations


Journal ArticleDOI
TL;DR: In this paper, the asymptotic behaviour of diffusions consisting of various systems of mean-field type interacting particles is studied, including propagation of chaos and critical and non-critical fluctuations, using the Laplace method developed by Bolthausen in the general Banach-valued random variables context.
Abstract: This article concerns the asymptotic behaviour of diffusions consisting of various systems of mean-field type interacting particles. We specifically study the following properties: propagation of chaos; critical and non-critical fluctuations, using the Laplace method developed by Bolthausen in the general Banach-valued random variables context; and finally the large deviations of the laws of trajectorial empirical measures. We also study the Gibbs variational formula associated with this problem and show that it reduces to a finite-dimensional problem.

42 citations


Journal ArticleDOI
TL;DR: In this article, three linear methods for estimating parameter values of autoregressive moving average models are presented, which can be used to alleviate the problems of autocorrelation in regression and to generate estimates for multiple time series models.
Abstract: Three linear methods for estimating parameter values of autoregressive moving average models are presented in this article. Simulation results based on different model structures with varying numbers of observations suggest that the accuracy of some of these procedures is comparable to maximum likelihood estimation. Versions of these approaches can be implemented on any computer system, micro or mainframe, without any programming effort provided that a linear regression package is available. They can also be used to alleviate the problems of autocorrelation in regression, and to generate estimates for multiple time series models. Examples from economic data are used to illustrate the procedures’ capabilities.

38 citations


Journal ArticleDOI
TL;DR: In this paper, the common measures of process capability, usually indicated by Cp, CPU, CPL, and Cpk, are considered and lower confidence limits are given on these measures where the sample mean and the sample range (properly adjusted by d2) are substituted for the population mean and population standard deviation in the definition formulas.
Abstract: The common measures of process capability, usually indicated by Cp, CPU, CPL, and Cpk, are considered. Tables of lower confidence limits are given on these measures, where the sample mean and the sample range (properly adjusted by d2) are substituted for the population mean and population standard deviation in the definition formulas.
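The point estimates underlying those tables follow directly from the definition formulas. A minimal sketch of the substitution described in the abstract (not the paper's confidence-limit tables; the function name is an assumption):

```python
def capability_indices(xbar, rbar, d2, lsl, usl):
    """Point estimates of Cp, CPU, CPL and Cpk from X-bar/R chart data.

    xbar : overall sample mean, substituted for the population mean
    rbar : average subgroup range; rbar / d2 estimates sigma
    d2   : control-chart constant for the subgroup size (e.g. 2.326 for n = 5)
    """
    sigma = rbar / d2
    cp  = (usl - lsl) / (6.0 * sigma)    # potential capability
    cpu = (usl - xbar) / (3.0 * sigma)   # capability against the upper limit
    cpl = (xbar - lsl) / (3.0 * sigma)   # capability against the lower limit
    cpk = min(cpu, cpl)                  # actual (centred) capability
    return cp, cpu, cpl, cpk
```

For a process centred at the midpoint of the specification limits, Cp and Cpk coincide; Cpk drops below Cp as the mean drifts toward either limit.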

38 citations


Journal Article
Abstract: Keywords: Projection Pursuit.

35 citations


Journal ArticleDOI
TL;DR: This paper shows that the fractional part of a sum of independent near-uniform variables is closer in distribution to a U(0,1) variate than each of the component near-uniform variables.
Abstract: A very useful result for generating random numbers is that the fractional part of a sum of independent U(0,1) random variables is also a U(0,1) random variable. In this paper we show that a more general result is true: the fractional part of a sum of n independent random variables, one of which is U(0,1), is also U(0,1). Moreover, we show that the fractional part of a sum of independent near-uniform variables is closer in distribution to a U(0,1) variate than each of the component near-uniform variables. These results are used to characterize the uniform distribution and to give some justification for an algorithm of Wichmann and Hill (1982). In addition, we show how the property of “closeness” carries over to the generation of any random variable.
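The first result is easy to check empirically: as long as one summand is U(0,1), the others may have any distribution at all. A minimal sketch (names are assumptions):

```python
import random

def frac_of_sum(rng, others):
    """One draw of frac(U + X1 + ... + Xk), with U ~ U(0,1).

    others : callables taking the rng and returning one draw each; their
    distributions are arbitrary -- the fractional part is U(0,1) regardless.
    """
    return (rng.random() + sum(f(rng) for f in others)) % 1.0
```

Mixing in, say, an exponential summand leaves the empirical distribution of the fractional parts uniform on [0,1), as the result predicts.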

32 citations


Journal ArticleDOI
TL;DR: In this article, the power of the Pearson chi-squared test may vary substantially with the value of k, and the effects of different values of k are investigated with a Monte Carlo power study of goodness-of-fit tests for distributions where location and scale parameters are estimated from the observed data.
Abstract: To use the Pearson chi-squared statistic to test the fit of a continuous distribution, it is necessary to partition the support of the distribution into k cells. A common practice is to partition the support into cells with equal probabilities. In that case, the power of the chi-squared test may vary substantially with the value of k. The effects of different values of k are investigated with a Monte Carlo power study of goodness-of-fit tests for distributions where location and scale parameters are estimated from the observed data. Allowing for the best choices of k, the Pearson and log-likelihood ratio chi-squared tests are shown to have similar maximum power for wide ranges of alternatives, but this can be substantially less than the power of other well-known goodness-of-fit tests.
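An equal-probability partition is conveniently obtained through the fitted CDF: with cell boundaries at the j/k quantiles, observation x falls in cell ⌊k·F(x)⌋. A minimal sketch (function names are assumptions; parameter estimation is left to the caller, as in the study's setting):

```python
def pearson_chi2_equal_cells(data, k, cdf):
    """Pearson chi-squared statistic with k equal-probability cells.

    cdf : fitted CDF of the hypothesized distribution (with any location
    and scale parameters already estimated from the data).
    """
    n = len(data)
    expected = n / k                        # equal probability => equal counts
    counts = [0] * k
    for x in data:
        cell = min(int(cdf(x) * k), k - 1)  # cell index from the j/k quantiles
        counts[cell] += 1
    return sum((c - expected) ** 2 / expected for c in counts)
```

Data in perfect agreement with the fitted CDF give a statistic of zero; the choice of k, as the abstract notes, can change the power of the resulting test substantially.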

27 citations



Journal ArticleDOI
TL;DR: This paper describes a detailed tree-structured problem-oriented classification system appropriate for use in mathematical problem solving environments of the future.
Abstract: A vast collection of reusable mathematical and statistical software is now available for use by scientists and engineers in their modeling efforts. This software represents a significant source of mathematical expertise, created and maintained at considerable expense. Unfortunately, the collection is so heterogeneous that it is a tedious and error-prone task simply to determine what software is available to solve a given problem. In mathematical problem solving environments of the future such questions will be fielded by expert software advisory systems. One way for such systems to systematically associate available software with the problems they solve is to use a problem classification system. In this paper we describe a detailed tree-structured problem-oriented classification system appropriate for such use.

Journal ArticleDOI
TL;DR: In this article, three estimators to detect a possible change in the sequence of random variables are proposed and the results of a Monte Carlo simulation study of the estimators are presented.
Abstract: The univariate changepoint problem has been extensively studied in the literature. Several investigators have developed test statistics and estimators (both parametric and nonparametric) in classical and Bayesian frameworks. Extensions to multivariate changepoint problems have also been considered. Most such papers have assumed that the variables have continuous distributions. In our study we consider the at most one changepoint (AMOC) problem in a sequence of multinomial random variables. Three estimators to detect a possible change in the sequence are proposed. The results of a Monte Carlo simulation study of the estimators are presented.

Journal ArticleDOI
TL;DR: In this paper, a Monte Carlo simulation is used to study the performance of the Wald, likelihood ratio and Lagrange multiplier tests for regression coefficients when least absolute value regression is used, especially when certain computational advantages are considered.
Abstract: A Monte Carlo simulation is used to study the performance of the Wald, likelihood ratio and Lagrange multiplier tests for regression coefficients when least absolute value regression is used. The simulation results provide support for use of the Lagrange multiplier test, especially when certain computational advantages are considered.

Journal ArticleDOI
TL;DR: In this article, the problem of estimating the value of the intensity function at the current time when data collection is ceased is addressed when the data are time truncated, i.e., when data collecting is stopped at a predetermined time T. The class of multiples of the conditional MLE is suggested and some members are analyzed.
Abstract: The power law process, a nonhomogeneous Poisson process with intensity function µ(t) = (β/θ)(t/θ)^(β−1), is frequently used to model the occurrence of events in time. Often, an important quantity is the value of the intensity function at the current time, that is, the time when data collection is ceased. In this article, the problem of estimating this quantity is addressed when the data are time truncated, that is, when data collection is stopped at a predetermined time T. The class of multiples of the conditional MLE is suggested, and some members are analyzed. In addition, the class of estimators formed by first performing a preliminary test of significance on the parameter β is analyzed. Expressions for the bias and MSE of these estimators are derived and evaluated for several values of the parameters.
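For time-truncated data with n events at times t_1, …, t_n ≤ T, the conditional MLEs have closed forms: β̂ = n / Σ ln(T/tᵢ) and θ̂ = T / n^(1/β̂), which give the intensity estimate µ̂(T) = nβ̂/T. A sketch of the unadjusted conditional MLE (the paper studies multiples and preliminary-test versions of it; the function name is an assumption):

```python
import math

def power_law_mle(times, T):
    """Conditional MLEs for the power law process, time-truncated at T.

    times : event times 0 < t_1 <= ... <= t_n <= T
    Returns (beta_hat, theta_hat, estimated intensity at T).
    """
    n = len(times)
    beta = n / sum(math.log(T / t) for t in times)
    theta = T / n ** (1.0 / beta)
    # intensity at the truncation time; algebraically equal to n * beta / T
    intensity = (beta / theta) * (T / theta) ** (beta - 1.0)
    return beta, theta, intensity
```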

Journal ArticleDOI
TL;DR: In this article, the problem of interval estimation of R = P(Y < X) when X and Y are independent gamma random variables is addressed, and ten confidence interval estimators are compared.
Abstract: This paper is concerned with the problem of interval estimation of R = P(Y < X) when X and Y are independent gamma random variables. Ten confidence interval estimators are compared.

Journal ArticleDOI
TL;DR: In this article, the relative merits of the maximum likelihood and the minimum chi-square estimators for a single realization are considered, and a nonlinear least squares estimator is proposed when only macro data are available.
Abstract: Raftery (1985) proposed a higher order Markov model that is parsimonious in terms of number of parameters. The model appears to be useful in many real life situations. However, many important properties of the model have not been investigated. In particular, estimation methods under various sampling situations have not been studied. In this paper the relative merits of the maximum likelihood and the minimum chi-square estimators for a single realization are considered. For other sampling situations, a nonlinear least squares estimator is proposed when only macro data are available. Its small sample properties are studied by simulation. An empirical Bayes estimator for panel data is also considered.

Journal ArticleDOI
TL;DR: A formula is developed for better understanding of the dependence mechanism hidden in copulas and some approximations on copulas are discussed to reveal interesting features of the class.
Abstract: Copulas are useful devices to explain the dependence structure between variables by eliminating the influence of marginals. In this paper we develop a formula for better understanding of the dependence mechanism hidden in copulas and discuss some approximations on copulas. We explore the copula of multivariate normal distribution in detail and try to reveal interesting features of the class.

Journal ArticleDOI
TL;DR: In this paper, Hartmann et al. studied the performances of the two-stage non-eliminating procedure of Bechhofer, Dunnett and Sobel (1954), Gupta and Kim (1984), and the multi-stage eliminating procedure of Paulson (1964) as modified by Hartmann (1990) for selecting the normal population which has the largest mean when the common variance is unknown.
Abstract: We study the performances of the two-stage non-eliminating procedure of Bechhofer, Dunnett and Sobel (1954), the two-stage eliminating procedure of Gupta and Kim (1984) and the multi-stage eliminating procedure of Paulson (1964) as modified by Hartmann (1990) for selecting the normal population which has the largest mean when the common variance is unknown. Constants to implement the procedures are provided. All of these procedures are open and guarantee the indifference-zone probability requirement of Bechhofer (1954). The following performance characteristics are estimated by Monte Carlo sampling: the achieved probability of a correct selection, the expected number of vector-observations and the expected total number of observations to termination of experimentation. The critical role of the common initial sample size per population is discussed. Guidance is provided as to which procedure to use in different environments.

Journal ArticleDOI
TL;DR: In this article, two computational formulae for computing the cumulative distribution function of the non-central chi-square distribution are given, the first dealing with the case of any degrees of freedom (odd or even), and the second with odd degrees of freedom.
Abstract: The cumulative distribution function of the non-central chi-square is very important in calculating the power functions of some statistical tests, but it involves an integral which is difficult to evaluate. In the literature some workers have discussed the evaluation and approximation of the c.d.f. of the non-central chi-square [see references (2)]. In the present work two computational formulae for computing the cumulative distribution function of the non-central chi-square distribution are given; the first deals with the case of any degrees of freedom (odd or even), and the second with the case of odd degrees of freedom. Numerical illustrations are discussed.
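One standard route (not necessarily the paper's formulae) expresses the non-central chi-square CDF as a Poisson-weighted mixture of central chi-square CDFs, each of which is a regularized incomplete gamma function. A self-contained sketch, with all names assumed:

```python
import math

def reg_lower_gamma(a, x, tol=1e-12):
    """Regularized lower incomplete gamma P(a, x), via its power series."""
    if x <= 0.0:
        return 0.0
    term = 1.0 / a
    total = term
    n = 0
    while term > tol * total:
        n += 1
        term *= x / (a + n)
        total += term
    return total * math.exp(a * math.log(x) - x - math.lgamma(a))

def noncentral_chi2_cdf(x, k, lam, terms=200):
    """CDF of the non-central chi-square with k d.f. and noncentrality lam.

    Poisson mixture: sum_j e^(-lam/2) (lam/2)^j / j! * F_{k+2j}(x), where
    F_m, the central chi-square CDF with m d.f., equals P(m/2, x/2).
    """
    half = lam / 2.0
    if half == 0.0:                       # central case
        return reg_lower_gamma(k / 2.0, x / 2.0)
    log_w = -half                         # log Poisson weight at j = 0
    cdf = 0.0
    for j in range(terms):
        cdf += math.exp(log_w) * reg_lower_gamma(k / 2.0 + j, x / 2.0)
        log_w += math.log(half) - math.log(j + 1.0)   # advance the weight
    return cdf
```

At lam = 0 the function reduces to the central chi-square CDF, and increasing the noncentrality shifts probability mass to the right, lowering the CDF at any fixed x.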

Journal ArticleDOI
TL;DR: This paper obtained exact tables of the null distribution of Spearman's footrule for sample sizes n = 4(1)40 by using a certain Markov chain property, and investigated the adequacy of approximations to the distribution.
Abstract: We obtain exact tables of the null distribution of Spearman's footrule for sample sizes n = 4(1)40 by using a certain Markov chain property, and we investigate the adequacy of approximations to the distribution.
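The statistic itself is just the total absolute displacement between two rankings, D = Σ|Rᵢ − Sᵢ|. A minimal sketch (the exact tables in the paper concern this statistic's null distribution, not its computation):

```python
def spearman_footrule(rank_x, rank_y):
    """Spearman's footrule: sum of absolute differences between two rankings."""
    return sum(abs(r - s) for r, s in zip(rank_x, rank_y))
```

Identical rankings give D = 0; a complete reversal of n items gives the maximum value ⌊n²/2⌋.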

Journal ArticleDOI
TL;DR: In this paper, the authors compare several methods for computing robust 1−α confidence intervals for σ₁²−σ₂² and σ₁²/σ₂², where σ₁² and σ₂² are the population variances corresponding to two independent treatment groups.
Abstract: The paper compares several methods for computing robust 1−α confidence intervals for σ₁²−σ₂², or σ₁²/σ₂², where σ₁² and σ₂² are the population variances corresponding to two independent treatment groups. The emphasis is on a Box-Scheffe approach when distributions have different shapes, and so the results reported here have implications for comparing means. The main result is that for unequal sample sizes, a Box-Scheffe approach can be considerably less robust than indicated by past investigations. Several other procedures for comparing variances, not based on a Box-Scheffe approach, were also examined and found to be highly unsatisfactory, although previously published papers found them to be robust when the distributions have identical shapes. Included is a new result on why the procedures examined here are not robust, and an illustration that increasing σ₁²−σ₂² can reduce power in certain situations. Constants needed to apply Dunnett’s robust comparison of means are included.

Journal ArticleDOI
TL;DR: In this article, the authors examined the small sample coverage probability and size of jackknife confidence intervals centered at a Stein-rule estimator, and used a Monte Carlo experiment to explore the coverage probabilities and lengths of nominal 90% and 95% delete-one and infinitesimal confidence intervals.
Abstract: The primary goal of this paper is to examine the small sample coverage probability and size of jackknife confidence intervals centered at a Stein-rule estimator. A Monte Carlo experiment is used to explore the coverage probabilities and lengths of nominal 90% and 95% delete-one and infinitesimal jackknife confidence intervals centered at the Stein-rule estimator; these are compared to those obtained using a bootstrap procedure.

Journal ArticleDOI
TL;DR: In this paper, the authors proposed a nonparametric measure of the efficacy of the ranking, called the net difference in ranks (NDR), which is the sum of the differences in ranks of the paired players in the observed contests that agree with the ranking.
Abstract: The ranking of paired contestants (players) after a series of contests is difficult when every player does not play every other player. In a 1975 JASA paper, Mark Thompson presented a maximum likelihood solution based on the assumption that the probability of any one player defeating any other is a function only of the difference in their ranks. Here the linear approximation to that likelihood is shown to lead to a nonparametric measure of the efficacy of the ranking, called the net difference in ranks (NDR), which is the sum of the differences in ranks of the paired players in the observed contests that agree with the ranking minus the sum of the differences in ranks in the observed contests that disagree with the ranking (upsets). The subject is part of a large literature that has been consolidated by H.A. David in The Method of Paired Comparisons (1963, 1988). The method was introduced by the psychophysicist Fechner in 1860 and has been widely applied to sensory testing, ...
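Under this definition, summing the signed rank gaps over the observed contests yields the NDR directly: a contest that agrees with the ranking contributes a positive gap and an upset a negative one. A minimal sketch (the data layout and function name are assumptions):

```python
def net_difference_in_ranks(ranking, contests):
    """Net difference in ranks (NDR) of a ranking over observed contests.

    ranking  : dict mapping player -> rank (1 = best)
    contests : iterable of (winner, loser) pairs

    Each contest contributes rank(loser) - rank(winner): positive when the
    better-ranked player won (agreement), negative for an upset.
    """
    return sum(ranking[loser] - ranking[winner] for winner, loser in contests)
```

A ranking that predicts every observed result has NDR equal to the sum of all rank gaps; upsets pull the measure down.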

Journal ArticleDOI
TL;DR: In this article, a table and a procedure are given for finding the single sampling attributes plan involving minimum sum of producer's and consumer's risks for specified Acceptable Quality Level and Limiting Quality Level.
Abstract: A table and a procedure are given for finding the single sampling attributes plan involving minimum sum of producer's and consumer's risks for specified Acceptable Quality Level and Limiting Quality Level.

Journal ArticleDOI
TL;DR: In this article, the authors presented simultaneous one-sided tolerance limits for regression models, which are equivalent to 1-sided confidence limits on percentiles, for a specified percentile of the conditional distributions of the dependent variable.
Abstract: Mee, Eberhardt, and Reeve (1989) recently produced tables of factors for simultaneous two-sided tolerance intervals for linear regression. These factors, obtained using numerical quadrature, provide narrower intervals than were previously available. Using identical notation, this article presents simultaneous one-sided tolerance limits for regression models. Since one-sided tolerance limits are equivalent to one-sided confidence limits on percentiles, the bounds proposed here provide simultaneous one-sided confidence limits for a specified percentile of the conditional distributions of the dependent variable. Although this specific problem had not been addressed previously in the literature, several authors have proposed simultaneous two-sided confidence intervals for a specified percentile of the dependent variable in regression (Steinhorst and Bowden 1971 ; Thomas and Thomas 1986; Turner and Bowden 1977). The limits proposed here have several applications to calibration (or discrimination). For example,...

Journal ArticleDOI
TL;DR: In this paper, the conditional maximum likelihood estimator is compared with the maximum likelihood and moment estimators for estimating the dispersion parameter of the negative binomial distribution, and the biases, mean squared errors and the Kullback-Leibler risks of the estimators are examined by simulation studies for a single population and multiple ones with a common parameter.
Abstract: For estimating the dispersion parameter of the negative binomial distribution, the conditional maximum likelihood estimator is compared with the maximum likelihood and moment estimators. The biases, mean squared errors and the Kullback-Leibler risks of the estimators are examined by simulation studies for a single population and for multiple populations with a common parameter. We observe that, on the whole, the conditional maximum likelihood estimator is superior to the others for a single population, and more decisively so for multiple populations.

Journal ArticleDOI
TL;DR: In this article, the authors introduce a decomposition based on orthogonality and degrees of freedom based on expected mean squares, for non-stochastic k, and note the obvious fallacies of such an approach.
Abstract: It appears to be common practice with ridge regression to obtain a decomposition of the total sum of squares, and assign degrees of freedom, according to established least squares theory. This discussion notes the obvious fallacies of such an approach, and introduces a decomposition based on orthogonality, and degrees of freedom based on expected mean squares, for non-stochastic k.

Journal ArticleDOI
TL;DR: Using elementary boxplots as stimuli, this empirical investigation found that box-length and whisker-length judgments are systematically affected by the relation between box and whisker length, consistent with the theory and findings of psychological investigations of the Baldwin illusion.
Abstract: Using elementary boxplots as stimuli, this empirical investigation established that box-length and whisker-length judgments are systematically affected by the relation between box and whisker length. These results are consistent with the theory and findings of psychological investigations of the Baldwin illusion (Girgus and Coren, 1982; Jordan and Schiano, 1986; Pressey and Smith, 1986).

Journal ArticleDOI
TL;DR: In this article, a method for generating samples from any member of a one-parameter family of bivariate distributions is presented, using this algorithm, a simulation study for six estimators of the dependence parameter (based on Kendall's tau, Spearman's rho and maximum likelihood) is also presented.
Abstract: A method for generating samples from any member of a one-parameter family of bivariate distributions is presented. Using this algorithm, a simulation study for six estimators of the dependence parameter (based on Kendall’s tau, Spearman’s rho and maximum likelihood) is also presented. Maximum likelihood estimators are clearly the best, but are dependent on the marginals. Among the nonparametric estimators, the one based on Kendall’s tau is preferable to the estimator based on Spearman’s rho. Jackknifing does not provide any significant improvement.