
Showing papers in "Journal of Statistical Computation and Simulation in 1987"


Journal ArticleDOI
TL;DR: In this paper, various modifications of Levene's test of homogeneity of variance are proposed and evaluated, including the use of Satterthwaite's method for correcting degrees of freedom, data-based power transformations, and computer simulation.
Abstract: Various modifications of Levene's test of homogeneity of variance are proposed and evaluated, including the use of (i) Satterthwaite's method for correcting degrees of freedom, (ii) data-based power transformations, and (iii) computer simulation. Satterthwaite's correction is shown to be effective in controlling the slightly liberal behaviour of Levene's test in small samples. The use of power transformation turns out to make the test extremely liberal and is not recommended. Modifications which employ computer simulation are exact under normality, and one version, at least, is asymptotically robust to nonnormality. They also possess excellent small-sample properties.

36 citations
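The classical Levene statistic that these modifications build on is just a one-way ANOVA F statistic computed on absolute deviations from group means. A minimal pure-Python sketch of that baseline (function name illustrative; the paper's Satterthwaite-corrected and simulation-based variants are not reproduced here):

```python
def levene_statistic(*groups):
    """Levene's W: the one-way ANOVA F statistic computed on the
    absolute deviations z_ij = |x_ij - group mean| (mean-centered
    version). Large values suggest unequal group variances."""
    z = [[abs(x - sum(g) / len(g)) for x in g] for g in groups]
    k = len(z)                          # number of groups
    n = sum(len(zi) for zi in z)        # total sample size
    grand = sum(sum(zi) for zi in z) / n
    means = [sum(zi) / len(zi) for zi in z]
    between = sum(len(zi) * (m - grand) ** 2
                  for zi, m in zip(z, means)) / (k - 1)
    within = sum((v - m) ** 2
                 for zi, m in zip(z, means) for v in zi) / (n - k)
    return between / within
```

Under the null, W is referred to an F distribution on (k − 1, n − k) degrees of freedom; the paper's point is precisely that this reference is slightly liberal in small samples.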


Journal ArticleDOI
TL;DR: In this paper, the exact distribution of Bartlett's statistic for testing equality of variances for Gaussian models is derived for equal as well as unequal sample sizes, and percentage points have also been tabulated for an equivalent test statistic.
Abstract: In this paper exact distribution of Bartlett's statistic for testing equality of variances for Gaussian models is derived for equal as well as unequal sample sizes. Percentage points have also been tabulated for an equivalent test statistic.

34 citations
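For reference, the conventional Bartlett statistic whose exact null distribution the paper derives can be sketched as follows. This is the usual corrected chi-squared-approximation form, taking sample variances and sample sizes as input; names are illustrative:

```python
import math

def bartlett_statistic(sample_vars, ns):
    """Bartlett's corrected statistic M/C for testing equality of k
    variances, given sample variances s_i^2 and sample sizes n_i.
    Conventionally referred to chi-squared(k - 1); the paper instead
    derives the exact distribution of an equivalent statistic."""
    k = len(sample_vars)
    dfs = [n - 1 for n in ns]
    total_df = sum(dfs)
    pooled = sum(df * s2 for df, s2 in zip(dfs, sample_vars)) / total_df
    m = total_df * math.log(pooled) - sum(
        df * math.log(s2) for df, s2 in zip(dfs, sample_vars))
    c = 1.0 + (sum(1.0 / df for df in dfs) - 1.0 / total_df) / (3.0 * (k - 1))
    return m / c
```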


Journal ArticleDOI
Xi-Ren Cao1
TL;DR: In this paper, two estimates of the sensitivity based on only one simulation of the system with θ are investigated, their properties are compared, and the basic results are illustrated through some simple examples.
Abstract: To estimate the performance sensitivity with respect to a parameter θ using the Monte Carlo method usually requires two simulations, one for the system with parameter θ and the other for that with θ + Δθ. In this paper, two estimates of the sensitivity based on only one simulation of the system with θ are investigated. Their properties are compared. The basic results are illustrated through some simple examples.

34 citations
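One general route to single-run sensitivity estimates is the likelihood-ratio (score-function) method: weight the simulated performance by the derivative of the log density. The paper's specific estimators are not reproduced here; this sketch only illustrates the one-simulation idea on an exponential system where the true sensitivity d/dθ E[X] = −1/θ² is known in closed form:

```python
import random

def lr_sensitivity(theta, n=100000, seed=0):
    """Single-run score-function estimate of d/dtheta E[X] for
    X ~ Exponential(rate=theta): average f(X) * dlog p(X; theta)/dtheta,
    where dlog p/dtheta = 1/theta - x.  True value is -1/theta**2."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.expovariate(theta)       # one sample from the theta-system
        total += x * (1.0 / theta - x)   # performance times score
    return total / n
```

The key point, matching the abstract, is that no second simulation at θ + Δθ is needed.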


Journal ArticleDOI
TL;DR: In this paper, the Anderson-Darling (AD) statistic was used for testing the composite hypothesis of Gaussianity (normality) for dimensionality 1 ≤ p ≤ 5 when the parameters are estimated from data.
Abstract: This paper presents finite sample percentage points of the Anderson-Darling (AD) statistic for testing the composite hypothesis of Gaussianity (normality) for dimensionality 1 ≤ p ≤ 5 when the parameters are estimated from data. This paper also presents asymptotic percentage points for both the Anderson-Darling and the Cramér-von Mises (CM) statistics for testing the composite hypothesis of Gaussianity for dimensionality 1 ≤ p ≤ 25 when the parameters are estimated from data. The AD test is developed from the fact that the quadratic form of the Gaussian density is distributed as chi-squared on p degrees of freedom. A small power study contrasts the finite sample performance of the AD and CM statistics. Several examples are discussed in the context of chi-squared probability plotting.

24 citations
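To illustrate the chi-squared construction the AD test rests on, here is a sketch for p = 2 with known (not estimated) parameters: squared radii of standard bivariate normal points are chi-squared(2), whose CDF is simply 1 − exp(−d/2), and the standard Anderson-Darling formula is applied to those probability transforms. The estimated-parameter case tabulated in the paper is more involved:

```python
import math

def ad_statistic_chisq2(points):
    """Anderson-Darling A^2 for bivariate normality with known standard
    parameters: map each point to u = F(d) where d = x^2 + y^2 is
    chi-squared(2) under the null, F(d) = 1 - exp(-d/2), then apply
    the usual A^2 formula to the sorted u values."""
    u = sorted(1.0 - math.exp(-(x * x + y * y) / 2.0) for x, y in points)
    n = len(u)
    s = sum((2 * i + 1) * (math.log(u[i]) + math.log(1.0 - u[n - 1 - i]))
            for i in range(n))
    return -n - s / n
```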


Journal ArticleDOI
TL;DR: In this article, the strategy of removing points via a backwards-stepping outlier detection procedure and then taking the standard deviation of the remaining points is examined as a robust scale estimator via computer simulations.
Abstract: The strategy of removing points via a backwards-stepping outlier detection procedure and then taking the standard deviation of the remaining points is examined as a robust scale estimator via computer simulations. It is shown that this procedure compares favorably with the most effective robust scale estimators. This is particularly true if the outliers are relatively extreme or follow an asymmetric distribution. It is also shown that this strategy results in an estimator with high breakdown and redescending influence.

23 citations
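A minimal sketch of the backwards-stepping idea: repeatedly delete the most extreme point while its standardized deviation exceeds a threshold, then report the standard deviation of what remains. The cutoff and stopping rule here are illustrative, not the paper's calibrated procedure; note that in a sample of size n the largest attainable standardized deviation is (n − 1)/√n, so the cutoff must be chosen with the sample size in mind:

```python
import statistics

def trimmed_scale(data, z_cut=2.5, max_remove=None):
    """Backwards-stepping scale estimate: while the most extreme point
    lies more than z_cut standard deviations from the current mean,
    drop it; return the SD of the surviving points."""
    pts = list(data)
    if max_remove is None:
        max_remove = len(pts) // 4      # never trim more than a quarter
    for _ in range(max_remove):
        m = statistics.mean(pts)
        s = statistics.stdev(pts)
        worst = max(pts, key=lambda x: abs(x - m))
        if s == 0 or abs(worst - m) / s <= z_cut:
            break                       # no remaining point is extreme
        pts.remove(worst)
    return statistics.stdev(pts)
```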


Journal ArticleDOI
TL;DR: In this article, a data-based estimator of a dose-response curve is developed, which requires no assumptions about the functional form of the relationship between response probability and dosage.
Abstract: A data-based estimator of a dose-response curve is developed. The estimator requires no assumptions about the functional form of the relationship between response probability and dosage. Also developed is a procedure, based on this estimator, for estimating the median effective dose. A simulation study was conducted to compare the performances of the procedure and Spearman-Karber type ED50 estimators recommended by previous studies.

21 citations
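The abstract does not specify the estimator's form, so as a generic illustration of functional-form-free dose-response estimation, here is the pool-adjacent-violators algorithm, which fits a nondecreasing curve to response proportions observed at increasing doses. This is a standard nonparametric device, not necessarily the paper's construction:

```python
def pav_isotonic(props, weights):
    """Pool-adjacent-violators: weighted least-squares nondecreasing fit
    to response proportions at increasing doses.  Whenever a proportion
    exceeds its right neighbour, the two are pooled into their weighted
    average; returns one fitted value per input dose level."""
    blocks = [[p, w, 1] for p, w in zip(props, weights)]  # value, weight, count
    i = 0
    while i + 1 < len(blocks):
        if blocks[i][0] > blocks[i + 1][0]:   # monotonicity violated: pool
            p1, w1, c1 = blocks[i]
            p2, w2, c2 = blocks[i + 1]
            blocks[i] = [(p1 * w1 + p2 * w2) / (w1 + w2), w1 + w2, c1 + c2]
            del blocks[i + 1]
            i = max(i - 1, 0)                 # re-check against the left
        else:
            i += 1
    fitted = []
    for p, _, c in blocks:
        fitted.extend([p] * c)
    return fitted
```

The ED50 can then be read off as the dose at which the fitted curve crosses 0.5.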


Journal ArticleDOI
TL;DR: In this paper, the authors derived some Volterra integral equations whose solutions are the first passage time distribution for the Wiener process, and derived bounds for the crossing probabilities.
Abstract: We derive some Volterra integral equations whose solutions are the first passage time distribution for the Wiener Process. Numerical solutions are then discussed, and we find improved rates of convergence over previous results in this area. We also derive bounds for the crossing probabilities.

15 citations


Journal ArticleDOI
TL;DR: In this article, a chi-square type statistic is devised to test the homogeneity of several Paretian Laws and its correctness is assessed numerically by simulating its quantiles and comparing them with the exact quantiles of a Chi-square variate.
Abstract: A chi-square type statistic is devised to test the homogeneity of several Paretian Laws. It is assessed numerically by simulating its quantiles and comparing them with the exact quantiles of a chi-square variate. When the two parameters of the distribution are unknown, a prior with finite probability measure is considered and the Pearson system of curves is used to approximate the posterior distribution of parameters.

14 citations


Journal ArticleDOI
TL;DR: In this article, the mean squared errors of two methods of estimating percentage points are compared, one using a single order statistic from a single large sample and the other averaging the appropriate order statistics from each of several smaller samples.
Abstract: The mean squared errors are compared for two methods of estimating percentage points. One method uses a single order statistic from a single large sample. The other method averages the appropriate order statistics from each of several smaller samples. Four examples are given with finite sample sizes. Asymptotic results are presented for large sample sizes.

12 citations
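The two competing estimators can be sketched directly. The rank convention k = round(p(n + 1)) used below is one common illustrative choice, not necessarily the paper's:

```python
import statistics

def single_sample_quantile(xs, p):
    """Percentage point via one order statistic from the full sample."""
    xs = sorted(xs)
    k = max(0, min(len(xs) - 1, round(p * (len(xs) + 1)) - 1))
    return xs[k]

def subsample_average_quantile(xs, p, groups):
    """Average the matching order statistic across equal-size
    subsamples (any leftover observations are ignored)."""
    m = len(xs) // groups
    vals = []
    for g in range(groups):
        sub = sorted(xs[g * m:(g + 1) * m])
        k = max(0, min(m - 1, round(p * (m + 1)) - 1))
        vals.append(sub[k])
    return statistics.mean(vals)
```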


Journal ArticleDOI
TL;DR: In this article, Monte Carlo techniques are used to study the behavior of the Pearson chi-square statistic X², the likelihood ratio statistic G², the Freeman-Tukey statistic T², the power divergence statistic I(2/3) suggested by Cressie and Read (1984), and the Wald statistic W for small and moderate sample sizes.
Abstract: This paper is concerned with comparison of chi-square tests for the hypothesis of no three-factor interaction in an I×J×K contingency table. Monte Carlo techniques are used to study the behavior of the Pearson chi-square statistic X², the likelihood ratio statistic G², the Freeman-Tukey statistic T², the power divergence statistic I(2/3) suggested by Cressie and Read (1984), and the Wald statistic W for small and moderate sample sizes. Results suggest that, in comparison to the other test criteria, the Pearson chi-square statistic X² and the power divergence statistic I(2/3) attain levels quite close to the nominal values. These two statistics also have similar powers; however, X² has a slight edge over I(2/3).

12 citations
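Three of the statistics compared above are simple functions of observed and expected cell counts; a sketch of Pearson's X², the likelihood-ratio G², and the Cressie-Read power divergence at λ = 2/3 (expected counts are assumed supplied, e.g. from fitting the no-three-factor-interaction model, which is not reproduced here):

```python
import math

def divergence_stats(observed, expected):
    """Pearson X^2, likelihood-ratio G^2, and Cressie-Read power
    divergence I(lambda) at lambda = 2/3, for flattened lists of
    matched observed/expected cell counts."""
    x2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
    g2 = 2.0 * sum(o * math.log(o / e)
                   for o, e in zip(observed, expected) if o > 0)
    lam = 2.0 / 3.0
    cr = (2.0 / (lam * (lam + 1.0))) * sum(
        o * ((o / e) ** lam - 1.0) for o, e in zip(observed, expected))
    return x2, g2, cr
```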



Journal ArticleDOI
TL;DR: In this paper, Sk is shown to be the convex set of all possible distribution vectors after k periods when the transition matrices are chosen between P and Q; each Sk is computed from its predecessor, and the long-run behavior is characterized.
Abstract: Let P ≤ Q be n×n nonnegative matrices. We show that Sk is the convex set of all possible distribution vectors after k periods when the transition matrices are chosen between P and Q. We show how to compute each Sk from its predecessor and how to compute the long-run behavior of Sk as k → ∞.

Journal ArticleDOI
TL;DR: In this paper, the authors compared the performance of m.l.e.s and other estimators for the ED50 by computer simulation of experiments, and calculation of asymptotic properties.
Abstract: Many estimators for the ED50 have been suggested for use in Up-and-Down experiments. Here m.l.e.s. and other estimators are compared by computer simulation of experiments, and calculation of asymptotic properties. Brownlee's "delayed" estimator and Dixon and Mood's estimator have mean squared errors that are usually close to or below that for the other alternatives to m.l.e.s. They have properties similar to those of the m.l.e., but usually have larger biases.


Journal ArticleDOI
TL;DR: Methods for estimating parameter values and for identifying the nonzero elements in vector moving average models which are at least an order of magnitude faster than existing maximum likelihood procedures are presented.
Abstract: In practice the number of nonzero parameters in the polynomial matrices of vector time series models is often small. Consequently, estimation of fully parameterized models, particularly those containing many variables, can be computationally quite demanding. In this article we present methods for estimating parameter values and for identifying the nonzero elements in vector moving average models which are at least an order of magnitude faster than existing maximum likelihood procedures.

Journal ArticleDOI
TL;DR: In this article, the marginal posterior density of the log odds ratios of a 2×2×2 contingency table under a prior hypothesis of independence has been investigated, where the predictive distribution is used to judge the goodness of fit of the prior model.
Abstract: Two estimation problems are considered: (1) the estimation of an odds ratio from a 2×2 contingency table under a prior hypothesis of independence; and (2) the simultaneous estimation of odds ratios from k 2×2 tables, when the odds ratios are believed a priori exchangeable. The predictive distribution is used to judge the goodness-of-fit of the prior model. Some numerical techniques are investigated for the accurate and efficient computation of the marginal posterior density of the log odds ratios. The methods are applied to the analysis of a 2×2×2 table in Bartlett (1935).

Journal ArticleDOI
TL;DR: In this paper, empirical Bayes estimators of variance components were obtained for the balanced random model which contains the balanced nested design and the balanced Random Cross-Classification design without interaction.
Abstract: For the balanced random model which contains the balanced nested design and the balanced random cross-classification design without interaction, we obtain empirical Bayes estimators of variance components. To illustrate the results some simulation studies are also carried out. It turns out that these estimators have smaller mean squared error than the minimum variance unbiased estimators (MVUE). Moreover, in general these estimators give negative values of estimators less frequently than the MVUE.

Journal ArticleDOI
TL;DR: In this article, a method for approximation of cumulative joint probabilities is presented in its multivariate generalization, which utilizes joint moments, through fourth order, of the distribution to be approximated to determine Pearson-type differential equation constants and associated conditional moments.
Abstract: A method for approximation of cumulative joint probabilities is presented in its multivariate generalization. The technique utilizes joint moments, through fourth order, of the distribution to be approximated to determine Pearson-type differential equation constants and associated conditional moments. Nonstandard Gaussian quadrature techniques are employed for integral evaluation, yielding an expression involving only univariate Pearson distribution function values. Under stated conditions, this method produces mathematically exact evaluations of distribution functions that are of the Pearson class. Illustrations are given for the trivariate case.

Journal ArticleDOI
TL;DR: In this article, an inverse Volterra type model for fitting time series data is investigated, based on polyspectral analysis of the process an identification and estimation technique is developed, the algorithm for which is presented here.
Abstract: An inverse Volterra type model for fitting time series data is investigated here. Based on polyspectral analysis of the process, an identification and estimation technique is developed, the algorithm for which is presented here. The methodology has been applied to two well-known nonlinear processes, viz. the Canadian lynx series and Wolfer's sunspot series. A one-step-ahead predictor is constructed; forecast results show considerable improvement over linear fitting and in some situations may be better than those obtained from bilinear modelling.


Journal ArticleDOI
TL;DR: In this article, Bechhofer and Kulkarni modify the closed adaptive sequential procedures to obtain two procedures for selecting the best of t objects in a curtailed Round Robin-type paired-comparison experiment.
Abstract: Closed adaptive sequential procedures were introduced by Bechhofer and Kulkarni (1982) as a new method for selecting the best of t (≥ 2) Bernoulli populations. We modify this approach to obtain two procedures for selecting the best of t objects in a curtailed Round Robin-type paired-comparison experiment. Objects are paired sequentially and the experiment is stopped as soon as one object has achieved a number of preferences that no other object can equal (weak curtailment) or surpass (strong curtailment) if the tournament were run to completion. Ties for first place are broken at random. Weak curtailment clearly selects the same object as the complete Round Robin, with generally substantially fewer comparisons. Strong curtailment effects a further appreciable reduction in the average number of comparisons needed. It is shown that the probabilities of selecting a particular object are the same for weak and strong curtailment if the Bradley-Terry model holds, but generally not otherwise. Some comparisons are...



Journal ArticleDOI
TL;DR: In this article, the authors investigate the effect of autocorrelation among errors on the rate of convergence of the OLS estimator in a linear trend model, showing that negative autocorrelation speeds convergence while positive autocorrelation slows it, with the required sample size growing with the intensity of positive autocorrelation.
Abstract: In a linear regression model, even when the errors are autocorrelated and non-normal, the ordinary least squares (OLS) estimator of the regression coefficients converges in probability to β. But the effects of autocorrelation among errors on this rate of convergence are unknown. In this paper, we investigate these effects for the case of a linear trend model. It is shown that the rate of convergence becomes faster if the autocorrelation is negative, while it becomes slower if the autocorrelation is positive, as compared to when the errors are independent. Thus, if the autocorrelation among errors is negative, the consistency of the OLS estimator is achieved with the same sample size (or even less) as needed when the errors are independent. But if the autocorrelation is positive, the sample size needed to achieve consistency increases with the intensity of autocorrelation and can be extremely large for high positive autocorrelations.

Journal ArticleDOI
TL;DR: In epidemiological and occupational health studies, it is not uncommon to find that the distribution of exposure to a quantitative risk factor is highly skewed, as, for example, when most of a population suffers little or no exposure to hazardous agents while a small subset receives very high exposures as discussed by the authors.
Abstract: In epidemiological and occupational health studies it is not uncommon to find that the distribution of exposure to a quantitative risk factor is highly skewed, as, for example, when most of a population suffers little or no exposure to a hazardous agent while a small subset receives very high exposures. As a result the asymptotic significance levels of conditional tests for monotone trends in rates or proportions can be profoundly anticonservative when applied to small or even moderate numbers of events. Monte Carlo (MC) estimation of observed levels of significance ("p-values") provides a useful method for accurately assessing statistical significance in such situations. We describe a simple technique of importance sampling (IS) which can greatly improve the efficiency of MC estimation in this setting. Use of the IS technique is illustrated with data regarding cancer mortality among atomic bomb survivors.
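The efficiency gain importance sampling offers for small p-values can be seen in a toy version: estimating a Gaussian tail probability by sampling from a distribution shifted into the tail and reweighting by the density ratio. This sketch is only illustrative of the IS idea; the paper's application to conditional trend tests is more elaborate:

```python
import math
import random

def is_tail_prob(threshold, n=200000, seed=1):
    """Importance-sampled estimate of P(Z > threshold) for Z ~ N(0,1):
    sample from N(threshold, 1) so the tail event is common, and
    reweight each hit by the density ratio
    phi(y)/phi(y - threshold) = exp(threshold^2/2 - threshold*y)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        y = rng.gauss(threshold, 1.0)
        if y > threshold:
            total += math.exp(threshold * threshold / 2.0 - threshold * y)
    return total / n
```

Naive Monte Carlo would need on the order of millions of draws to see even a handful of exceedances of Z > 4; the shifted sampler hits the tail on roughly half of its draws.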

Journal ArticleDOI
TL;DR: This paper evaluates and compares four algorithms for estimating stationary Markov chain models with embedded parameters from aggregate frequency data and presents an application, using the best-performing algorithm, for U.S. population migration.
Abstract: In this paper, we evaluate and compare four algorithms for estimating stationary Markov chain models with embedded parameters from aggregate frequency data. By means of factorially designed Monte Carlo simulation experiments, we are able to determine the effects of model characteristics on algorithm accuracy and efficiency. We then present an application, using the best-performing algorithm, for U.S. population migration.


Journal ArticleDOI
TL;DR: In this paper, an OLS estimator is used to obtain reasonably accurate estimates of the duration of dynamic effects in a Koyck model framework without knowledge of the true level of temporal aggregation of the data.
Abstract: This paper discusses how an Ordinary Least Squares (OLS) estimator can be used to obtain reasonably accurate estimates of the duration of dynamic effects in a Koyck model framework without knowledge of the true level of temporal aggregation of the data. With proper changes in the analytic derivation, the approach can be extended to other dynamic models.

Journal ArticleDOI
TL;DR: The authors formalizes the doubling-up strategy and develops expressions for success probability and the corresponding number of wins necessary to achieve a specific wealth goal, as well as the expected number of plays given that the goal is achieved.
Abstract: This note formalizes the popular doubling-up (DU) betting strategy. We develop expressions for success probability and the corresponding number of wins necessary to achieve a specific wealth goal, as well as the expected number of plays given that the goal is achieved. Three popular games of chance provide numerical illustration of the process. The doubling-up (DU) betting strategy has been a favorite tactic employed by gamblers throughout the ages. Assume the player is involved in a game which possesses two outcomes (win or lose) and pays even money. He begins by making an initial bet B1. If he wins, the playing sequence terminates, and he starts the next sequence with a second initial wager B2. If he loses on the first bet, he bets 2B1 on the next game (i.e. he "doubles up"). If he loses again, the next wager is 4B1 and so on until the sequence terminates with a success or bankruptcy. It is clear that the player enters each playing sequence with the hope of winning an amount equal to his initial wager. ...
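The success probability of one doubling-up sequence follows directly from the description above: with bankroll W and initial bet B, the player can absorb m consecutive losses, where m is the largest integer with B(2^m − 1) ≤ W, so the sequence fails only on m straight losses. A sketch:

```python
def martingale_success_prob(p, bankroll, bet=1):
    """Probability that one doubling-up sequence ends in a win of
    `bet`, given single-game win probability p and even-money payoffs.
    The player can place at most m bets (stakes bet, 2*bet, ...,
    2**(m-1)*bet, summing to bet*(2**m - 1) <= bankroll), so the
    sequence fails only on m consecutive losses: 1 - (1-p)**m."""
    m = 0
    while bet * (2 ** (m + 1) - 1) <= bankroll:
        m += 1
    return 1.0 - (1.0 - p) ** m
```

For a fair game (p = 1/2) the expected gain of a sequence is still zero, as it must be: with bankroll 7 and unit bets, the player wins 1 with probability 0.875 and loses all 7 with probability 0.125.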

Journal ArticleDOI
TL;DR: In this paper, a family of estimators based on the Pearson statistic is considered for estimation of the shape parameter in gamma regression models, and approximate estimates to the biases of the estimators are developed and used to define a bias corrected estimator for the case of a logarithmic link for the means.
Abstract: A family of estimators based on the Pearson statistic is considered for estimation of the shape parameter in gamma regression models. Approximations to the biases of the estimators are developed and used to define a bias corrected estimator for the case of a logarithmic link for the means. The new estimator is shown to have markedly better variance and mean square error properties than existing estimators based on the Pearson statistic, in small to moderate size samples.