
Showing papers in "Journal of Statistical Computation and Simulation in 1992"


Journal ArticleDOI
TL;DR: In this paper, an Adaptive Importance Sampling (AIS) scheme is introduced as a mechanical, yet flexible, way of dealing with the selection of the parameters of the importance function.
Abstract: An Adaptive Importance Sampling (AIS) scheme is introduced to compute integrals as a mechanical, yet flexible, way of dealing with the selection of the parameters of the importance function. AIS starts with a rough estimate for the parameters λ of the importance function g, and runs importance sampling in an iterative way to continually update λ using only linear accumulation. Consistency of AIS is established. The efficiency of the algorithm is studied in three examples and found to be substantially superior to ordinary importance sampling.
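A minimal sketch of the adaptive loop described above, assuming a normal importance family whose parameters λ = (μ, σ) are refreshed from linearly accumulated weighted moments; the target density, integrand, and update rule here are illustrative stand-ins, not the paper's exact scheme:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Target: estimate E_f[h(X)] for an unnormalized density f (illustrative).
f = lambda x: np.exp(-0.5 * (x - 3.0) ** 2)   # unnormalized N(3, 1) target
h = lambda x: x ** 2                          # integrand of interest

# Importance family g(.; lam) = Normal(mu, sigma^2); lam = (mu, sigma) is
# refreshed at each stage from weighted moments ("linear accumulation").
mu, sigma = 0.0, 5.0                          # rough starting values
sw = swx = swx2 = swh = 0.0                   # running (linear) accumulators

for stage in range(10):
    x = rng.normal(mu, sigma, size=2000)
    w = f(x) / stats.norm.pdf(x, mu, sigma)   # importance weights
    # accumulate linearly across stages
    sw += w.sum(); swx += (w * x).sum()
    swx2 += (w * x * x).sum(); swh += (w * h(x)).sum()
    # update lambda from the accumulated weighted moments
    mu = swx / sw
    sigma = max(np.sqrt(swx2 / sw - mu ** 2), 1e-3)

print("estimate of E_f[h(X)]:", swh / sw)     # should be near 1 + 3^2 = 10
```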

180 citations


Journal ArticleDOI
TL;DR: In this paper, the effect of autocorrelation on the retrospective Shewhart chart for individuals, often referred to as the X-chart, with control limits based on moving ranges, is demonstrated.
Abstract: Quality control chart interpretation is usually based on the assumption that successive observations are independent over time. In this article we show the effect of autocorrelation on the retrospective Shewhart chart for individuals, often referred to as the X-chart, with the control limits based on moving ranges. It is shown that the presence of positive first lag autocorrelation results in an increased number of false alarms from the control chart. Negative first lag autocorrelation can result in unnecessarily wide control limits such that significant shifts in the process mean may go undetected. We use first-order autoregressive and first-order moving average models in our simulation of small samples of autocorrelated data.
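A hedged simulation sketch in the spirit of the study: generate in-control Gaussian AR(1) data, apply the usual individuals-chart limits x̄ ± 2.66·(mean moving range), and record the false-alarm rate. The constant 2.66 = 3/1.128 is the standard one; sample sizes and replication counts are arbitrary choices here.

```python
import numpy as np

rng = np.random.default_rng(1)

def x_chart_false_alarms(phi, n=50, reps=2000):
    """Fraction of in-control AR(1) series of length n that signal at least
    once on an individuals (X) chart with moving-range control limits."""
    signals = 0
    for _ in range(reps):
        e = rng.normal(size=n + 50)
        x = np.empty(n + 50)
        x[0] = e[0]
        for t in range(1, n + 50):          # AR(1): x_t = phi*x_{t-1} + e_t
            x[t] = phi * x[t - 1] + e[t]
        x = x[50:]                          # drop burn-in
        mr_bar = np.mean(np.abs(np.diff(x)))
        center = x.mean()
        ucl = center + 2.66 * mr_bar        # 3-sigma limits, sigma ~ MRbar/1.128
        lcl = center - 2.66 * mr_bar
        signals += np.any((x > ucl) | (x < lcl))
    return signals / reps

for phi in (-0.5, 0.0, 0.5, 0.9):           # positive phi inflates false alarms
    print(f"phi = {phi:+.1f}: P(false alarm) = {x_chart_false_alarms(phi):.3f}")
```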

129 citations


Journal ArticleDOI
TL;DR: In this paper, Bayes estimates of the two parameters and of the reliability and failure rate functions are obtained using the Bayes approximation form due to Lindley (1980), and their estimated risks are compared with those of the maximum likelihood estimates.
Abstract: Based on a type-2 censored sample of the lifetimes from a two-parameter Burr type-XII failure time model, the Bayes estimates of the two (unknown) parameters and of the reliability and failure rate functions are obtained by using the Bayes approximation form due to Lindley (1980). The estimated risks of the Bayes estimates are computed and compared with the corresponding estimated risks of the maximum likelihood estimates.
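For orientation, the single-parameter form of Lindley's (1980) approximation is sketched below from memory (the paper applies the two-parameter analogue, which carries extra cross-derivative terms):

```latex
% With L the log-likelihood, \rho the log-prior, \hat\theta the MLE,
% and \sigma^2 = -1/L''(\hat\theta), the posterior mean of u(\theta) is
E\{u(\theta)\mid \text{data}\} \;\approx\; u(\hat\theta)
  \;+\;\tfrac12\bigl[u''(\hat\theta)+2\,u'(\hat\theta)\,\rho'(\hat\theta)\bigr]\sigma^{2}
  \;+\;\tfrac12\,u'(\hat\theta)\,L'''(\hat\theta)\,\sigma^{4}.
```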

81 citations


Journal ArticleDOI
TL;DR: In this paper, generalized estimating equations are shown to estimate the regression parameters well in small samples of correlated binary data, while estimation of the correlation parameters is more difficult and misspecification of the correlation structure has only a modest effect.
Abstract: Liang and Zeger (1986, Biometrika 73, 13-22) and Zeger and Liang (1986, Biometrics 42, 121-30) proposed a generalized estimating equation approach to the estimation of covariate effects on correlated binary outcomes. They showed that estimates of the regression parameters are consistent and asymptotically normal even if the correlation structure is misspecified. We present the results of a simulation study designed to evaluate the small sample properties of the estimating equations and to make some comparisons with maximum likelihood. We consider the situation where the block sizes are small and the covariates (block and sub-unit level) are binary. The generalized estimating equations are shown to estimate the regression parameters well in small samples of this type, but estimation of the correlation parameters is more difficult. Misspecification of the correlation structure has some effect on bias and efficiency for small samples (100) but the effect is negligible for larger samples. In these cases, use of th...
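A sketch of the kind of experiment described, using statsmodels' GEE implementation as a modern stand-in for the authors' software; the data-generating model (a shared block effect inducing within-block correlation) and all sizes are illustrative assumptions:

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.genmod.cov_struct import Exchangeable, Independence
from statsmodels.genmod.families import Binomial

rng = np.random.default_rng(2)

# 100 blocks of size 4, a block-level and a sub-unit-level binary covariate.
n_blocks, m = 100, 4
groups = np.repeat(np.arange(n_blocks), m)
x_block = np.repeat(rng.integers(0, 2, n_blocks), m)   # block-level covariate
x_sub = rng.integers(0, 2, n_blocks * m)               # sub-unit covariate
b = rng.normal(0.0, 0.5, n_blocks)                     # shared block effect
eta = -0.5 + 1.0 * x_block + 0.5 * x_sub + np.repeat(b, m)
y = rng.binomial(1, 1 / (1 + np.exp(-eta)))

X = sm.add_constant(np.column_stack([x_block, x_sub]))
# Fit under two working correlations, one deliberately misspecified.
for cs in (Independence(), Exchangeable()):
    res = sm.GEE(y, X, groups=groups, family=Binomial(), cov_struct=cs).fit()
    print(type(cs).__name__, np.round(res.params, 3))
```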

71 citations


Journal ArticleDOI
TL;DR: In this paper, two goodness-of-fit statistics for distributional shape are derived from density estimates: one is a form of integrated squared error and the other is an estimate of entropy.
Abstract: Two goodness-of-fit statistics for distributional shape are derived from density estimates. One is a form of integrated squared error and the other is an estimate of entropy. Both of these statistics are shown to have good power properties when assessing the fit of a Normal distribution. The integrated squared error statistic also performs well for von Mises distributions on the circle and provides a useful alternative to the U² statistic.
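A rough sketch of both kinds of statistic for the normal case, using a Gaussian kernel density estimate; the bandwidth rule, integration range, and calibration are simplifications, and the paper's statistics differ in detail:

```python
import numpy as np
from scipy import stats
from scipy.integrate import quad

rng = np.random.default_rng(3)
x = rng.normal(size=200)                     # sample to test for normality
x = (x - x.mean()) / x.std(ddof=1)           # studentize

kde = stats.gaussian_kde(x)                  # density estimate f_hat

# (1) integrated squared error between f_hat and the fitted N(0,1) density
ise, _ = quad(lambda t: (kde(t)[0] - stats.norm.pdf(t)) ** 2, -6, 6)

# (2) entropy-type statistic: -mean log f_hat(X_i), compared with the
#     entropy of the standard normal, 0.5*log(2*pi*e)
ent_hat = -np.mean(np.log(kde(x)))
ent_norm = 0.5 * np.log(2 * np.pi * np.e)

print(f"ISE statistic: {ise:.4f}")
print(f"entropy estimate {ent_hat:.4f} vs normal entropy {ent_norm:.4f}")
```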

63 citations


Journal ArticleDOI
TL;DR: In this article, the authors study the small sample properties of generalized estimating equation estimates for multivariate dichotomous outcomes.
Abstract: (1992). On some small sample properties of generalized estimating equation estimates for multivariate dichotomous outcomes. Journal of Statistical Computation and Simulation: Vol. 41, No. 1-2, pp. 19-29.

58 citations


Journal ArticleDOI
TL;DR: In this paper, several tests for multivariate normality based on coefficients of multivariate skewness and kurtosis, including Mardia's and Small's tests, are compared via Monte Carlo simulation in terms of their power against a wide variety of non-MVN distributions.
Abstract: The examination of coefficients of multivariate skewness and kurtosis is one of the more commonly used techniques for assessing multivariate normality (MVN). In this article, several tests for MVN based on these coefficients are compared via Monte Carlo simulation. The tests considered here include those based on Mardia's affine-invariant measures of multivariate skewness and kurtosis and an omnibus procedure that combines the two. Also included are Small's tests, which are based on combinations of the marginal skewness and kurtosis coefficients and are coordinate-dependent. These tests are compared in terms of their power against a wide variety of non-MVN distributions; included in these alternatives are distributions with iid components, as well as distributions with positively-correlated components. Among the alternatives considered are non-MVN distributions with skewed components, symmetric components, univariate normal components, and MVN values of skewness and kurtosis. The tests considered here per...
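A sketch of Mardia's affine-invariant coefficients with their usual large-sample reference distributions; the small-sample behaviour examined in the paper is exactly where these asymptotic calibrations get strained:

```python
import numpy as np
from scipy import stats

def mardia(X):
    """Mardia's multivariate skewness b1p and kurtosis b2p with the usual
    asymptotic tests (chi-square and standard normal, respectively)."""
    n, p = X.shape
    Xc = X - X.mean(axis=0)
    S_inv = np.linalg.inv(np.cov(X, rowvar=False, bias=True))  # MLE covariance
    D = Xc @ S_inv @ Xc.T                  # entries (xi-xbar)' S^-1 (xj-xbar)
    b1p = (D ** 3).sum() / n ** 2
    b2p = (np.diag(D) ** 2).mean()
    skew_stat = n * b1p / 6.0              # ~ chi2 with p(p+1)(p+2)/6 df
    df = p * (p + 1) * (p + 2) / 6.0
    kurt_stat = (b2p - p * (p + 2)) / np.sqrt(8.0 * p * (p + 2) / n)  # ~ N(0,1)
    return (b1p, 1 - stats.chi2.cdf(skew_stat, df),
            b2p, 2 * (1 - stats.norm.cdf(abs(kurt_stat))))

rng = np.random.default_rng(4)
X = rng.normal(size=(200, 3))              # MVN data: both p-values should be large
print(mardia(X))
```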

54 citations


Journal ArticleDOI
TL;DR: In this paper, a shrinkage preliminary test estimator (SPTE) of the mean vector is proposed, which may be viewed as a preliminary test estimator improving the usual one given by Ahmed (1987).
Abstract: To estimate the mean vector of a multivariate normal distribution, a random sample of size n1 is used. Suppose a second independent random sample of size n2 is available and it is a priori suspected that μ(1) = μ(2) may hold. We propose a shrinkage preliminary test estimator (SPTE) of the mean vector μ(1) that may be viewed as a preliminary test estimator improving the usual one given by Ahmed (1987). The proposed estimator is superior in bias and efficiency to the usual preliminary test estimator (PTE). Furthermore, it dominates the classical estimator over a range that is wider than that of the usual preliminary test estimator. The size of the preliminary test for the SPTE is much more appropriate than that for the PTE.

40 citations


Journal ArticleDOI
TL;DR: Some pseudo-likelihoods, based on the conditional density of a block of pixels, are used together with modified EM algorithms to estimate parameters from noisy images in a simulation study of block updating for Markov random fields.
Abstract: This paper presents a simulation study of block (one line or two lines of pixels) updating for Markov random fields. Point and line relaxation methods are compared. Some pseudo-likelihoods, based on the conditional density of a block of pixels, are used together with modified EM algorithms to estimate parameters from noisy images.

36 citations


Journal ArticleDOI
TL;DR: In this article, the computation of the bias to order n⁻¹ for the parameter estimates in a general class of nonlinear regression models is discussed, and diagnostic methods to assess the relationship between bias and observations are presented.
Abstract: In this paper we discuss the computation of the bias to order n⁻¹ for the parameter estimates in a general class of nonlinear regression models. Simple formulae are given for some special models. Diagnostic methods to assess the relationship between bias and observations are presented. Finally, the proposed methods are illustrated by two examples.

30 citations


Journal ArticleDOI
TL;DR: In this article, the authors empirically compare the performance of Fisher's linear discriminant analysis (FLDA), quadratic discriminant analyses (QDA), logit analysis, and several rank-based procedures for a variety of symmetric and skewed distributions.
Abstract: Several mathematical programming approaches to the classification problem in discriminant analysis have recently been introduced. This paper empirically compares these newly introduced classification techniques with Fisher's linear discriminant analysis (FLDA), quadratic discriminant analysis (QDA), logit analysis, and several rank-based procedures for a variety of symmetric and skewed distributions. The percentages of correctly classified observations by each procedure in a holdout sample indicate that, while under some experimental conditions the linear programming approaches compete well with the classical procedures, overall their performance lags behind that of the classical procedures.

Journal ArticleDOI
TL;DR: In this article, the authors investigated the nonparametric estimation of the hazard function from randomly right censored data via nearest neighbor kernel estimators and proposed another solution to this problem incorporating the full information of the censored observations into the calculation of the nearest neighbor distances.
Abstract: The nonparametric estimation of the hazard function from randomly right censored data via nearest neighbour kernel estimators is investigated in this paper. Particular attention is paid to the problem of defining nearest neighbour distances in the case of censored data. It is pointed out that the existing generalizations of nearest neighbour distances from the uncensored to the censored setting in this context suffer from serious drawbacks. We propose another solution to this problem incorporating the full information of the censored observations into the calculation of the nearest neighbour distances. A simulation study comparing the effects of the different approaches on the properties of the resulting hazard function estimators demonstrates that our proposal leads to an estimator with a mean integrated squared error smaller than the corresponding mean integrated squared errors of the other estimators in all situations covered in the study. The magnitude of the reduction is remarkable in almost all situ...

Journal ArticleDOI
TL;DR: In this paper, asymptotic properties of the spatial Yule-Walker (YW) estimators for two-dimensional spatial autoregressive (AR) models are studied.
Abstract: For two-dimensional spatial autoregressive (AR) models, asymptotic properties of the spatial Yule-Walker (YW) estimators (Tjostheim, 1978) are studied. These estimators, although consistent, are shown to be asymptotically biased. Estimators from the first-order spatial bilateral AR model are looked at in more detail and the spatial YW estimators for this model are compared with the exact maximum likelihood estimators. Small sample properties of both estimators are also discussed briefly and some simulation results are presented.

Journal ArticleDOI
TL;DR: While extreme right censoring causes the EM to frequently fail to converge, a hybrid-EM algorithm is found to be superior at all levels of right-censoring.
Abstract: Many studies have been made of the performance of standard algorithms used to estimate the parameters of a mixture density, where data arise from two or more underlying populations. While these studies examine uncensored data, many mixture processes are right-censored. Therefore, this paper addresses the accuracy and efficiency of standard and hybrid algorithms under different degrees of right censoring. While a common belief is that the EM algorithm is slow and inaccurate, we find that the EM generally exhibits excellent efficiency and accuracy. While extreme right censoring causes the EM to frequently fail to converge, a hybrid-EM algorithm is found to be superior at all levels of right censoring.
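For reference, a sketch of the standard (uncensored) EM that such studies take as the baseline; the censored-data and hybrid variants examined in the paper add E-step terms not shown here:

```python
import numpy as np
from scipy import stats

def em_two_normals(x, iters=200, tol=1e-8):
    """Standard EM for a two-component univariate normal mixture
    (uncensored baseline; censoring adds an extra E-step contribution)."""
    pi, mu1, mu2 = 0.5, x.min(), x.max()
    s1 = s2 = x.std()
    ll_old = -np.inf
    for _ in range(iters):
        # E-step: posterior probability of component 1 for each point
        d1 = pi * stats.norm.pdf(x, mu1, s1)
        d2 = (1 - pi) * stats.norm.pdf(x, mu2, s2)
        tau = d1 / (d1 + d2)
        # M-step: weighted mixing proportion, means, and variances
        pi = tau.mean()
        mu1 = np.average(x, weights=tau)
        mu2 = np.average(x, weights=1 - tau)
        s1 = np.sqrt(np.average((x - mu1) ** 2, weights=tau))
        s2 = np.sqrt(np.average((x - mu2) ** 2, weights=1 - tau))
        ll = np.log(d1 + d2).sum()          # log-likelihood (monitored)
        if ll - ll_old < tol:
            break
        ll_old = ll
    return pi, (mu1, s1), (mu2, s2)

rng = np.random.default_rng(5)
x = np.concatenate([rng.normal(0, 1, 300), rng.normal(4, 1, 200)])
print(em_two_normals(x))
```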

Journal ArticleDOI
TL;DR: In this paper, the authors investigated the robustness of the Kolmogorov-Smirnov test against autocorrelation in time series data and the Shapiro-Wilk analysis-of-variance test for Gaussian first-order autoregressive processes.
Abstract: Robustness against autocorrelation in time-series data is investigated for two tests of normality: the Kolmogorov-Smirnov test, in the class of normality tests using statistics based on the empirical cumulative distribution function, and the Shapiro-Wilk analysis-of-variance test, which regresses the ordered sample values on the corresponding expected normal order statistics. For a Gaussian first-order autoregressive process, it is shown by simulation that: 1. for short series, both tests are conservative for some range of negative values of first-order autocorrelation, and too liberal for medium-to-high positive and high negative values; 2. for moderate sample sizes, both tests are no longer conservative, but remain too liberal asymmetrically for high negative and positive values of first-order autocorrelation; 3. the Kolmogorov-Smirnov test, which traditionally suffers from lack of power in comparisons with the W test of Shapiro and Wilk, is more robust against autocorrelation in time-series data, what...
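A hedged replication sketch: generate Gaussian AR(1) series (marginally normal, so rejections are false alarms) and record rejection rates for Shapiro-Wilk and a Kolmogorov-Smirnov variant; the KS test here uses estimated mean and standard deviation (a Lilliefors-style approximation), which may not match the paper's exact version:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)

def rejection_rates(phi, n=50, reps=2000, alpha=0.05):
    rej_sw = rej_ks = 0
    for _ in range(reps):
        e = rng.normal(size=n + 50)
        x = np.empty(n + 50); x[0] = e[0]
        for t in range(1, n + 50):
            x[t] = phi * x[t - 1] + e[t]    # Gaussian AR(1), marginally normal
        x = x[50:]                          # drop burn-in
        rej_sw += stats.shapiro(x).pvalue < alpha
        # KS with estimated mean/sd: nominal p-values are only approximate
        rej_ks += stats.kstest(x, 'norm',
                               args=(x.mean(), x.std(ddof=1))).pvalue < alpha
    return rej_sw / reps, rej_ks / reps

for phi in (-0.8, -0.4, 0.0, 0.4, 0.8):    # nominal level is 0.05 at phi = 0
    sw, ks = rejection_rates(phi)
    print(f"phi = {phi:+.1f}: Shapiro-Wilk {sw:.3f}, KS {ks:.3f}")
```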

Journal ArticleDOI
TL;DR: In this article, the authors describe the numerical analysis of continuous univariate probability functions with S-systems, which are computationally efficient nonlinear ordinary differential equations that contain virtually all ordinary differential equations as special cases.
Abstract: This tutorial describes the numerical analysis of continuous univariate probability functions with S-systems. These are computationally efficient nonlinear ordinary differential equations that contain virtually all ordinary differential equations as special cases. After a brief introduction to S-systems, it is shown how central and noncentral probability distributions, as well as auxiliary functions such as Bessel, Gamma, and Beta functions, can be recast equivalently as S-systems. The representation of distributions as S-systems permits rapid computation of function evaluations over wide ranges of random variables, as well as moments, quantiles, power and inverse power. It also offers transformation methods and various options of approximation. The recasting procedure employs elementary mathematics and needs to be executed only once. The tutorial contains a catalogue of recast S-system representations for nearly all relevant distributions and auxiliary functions, and thus enables the reader to evaluate d...
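The full S-system recasting uses power-law ODE systems covering a whole catalogue of distributions; the sketch below only illustrates the core idea, evaluating a density and its cdf over a whole range in one pass by integrating an ODE the density satisfies, for the standard normal, where f′(x) = −x·f(x):

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy import stats

# The standard normal density satisfies f'(x) = -x f(x); adding F' = f
# yields the cdf as well. Integrating once evaluates both functions over
# the whole range -- the idea behind recasting distributions as ODEs,
# which S-systems generalize to power-law form.
def rhs(x, y):
    f, F = y
    return [-x * f, f]

x0 = -8.0
sol = solve_ivp(rhs, (x0, 8.0), [stats.norm.pdf(x0), stats.norm.cdf(x0)],
                dense_output=True, rtol=1e-10, atol=1e-12)

for x in (-1.96, 0.0, 1.645):
    f, F = sol.sol(x)
    print(f"x={x:+.3f}: pdf {f:.6f} (exact {stats.norm.pdf(x):.6f}), "
          f"cdf {F:.6f} (exact {stats.norm.cdf(x):.6f})")
```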

Journal ArticleDOI
TL;DR: In this article, the influence of rows and columns on the eigenvalues obtained in correspondence analysis (CA) of two-way contingency tables is investigated by deriving the influence function of rows on the eigenvalues along with its sample version.
Abstract: This paper is concerned with measuring influence of rows and columns on the eigenvalues obtained in correspondence analysis (CA) of two-way contingency tables. As in principal component analysis, the eigenvalues are of great importance in CA. The goodness of a two dimensional correspondence plot is determined by the ratio of the sum of the two largest eigenvalues to the sum of all the eigenvalues. By investigating those rows and columns with high influence, a correspondence plot may be improved. In the paper the influence function (IF) of rows on the eigenvalues is derived along with its sample version, the empirical influence function (EIF). A numerical example is presented to evaluate the EIF. The sample influence function (SIF) is also evaluated to check the adequacy of the EIF.

Journal ArticleDOI
D. S. Coad1
TL;DR: Several allocation rules designed to reduce the number of patients who receive the inferior treatment are compared with random allocation using simulation, with particular attention paid to the bias and variance of the estimate of the true treatment difference.
Abstract: A clinical trial setting is considered in which two treatments are available for a particular ailment. The response to treatment is Bernoulli with “success” or “failure” as the outcome. Several allocation rules which are designed to reduce the number of patients who receive the inferior treatment are compared with random allocation using simulation. Particular attention is paid to the bias and variance for estimation of the true treatment difference. The effect of time trends in the data is examined.
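A sketch comparing 50:50 randomization with a randomized play-the-winner urn, one rule of the kind compared in such studies (not necessarily one of the paper's rules); success probabilities and trial size are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(7)

def trial(rule, p=(0.7, 0.5), n=100):
    """One two-arm trial with Bernoulli responses. rule='random' is 50:50;
    rule='rpw' is a randomized play-the-winner urn, RPW(1,1)."""
    urn = [1, 1]                         # one ball per treatment initially
    succ = [0, 0]; cnt = [0, 0]
    for _ in range(n):
        if rule == 'random':
            arm = rng.integers(0, 2)
        else:                            # draw a ball from the urn
            arm = 0 if rng.random() < urn[0] / sum(urn) else 1
        y = rng.random() < p[arm]
        cnt[arm] += 1; succ[arm] += y
        # success adds a ball for the same arm, failure for the other arm
        urn[arm if y else 1 - arm] += 1
    phat = [succ[a] / max(cnt[a], 1) for a in (0, 1)]
    return cnt[1], phat[0] - phat[1]     # patients on inferior arm, est. diff

for rule in ('random', 'rpw'):
    sims = np.array([trial(rule) for _ in range(2000)])
    print(f"{rule:6s}: mean on inferior arm {sims[:, 0].mean():5.1f}, "
          f"bias of diff {sims[:, 1].mean() - 0.2:+.4f}, "
          f"sd {sims[:, 1].std():.4f}")
```

The adaptive rule assigns fewer patients to the inferior arm, at the cost of some bias and extra variance in the estimated treatment difference, which is the trade-off the simulations examine.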

Journal ArticleDOI
TL;DR: This paper demonstrates that using more than the 2/3 power of the period leads to excessive uniformity compared to a true random sequence.
Abstract: There is a very old, but apparently unpublished and possibly unjustified, rule that a single simulation should never use more pseudorandom numbers than the square root of the generator's period. This paper demonstrates that using more than the 2/3 power of the period leads to excessive uniformity compared to a true random sequence.
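A toy demonstration of the over-uniformity effect using a small full-period LCG (parameters chosen for illustration only, not a generator anyone should use): as the stream consumes a growing fraction of the period, bin counts become too even, and a chi-square uniformity statistic collapses far below its expected value.

```python
import numpy as np

def lcg_stream(n, seed=12345, a=69069, c=1, m=2**16):
    """Toy full-period LCG: x_{k+1} = (a x_k + c) mod m, mapped to [0, 1)."""
    x, out = seed, np.empty(n)
    for i in range(n):
        x = (a * x + c) % m
        out[i] = x / m
    return out

m, bins = 2**16, 64
for n, label in ((256, "sqrt(m)"), (1625, "m^(2/3)"),
                 (32768, "m/2"), (65536, "full period")):
    u = lcg_stream(n)
    counts, _ = np.histogram(u, bins=bins, range=(0, 1))
    chi2 = ((counts - n / bins) ** 2 / (n / bins)).sum()
    # for truly random numbers, E[chi2] = bins - 1 = 63
    print(f"{label:>12s} (n={n:5d}): chi-square {chi2:7.2f}  (expect ~63)")
```

Past roughly the 2/3-power mark the statistic drops well below 63, and at the full period every bin is hit exactly equally, so the statistic is zero: the stream is far more uniform than a random sequence would be.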

Journal ArticleDOI
TL;DR: In this article, a scale-ratio test based on the ratio of two M-estimates of scale is proposed for testing, and a forward stepwise procedure for identifying, the significant effects in unreplicated 2^k factorial designs.
Abstract: In this paper we consider the problem of testing the effects in 2^k factorial designs when there are no replicates and an independent estimate of the variance is not available. First we consider the use of the so-called scale-ratio tests for testing the null hypothesis of no significant effects; these tests are similar in spirit to the ‘global’ F-test commonly used in the classical ANOVA setup. The class of Pitman efficient scale-ratio tests is derived and a new Pitman efficient test based on the ratio of two M-estimates of scale is proposed. The new global test is compared via Monte Carlo with those proposed by Paulson (1952) and Ferguson (1961) and shown to represent a substantial improvement when the fraction of significant effects is large. Next, we consider the problem of identifying the significant effects. A forward stepwise procedure based on the new test is proposed and shown (via Monte Carlo) to perform fairly well. Unlike other forward procedures, the present one is unaffected by masking because th...

Journal ArticleDOI
TL;DR: In this article, a method is provided for computing the standard errors of the estimated parameters of a normal mixture model fitted to grouped truncated data, based on an estimate of the information matrix expressed in terms of quantities computed during an implementation of the EM algorithm.
Abstract: A method is provided for computing the standard errors for estimated parameters of a normal mixture model fitted to grouped truncated data. An estimate of the information matrix is obtained in terms of quantities computed during an implementation of the EM algorithm. This estimated information matrix is also used to enhance the convergence rate of the EM algorithm using a Newton-type step procedure. A comparison is made of this enhanced procedure with the original procedure using two sets of data each involving a two component mixture, one having mixing proportions almost equal, and the other with the mixing proportions in a ratio close to four to one.

Journal ArticleDOI
Abstract: The problem of estimating a polychoric correlation coefficient from a latent bivariate normal distribution is considered from a Bayesian viewpoint. Using the Gibbs sampling algorithm, one can simulate the joint posterior distribution of the correlation coefficient and row and column thresholds and estimate any marginal posterior density of interest. This algorithm is generalized to handle bivariate lognormal and bivariate t latent distributions. The methods are illustrated for two examples.
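A deliberately simplified sketch of the Gibbs scheme: a 2x2 table with thresholds fixed at zero, a flat prior on the correlation, and a Metropolis step for ρ inside the Gibbs scan; the paper additionally samples the row and column thresholds, and its conditional updates may differ.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)

# Simulated 2x2 table: latent bivariate normal, thresholds at 0, true rho = 0.5.
z = rng.multivariate_normal([0, 0], [[1, 0.5], [0.5, 1]], size=500)
row, col = (z[:, 0] > 0).astype(int), (z[:, 1] > 0).astype(int)

cuts = np.array([-np.inf, 0.0, np.inf])        # thresholds fixed (simplified)
lo_r, hi_r = cuts[row], cuts[row + 1]          # latent interval for each obs
lo_c, hi_c = cuts[col], cuts[col + 1]

def rtrunc(mean, sd, lo, hi):                  # truncated-normal draws
    a, b = (lo - mean) / sd, (hi - mean) / sd
    return stats.truncnorm.rvs(a, b, loc=mean, scale=sd, random_state=rng)

def loglik(r, s11, s12, s22, n):               # BVN log-likelihood in rho
    return -0.5 * n * np.log(1 - r**2) - (s11 - 2*r*s12 + s22) / (2*(1 - r**2))

n, rho, draws = len(row), 0.0, []
z1 = z2 = np.zeros(n)
for it in range(2000):
    sd = np.sqrt(1 - rho**2)
    # Gibbs step 1: latent scores given rho, truncated to each cell's box
    z1 = rtrunc(rho * z2, sd, lo_r, hi_r)
    z2 = rtrunc(rho * z1, sd, lo_c, hi_c)
    # Gibbs step 2: rho given latent scores (random-walk Metropolis, flat prior)
    s11, s12, s22 = (z1**2).sum(), (z1*z2).sum(), (z2**2).sum()
    prop = rho + rng.normal(0, 0.1)
    if abs(prop) < 1 and np.log(rng.random()) < (loglik(prop, s11, s12, s22, n)
                                                 - loglik(rho, s11, s12, s22, n)):
        rho = prop
    draws.append(rho)

print("posterior mean of rho:", np.mean(draws[500:]))   # ~0.5
```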

Journal ArticleDOI
Naomi Altman1
TL;DR: In this paper, the simultaneous estimation of the regression and correlation functions is explored, and an iterative technique analogous to the iterated Cochrane-Orcutt method for linear regression is shown to perform well.
Abstract: Altman (1990) and Hart (1991) have shown that kernel regression can be an effective method for estimating an unknown mean function when the errors are correlated. However, the optimal bandwidth for kernel smoothing depends strongly on the correlation function, as do confidence bands for the regression curve. In this paper, the simultaneous estimation of the regression and correlation functions is explored. An iterative technique analogous to the iterated Cochrane-Orcutt method for linear regression (Cochrane and Orcutt, 1949) is shown to perform well. However, for moderate sample sizes, stopping after the first iteration produces better results. An interesting feature of the simultaneous method is that it performs best when different kernels are used to estimate the regression and correlation functions. For the regression function, unimodal kernels are known to be optimal. However, examination of the mean squared error of the correlation estimator suggests that a bimodal kernel will perform better for est...
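One plausible reading of the iterative scheme, assuming AR(1) errors: smooth, estimate the lag-1 residual autocorrelation, then rescale the bandwidth by the correlation inflation factor ((1+ρ)/(1−ρ))^(1/5) that the variance of a kernel smoother picks up under AR(1) errors. This is a heuristic sketch, not the paper's procedure, and the bimodal-kernel refinement for the correlation estimate is not shown.

```python
import numpy as np

rng = np.random.default_rng(9)

# Data: smooth mean function plus AR(1) errors.
n = 200
x = np.linspace(0, 1, n)
e = np.empty(n); e[0] = rng.normal()
for t in range(1, n):
    e[t] = 0.6 * e[t - 1] + rng.normal(scale=0.5)
y = np.sin(2 * np.pi * x) + e

def nw_smooth(x, y, h):
    """Nadaraya-Watson estimate with a Gaussian kernel."""
    K = np.exp(-0.5 * ((x[:, None] - x[None, :]) / h) ** 2)
    return K @ y / K.sum(axis=1)

h0 = 0.05                                 # bandwidth chosen as if errors were iid
h = h0
for it in range(3):                       # iterate smoothing <-> correlation
    m_hat = nw_smooth(x, y, h)
    r = y - m_hat
    rho = np.sum(r[1:] * r[:-1]) / np.sum(r ** 2)   # lag-1 autocorrelation
    # sum of AR(1) autocorrelations is (1+rho)/(1-rho); the MISE-optimal
    # bandwidth inflates by its 1/5 power (heuristic correction here)
    h = h0 * ((1 + rho) / (1 - rho)) ** 0.2
    print(f"iteration {it}: rho_hat = {rho:.3f}, h = {h:.3f}")
```

Consistent with the abstract's remark, in moderate samples the first pass already captures most of the adjustment.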

Journal ArticleDOI
Luc Devroye1
TL;DR: In this paper, uniformly fast random variate generators for Sibuya's digamma and trigamma families were derived based on the close resemblance between these distributions and selected generalized hypergeometric distributions.
Abstract: We derive uniformly fast random variate generators for Sibuya's digamma and trigamma families. Some of these generators are based upon the close resemblance between these distributions and selected generalized hypergeometric distributions. The generators can also be used for the discrete stable distribution, the Yule distribution, Mizutani's distribution and the Waring distribution.

Journal ArticleDOI
TL;DR: In this paper, empirical results indicating the potential usefulness of estimated fractal dimension in testing for white noise are presented, and a method for fractal interpolation of a continuous process from a finite number of observations is discussed.
Abstract: Fractals and their dimensions have been a subject of great mathematical interest since the publication of Mandelbrot's manifestoes (1977, 1982). This paper discusses some empirical results indicating the potential usefulness of estimated fractal dimension in testing for white noise. These tests are applied for model identification in time series, and results for previously analyzed data are provided. A method for fractal interpolation of a continuous process from a finite number of observations is discussed, as well as some future research directions.


Journal ArticleDOI
TL;DR: This work uses different Information Theoretic Criteria to detect the number of signals in exponential signal models and compares their small sample properties by a Monte Carlo simulation study.
Abstract: We consider the estimation of the number of signals in exponential signal models. We use different Information Theoretic Criteria to detect the number of signals and compare their small sample properties by a Monte Carlo simulation study.

Journal ArticleDOI
TL;DR: In this article, a general version of the fundamental theorem of the operational subjective theory of probability, from the viewpoint established by Bruno de Finetti, is stated in a computationally feasible form; it provides the solutions to probability problems in terms of the results of a pair of specific linear or nonlinear programming problems.
Abstract: The fundamental theorem of the operational subjective theory of probability, from the viewpoint established by Bruno de Finetti, provides the solutions to probability problems in terms of the results of a pair of specific linear or nonlinear programming problems. We state a general version of this theorem in a computationally feasible form and present numerical examples of its use. The examples display interesting extensions of the Bienayme-Chebyshev inequality and a variation on the Kolmogorov inequality in the context of finite discrete quantities. The Bienayme-Chebyshev application is extended to exemplify the use of a nonlinear programming algorithm to resolve a common question regarding coherent inference. In concluding discussion, we comment on the sizes of realistic problems and suggest a variety of applications for such computations, among them the safety assessment of complex engineering systems, the analysis of agricultural production statistics, and a synthesis of subjective judgments in macro...
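A small worked instance of the linear-programming form of the theorem, assuming a finite discrete support as in the paper's examples: given asserted previsions E[X] = 0 and E[X²] = 1, the largest P(X ≥ 2) coherent with them is found by an LP over probability mass functions; the attained value ≈ 0.2 reproduces the one-sided Bienayme-Chebyshev (Cantelli) bound.

```python
import numpy as np
from scipy.optimize import linprog

# X takes values on a finite grid; we assert E[X] = 0 and E[X^2] = 1 and
# ask for the largest P(X >= 2) coherent with those assertions.
xs = np.linspace(-5, 5, 201)                 # finite discrete support
A_eq = np.vstack([np.ones_like(xs), xs, xs ** 2])
b_eq = [1.0, 0.0, 1.0]                       # total mass, mean, second moment
c = -(xs >= 2).astype(float)                 # linprog minimizes, so negate

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs")
print("max coherent P(X >= 2):", -res.fun)   # two-sided Chebyshev gives 1/4;
                                             # the sharp one-sided value is 1/5
```

Minimizing instead of maximizing gives the companion lower bound (here 0), the other member of the "pair of programming problems" the theorem refers to.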

Journal ArticleDOI
TL;DR: In this paper, a one sample goodness of fit test to a specified survival distribution is proposed; it adapts a test of Hollander and Proschan (1979) in a way that is more appropriate when the failure and censoring distributions are assumed to have proportional hazard functions.
Abstract: In this paper we consider a one sample goodness of fit test to a specified survival distribution. It is an adaptation of one proposed by Hollander and Proschan (1979). The adaptation involves changing the statistic in a way that is more appropriate if the failure distribution and the censoring distribution are assumed to have proportional hazard functions.

Journal ArticleDOI
TL;DR: In this paper, moment estimators, based on the first two sample moments, for the two index parameters of the beta density (known end-points) are studied, and four moments of these estimators are set up using Computer Oriented Extended Taylor Series (COETS) to 60 terms followed by rational fraction approximations.
Abstract: Moment estimators, based on the first two sample moments, for the two index parameters of the beta density (known end-points) are studied. Four moments of these estimators are set up using Computer Oriented Extended Taylor Series (COETS) to 60 terms, followed by rational fraction approximations. These indicate, over a limited parameter space, that, allowing for simplicity of calculation and other characteristics, the moment estimators are preferable to maximum likelihood estimators.
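The moment estimators themselves are simple: with known end-points 0 and 1, match the sample mean and variance to the Beta(a, b) moments. A sketch (the paper's COETS moment expansions of these estimators are not reproduced):

```python
import numpy as np

rng = np.random.default_rng(10)

def beta_moment_estimates(x):
    """Moment estimators for Beta(a, b) on (0, 1): matching the mean m and
    variance v gives a + b + 1 = m(1 - m)/v, hence the estimates below."""
    m, v = x.mean(), x.var(ddof=1)
    t = m * (1 - m) / v - 1          # estimate of a + b
    return m * t, (1 - m) * t        # (a_hat, b_hat)

x = rng.beta(2.0, 5.0, size=500)
print("moment estimates (a, b):", beta_moment_estimates(x))   # near (2, 5)
```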