
Showing papers in "Journal of Statistical Computation and Simulation" in 2001


Journal ArticleDOI
TL;DR: In this article, the authors consider five estimation procedures for the parameters of a generalized exponential distribution, complementing a companion paper on maximum likelihood estimation and tests of hypotheses, and compare their performances through numerical simulations.
Abstract: Recently a new distribution, named the generalized exponential distribution, has been introduced and studied quite extensively by the authors. The generalized exponential distribution can be used as an alternative to the gamma or Weibull distribution in many situations. In a companion paper, the authors considered the maximum likelihood estimation of the different parameters of a generalized exponential distribution and discussed some of the testing of hypothesis problems. In this paper we mainly consider five other estimation procedures and compare their performances through numerical simulations.
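
A minimal sketch (not from the paper) of the setup such comparisons rest on: drawing generalized exponential samples by inverting the CDF F(x) = (1 - e^(-λx))^α and computing maximum likelihood estimates numerically. Parameter values are illustrative, and the five alternative procedures the paper studies are not reproduced.

```python
# Sketch (not from the paper): simulate generalized exponential (GE) data with
# F(x) = (1 - exp(-lam*x))**alpha and fit (alpha, lam) by maximum likelihood.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def rge(n, alpha, lam):
    # inverse-CDF sampling: x = -log(1 - u**(1/alpha)) / lam
    u = rng.uniform(size=n)
    return -np.log1p(-u ** (1.0 / alpha)) / lam

def neg_loglik(theta, x):
    alpha, lam = np.exp(theta)  # log-parametrization keeps both parameters positive
    return -np.sum(np.log(alpha) + np.log(lam)
                   + (alpha - 1.0) * np.log1p(-np.exp(-lam * x)) - lam * x)

x = rge(100, alpha=2.0, lam=1.5)   # illustrative true values
res = minimize(neg_loglik, x0=np.zeros(2), args=(x,))
print("MLE of (alpha, lambda):", np.exp(res.x))
```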

320 citations


Journal ArticleDOI
TL;DR: In this article, the exact forms of the means, variances and covariances of order statistics are derived for the generalized exponential distribution with known shape parameter Θ, and the necessary coefficients for the best linear unbiased estimators of the location and scale parameters are obtained for Θ=0.5(0.5)5.0.
Abstract: In this paper we consider the generalized exponential distribution with known shape parameter Θ. We first derive the exact forms of means, variances and covariances of order statistics. We use these expressions in obtaining the necessary coefficients for the best linear unbiased estimators of the location and scale parameters for Θ=0.5(0.5)5.0 and for sample sizes up to ten. The variances and covariances of these estimators are also given.
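
The exact moment derivations are the paper's contribution; as a quick illustration of what is being tabulated, a Monte Carlo check of the order-statistic means for one shape value (Θ = 1.5 and n = 5 are assumed for illustration):

```python
# Sketch: Monte Carlo check of the means of GE order statistics (the paper
# derives these moments exactly); Theta = 1.5 and n = 5 are illustrative.
import numpy as np

rng = np.random.default_rng(1)
theta, n, reps = 1.5, 5, 200_000
u = rng.uniform(size=(reps, n))
x = -np.log1p(-u ** (1.0 / theta))  # standard GE(Theta) via the inverse CDF
x.sort(axis=1)                      # each row becomes (X_(1), ..., X_(n))
print("estimated E[X_(i)]:", x.mean(axis=0))
```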

131 citations


Journal ArticleDOI
TL;DR: It is concluded that with the modifications, χ² or F approximations to likelihood ratio statistics to compare fractional polynomial models are adequate for practical purposes.
Abstract: Royston and Altman have demonstrated the usefulness of fractional polynomials in regression modelling, and have suggested model selection procedures for choosing appropriate fractional polynomial transformations. We investigate the performance of these procedures with particular regard to overfitting. We find the Type I error rates to be considerably in excess of their nominal value. We propose improvements which we show, by simulation, to work reasonably well. We conclude that with the modifications, χ² or F approximations to likelihood ratio statistics to compare fractional polynomial models are adequate for practical purposes.

101 citations


Journal ArticleDOI
TL;DR: In this article, the covariance matrix of ordinary least squares estimates in a linear regression model is estimated when heteroskedasticity is suspected, and Monte Carlo simulations are performed on the White estimator and its variants.
Abstract: This paper considers the issue of estimating the covariance matrix of ordinary least squares estimates in a linear regression model when heteroskedasticity is suspected. We perform Monte Carlo simulation on the White estimator, which is commonly used in empirical research, and also on some alternatives based on different bootstrapping schemes. Our results reveal that the White estimator can be considerably biased when the sample size is not very large, that bias correction via bootstrap does not work well, and that the weighted bootstrap estimators tend to display smaller biases than the White estimator and its variants, under both homoskedasticity and heteroskedasticity. Our results also reveal that the presence of (potentially) influential observations in the design matrix plays an important role in the finite-sample performance of the heteroskedasticity-consistent estimators.
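
For concreteness, a sketch of the White (HC0) estimator itself, the baseline that the compared variants modify; the bootstrap schemes studied in the paper are not reproduced, and the data-generating process is illustrative only.

```python
# Sketch: OLS with the White (HC0) heteroskedasticity-consistent covariance
# estimator; the error variance here deliberately depends on the regressor.
import numpy as np

rng = np.random.default_rng(2)
n = 50
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([1.0, 2.0]) + rng.normal(size=n) * (1 + np.abs(X[:, 1]))

XtX_inv = np.linalg.inv(X.T @ X)
beta = XtX_inv @ X.T @ y
e = y - X @ beta
hc0 = XtX_inv @ X.T @ np.diag(e ** 2) @ X @ XtX_inv  # White (1980) sandwich
print("coefficients:", beta)
print("HC0 standard errors:", np.sqrt(np.diag(hc0)))
```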

50 citations


PatentDOI
Bon K. Sy
TL;DR: In this paper, an algorithm that employs an information-theoretic based approach is used in which a scalable combinatoric search is utilized as defined by an initial solution and null vectors.
Abstract: A system and method for the discovery and selection of an optimal probability model. Probability model selection can be formulated as an optimization problem with linear order constraints and a non-linear objective function. In one aspect, an algorithm that employs an information-theoretic based approach is used in which a scalable combinatoric search is utilized as defined by an initial solution and null vectors. The theoretical development of the algorithm has led to a property that can be interpreted semantically as the weight of evidence in information theory.

46 citations


Journal ArticleDOI
TL;DR: In this article, the small-sample bias and root mean squared error of several distribution-free estimators of the variance of the sample median are examined, and a new estimator is proposed that is easy to compute and tends to have the smallest bias.
Abstract: The small-sample bias and root mean squared error of several distribution-free estimators of the variance of the sample median are examined. A new estimator is proposed that is easy to compute and tends to have the smallest bias and root mean squared error.
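
The paper's proposed estimator is not reproduced here; as a reference point, a sketch of the common bootstrap estimator of the variance of the sample median, one natural distribution-free competitor in such comparisons:

```python
# Sketch: bootstrap estimate of Var(sample median); sample size and the normal
# data are illustrative only.
import numpy as np

def bootstrap_median_variance(x, B=2000, seed=0):
    rng = np.random.default_rng(seed)
    n = len(x)
    meds = np.array([np.median(rng.choice(x, size=n, replace=True))
                     for _ in range(B)])
    return meds.var(ddof=1)

x = np.random.default_rng(3).normal(size=25)
print("bootstrap estimate of Var(median):", bootstrap_median_variance(x))
```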

44 citations


Journal ArticleDOI
TL;DR: In this article, progressively censored data from a classical Pareto distribution are used to make inferences about its shape and precision parameters and the reliability function, with an approximation due to Tierney and Kadane (1986) used to obtain the Bayes estimates.
Abstract: Progressively censored data from a classical Pareto distribution are used to make inferences about its shape and precision parameters and the reliability function. An approximation form due to Tierney and Kadane (1986) is used for obtaining the Bayes estimates. Bayesian prediction of further observations from this distribution is also considered. As far as the Bayesian approach is concerned, conjugate priors are considered for both the one- and two-parameter cases. To illustrate the given procedures, a numerical example and a simulation study are given.
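
A sketch of the Tierney-Kadane device in its simplest checkable form: the posterior mean of the Pareto shape parameter under a gamma prior with complete (not progressively censored) data, so the exact conjugate answer is available for comparison. The prior values are assumptions.

```python
# Sketch: Tierney-Kadane approximation to E[alpha | data] for the Pareto shape
# with a gamma(a, b) prior and complete data; exact conjugate answer as a check.
import numpy as np

rng = np.random.default_rng(4)
xm, n = 1.0, 30
x = xm * (1.0 + rng.pareto(2.0, n))      # Pareto(shape 2, scale xm) sample
a, b = 2.0, 1.0                          # assumed gamma prior
T = np.log(x / xm).sum()

h = lambda al: ((a + n - 1) * np.log(al) - (b + T) * al) / n  # log-posterior / n
hstar = lambda al: h(al) + np.log(al) / n                     # adds log g, g(al) = al

mode, mode_s = (a + n - 1) / (b + T), (a + n) / (b + T)       # argmax of h, hstar
s2, s2_s = mode ** 2 / (a + n - 1), mode_s ** 2 / (a + n)     # -1/(n h'') at each mode
tk = np.sqrt(s2_s / s2) * np.exp(n * (hstar(mode_s) - h(mode)))
print("Tierney-Kadane:", tk, "  exact posterior mean:", (a + n) / (b + T))
```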

31 citations


Journal ArticleDOI
TL;DR: The algorithm follows from a new parametrization, and reduces the problem to a root finding procedure that can be implemented efficiently using a bisection or a Newton-Raphson method, and is fast enough to be implemented in a real-time setting.
Abstract: We describe an algorithm to fit an S_U curve of the Johnson system by moment matching. The algorithm follows from a new parametrization, and reduces the problem to a root finding procedure that can be implemented efficiently using a bisection or a Newton-Raphson method. This allows the four parameters of the Johnson curve to be determined to any desired degree of accuracy, and is fast enough to be implemented in a real-time setting. A practical application of the method lies in the fact that many firms use the Johnson system to manage financial risk.
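
A generic sketch of the same task, four-moment matching for the Johnson S_U family, using scipy's (a, b, loc, scale) parametrization and a black-box least-squares solve; the paper's specialized one-dimensional root-finding algorithm is not reproduced, the target moments are assumed, and a poor starting point may need adjusting.

```python
# Sketch (not the paper's algorithm): match mean, variance, skewness and excess
# kurtosis of a Johnson S_U curve numerically via scipy.
import numpy as np
from scipy import stats
from scipy.optimize import least_squares

target = np.array([0.0, 1.0, -0.5, 2.0])  # mean, variance, skewness, excess kurtosis

def moment_gap(p):
    a, log_b, loc, log_scale = p          # log-transform keeps b and scale positive
    m, v, s, k = stats.johnsonsu.stats(a, np.exp(log_b), loc=loc,
                                       scale=np.exp(log_scale), moments='mvsk')
    return np.array([m, v, s, k], dtype=float) - target

sol = least_squares(moment_gap, x0=np.zeros(4))
a, b = sol.x[0], np.exp(sol.x[1])
loc, scale = sol.x[2], np.exp(sol.x[3])
print("fitted Johnson S_U parameters (a, b, loc, scale):", a, b, loc, scale)
```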

28 citations


Journal ArticleDOI
TL;DR: In this article, the authors introduce a class of spatial point processes called interacting neighbour point (INP) processes, where the density of the process can be written by means of local interactions between a point and subsets of its neighbourhood but where the processes may not be Ripley-Kelly Markov processes with respect to this neighbourhood.
Abstract: We introduce a class of spatial point processes, interacting neighbour point (INP) processes, where the density of the process can be written by means of local interactions between a point and subsets of its neighbourhood, but where the processes may not be Ripley-Kelly Markov processes with respect to this neighbourhood. We show that the processes are iterated Markov processes defined by Hayat and Gubner (1996). Furthermore, we pay special attention to a subclass of interacting neighbour processes, where the density belongs to the exponential family and all neighbours of a point affect it simultaneously. A simulation study is presented to show that some simple processes of this subclass can produce clustered patterns of great variety. Finally, an empirical example is given.

26 citations


Journal ArticleDOI
TL;DR: In this paper, the authors show that the superiority of the generalized median estimators remains valid even for small sample sizes n=10 and 25, and also include the cases n=50 and 100.
Abstract: Robust estimation of tail index parameters is treated for (equivalent) two-parameter Pareto and exponential models. These distributions arise as parametric models in actuarial science, economics, telecommunications, and reliability, for example, as well as in semiparametric modeling of upper observations in samples from distributions which are regularly varying or in the domain of attraction of extreme value distributions. In a recent previous paper, new estimators of "generalized median" (GM) type were introduced and shown to provide more favorable trade-offs between efficiency and robustness than several well-established estimators, including those corresponding to methods of maximum likelihood, trimming, and quantiles. Here we establish, via simulation, that the superiority of the GM type estimators remains valid even for small sample sizes n=10 and 25. To bridge between "small" and "large" sample sizes, we also include the cases n=50 and 100. Further, we arrive at guidelines for selection of a particular...
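
To convey the flavor of a "generalized median" construction: the median of a kernel statistic computed over many subsets, sketched here with the exponential-scale MLE as the kernel. The paper's GM estimators differ in detail, so this is illustrative only.

```python
# Sketch: a generic GM-type estimator, the median of the exponential-scale MLE
# over random subsets of size m; model, m and B are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(5)
x = rng.exponential(2.0, size=25)   # exponential model with scale 2
m, B = 5, 1000
subset_mles = [rng.choice(x, size=m, replace=False).mean() for _ in range(B)]
print("GM-type estimate of the scale:", np.median(subset_mles))
```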

23 citations


Journal ArticleDOI
TL;DR: In this article, a comparison study between the maximum likelihood method, the unbiased estimates which are linear functions of the maximum likelihood estimates, the method of product spacings, and the method of quantile estimates is presented, along with a simulation study of the small sample properties.
Abstract: Exponential distributions are used extensively in the field of life-testing. Estimation of parameters is revisited in two-parameter exponential distributions. A comparison study between the maximum likelihood method, the unbiased estimates which are linear functions of the maximum likelihood estimates, the method of product spacings, and the method of quantile estimates is presented. Finally, a simulation study is given to demonstrate the small sample properties.
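
A sketch of two of the compared procedures under assumed parameter values: the MLEs for the two-parameter exponential and the unbiased estimators that are linear functions of them.

```python
# Sketch: MLE vs. unbiased linear corrections for the two-parameter exponential,
# compared by simulation; mu, sigma, n and the replication count are assumed.
import numpy as np

rng = np.random.default_rng(6)
mu, sigma, n, reps = 2.0, 1.5, 10, 5000
est = []
for _ in range(reps):
    x = mu + rng.exponential(sigma, size=n)
    mu_mle = x.min()                   # MLE of location
    sig_mle = x.mean() - mu_mle        # MLE of scale
    sig_u = n * sig_mle / (n - 1)      # unbiased scale
    mu_u = mu_mle - sig_u / n          # unbiased location
    est.append([mu_mle, sig_mle, mu_u, sig_u])
print("means of (mu_mle, sig_mle, mu_u, sig_u):", np.mean(est, axis=0))
```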

Journal ArticleDOI
TL;DR: In this article, the authors used Monte Carlo methods together with the bootstrap critical values to detect the presence of cointegration vector(s) in a VAR system and compared the performance of Trace and Lmax test methods.
Abstract: Using Monte Carlo methods together with the bootstrap critical values, we have studied the properties of two tests (Trace and Lmax), derived by Johansen (1988) for testing for cointegration in VAR systems. Regarding the size of the tests, the results show that both of the test methods perform satisfactorily when there are mixed stationary and nonstationary components in the model. The analyses of the power functions indicate that both of the test methods can effectively detect the presence of cointegration vector(s). Finally, when considering the size and power properties, we could not find any noticeable differences between the two test methods.

Journal ArticleDOI
TL;DR: Some guidelines are given for the size of a computer experiment and a graph is provided relating the error of prediction to the sample size which should be of use when designing computer experiments.
Abstract: Computer experiments, consisting of a number of runs of a computer model with different inputs, are now commonplace in scientific research. Using a simple fire model for illustration, some guidelines are given for the size of a computer experiment. A graph is provided relating the error of prediction to the sample size, which should be of use when designing computer experiments. Methods for augmenting computer experiments with extra runs are also described and illustrated. The simplest method involves adding one point at a time, choosing that point with the maximum prediction variance. Another method that appears to work well is to choose points from a candidate set with maximum determinant of the variance-covariance matrix of predictions.
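
A sketch of the simplest augmentation rule described above, using a Gaussian process surrogate (scikit-learn is an assumption here; the paper's fire model and design details are not reproduced).

```python
# Sketch: augment a computer experiment at the candidate point with the largest
# GP predictive standard deviation; the test function stands in for the model.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(7)
X = rng.uniform(0, 1, size=(10, 2))        # initial design
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2     # stand-in for the computer model

gp = GaussianProcessRegressor().fit(X, y)
candidates = rng.uniform(0, 1, size=(500, 2))
_, sd = gp.predict(candidates, return_std=True)
print("next run at the point of largest predictive sd:", candidates[np.argmax(sd)])
```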

Journal ArticleDOI
TL;DR: In this article, an alternative nonparametric procedure to estimate the ROC curve is suggested which is based on local smoothing techniques and several numerical examples are presented to evaluate the performance of this procedure.
Abstract: The ROC curve is a graphical representation of the relationship between the sensitivity and specificity of a diagnostic test. It is a popular tool for evaluating and comparing different diagnostic tests in medical sciences. In the literature, the ROC curve is often estimated empirically, based on an empirical distribution function estimator and an empirical quantile function estimator. In this paper an alternative nonparametric procedure to estimate the ROC curve is suggested, which is based on local smoothing techniques. Several numerical examples are presented to evaluate the performance of this procedure.
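
A sketch contrasting the empirical ROC points with a kernel-smoothed variant; the Gaussian kernel and bandwidth are assumptions, standing in for the paper's local smoothing procedure.

```python
# Sketch: empirical vs. kernel-smoothed ROC estimates; data and bandwidth are
# illustrative. The smoothed survival estimate is mean Phi((x_i - c)/h).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(8)
healthy = rng.normal(0, 1, 200)    # test scores, non-diseased
diseased = rng.normal(1, 1, 200)   # test scores, diseased
t = np.linspace(-4, 5, 200)        # thresholds
h = 0.3                            # assumed bandwidth

fpr_emp = np.array([(healthy > c).mean() for c in t])
tpr_emp = np.array([(diseased > c).mean() for c in t])
fpr_sm = np.array([norm.cdf((healthy - c) / h).mean() for c in t])
tpr_sm = np.array([norm.cdf((diseased - c) / h).mean() for c in t])
print("max |smoothed - empirical| TPR gap:", np.abs(tpr_sm - tpr_emp).max())
```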

Journal ArticleDOI
TL;DR: In this article, generalized EM algorithms (GEM) are considered for the two cases of t and slash families of distributions, and a one step method is proposed to estimate the degree of freedom parameter.
Abstract: This paper introduces practical methods of parameter and standard error estimation for adaptive robust regression where errors are assumed to be from a normal/independent family of distributions. In particular, generalized EM algorithms (GEM) are considered for the two cases of t and slash families of distributions. For the t family, a one step method is proposed to estimate the degree of freedom parameter. Use of empirical information is suggested for standard error estimation. It is shown that this choice leads to standard errors that can be obtained as a by-product of the GEM algorithm. The proposed methods, as discussed, can be implemented in most available nonlinear regression programs. Details of implementation in SAS NLIN are given using two specific examples.
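
A sketch of the core GEM iteration for the t family with fixed degrees of freedom ν: the E-step computes expected case weights and the M-step is a weighted least squares solve. The paper's one-step update for ν and its standard-error formulas are not reproduced.

```python
# Sketch: EM-style iteratively reweighted least squares for linear regression
# with t-distributed errors and fixed nu; data and nu are illustrative.
import numpy as np

def t_regression_gem(X, y, nu=4.0, iters=50):
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    sigma2 = np.var(y - X @ beta)
    for _ in range(iters):
        r = y - X @ beta
        w = (nu + 1.0) / (nu + r ** 2 / sigma2)            # E-step weights
        W = np.diag(w)
        beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)   # M-step: weighted LS
        sigma2 = np.sum(w * (y - X @ beta) ** 2) / len(y)
    return beta, sigma2

rng = np.random.default_rng(9)
X = np.column_stack([np.ones(100), rng.normal(size=100)])
y = X @ np.array([1.0, 2.0]) + rng.standard_t(4, size=100)
print(t_regression_gem(X, y))
```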

Journal ArticleDOI
TL;DR: In this article, the authors introduce a simple method for generating pairs of correlated binary data, based on the idea that correlations among the random variables arise as a result of their sharing some common components that induce such correlations.
Abstract: Correlated binary data arise frequently in medical as well as other scientific disciplines; and statistical methods, such as generalized estimating equation (GEE), have been widely used for their analysis. The need for simulating correlated binary variates arises for evaluating small sample properties of the GEE estimators when modeling such data. Also, one might generate such data to simulate and study biological phenomena such as tooth decay or periodontal disease. This article introduces a simple method for generating pairs of correlated binary data. A simple algorithm is also provided for generating an arbitrary dimensional random vector of non-negatively correlated binary variates. The method relies on the idea that correlations among the random variables arise as a result of their sharing some common components that induce such correlations. It then uses some properties of the binary variates to represent each variate in terms of these common components in addition to its own elements. Unlike most p...
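
One standard shared-component construction for equicorrelated Bernoulli pairs, in the spirit the abstract describes though not necessarily the article's exact algorithm: with selector probability sqrt(ρ), the pair correlation equals ρ.

```python
# Sketch: shared-component generation of correlated Bernoulli(p) pairs; each
# variate copies a common component Z with probability sqrt(rho).
import numpy as np

def correlated_binary_pairs(n, p, rho, seed=0):
    rng = np.random.default_rng(seed)
    r = np.sqrt(rho)
    z = rng.binomial(1, p, n)              # shared component
    x1, x2 = rng.binomial(1, p, (2, n))    # individual components
    u1, u2 = rng.binomial(1, r, (2, n))    # selectors
    return np.where(u1, z, x1), np.where(u2, z, x2)

y1, y2 = correlated_binary_pairs(100_000, p=0.3, rho=0.25)
print("empirical correlation:", np.corrcoef(y1, y2)[0, 1])   # close to 0.25
```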

Journal ArticleDOI
TL;DR: In this article, robust M-estimators of location and over-dispersion are investigated for independent and identically distributed samples from Poisson and Negative Binomial (NB) distributions.
Abstract: We investigate robust M-estimators of location and over-dispersion for independent and identically distributed samples from Poisson and Negative Binomial (NB) distributions. We focus on asymptotic and small-sample efficiencies, outlier-induced biases, and biases caused by model mis-specification. This is important information for assessing the practical utility of the estimation method. Our results demonstrate that reasonably efficient estimation of location and over-dispersion parameters for count data is possible with sample sizes as small as n=25. The sensitivity of these estimators is also examined, especially when the amount of over-dispersion is small. We also conclude that serious biases result when using robust Poisson M-estimation with NB data. The biases are less serious when using robust NB M-estimation with Poisson data.

Journal ArticleDOI
TL;DR: In this paper, the authors revisited the Bayesian analysis of incomplete categorical data under informative general censoring proposed by Paulino and Pereira (1995) and showed how a Monte Carlo simulation approach based on an alternative parameterisation can be used to overcome the former computational difficulties.
Abstract: In this paper the Bayesian analysis of incomplete categorical data under informative general censoring proposed by Paulino and Pereira (1995) is revisited. That analysis is based on Dirichlet priors and can be applied to any missing data pattern. However, the known properties of the posterior distributions are scarce and therefore severe limitations to the posterior computations remain. Here it is shown how a Monte Carlo simulation approach based on an alternative parameterisation can be used to overcome the former computational difficulties. The proposed simulation approach makes available the approximate estimation of general parametric functions and can be implemented in a very straightforward way.

Journal ArticleDOI
TL;DR: In this article, the four-parameter Kappa distribution is investigated as an alternative to the generalized extreme-value (GEV) distribution for modeling available maxima (or minima) data, and maximum likelihood estimation is compared with Hosking's L-moment procedure.
Abstract: The generalized extreme-value distribution has been the distribution of choice for modeling available maxima (or minima) data, since theory has shown it to be the limiting form of the distribution of extremes. However, fits to finite samples are not always adequate. Hosking (1994) and Parida (1999) suggest the four-parameter Kappa distribution as an alternative. Hosking (1994) developed an L-moment procedure for estimation. Some compromises must be made in practice however, as seen in Parida (1999). L-moment estimators of the four-parameter Kappa distribution are not always computable or feasible. A simulation study in this paper quantifies the extent of each problem. Maximum likelihood is investigated as an alternative method of estimation and a simulation study compares the performance of both methods of estimation. Finally, further benefits of maximum likelihood are shown when wind speeds from the tropical Pacific are examined and the weekly maxima for 10 buoys in the area are analyzed.
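
A sketch of maximum likelihood fitting of the four-parameter Kappa distribution via scipy's generic fitter; the feasibility and boundary issues the paper studies are glossed over, and the parameter values are illustrative.

```python
# Sketch: generic ML fit of the four-parameter Kappa distribution using
# scipy.stats.kappa4 (shape parameters h and k, plus location and scale).
import numpy as np
from scipy import stats

rng = np.random.default_rng(10)
data = stats.kappa4.rvs(0.3, 0.1, loc=10.0, scale=2.0, size=500, random_state=rng)
h, k, loc, scale = stats.kappa4.fit(data)
print("ML estimates (h, k, loc, scale):", h, k, loc, scale)
```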

Journal ArticleDOI
TL;DR: In this article, the authors considered the simultaneous testing of the mean and the variance of a normal distribution and obtained the exact distribution of the likelihood ratio test statistic, which is not available in the literature.
Abstract: In this paper, we consider the simultaneous testing of the mean and the variance of a normal distribution. The exact distribution of the likelihood ratio test statistic is obtained, which is not available in the literature. The critical points of the exact test are reported. We also consider some of the other exact and asymptotic tests. The powers of these tests are compared using Monte Carlo simulations.
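
The likelihood ratio statistic here has a simple closed form: for H0: μ = μ0, σ² = σ0², it is -2 log Λ = n log(σ0²/σ̂²) + [n σ̂² + n(x̄ - μ0)²]/σ0² - n, with σ̂² the MLE of the variance. A sketch of its Monte Carlo critical point follows (the paper tabulates exact critical points instead).

```python
# Sketch: Monte Carlo 5% critical point of the LRT for H0: mu = mu0 and
# variance = s0sq in a normal model; n and the null values are illustrative.
import numpy as np

def lrt_stat(x, mu0, s0sq):
    n = len(x)
    s2_hat = x.var()                                  # MLE of the variance
    q = n * s2_hat + n * (x.mean() - mu0) ** 2        # sum of (x_i - mu0)^2
    return n * np.log(s0sq / s2_hat) + q / s0sq - n

rng = np.random.default_rng(11)
n, mu0, s0sq = 20, 0.0, 1.0
null_stats = np.array([lrt_stat(rng.normal(mu0, np.sqrt(s0sq), n), mu0, s0sq)
                       for _ in range(20_000)])
print("Monte Carlo 5% critical point:", np.quantile(null_stats, 0.95))
```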

Journal ArticleDOI
TL;DR: In this paper, two bias-reduced estimators for the correlation coefficient of a bivariate exponential distribution were proposed, which are improvements on the estimators presented by Al-Saadi and Young (1980).
Abstract: In this paper, we propose two bias-reduced estimators for the correlation coefficient of a bivariate exponential distribution which are improvements on the estimators presented by Al-Saadi and Young (1980). Monte Carlo simulation is used to compare the performance of all the estimators. We also suggest the use of the jackknife method in order to reduce the bias of these estimates. The bias and mean square errors of the estimators are simulated for small, moderate and large sample sizes, from which we make some recommendations.
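
A sketch of the generic jackknife bias correction the abstract refers to, applied to the ordinary sample correlation; bivariate normal data are an assumption here, whereas the paper works with a bivariate exponential distribution.

```python
# Sketch: jackknife bias correction n*theta_hat - (n-1)*mean(leave-one-out),
# applied to the sample correlation coefficient.
import numpy as np

def jackknife(estimator, data):
    n = len(data)
    loo = np.array([estimator(np.delete(data, i, axis=0)) for i in range(n)])
    return n * estimator(data) - (n - 1) * loo.mean()

rng = np.random.default_rng(12)
xy = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], size=30)
corr = lambda d: np.corrcoef(d[:, 0], d[:, 1])[0, 1]
print("plain:", corr(xy), "  jackknifed:", jackknife(corr, xy))
```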

Journal ArticleDOI
TL;DR: In this article, Monte Carlo simulation is used to compare the effectiveness of five multivariate quality control methods, namely the Hotelling T², Multivariate Shewhart Chart, Discriminant Analysis, Decomposition Method, and Multivariate Ridge Residual Chart (developed by the authors), for controlling the mean vector in a multivariate process.
Abstract: In this paper we use Monte Carlo simulation methodology to compare the effectiveness of five multivariate quality control methods, namely the Hotelling T², Multivariate Shewhart Chart, Discriminant Analysis, Decomposition Method, and Multivariate Ridge Residual Chart (developed by the authors), for controlling the mean vector in a multivariate process. P-dimensional multivariate normal data are generated using different covariance structures. Various amounts of shift in the mean vector are induced and the resulting Average Run Length (ARL) is computed. The effectiveness of each method with regard to ARL is discussed.
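
A sketch of the ARL computation for the simplest of the five methods, a Hotelling T² chart with known in-control parameters; identity covariance is assumed, so T² reduces to the squared norm of the observation vector, and the false-alarm rate is illustrative.

```python
# Sketch: simulated Average Run Length of a Hotelling T^2 chart under mean
# shifts; with mean 0 and identity covariance, T^2 = x'x ~ chi2(p) in control.
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(13)
p = 3
ucl = chi2.ppf(0.995, p)              # 0.5% false-alarm rate per observation

def run_length(shift, n_max=100_000):
    mu = np.full(p, shift)
    for t in range(1, n_max + 1):
        x = rng.normal(size=p) + mu
        if x @ x > ucl:               # T^2 signal
            return t
    return n_max

for shift in (0.0, 0.5, 1.0):
    arl = np.mean([run_length(shift) for _ in range(2000)])
    print(f"shift {shift}: ARL ~ {arl:.1f}")
```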

Journal ArticleDOI
TL;DR: An efficient simulation algorithm for random sequential adsorption of spheres with radii chosen from a (prior) probability distribution is implemented, based on dividing the whole domain into small subcubes of different edge length.
Abstract: An efficient simulation algorithm for random sequential adsorption of spheres with radii chosen from a (prior) probability distribution is implemented. The algorithm is based on dividing the whole domain into small subcubes of different edge length. Samples obtained by this algorithm satisfy the jamming limit property, i.e., no further sphere can be placed in the final configuration without overlapping. Samples for both discrete and continuous radii distributions are simulated and analyzed, especially the jamming coverage, pair correlation functions and posterior radii distributions of the obtained sphere configurations.
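
A naive sketch of random sequential adsorption with random radii, for orientation only: the subcube bookkeeping that makes the paper's algorithm efficient is omitted, and stopping after a run of failed proposals only approximates the jamming limit rather than certifying it.

```python
# Sketch: naive RSA of spheres in the unit cube; the radius prior and the
# stopping rule are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(14)
centers, radii = [], []
failures, max_failures = 0, 5000

while failures < max_failures:
    r = rng.uniform(0.05, 0.10)                 # prior radius distribution
    c = rng.uniform(r, 1 - r, size=3)           # keep the sphere inside the cube
    if all(np.linalg.norm(c - ci) >= r + ri for ci, ri in zip(centers, radii)):
        centers.append(c); radii.append(r); failures = 0
    else:
        failures += 1

coverage = sum(4.0 / 3.0 * np.pi * r ** 3 for r in radii)
print(f"{len(radii)} spheres placed, volume fraction ~ {coverage:.3f}")
```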

Journal ArticleDOI
TL;DR: In this article, a two-stage quasi-likelihood is proposed to reduce the bias of the normal-based likelihood formulation without the need for an analytical bias correction; the method is validated by simulations and by theoretical analyses, and illustrated using real datasets from the salamander mating experiment.
Abstract: Direct extension of the normal-based likelihood or estimating-equation formulas for estimating the variance components in generalized linear mixed models has been shown to produce biased estimates. This is partly due to a fundamental limitation of less-than-full likelihood methods. The paper addresses the question of how to get as much as possible from solely the first- and second-order properties of the data without going to a full-likelihood system. We first analyse the source of bias in the working vector and the marginal variance formula, and then propose a two-stage quasi-likelihood to reduce the bias. The main advantage of the method is that it retains the appealing conceptual and computational simplicity of the normal-based likelihood formulation, without the need for an analytical bias correction. The methodology is validated by simulations and by theoretical analyses, and illustrated using real datasets from the salamander mating experiment.

Journal ArticleDOI
TL;DR: In this paper, bias-corrected sandwich estimators are proposed for the covariance of the least squared coefficient estimator in the linear models and shown to be robust against moderate deviations from the homoscedasticity assumption.
Abstract: Two simple bias-corrected sandwich estimators are proposed for the covariance of the least squared coefficient estimator in the linear models. These estimators are unbiased with homoscedastic errors and are shown to be robust against moderate deviations from the homoscedasticity assumption. Simulation results suggest that one of the proposed estimators produces at most a small bias but with an increased variance while the other produces a smaller mean squared error than the classical estimators such as those of Hinkley (1977), White (1980), and Furno (1997) in both cases of homoscedastic and heteroscedastic errors.
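
For reference, the standard leverage-based corrections HC2 and HC3, the family to which bias-corrected sandwich estimators belong; these are not necessarily the paper's exact proposals.

```python
# Sketch: HC2 and HC3 sandwich estimators, which rescale squared residuals by
# (1 - leverage) and (1 - leverage)^2 respectively.
import numpy as np

def sandwich(X, y, kind="HC2"):
    XtX_inv = np.linalg.inv(X.T @ X)
    H = X @ XtX_inv @ X.T
    e = y - H @ y                     # OLS residuals
    h = np.diag(H)                    # leverages
    w = e ** 2 / (1 - h) if kind == "HC2" else e ** 2 / (1 - h) ** 2
    return XtX_inv @ X.T @ np.diag(w) @ X @ XtX_inv

rng = np.random.default_rng(15)
X = np.column_stack([np.ones(40), rng.normal(size=40)])
y = X @ np.array([1.0, 2.0]) + rng.normal(size=40)
print("HC2 standard errors:", np.sqrt(np.diag(sandwich(X, y, "HC2"))))
```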

Journal ArticleDOI
TL;DR: In this article, the MAPLE program is used to determine the exact percentage points of the pivotal quantity based on the best linear unbiased estimator, based on doubly Type-II censored samples.
Abstract: In this paper, we make use of an algorithm of Huffer and Lin (2000) in order to develop exact interval estimation for the scale parameter to of an exponential distribution based on doubly Type-II censored samples. We also evaluate the accuracy of a chi-square approximation proposed by Balakrishnan and Gupta (1998). We present the MAPLE program for the determination of the exact percentage points of the pivotal quantity based on the best linear unbiased estimator. Finally, we present a couple of examples to illustrate the method of inference developed here.

Journal ArticleDOI
TL;DR: An upper bound for the distribution function of quadratic forms in a normal vector with mean zero and positive definite covariance matrix is presented, and it is shown that the new upper bound is more precise than the one introduced by Okamoto and that of Siddiqui.
Abstract: In this paper the researchers present an upper bound for the distribution function of quadratic forms in a normal vector with mean zero and positive definite covariance matrix. They also show that the new upper bound is more precise than the one introduced by Okamoto [4] and the one introduced by Siddiqui [5]. Theoretical error bounds for both the new and the Okamoto upper bounds are derived in this paper. For a larger number of terms in any given positive definite quadratic form, a rough and easier upper bound is suggested.

Journal ArticleDOI
TL;DR: In this paper, a boundary corrected cubic smoothing spline is developed in a way that produces a uniformly fourth order estimator, which can be calculated efficiently using an O(n) algorithm that is designed for the computation of fitted values and associated smoothing parameter selection criteria.
Abstract: Smoothing splines are known to exhibit a type of boundary bias that can reduce their estimation efficiency. In this paper, a boundary corrected cubic smoothing spline is developed in a way that produces a uniformly fourth order estimator. The resulting estimator can be calculated efficiently using an O(n) algorithm that is designed for the computation of fitted values and associated smoothing parameter selection criteria. A simulation study shows that use of the boundary corrected estimator can improve estimation efficiency in finite samples. Applications to the construction of asymptotically valid pointwise confidence intervals are also investigated.
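
The boundary-corrected estimator is not available in standard libraries; as a baseline, a sketch of an ordinary cubic smoothing spline fit, whose behaviour near the interval ends is what the paper improves. The smoothing level is an assumption.

```python
# Sketch: ordinary cubic smoothing spline via scipy; boundary behaviour of this
# uncorrected fit is the object the paper's estimator improves.
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(16)
x = np.sort(rng.uniform(0, 1, 200))
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.2, 200)
spline = UnivariateSpline(x, y, k=3, s=len(x) * 0.04)  # s controls the smoothing
print("fitted values at the boundaries:", spline([0.0, 1.0]))
```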

Journal ArticleDOI
TL;DR: In this paper, a full posterior analysis of the three-parameter lognormal distribution is provided using the Gibbs sampler, an important and useful Markov chain Monte Carlo technique in Bayesian computation.
Abstract: The paper provides a full posterior analysis of the three-parameter lognormal distribution using the Gibbs sampler, an important and useful Markov chain Monte Carlo technique in Bayesian computation. The extension of the algorithm is given to cover the case of censored data. It has been found that censoring, which creates a special problem in the analysis of the lognormal model with its non-closed-form cdf, can be routinely tackled by the use of the Gibbs sampler. Suitable numerical illustrations are provided for both complete and censored situations.
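
A sketch of a Gibbs-type sampler for the three-parameter lognormal with threshold γ, using flat/Jeffreys-style priors and a random-walk Metropolis step for γ; these are assumptions, and the paper's exact prior specification and censored-data extension are not reproduced.

```python
# Sketch: Gibbs-within-Metropolis for the three-parameter lognormal. Given
# gamma, y = log(x - gamma) is normal, so mu and s2 have standard draws; gamma
# gets a random-walk Metropolis update constrained to gamma < min(x).
import numpy as np

rng = np.random.default_rng(17)
x = np.exp(rng.normal(1.0, 0.5, size=100)) + 5.0      # data with true gamma = 5

def log_post_gamma(g, mu, s2):
    if g >= x.min():
        return -np.inf
    y = np.log(x - g)
    return -np.sum(y) - np.sum((y - mu) ** 2) / (2 * s2)  # includes Jacobian term

gamma = x.min() - 1.0
s2 = np.log(x - gamma).var()
draws = []
for _ in range(5000):
    y = np.log(x - gamma)
    mu = rng.normal(y.mean(), np.sqrt(s2 / len(y)))      # mu | s2, gamma
    s2 = np.sum((y - mu) ** 2) / rng.chisquare(len(y))   # s2 | mu, gamma
    prop = gamma + rng.normal(0, 0.1)                    # Metropolis step for gamma
    if np.log(rng.uniform()) < log_post_gamma(prop, mu, s2) - log_post_gamma(gamma, mu, s2):
        gamma = prop
    draws.append((mu, s2, gamma))
print("posterior means (mu, s2, gamma):", np.mean(draws[1000:], axis=0))
```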

Journal ArticleDOI
TL;DR: In this paper, the authors provide alternative methods for fitting symmetry and diagonal-parameters symmetry models to square tables having ordered categories, and demonstrate the implementation of the class of models discussed in Goodman (1979c) using GENMOD in SAS.
Abstract: This paper provides alternative methods for fitting symmetry and diagonal-parameters symmetry models to square tables having ordered categories. We demonstrate here the implementation of the class of models discussed in Goodman (1979c) using GENMOD in SAS. We also provide procedures for testing hypotheses involving model parameters. The methodology provided here can readily be used to fit the class of models discussed in Lawal and Upton (1995). If desired, composite models can be fitted. Two data sets, the 4 × 4 unaided distance vision data of 4746 Japanese students (Tomizawa, 1985) and the 5 × 5 British social mobility data (Glass, 1954), are employed to demonstrate the fitting of these models. Results obtained are consistent with those from Goodman (1972, 1979c, 1986) and Tomizawa (1985, 1987).
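
A sketch of the same idea outside SAS: the simple symmetry model fitted as a Poisson log-linear model, with statsmodels GLM playing the role of GENMOD. The table is simulated, and the diagonal-parameters extensions are not shown.

```python
# Sketch: symmetry model for a k x k table as a Poisson GLM in which cells
# (i, j) and (j, i) share one parameter; deviance tests symmetry on k(k-1)/2 df.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(18)
k = 4
counts = rng.poisson(20, size=(k, k))
rows, cols = np.indices((k, k))
df = pd.DataFrame({
    "n": counts.ravel(),
    "pair": [f"{min(i, j)}-{max(i, j)}"        # unordered cell pair {i, j}
             for i, j in zip(rows.ravel(), cols.ravel())],
})
fit = smf.glm("n ~ C(pair)", data=df, family=sm.families.Poisson()).fit()
print("symmetry-model deviance:", fit.deviance)
```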