
Showing papers in "Technometrics in 1987"


Journal ArticleDOI
TL;DR: In this paper, a method for producing Latin hypercube samples when the components of the input variables are statistically dependent is described, and the estimate is also shown to be asymptotically normal.
Abstract: Latin hypercube sampling (McKay, Conover, and Beckman 1979) is a method of sampling that can be used to produce input values for estimation of expectations of functions of output variables. The asymptotic variance of such an estimate is obtained. The estimate is also shown to be asymptotically normal. Asymptotically, the variance is less than that obtained using simple random sampling, with the degree of variance reduction depending on the degree of additivity in the function being integrated. A method for producing Latin hypercube samples when the components of the input variables are statistically dependent is also described. These techniques are applied to a simulation of the performance of a printer actuator.

1,750 citations
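As an illustration of the sampling scheme described above, here is a minimal sketch of ordinary Latin hypercube sampling for independent inputs on the unit cube (the construction for statistically dependent inputs and the printer-actuator simulation are not reproduced); `printer_response` is a hypothetical stand-in for the output function.

```python
import numpy as np

def latin_hypercube(n, d, rng=None):
    """Return an n-by-d Latin hypercube sample on [0, 1)^d.

    Each column is a random permutation of the n strata, with one
    uniform draw taken inside each stratum."""
    rng = np.random.default_rng(rng)
    perms = np.column_stack([rng.permutation(n) for _ in range(d)])  # stratum order per column
    u = rng.random((n, d))                                           # point inside each stratum
    return (perms + u) / n

def printer_response(x):
    # hypothetical stand-in for the expensive simulation code
    return np.sin(2 * np.pi * x[:, 0]) + x[:, 1] ** 2

if __name__ == "__main__":
    x = latin_hypercube(1000, 2, rng=1)
    print("LHS estimate of E[f(X)]:", printer_response(x).mean())
```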


Journal ArticleDOI
TL;DR: In this paper, the authors show that unless the sample size is 500 or more, estimators derived by the method of moments or by probability-weighted moments are more reliable than maximum likelihood estimators.
Abstract: The generalized Pareto distribution is a two-parameter distribution that contains uniform, exponential, and Pareto distributions as special cases. It has applications in a number of fields, including reliability studies and the analysis of environmental extreme events. Maximum likelihood estimation of the generalized Pareto distribution has previously been considered in the literature, but we show, using computer simulation, that, unless the sample size is 500 or more, estimators derived by the method of moments or the method of probability-weighted moments are more reliable. We also use computer simulation to assess the accuracy of confidence intervals for the parameters and quantiles of the generalized Pareto distribution.

1,233 citations
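For the flavor of the non-likelihood estimators favored above, here is a small sketch of the method-of-moments estimates for the generalized Pareto distribution in the parameterization F(x) = 1 - (1 - kx/sigma)^(1/k); the closed forms assume the variance exists (k > -1/2), and the exponential test sample is made up, so this is only an illustration, not the paper's simulation study.

```python
import numpy as np

def gpd_method_of_moments(x):
    """Method-of-moments estimates (k, sigma) for the generalized Pareto
    distribution F(x) = 1 - (1 - k*x/sigma)**(1/k).  Uses
    mean = sigma/(1+k) and var = sigma^2/((1+k)^2 (1+2k)), so k > -1/2."""
    m = np.mean(x)
    v = np.var(x, ddof=1)
    k = 0.5 * (m * m / v - 1.0)
    sigma = m * (1.0 + k)
    return k, sigma

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    sample = rng.exponential(scale=2.0, size=200)   # k = 0 reduces the GPD to the exponential
    print(gpd_method_of_moments(sample))            # k should be near 0 and sigma near 2
```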


Journal ArticleDOI
TL;DR: Univariate Distributions and Their Generation, Multivariate Generation Techniques, and Miscellaneous Distributions.
Abstract: Univariate Distributions and Their Generation. Multivariate Generation Techniques. Multivariate Normal and Related Distributions. Johnson's Translation System. Elliptically Contoured Distributions. Circular, Spherical and Related Distributions. Khintchine Distributions. Miscellaneous Distributions. Research Directions. References. Supplementary References. Index.

692 citations


Journal ArticleDOI
Stephen V. Crowder1
TL;DR: In this paper, a numerical procedure using integral equations is presented for the tabulation of moments of run lengths of exponentially weighted moving average (EWMA) charts, assuming normal observations, along with an example illustrating how to design such a chart.
Abstract: A numerical procedure using integral equations is presented for the tabulation of moments of run lengths of exponentially weighted moving average (EWMA) charts. Both average run lengths (ARL's) and standard deviations of run lengths (SDRL's) are presented for the two-sided EWMA chart assuming normal observations, along with an example illustrating how to design such a chart. The procedure given extends easily to many nonnormal cases and to one-sided versions of the EWMA chart.

443 citations
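The integral-equation idea can be sketched as follows: discretize the ARL equation L(u) = 1 + integral of L(y) k(u, y) dy with Gauss-Legendre quadrature and solve the resulting linear system. The sketch assumes independent N(delta, 1) observations and fixed (asymptotic) two-sided limits +-h; the smoothing constant and limit in the example are illustrative choices, not values from the article's tables.

```python
import numpy as np

def phi(x):
    return np.exp(-0.5 * x * x) / np.sqrt(2.0 * np.pi)

def ewma_arl(lam, h, delta=0.0, nodes=40, z0=0.0):
    """ARL of a two-sided EWMA chart with fixed limits +-h, observations ~ N(delta, 1),
    via Nystrom discretization of
        L(u) = 1 + int_{-h}^{h} L(y) (1/lam) phi((y - (1-lam)u)/lam - delta) dy."""
    t, w = np.polynomial.legendre.leggauss(nodes)   # nodes/weights on [-1, 1]
    y, w = h * t, h * w                             # rescale to [-h, h]
    kernel = phi((y[None, :] - (1.0 - lam) * y[:, None]) / lam - delta) / lam
    L = np.linalg.solve(np.eye(nodes) - kernel * w[None, :], np.ones(nodes))
    start = phi((y - (1.0 - lam) * z0) / lam - delta) / lam
    return 1.0 + np.sum(w * start * L)

if __name__ == "__main__":
    # illustrative design: lambda = 0.2, limits at +-0.95 on the EWMA statistic
    print("in-control ARL      :", ewma_arl(0.2, 0.95, delta=0.0))
    print("ARL at 1-sigma shift:", ewma_arl(0.2, 0.95, delta=1.0))
```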


Journal ArticleDOI
TL;DR: In this article, the exact run-length properties of Shewhart control charts with supplementary runs rules were obtained using Markov chains, and a simple and efficient method was given.
Abstract: This article gives a simple and efficient method, using Markov chains, to obtain the exact run-length properties of Shewhart control charts with supplementary runs rules. Average run-length comparisons are made among the Shewhart chart with supplementary runs rules, the basic Shewhart chart, and the cumulative sum (CUSUM) chart.

389 citations
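A compact sketch of the Markov-chain calculation for one supplementary runs rule, namely "signal on a point beyond +-3 sigma or on eight consecutive points on the same side of the center line"; the particular rule, the normal in-control model, and the shift sizes are assumptions for illustration, not the full set of rules compared in the article.

```python
import numpy as np
from math import erf, sqrt

def Phi(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def shewhart_runs_arl(delta=0.0, run=8, limit=3.0):
    """ARL of an X chart that signals on one point beyond +-limit sigma OR on
    `run` consecutive points on the same side of the center line,
    for observations ~ N(delta, 1), via an absorbing Markov chain."""
    p_up = Phi(limit - delta) - Phi(-delta)      # above center, inside limits
    p_dn = Phi(-delta) - Phi(-limit - delta)     # below center, inside limits
    n = 1 + 2 * (run - 1)                        # transient states: start, (up, c), (dn, c)
    up = lambda c: c
    dn = lambda c: (run - 1) + c
    Q = np.zeros((n, n))
    Q[0, up(1)], Q[0, dn(1)] = p_up, p_dn
    for c in range(1, run - 1):                  # run continues on the same side
        Q[up(c), up(c + 1)] = p_up
        Q[dn(c), dn(c + 1)] = p_dn
    for c in range(1, run):                      # run broken by a point on the other side
        Q[up(c), dn(1)] = p_dn
        Q[dn(c), up(1)] = p_up
    # reaching run length `run`, or any point beyond the limits, is absorbing
    t = np.linalg.solve(np.eye(n) - Q, np.ones(n))
    return t[0]

if __name__ == "__main__":
    print("in-control ARL      :", shewhart_runs_arl(0.0))
    print("ARL at 1-sigma shift:", shewhart_runs_arl(1.0))
```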


Journal ArticleDOI
TL;DR: In this article, the authors discuss the natural application of cumulative sum procedures to the multivariate normal distribution, considering two cases: detecting a shift in the mean vector and detecting a shift in the covariance matrix.
Abstract: Cumulative sum (CUSUM) procedures are among the most powerful tools for detecting a shift from a good quality distribution to a bad quality distribution. This article discusses the natural application of CUSUM procedures to the multivariate normal distribution. It discusses two cases, detecting a shift in the mean vector and detecting a shift in the covariance matrix. As an example, the procedure is applied to measurements taken on optical fibers.

321 citations
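One common way to set up a multivariate CUSUM for a specified mean shift is to project each observation onto the standardized shift direction, which reduces the problem to a univariate CUSUM; the sketch below takes that route with an illustrative reference value k = D/2 and decision interval h, and does not claim to reproduce the article's exact procedures or its covariance-shift case.

```python
import numpy as np

def directional_cusum(x, mu0, mu1, sigma, h=5.0):
    """One-sided CUSUM for a shift from mu0 toward mu1 in multivariate normal data
    with known covariance sigma.  Observations are projected onto the standardized
    shift direction (N(0,1) in control), and an ordinary CUSUM with k = D/2 is run.
    Returns the CUSUM path and the index of the first signal (or None)."""
    d = np.asarray(mu1, float) - np.asarray(mu0, float)
    si = np.linalg.inv(np.asarray(sigma, float))
    D = float(np.sqrt(d @ si @ d))            # size of the standardized shift
    a = (si @ d) / D                          # a'(x - mu0) has unit variance in control
    k = D / 2.0
    s, path, signal = 0.0, [], None
    for i, xi in enumerate(np.asarray(x, float)):
        s = max(0.0, s + a @ (xi - mu0) - k)
        path.append(s)
        if signal is None and s > h:
            signal = i
    return np.array(path), signal

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    mu0, mu1 = np.zeros(2), np.array([1.0, 0.5])
    sigma = np.array([[1.0, 0.3], [0.3, 1.0]])
    data = rng.multivariate_normal(mu1, sigma, size=30)   # process already shifted
    path, first = directional_cusum(data, mu0, mu1, sigma)
    print("first signal at observation:", first)
```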


Journal ArticleDOI
TL;DR: A review of the book Introduction to Random Processes, With Applications to Signals and Systems.
Abstract: (1987). Introduction to Random Processes, With Applications to Signals and Systems. Technometrics: Vol. 29, No. 2, pp. 245-246.

317 citations


Journal ArticleDOI
TL;DR: A review of the book The Visual Display of Quantitative Information.
Abstract: (1987). The Visual Display of Quantitative Information. Technometrics: Vol. 29, No. 1, pp. 118-119.

278 citations


Journal ArticleDOI
TL;DR: In this article, the results of a Monte Carlo study of the leading methods for constructing approximate confidence regions and confidence intervals for parameters estimated by nonlinear least squares are presented, covering three variants of the linearization method, the likelihood method, and the lack-of-fit method.
Abstract: We present the results of a Monte Carlo study of the leading methods for constructing approximate confidence regions and confidence intervals for parameters estimated by nonlinear least squares. We examine three variants of the linearization method, the likelihood method, and the lack-of-fit method. The linearization method is computationally inexpensive, produces easily understandable results, and is widely used in practice. The likelihood and lack-of-fit methods are much more expensive and more difficult to report. In our tests, both the likelihood and lack-of-fit methods perform very reliably. All three variants of the linearization method, however, often grossly underestimate confidence regions and sometimes significantly underestimate confidence intervals. The linearization method variant based solely on the Jacobian matrix appears preferable to the two variants that use the full Hessian matrix because it is less expensive, more numerically stable, and at least as accurate. The Bates and Watts curvat...

253 citations
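A minimal sketch of the Jacobian-only linearization method discussed above: after a nonlinear least-squares fit, the approximate covariance is s^2 (J'J)^(-1) and t quantiles give the intervals. The exponential-decay model and data are invented for illustration.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.stats import t

def linearization_intervals(residual_fn, theta0, x, y, level=0.95):
    """Approximate confidence intervals for nonlinear least-squares parameters
    from the linearization (Jacobian-based) method: cov ~= s^2 (J'J)^{-1}."""
    fit = least_squares(residual_fn, theta0, args=(x, y))
    n, p = len(y), len(fit.x)
    s2 = np.sum(fit.fun ** 2) / (n - p)                 # residual variance
    cov = s2 * np.linalg.inv(fit.jac.T @ fit.jac)       # linearization covariance
    half = t.ppf(0.5 + level / 2, n - p) * np.sqrt(np.diag(cov))
    return fit.x, np.column_stack([fit.x - half, fit.x + half])

if __name__ == "__main__":
    # hypothetical exponential-decay example: y = b0 * exp(-b1 * x) + noise
    rng = np.random.default_rng(3)
    x = np.linspace(0, 4, 25)
    y = 2.5 * np.exp(-1.3 * x) + rng.normal(0, 0.05, x.size)
    resid = lambda b, x, y: y - b[0] * np.exp(-b[1] * x)
    est, ci = linearization_intervals(resid, np.array([1.0, 1.0]), x, y)
    print("estimates:", est)
    print("95% intervals:\n", ci)
```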



Book ChapterDOI
TL;DR: In this paper, the authors show that for certain underlying models for the product or process response maximization of the signal-to-noise ratio leads to minimization of average quadratic loss.
Abstract: Parameter design is a method, popularized by Japanese quality expert G. Taguchi, for designing products and manufacturing processes that are robust to uncontrollable variations. In parameter design, Taguchi's stated objective is to find the settings of product or process design parameters that minimize average quadratic loss—that is, the average squared deviation of the response from its target value. Yet, in practice, to choose the settings of design parameters he maximizes a set of measures called signal-to-noise ratios. In general, he gives no connection between these two optimization problems. In this article, we show that for certain underlying models for the product or process response maximization of the signal-to-noise ratio leads to minimization of average quadratic loss. The signal-to-noise ratios take advantage of the existence of special design parameters called adjustment parameters. When these parameters exist, use of the signal-to-noise ratio allows the parameter design optimization procedu...
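For the nominal-the-best case, the signal-to-noise ratio Taguchi maximizes can be computed from replicates at a design setting as SN = 10 log10(ybar^2 / s^2); the tiny sketch below uses made-up replicates and does not attempt the loss-model derivations of the article.

```python
import numpy as np

def sn_nominal_the_best(y):
    """Taguchi signal-to-noise ratio for a nominal-the-best response:
    SN = 10 * log10(ybar^2 / s^2) over replicates at one design setting.
    Larger SN means smaller relative spread, which an adjustment parameter
    can then move onto target."""
    y = np.asarray(y, dtype=float)
    return 10.0 * np.log10(y.mean() ** 2 / y.var(ddof=1))

if __name__ == "__main__":
    # hypothetical replicates at two candidate parameter settings
    print(sn_nominal_the_best([10.1, 9.8, 10.3, 9.9]))
    print(sn_nominal_the_best([12.4, 11.1, 13.0, 10.6]))
```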

Journal ArticleDOI
TL;DR: In this paper, a new method for assessment of model inadequacy in maximum-likelihood mixed-model analysis of variance is described, in particular for diagnosing departures from constant error variance and from the assumption that each realization of a given random factor is drawn from the same normal population.
Abstract: We describe a new method for assessment of model inadequacy in maximum-likelihood mixed-model analysis of variance. In particular, we discuss its use in diagnosing perturbations from the usual assumption of constant error variance and from the assumption that each realization of a given random factor has been drawn from the same normal population. Computer implementation of the procedure is described, and an example is presented, involving the analysis of filter cartridges used with commercial respirators.

Journal ArticleDOI
Eric R. Ziegel1
TL;DR: A review of the book Data: A Collection of Problems From Many Fields for the Student and Research Worker.
Abstract: (1987). Data: A Collection of Problems From Many Fields for the Student and Research Worker. Technometrics: Vol. 29, No. 4, pp. 502-503.

Journal ArticleDOI
TL;DR: In this paper, the authors report the application of the annealing algorithm to the construction of exact D-, I-, and G-optimal designs for polynomial regression of degree 5 on the interval [-1, 1] and for the second-order model in two factors on the design space [-1, 1] × [-1, 1].
Abstract: This article reports the application of the annealing algorithm to the construction of exact D-, I-, and G-optimal designs for polynomial regression of degree 5 on the interval [-1, 1] and for the second-order model in two factors on the design space [-1, 1] × [-1, 1]. Details of the perturbation scheme and the annealing schedules used are given, and the method of implementation is illustrated by means of a simple example. The algorithm is assessed by comparing its performance, in terms of computer time and efficiency, with the modified Fedorov procedure, and it is shown to be particularly effective in finding G-optimal designs. The salient features of the exact designs constructed in this study are also summarized.
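A condensed sketch of an annealing-style search for an exact D-optimal design (maximizing log|X'X|) for degree-5 polynomial regression on [-1, 1]; the single-point perturbation scheme and geometric cooling schedule are simplistic placeholders, not the scheme detailed in the article, and the I- and G-optimal criteria are not shown.

```python
import numpy as np

def model_matrix(pts, degree=5):
    # design matrix for polynomial regression of degree 5 on [-1, 1]
    return np.vander(pts, degree + 1, increasing=True)

def log_det(pts):
    x = model_matrix(pts)
    sign, val = np.linalg.slogdet(x.T @ x)
    return val if sign > 0 else -np.inf

def anneal_d_optimal(n_points=6, iters=20000, t0=1.0, cooling=0.9995, seed=0):
    """Crude simulated annealing for an exact D-optimal design: perturb one
    support point at a time, always accept uphill moves, and accept downhill
    moves with probability exp(delta / temperature)."""
    rng = np.random.default_rng(seed)
    pts = rng.uniform(-1, 1, n_points)
    best, best_val = pts.copy(), log_det(pts)
    cur_val, temp = best_val, t0
    for _ in range(iters):
        cand = pts.copy()
        i = rng.integers(n_points)
        cand[i] = np.clip(cand[i] + rng.normal(0, 0.1), -1, 1)   # local perturbation
        cand_val = log_det(cand)
        if cand_val > cur_val or rng.random() < np.exp((cand_val - cur_val) / temp):
            pts, cur_val = cand, cand_val
            if cur_val > best_val:
                best, best_val = pts.copy(), cur_val
        temp *= cooling
    return np.sort(best), best_val

if __name__ == "__main__":
    design, crit = anneal_d_optimal()
    print("support points:", np.round(design, 3))
    print("log |X'X|     :", round(crit, 3))
```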

Journal ArticleDOI
TL;DR: In this paper, a concordance measure that is more sensitive to agreement on the top rankings is provided. The asymptotic distributions of these statistics are presented, along with a summary of the quantiles of the exact distribution for the two-sample case for n = 3(1)14.
Abstract: Many situations exist in which n objects are ranked by two or more independent sources, where interest centers primarily on agreement in the top rankings and disagreements on items at the bottom of the rankings are of little or no importance. A problem with Spearman's rho or Kendall's coefficient of concordance in this setting is that they are equally influenced by disagreement on the assignment of rankings at all levels. In this article, a concordance measure is provided that is more sensitive to agreement on the top rankings. The statistics used in this setting are functions of the ordinary correlation coefficient computed on Savage (1956) scores. The asymptotic distributions of these statistics are presented, and a summary of the quantiles of the exact distribution for the two-sample case is provided for n = 3(1)14. The statistic for the two-sample case is shown to provide a locally most powerful rank test for a model given by Hajek and Sidak (1967).
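The two-sample "top-down" statistic can be sketched directly: replace rank i by the Savage score S_i = sum_{j=i}^{n} 1/j and compute the ordinary correlation of the scores, so disagreement at the top is weighted most heavily. The example rankings below are invented.

```python
import numpy as np

def savage_scores(ranks):
    """Savage score for rank i (1 = top) among n items: sum_{j=i}^{n} 1/j.
    Top ranks get the largest scores, so disagreement there matters most."""
    ranks = np.asarray(ranks)
    n = ranks.size
    cum = np.cumsum(1.0 / np.arange(n, 0, -1))[::-1]   # cum[i-1] = sum_{j=i}^{n} 1/j
    return cum[ranks - 1]

def top_down_correlation(ranks_a, ranks_b):
    """Ordinary (Pearson) correlation computed on Savage scores."""
    a, b = savage_scores(ranks_a), savage_scores(ranks_b)
    return np.corrcoef(a, b)[0, 1]

if __name__ == "__main__":
    # two judges agree on the top items but not on the bottom ones
    judge1 = np.array([1, 2, 3, 4, 5, 6, 7, 8])
    judge2 = np.array([1, 2, 3, 4, 8, 7, 6, 5])
    print(top_down_correlation(judge1, judge2))
```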


Journal ArticleDOI
TL;DR: The statistical theory of the ordinary life table is presented in this book, with its construction explained; other areas covered include survival and stages of disease, reproduction, married life, the antenatal life table, and ecological studies.
Abstract: The statistical theory of the ordinary life table is presented in this book, with its construction explained. Topics cover measures of mortality and adjustment of rates, and other areas include survival and stages of disease, reproduction, married life, antenatal life table and ecological studies.

Journal ArticleDOI
TL;DR: In this article, the intervals between events are modeled as iid exponential(λ_i), or the counts as Poisson(λ_i t_i), for the ith item, and each individual rate parameter λ_i is presumed drawn from a fixed (super)population with density g_λ(·; θ), θ being a vector parameter.
Abstract: A collection of I similar items generates point event histories; for example, machines experience failures or operators make mistakes. Suppose the intervals between events are modeled as iid exponential(λ_i), or the counts as Poisson(λ_i t_i), for the ith item. Furthermore, so as to represent between-item variability, each individual rate parameter, λ_i, is presumed drawn from a fixed (super)population with density g_λ(·; θ), θ being a vector parameter: a parametric empirical Bayes (PEB) setup. For g_λ, specified alternatively as log-Student t(n) or gamma, we exhibit the results of numerical procedures for estimating the superpopulation parameters θ and for describing pooled estimates of the individual rates, λ_i, obtained via Bayes's formula. Three data sets are analyzed, and convenient explicit approximate formulas are furnished for the λ_i estimates. In the Student-t case, the individual estimates are seen to have a robust quality.
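A compact sketch of the gamma-superpopulation case: the hyperparameters are fit by maximizing the negative-binomial marginal likelihood of the counts and exposures, and each item's rate is then pooled through its posterior mean (α + x_i)/(β + t_i). The counts and exposures below are invented, and the log-Student-t prior and the article's approximate formulas are not attempted.

```python
import numpy as np
from scipy.special import gammaln
from scipy.optimize import minimize

def neg_marginal_loglik(log_params, x, t):
    """Negative log marginal likelihood of counts x with exposures t when
    lambda_i ~ Gamma(shape=alpha, rate=beta) and x_i ~ Poisson(lambda_i * t_i)."""
    alpha, beta = np.exp(log_params)        # optimize on the log scale
    ll = (gammaln(alpha + x) - gammaln(alpha) - gammaln(x + 1.0)
          + alpha * np.log(beta / (beta + t)) + x * np.log(t / (beta + t)))
    return -np.sum(ll)

def peb_gamma_poisson(x, t):
    x, t = np.asarray(x, float), np.asarray(t, float)
    res = minimize(neg_marginal_loglik, np.log([1.0, 1.0]), args=(x, t),
                   method="Nelder-Mead")
    alpha, beta = np.exp(res.x)
    post_mean = (alpha + x) / (beta + t)    # pooled (shrunken) rate estimates
    return alpha, beta, post_mean

if __name__ == "__main__":
    failures  = [1, 0, 3, 2, 0, 5, 1, 2]                              # invented event counts
    exposures = [100., 80., 250., 160., 60., 300., 120., 140.]        # invented exposure times
    a, b, rates = peb_gamma_poisson(failures, exposures)
    print("alpha, beta :", a, b)
    print("pooled rates:", np.round(rates, 4))
```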


Journal ArticleDOI
TL;DR: Methods are outlined for analyzing data from life tests of “limited-failure” populations; applications are described, and a numerical example is included.
Abstract: Failures of solid-state electronic components are often caused by manufacturing defects. Typically, a small proportion of the manufactured components has one or more defects that cannot be detected in a simple inspection but that will eventually cause the component to fail. If a component has no such defects, the probability that it will fail under carefully controlled conditions is virtually 0. By assuming a time-to-failure distribution for the units that are susceptible to failure from manufacturing defects, laboratory life tests of limited duration can be used to estimate the proportion of units that have such defects and the parameters of the assumed time-to-failure distribution of the defective subpopulation. The purpose of this article is to outline methods for analyzing life test data from life tests for “limited-failure” populations. Applications are described and a numerical example is included.
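A rough sketch of the limited-failure likelihood: a fraction p of units is susceptible with, for illustration, a lognormal time to failure, the remainder never fail on test, and units unfailed at the censoring time tau contribute 1 - p*F(tau). The lognormal choice and the life-test data are assumptions made for this sketch only.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

def neg_loglik(params, fail_times, n_unfailed, tau):
    """Limited-failure population: proportion p of units is susceptible, with
    lognormal(mu, sigma) time to failure; the rest never fail.  Units still
    running at censoring time tau contribute 1 - p*F(tau)."""
    p = 1.0 / (1.0 + np.exp(-params[0]))      # logit-transformed defective proportion
    mu, sigma = params[1], np.exp(params[2])  # lognormal parameters (sigma on log scale)
    z = (np.log(fail_times) - mu) / sigma
    ll_fail = np.log(p) + norm.logpdf(z) - np.log(sigma * fail_times)
    ll_cens = n_unfailed * np.log(1.0 - p * norm.cdf((np.log(tau) - mu) / sigma))
    return -(np.sum(ll_fail) + ll_cens)

if __name__ == "__main__":
    # invented life-test data: 500 units on test for 1000 hours, 12 failures observed
    fail_times = np.array([35., 80., 120., 150., 210., 260., 340., 420., 560., 610., 800., 950.])
    res = minimize(neg_loglik, x0=[-3.0, 5.5, 0.0],
                   args=(fail_times, 500 - len(fail_times), 1000.0),
                   method="Nelder-Mead")
    p_hat = 1.0 / (1.0 + np.exp(-res.x[0]))
    print("estimated defective fraction :", round(p_hat, 4))
    print("estimated lognormal mu, sigma:", res.x[1], np.exp(res.x[2]))
```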

Journal ArticleDOI
TL;DR: In this paper, the V test is compared with the likelihood ratio and score tests in a power study; the sizes of the V test and a power comparison with the usual pooled t test are also obtained.
Abstract: The Wald, likelihood ratio, and score statistics have been calculated for the Behrens–Fisher problem. These statistics provide asymptotically equivalent and weakly optimal tests. The test based on the Wald statistic was found to be very similar to the V test discussed by Welch (1937); hence the V test was compared with the likelihood ratio and score tests in a power study. Since the powers were very similar and the V test was more convenient to use from several points of view, we recommend this test. Sizes of the V test, as well as a power comparison with the usual pooled t test, were also obtained.
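A minimal sketch of the recommended V (Welch) test: the two-sample statistic with unpooled variances and Satterthwaite's approximate degrees of freedom; the data are made up.

```python
import numpy as np
from scipy.stats import t

def welch_v_test(x, y):
    """Welch's approximate t (the V test) for the Behrens-Fisher problem:
    unpooled variances and Satterthwaite degrees of freedom."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    nx, ny = x.size, y.size
    vx, vy = x.var(ddof=1) / nx, y.var(ddof=1) / ny
    v = (x.mean() - y.mean()) / np.sqrt(vx + vy)
    df = (vx + vy) ** 2 / (vx ** 2 / (nx - 1) + vy ** 2 / (ny - 1))
    p = 2.0 * t.sf(abs(v), df)
    return v, df, p

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    a = rng.normal(10.0, 1.0, 15)     # small variance
    b = rng.normal(11.0, 4.0, 8)      # larger variance, unequal sample size
    print(welch_v_test(a, b))
```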

Journal ArticleDOI
TL;DR: A review of the book Density Estimation for Statistics and Data Analysis.
Abstract: (1987). Density Estimation for Statistics and Data Analysis. Technometrics: Vol. 29, No. 4, pp. 495-495.

Journal ArticleDOI
TL;DR: This edition stresses clarity and accessibility of material, and has intentionally kept notation simple so that readers can gain an understanding of the method of analysis rather than becoming involved in complex and time-consuming computations.
Abstract: Intended for students who have had a course in the analysis of variance and who wish to see the connection between it and multiple regression analysis, this edition stresses clarity and accessibility of material. The author, a well-known expositor of concepts in design and statistics, has intentionally kept notation simple so that readers can gain an understanding of the method of analysis rather than becoming involved in complex and time-consuming computations. End-of-chapter exercises are numerous and simple; most can easily and quickly be accomplished using a hand calculator.

Journal ArticleDOI
TL;DR: In this article, two methods for reducing the computer time necessary to investigate changes in distribution of random inputs to large simulation computer codes are presented, one of which produces unbiased estimators of functions of the output variable under the new distribution of the inputs.
Abstract: Two methods for reducing the computer time necessary to investigate changes in distribution of random inputs to large simulation computer codes are presented. The first method produces unbiased estimators of functions of the output variable under the new distribution of the inputs. The second method generates a subset of the original outputs that has a distribution corresponding to the new distribution of inputs. Efficiencies of the two methods are examined.
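The first method can be sketched as likelihood-ratio reweighting: outputs already generated under the old input density f_old are weighted by f_new/f_old to give an unbiased estimate under the new input distribution, with no new runs of the expensive code. The univariate normal densities and the stand-in "simulation" below are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

def reweighted_mean(outputs, inputs, old_dist, new_dist):
    """Unbiased estimate of E_new[g(Y)] from runs made under the old input
    distribution, using importance weights w_i = f_new(x_i) / f_old(x_i)."""
    w = new_dist.pdf(inputs) / old_dist.pdf(inputs)
    return np.mean(w * outputs)

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    old = norm(loc=0.0, scale=1.0)          # inputs were originally sampled from here
    new = norm(loc=0.5, scale=1.0)          # input distribution we now care about
    x = old.rvs(size=20000, random_state=rng)
    y = x ** 2 + 1.0                        # stand-in for an expensive simulation code
    print("reweighted estimate:", reweighted_mean(y, x, old, new))
    print("direct check       :", np.mean(new.rvs(20000, random_state=rng) ** 2 + 1.0))
```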


Journal ArticleDOI
TL;DR: |STAT is particularly interesting to any observer of statistical software; the reliance on the operating system and the interaction of the modules make |STAT an interesting case study.
Abstract: (1989). Nonlinear Parameter Estimation: An Integrated System in BASIC. Technometrics: Vol. 31, No. 1, pp. 114-115.

Journal ArticleDOI
TL;DR: An economic model for this screening procedure is developed with the consideration of the cost incurred by imperfect quality and the cost associated with the disposition of the rejected items.
Abstract: In a complete inspection (screening) procedure, all of the items are subject to acceptance inspection. If an item fails to meet the predetermined inspection specifications, it is rejected and excluded from shipment. When the inspection on the performance variable is destructive or very costly, it may be economical to use another variable that is correlated with the performance variable and is relatively inexpensive to measure as the screening variable. Suppose that the performance variable is a “larger is better” variable and is positively correlated with the screening variable. The items for which the measured values of the screening variable are smaller than the screening specifications are rejected and excluded from shipment. An economic model for this screening procedure is developed with the consideration of the cost incurred by imperfect quality and the cost associated with the disposition of the rejected items. Solution procedures for the optimal screening specifications are developed for three qua...

Journal ArticleDOI
TL;DR: In this article, bias approximations for the maximum likelihood estimators of the regression coefficients and scale parameter are presented for the normal error and Type I extreme-value error models.
Abstract: A regression model is considered in which the response variable has the generalized log-gamma distribution. Bias approximations for the maximum likelihood estimators of the regression coefficients and scale parameter are presented. The estimator of the scale parameter has a negative bias, which becomes increasingly marked as the number of regressor variates increases. A bias-corrected estimator is proposed that has improved mean squared error properties provided there is at least one regressor variate. Approximations to the percentiles of the unconditional distributions of pivotal random variables used for statistical inference for the parameters in the log-gamma regression model are proposed and evaluated for the normal error and Type I extreme-value error models. The results suggest that bias correction for the estimate of the scale parameter will be important in small samples for all densities in the log-gamma family.

Journal ArticleDOI
TL;DR: In this paper, a class of multifactor designs for estimating the slope of second-order response surfaces is presented, where the variance of the estimated slope at a point is a function of the direction of measurement of the slope and the design.
Abstract: A class of multifactor designs for estimating the slope of second-order response surfaces is presented. For multifactor designs, the variance of the estimated slope at a point is a function of the direction of measurement of the slope and the design. If we average the variance over all possible directions, the averaged variance is only a function of the point and the design. By choice of design, it is possible to make this variance constant for all points equidistant from the design origin. This property is called slope rotatability over all directions, and the necessary and sufficient conditions for a design to have this property are given and proved. The class of designs with this property is discussed.

Journal ArticleDOI
TL;DR: In this paper, the authors propose diagnostics to indicate cases influential for the transformation or regression parameters, and also propose a robust bounded-influence estimator similar to the Krasker-Welsch regression estimator.
Abstract: In regression analysis, the response is often transformed to remove heteroscedasticity and/or skewness. When a model already exists for the untransformed response, then it can be preserved by applying the same transform to both the model and the response. This methodology, which we call “transform both sides,” has been applied in several recent papers and appears highly useful in practice. When a parametric transformation family such as the power transformations is used, then the transformation can be estimated by maximum likelihood. The maximum likelihood estimator, however, is very sensitive to outliers. In this article, we propose diagnostics to indicate cases influential for the transformation or regression parameters. We also propose a robust bounded-influence estimator similar to the Krasker-Welsch regression estimator: