
Showing papers in "Technometrics in 1988"


Journal ArticleDOI
TL;DR: In this paper, the design procedures and average run lengths for two multivariate cumulative sum (CUSUM) quality-control procedures are presented and compared with each other and with the multivariate Shewhart chart.
Abstract: This article presents the design procedures and average run lengths for two multivariate cumulative sum (CUSUM) quality-control procedures. The first CUSUM procedure reduces each multivariate observation to a scalar and then forms a CUSUM of the scalars. The second CUSUM procedure forms a CUSUM vector directly from the observations. These two procedures are compared with each other and with the multivariate Shewhart chart. Other multivariate quality-control procedures are mentioned. Robustness, the fast initial response feature for CUSUM schemes, and combined Shewhart-CUSUM schemes are discussed.
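
A minimal sketch of the first (scalar-reduction) scheme described above: each observation vector is reduced to a Mahalanobis-type distance from the in-control mean, and a one-sided CUSUM is run on those scalars. The reference value k, decision limit h, and the toy data are illustrative choices, not the design values recommended in the article.

```python
import numpy as np

def scalar_reduction_cusum(X, mu0, Sigma, k=1.5, h=5.0):
    """One-sided CUSUM on the Mahalanobis distance of each observation from
    the in-control mean mu0. Returns the CUSUM path and the index of the
    first signal (None if no signal). k and h are illustrative values."""
    Sinv = np.linalg.inv(Sigma)
    s, path, signal = 0.0, [], None
    for i, x in enumerate(X):
        d = x - mu0
        t = np.sqrt(d @ Sinv @ d)          # reduce the vector to a scalar
        s = max(0.0, s + t - k)            # one-sided CUSUM of the scalars
        path.append(s)
        if signal is None and s > h:
            signal = i
    return np.array(path), signal

# toy use: in-control observations followed by a small mean shift
rng = np.random.default_rng(0)
mu0, Sigma = np.zeros(2), np.eye(2)
X = np.vstack([rng.multivariate_normal(mu0, Sigma, 50),
               rng.multivariate_normal(mu0 + 0.8, Sigma, 30)])
print(scalar_reduction_cusum(X, mu0, Sigma)[1])
```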

635 citations


Journal ArticleDOI
TL;DR: An iterative procedure is proposed for detecting IO and AO in practice and for estimating the time series parameters in autoregressive-integrated-moving-average models in the presence of outliers.
Abstract: Outliers in time series can be regarded as being generated by dynamic intervention models at unknown time points. Two special cases, innovational outlier (IO) and additive outlier (AO), are studied in this article. The likelihood ratio criteria for testing the existence of outliers of both types, and the criteria for distinguishing between them are derived. An iterative procedure is proposed for detecting IO and AO in practice and for estimating the time series parameters in autoregressive-integrated-moving-average models in the presence of outliers. The powers of the procedure in detecting outliers are investigated by simulation experiments. The performance of the proposed procedure for estimating the autoregressive coefficient of a simple AR(1) model compares favorably with robust estimation procedures proposed in the literature. Two real examples are presented.
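
A simplified sketch of the detection step for the special case of an AR(1) model with known parameters: the IO statistic at a given time is the standardized one-step residual, while the AO statistic combines the two residuals that an additive outlier perturbs. The full article iterates detection, adjustment, and re-estimation for general ARIMA models; the data here are invented.

```python
import numpy as np

def ao_io_statistics(x, phi, sigma):
    """For an AR(1) series with known phi and innovation s.d. sigma, return
    the innovational-outlier (IO) and additive-outlier (AO) test statistics
    at each time point covered by the one-step residuals."""
    e = x[1:] - phi * x[:-1]                # residuals e_2, ..., e_n
    lam_io = e / sigma                      # IO: standardized residual
    # an AO at time t perturbs e_t by +w and e_{t+1} by -phi*w (AR(1) pi-weights)
    lam_ao = np.full_like(e, np.nan)
    for t in range(len(e) - 1):
        w_hat = (e[t] - phi * e[t + 1]) / (1.0 + phi ** 2)
        lam_ao[t] = w_hat * np.sqrt(1.0 + phi ** 2) / sigma
    return lam_io, lam_ao

# toy use: AR(1) series with an additive outlier injected at index 60
rng = np.random.default_rng(1)
phi, sigma, n = 0.6, 1.0, 100
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.normal(scale=sigma)
x[60] += 5.0                                # the additive outlier
lam_io, lam_ao = ao_io_statistics(x, phi, sigma)
print("largest |AO| statistic at series index", np.nanargmax(np.abs(lam_ao)) + 1)
```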

589 citations


Journal ArticleDOI
TL;DR: In this article, a more general transformation approach is introduced for other commonly met kinds of dependence between σy and μy (including no dependence), and a lambda plot is presented that uses the data to suggest an appropriate transformation.
Abstract: For the analysis of designed experiments, Taguchi uses performance criteria that he calls signal-to-noise (SN) ratios. Three such criteria are here denoted by SNT, SNL, and SNS. The criterion SNT was to be used in preference to the standard deviation for the problem of achieving, for some quality characteristic y, the smallest mean squared error about an operating target value. Leon, Shoemaker, and Kacker (1987) showed how SNT was appropriate to solve this problem only when σy was proportional to μy. On that assumption, the same result could be obtained more simply by conducting the analysis in terms of log y rather than y. A more general transformation approach is here introduced for other, commonly met kinds of dependence between σy and μy (including no dependence), and a lambda plot is presented that uses the data to suggest an appropriate transformation. The criteria SNL and SNS were for problems in which the objective was to make the response as large or as small as possible. It is arg...
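
A rough sketch of the lambda-plot idea, assuming a made-up replicated two-level experiment: re-analyze the response under a grid of Box-Cox power transformations and plot how the |t| statistics of the estimated effects change with λ. This only illustrates the concept, not the article's exact construction.

```python
import numpy as np
import matplotlib.pyplot as plt

def boxcox(y, lam):
    return np.log(y) if abs(lam) < 1e-8 else (y ** lam - 1.0) / lam

def effect_t_stats(X, y):
    """t statistics of the coefficients of a linear model fitted by least squares."""
    n, p = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    s2 = resid @ resid / (n - p)
    cov = s2 * np.linalg.inv(X.T @ X)
    return beta / np.sqrt(np.diag(cov))

# made-up replicated 2^2 experiment with a multiplicative error structure
rng = np.random.default_rng(2)
A, B = np.meshgrid([-1, 1], [-1, 1])
A, B = np.repeat(A.ravel(), 4), np.repeat(B.ravel(), 4)
y = np.exp(1.0 + 0.5 * A + 0.3 * B) * rng.lognormal(sigma=0.2, size=A.size)
X = np.column_stack([np.ones_like(A), A, B]).astype(float)

lams = np.linspace(-1, 2, 13)
t_mat = np.array([effect_t_stats(X, boxcox(y, lam))[1:] for lam in lams])
for j, name in enumerate(["A", "B"]):
    plt.plot(lams, np.abs(t_mat[:, j]), label=name)
plt.xlabel("lambda"); plt.ylabel("|t| of effect"); plt.legend(); plt.show()
```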

495 citations


Journal ArticleDOI
TL;DR: The variable sampling interval (VSI) chart as discussed by the authors uses a short sampling interval if the sample mean is close to but not actually outside the control limits, and a long sampling interval if it is close to target.
Abstract: The usual practice in using a control chart to monitor a process is to take samples from the process with fixed sampling intervals. This article considers the properties of the X̄ chart when the sampling interval between each pair of samples is not fixed but rather depends on what is observed in the first sample. The idea is that the time interval until the next sample should be short if a sample shows some indication of a change in the process and long if there is no indication of a change. The proposed variable sampling interval (VSI) chart uses a short sampling interval if X̄ is close to but not actually outside the control limits and a long sampling interval if X̄ is close to target. If X̄ is actually outside the control limits, then the chart signals in the same way as the standard fixed sampling interval (FSI) chart. Properties such as the average time to signal and the average number of samples to signal are evaluated. Comparisons between the FSI and the VSI charts indicate that the VSI chart is substantially ...
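
A small simulation sketch of the VSI idea for an X̄-type chart with one warning line: sample means in the inner region trigger the long interval, means between the warning and control limits trigger the short interval, and means outside the control limits signal. The limits, interval lengths, and shift size are illustrative; the article's exact ATS/ANSS calculations are not reproduced.

```python
import numpy as np

def vsi_time_to_signal(shift=0.0, n=5, L=3.0, w=1.0, d_short=0.25, d_long=1.5,
                       rng=None):
    """Simulate one run of a variable-sampling-interval xbar chart.
    Control limits are +/- L/sqrt(n), warning lines +/- w/sqrt(n);
    d_short and d_long are the two interval lengths.
    Returns (time to signal, number of samples to signal)."""
    rng = rng or np.random.default_rng()
    t, samples, interval = 0.0, 0, d_long
    while True:
        t += interval
        samples += 1
        xbar = rng.normal(loc=shift, scale=1.0 / np.sqrt(n))
        if abs(xbar) > L / np.sqrt(n):      # outside control limits: signal
            return t, samples
        interval = d_short if abs(xbar) > w / np.sqrt(n) else d_long

runs = [vsi_time_to_signal(shift=0.5, rng=np.random.default_rng(i))
        for i in range(2000)]
ats = np.mean([r[0] for r in runs])         # average time to signal
anss = np.mean([r[1] for r in runs])        # average number of samples to signal
print(f"ATS ~ {ats:.1f}, ANSS ~ {anss:.1f}")
```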

473 citations



Journal ArticleDOI
TL;DR: A graphical method is proposed to display a number of point estimates while allowing for their differing standard errors, and can be viewed as a representation of interval estimates by points on a bivariate plot.
Abstract: A graphical method is proposed to display a number of point estimates while allowing for their differing standard errors. More generally, it can be viewed as a representation of interval estimates by points on a bivariate plot. The method exploits a familiar connection between standardized estimates and regression through the origin and has several advantages over some alternative plots used in the literature. It is particularly useful when there may be a mixture of parameters, as illustrated by the problem of “mixed ages” in fission track dating.
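
The description suggests plotting each estimate as the point (1/standard error, estimate/standard error), so that points sharing an underlying parameter fall near a line through the origin. The sketch below, with invented estimates and standard errors, illustrates that general idea rather than the article's exact display.

```python
import numpy as np
import matplotlib.pyplot as plt

# invented point estimates with differing standard errors (two apparent groups)
est = np.array([10.2, 9.6, 10.8, 14.9, 15.3, 10.1])
se  = np.array([0.8, 1.5, 0.6, 1.0, 1.3, 2.0])

x = 1.0 / se            # precision
y = est / se            # standardized estimate
plt.scatter(x, y)
for slope in (10.0, 15.0):                  # reference lines through the origin
    xs = np.linspace(0, x.max() * 1.05, 2)
    plt.plot(xs, slope * xs, linestyle="--", label=f"estimate = {slope:g}")
plt.xlabel("1 / standard error"); plt.ylabel("estimate / standard error")
plt.legend(); plt.show()
```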

317 citations


Journal ArticleDOI
TL;DR: This volume collects papers on dynamic and interactive graphics for data analysis, from PRIM-9 and brushing scatterplots to MACSPIN, a dynamic-graphics program for a desktop computer.
Abstract: Contents:
Dynamic Graphics for Data Analysis, Richard A. Becker, William S. Cleveland, and Allan R. Wilks
Some Approaches to Interactive Statistical Graphics
PRIM-9: An Interactive Multidimensional Data Display, D.F. Andrews, E.B. Fowlkes, and P.A. Tukey
Kinematic Display of Multivariate Data, D.L. Donoho, P.J. Huber, E. Ramos, and H.M. Thoma
An Introduction to Real Time Graphical Techniques for Analyzing Multivariate Data, Mary Anne Fisherkeller, Jerome H. Friedman, and John W. Tukey
Control and Stash Philosophy for Two-Handed, Flexible, and Immediate Control of a Graphic Display, John W. Tukey
Orion I: Interactive Graphics for Data Analysis, John Alan McDonald
Brushing Scatterplots, Richard A. Becker and William S. Cleveland
Plot Windows, Werner Stuetzle
The Use of Brushing and Rotation for Data Analysis, Richard A. Becker, William S. Cleveland, and Gerald Weil
Elements of a Viewing Pipeline for Data Analysis, Andreas Buja, Daniel Asimov, Catherine Hurley, and John A. McDonald
EXPLOR4: A Program for Exploring Four-Dimensional Data Using Stereo-Ray Glyphs, Dimensional Constraints, Rotation, and Masking, D.B. Carr and W.L. Nicholson
MACSPIN: Dynamic Graphics on a Desktop Computer, Andrew W. Donoho, David L. Donoho, and Miriam Gasko
The Application of Depth Separation to the Display of Large Data Sets, Thomas V. Papathomas and Bela Julesz
A High Performance Color Graphics Facility for Exploring Multivariate Data, C. Ming Wang and Harold W. Gugel
Dynamic Graphics for Exploring Multivariate Data, Forrest W. Young, Douglas P. Kent, and Warren F. Kuhfeld

316 citations


Journal ArticleDOI

220 citations


Journal ArticleDOI
TL;DR: In this paper, a modified version of Duncan's (1956) model for the economic design of X̄-control charts is extended to deal with situations involving the Weibull shock model.
Abstract: In this article, a modified version of Duncan's (1956) model for the economic design of X̄-control charts is extended to deal with situations involving the Weibull shock model. In the traditional Duncan approach to Markovian shock models, the length of sampling intervals is kept constant. When the process-failure mechanism follows a Weibull model or some other model having an increasing hazard rate, however, it may be desirable to have the frequency of sampling increase with the age of the system. This study proposes a cost model that uses variable sampling intervals as opposed to sampling intervals of fixed length. The computational results indicate that the proposed model under variable sampling intervals provides a lower cost than those obtained by using the existing model developed by Hu (1984) for a Weibull shock model having sampling intervals of fixed length.
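
One simple way to let sampling frequency grow with system age under a Weibull shock model is to place sampling times at equal increments of the cumulative hazard, so each interval carries the same conditional chance of a shock. The sketch below uses that construction with illustrative parameters; it is not the article's cost-minimization model.

```python
import numpy as np

def weibull_sampling_times(eta, beta, n_intervals, p_per_interval=0.05):
    """Sampling times t_1 < t_2 < ... chosen so each interval carries the same
    conditional probability p_per_interval of a Weibull(eta, beta) shock,
    i.e. equal increments of the cumulative hazard H(t) = (t / eta) ** beta."""
    dH = -np.log(1.0 - p_per_interval)        # hazard increment per interval
    j = np.arange(1, n_intervals + 1)
    return eta * (j * dH) ** (1.0 / beta)

times = weibull_sampling_times(eta=100.0, beta=2.0, n_intervals=8)
print(np.round(times, 1))
print(np.round(np.diff(times, prepend=0.0), 1))   # intervals shrink with age
```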

206 citations


Journal ArticleDOI
TL;DR: In this paper, the authors explain why standard designs may not be suitable when some variables are quantitative and some qualitative in a response-surface context, and discuss alternative designs.
Abstract: When some variables are quantitative and some qualitative in a response-surface context, standard designs may not be suitable. The reasons for this are illuminated. Some alternative designs are discussed. Every n-point design provides n linearly independent estimation contrasts. Some of these, p say, are needed to estimate the p parameters of the postulated model. The remaining n — p linearly independent estimation contrasts are available to estimate pure error (if used) and to test for lack of fit, either overall or in particular ways. The key to choosing a good design is to use the available degrees of freedom well, given certain assumptions about the model to be fitted. When there is also uncertainty about the model assumptions, dogmatic design advice is not possible. Sound guidelines are available, however, and these are presented and illustrated.
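
A small sketch of the degrees-of-freedom bookkeeping described above, for an invented design: p contrasts go to the model parameters, replicated design points supply pure-error degrees of freedom, and the remainder is available for lack of fit.

```python
import numpy as np

def df_accounting(design, model_matrix):
    """Split the n estimation contrasts of a design into parameter,
    pure-error, and lack-of-fit degrees of freedom."""
    n = design.shape[0]
    p = np.linalg.matrix_rank(model_matrix)
    n_distinct = len({tuple(row) for row in design})
    df_pure_error = n - n_distinct            # from replicated design points
    df_lack_of_fit = n - p - df_pure_error
    return p, df_pure_error, df_lack_of_fit

# invented 10-run design: a replicated 2^2 factorial plus two centre points
design = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1]] * 2 + [[0, 0], [0, 0]], float)
X = np.column_stack([np.ones(len(design)), design, design[:, 0] * design[:, 1]])
print(df_accounting(design, X))   # (4, 5, 1)
```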

164 citations


Journal ArticleDOI
TL;DR: In this paper, the authors show that Taguchi's method, although giving good results for many applications, is not optimal, and propose alternative tolerancing procedures that are uniformly better than Taguchi's method with little sacrifice in simplicity.
Abstract: The expanding use of experimental design techniques for statistical tolerancing is primarily due to their simplicity. They can be understood easily and implemented by engineers and scientists having only a limited knowledge of statistics and experimental design. The method is generally attributed to Taguchi (1978). In this article, we describe Taguchi's method and why it works, both intuitively and mathematically. Our results show that Taguchi's method, although giving good results for many applications, is not optimal. We propose alternative tolerancing procedures that are uniformly better than Taguchi's method with little sacrifice in simplicity. We illustrate the use of these methods, first with some simple examples and then with a partial-differential-equation model.
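
A sketch of the basic experimental-design tolerancing idea, under simplifying assumptions: evaluate an invented assembly response over a two-level full factorial whose levels sit at each component mean plus or minus one standard deviation, and take the variance of those runs as the tolerance estimate, comparing it with a Monte Carlo answer. This is not Taguchi's specific array choice nor the improved procedures proposed in the article.

```python
import numpy as np
from itertools import product

def factorial_tolerance_variance(f, means, sds):
    """Estimate Var(f(X)) by evaluating f over the 2^k full factorial whose
    levels are mean +/- one standard deviation for each component, and
    taking the variance of those runs (ddof=0)."""
    levels = [(m - s, m + s) for m, s in zip(means, sds)]
    runs = np.array([f(np.array(x)) for x in product(*levels)])
    return runs.var()

# invented clearance function of three component dimensions
f = lambda x: x[0] - x[1] - x[2] + 0.01 * x[0] * x[1]
means, sds = [10.0, 4.0, 3.0], [0.05, 0.04, 0.03]

rng = np.random.default_rng(3)
mc = f(rng.normal(means, sds, size=(200_000, 3)).T).var()   # Monte Carlo check
print(factorial_tolerance_variance(f, means, sds), mc)
```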

Reference BookDOI
TL;DR: Linear Least Squares Computations as mentioned in this paper is an excellent reference for industrial and applied mathematicians, statisticians, and econometricians, as well as a text for advanced undergraduate and graduate statistics, mathematics, and econometrics courses in computer programming, linear regression analysis, and applied statistics.
Abstract: Presenting numerous algorithms in a simple algebraic form so that the reader can easily translate them into any computer language, this volume gives details of several methods for obtaining accurate least squares estimates. It explains how these estimates may be updated as new information becomes available and how to test linear hypotheses. Linear Least Squares Computations features many structured exercises that guide the reader through the available algorithms, plus a glossary of commonly used terms and a bibliography of supplementary reading ... collects "ancient" and modern results on linear least squares computations in a convenient single source ... develops the necessary matrix algebra in the context of multivariate statistics ... only makes peripheral use of concepts such as eigenvalues and partial differentiation ... interprets canonical forms employed in computation ... discusses many variants of the Gauss, Laplace-Schmidt, Givens, and Householder algorithms ... and uses an empirical approach for the appraisal of algorithms. Linear Least Squares Computations serves as an outstanding reference for industrial and applied mathematicians, statisticians, and econometricians, as well as a text for advanced undergraduate and graduate statistics, mathematics, and econometrics courses in computer programming, linear regression analysis, and applied statistics.


Journal ArticleDOI
TL;DR: In this paper, the authors show that both of these methods can be obtained as special cases of maximum likelihood estimation under normal theory and recommend that the parameters of the identified submodel be estimated by maximum likelihood.
Abstract: Recent developments in quality engineering methods have led to considerable interest in the analysis of dispersion effects from designed experiments. A commonly used method for identifying important dispersion effects from replicated experiments is based on least squares analysis of the logarithm of the within-replication variance (Bartlett and Kendall 1946). Box and Meyer (1986) introduced a pooling technique for unreplicated two-level experiments. We extend this to replicated two-level experiments and compare its performance with the least squares analysis. We show that both of these methods can be obtained as special cases of maximum likelihood estimation under normal theory. The pooling technique is generally biased and is not recommended for model identification. The least squares analysis performs well as a model identification tool, but the estimators can be inefficient. In such cases we recommend that the parameters of the identified submodel be estimated by maximum likelihood. We derive some prop...
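
A minimal sketch of the least-squares dispersion analysis referred to above: take the log of the within-replication variance for each run of a replicated two-level design and regress it on the design columns; large coefficients point to dispersion effects. The design and data are invented.

```python
import numpy as np

def dispersion_effects(design, replicates):
    """Least-squares analysis of log within-replication variance.
    design: (n_runs, k) matrix of +/-1 columns; replicates: (n_runs, r) responses."""
    z = np.log(replicates.var(axis=1, ddof=1))     # log sample variance per run
    X = np.column_stack([np.ones(len(design)), design])
    coef, *_ = np.linalg.lstsq(X, z, rcond=None)
    return coef[1:]                                # one dispersion effect per factor

# invented replicated 2^3 experiment where factor A inflates the spread
rng = np.random.default_rng(4)
design = np.array([[a, b, c] for a in (-1, 1) for b in (-1, 1) for c in (-1, 1)])
sd = np.exp(0.5 * design[:, 0])                    # true dispersion effect on A
replicates = rng.normal(scale=sd[:, None], size=(8, 6))
print(np.round(dispersion_effects(design, replicates), 2))
```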

Journal ArticleDOI
TL;DR: This entry reviews COMPSTAT: Proceedings in Computational Statistics.
Abstract: (1988). COMPSTAT: Proceedings in Computational Statistics. Technometrics: Vol. 30, No. 2, pp. 245-246.

Journal ArticleDOI
TL;DR: In this article, several parameter estimation methods for dealing with heteroscedasticity in nonlinear regression are described, including variations on ordinary, weighted, iteratively reweighted, extended and generalized least squares.
Abstract: Several parameter estimation methods for dealing with heteroscedasticity in nonlinear regression are described. These include variations on ordinary, weighted, iteratively reweighted, extended, and generalized least squares. Some of these variations are new, and one of them in particular, modified extended iteratively reweighted least squares (MEIRLS), allows parameters of an assumed heteroscedastic variance model to be estimated with an adjustment for bias due to estimation of the regression parameters. The context of the discussion is primarily that of pharmacokinetic-type data, although an example is given involving chemical-reaction data. Using simulated data from 21 heteroscedastic pharmacokinetic-type models, some of the methods are compared in terms of mean absolute error and 95% confidence-interval coverage. From these comparisons, MEIRLS and the variations on generalized least squares emerge as the methods of choice.
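
A sketch of a generic iteratively reweighted scheme for a power-of-the-mean variance model, Var(y) = σ²·mean^(2θ): fit, estimate θ from the log squared residuals against the log fitted means, reweight, and repeat. The mean function, data, and update rule are illustrative; this is not the article's MEIRLS estimator with its bias adjustment.

```python
import numpy as np
from scipy.optimize import least_squares

def model(beta, t):
    """One-compartment-type pharmacokinetic mean function (illustrative)."""
    return beta[0] * np.exp(-beta[1] * t)

def irwls_power_variance(t, y, beta0, n_iter=5):
    """Iteratively reweighted least squares with Var(y_i) = sigma^2 * mu_i^(2*theta)."""
    beta, theta = np.asarray(beta0, float), 0.0
    for _ in range(n_iter):
        w = model(beta, t) ** (-theta)                     # weights 1 / mu^theta
        fit = least_squares(lambda b: w * (y - model(b, t)), beta)
        beta = fit.x
        mu = model(beta, t)
        r2 = (y - mu) ** 2
        # regress log squared residuals on log fitted means to update theta
        theta = np.polyfit(np.log(mu), np.log(r2 + 1e-12), 1)[0] / 2.0
    return beta, theta

# invented pharmacokinetic-type data with standard deviation proportional to the mean
rng = np.random.default_rng(5)
t = np.linspace(0.25, 12, 40)
mu = model([100.0, 0.4], t)
y = mu * (1 + 0.1 * rng.normal(size=t.size))               # true theta = 1
print(irwls_power_variance(t, y, beta0=[80.0, 0.3]))
```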

Journal ArticleDOI
TL;DR: In this paper, a Bayesian procedure is presented for estimating the reliability of a series system of independent binomial subsystems and components, and the posterior distribution of the overall missile-system reliability from which the required estimates are obtained is computed.
Abstract: A Bayesian procedure is presented for estimating the reliability of a series system of independent binomial subsystems and components. The method considers either test or prior data (perhaps both or neither) at the system, subsystem, and component level. Beta prior distributions are assumed throughout. Inconsistent prior judgments are averaged within the simple-to-use procedure. The method is motivated by the following practical problem. It is required to estimate the overall reliability of a certain air-to-air heat-seeking missile system containing five major subsystems with up to nine components per subsystem. The posterior distribution of the overall missile-system reliability from which the required estimates are obtained is computed.
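
The article combines priors and test data within a closed-form procedure; the sketch below shows a simple Monte Carlo alternative under the same beta-binomial structure: update each component's Beta prior with its test data, draw component reliabilities, and multiply them to obtain draws of the series-system reliability. The priors and test counts are invented.

```python
import numpy as np

def series_system_posterior(components, n_draws=100_000, seed=0):
    """components: list of (a_prior, b_prior, successes, failures) per component.
    Returns posterior draws of the series-system reliability: the product of
    independent component reliabilities, each Beta-updated with its test data."""
    rng = np.random.default_rng(seed)
    draws = np.ones(n_draws)
    for a, b, s, f in components:
        draws *= rng.beta(a + s, b + f, size=n_draws)
    return draws

# invented priors and test results for a three-component series system
components = [(4.0, 1.0, 18, 1),
              (2.0, 1.0, 25, 0),
              (1.0, 1.0, 30, 2)]
draws = series_system_posterior(components)
print("posterior mean:", draws.mean().round(3))
print("90% lower bound:", np.quantile(draws, 0.10).round(3))
```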


Journal ArticleDOI
TL;DR: Likelihood-based methods for the analysis of field-performance studies with particular attention centered on the estimation of regression coefficients in parametric models are developed in this article, where general methods are outlined and specific formulas for various likelihood-based methods are obtained when the failure-time model is exponential or Weibull.
Abstract: Likelihood-based methods are developed for the analysis of field-performance studies with particular attention centered on the estimation of regression coefficients in parametric models. Failure-record data are those in which the time to failure and the regressor variables are observed only for those items that fail in some prespecified follow-up or warranty period (0, T]. It is noted that for satisfactory inference about baseline failure rates or regression effects it is usually necessary to supplement the failure-record data either by incorporating specific prior information about x or by taking a supplementary sample of items that survive T. General methods are outlined and specific formulas for various likelihood-based methods are obtained when the failure-time model is exponential or Weibull. In these models the methods are compared with respect to asymptotic efficiency of estimation. Several extensions to more complicated sampling plans are considered.
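
A sketch of the simplest case mentioned, exponential failure times: the likelihood multiplies the density for items failing in (0, T] by the survival probability at T for a supplementary sample of survivors, and the maximum likelihood estimate of the rate has a closed form. The data are simulated; the Weibull and regression versions treated in the article are not shown.

```python
import numpy as np

def exponential_failure_record_mle(fail_times, n_survivors, T):
    """MLE of the exponential rate from failure-record data on (0, T]
    supplemented by n_survivors items known to have survived past T.
    Likelihood: prod_i lambda*exp(-lambda*t_i) * exp(-lambda*T)**n_survivors."""
    fail_times = np.asarray(fail_times, float)
    return len(fail_times) / (fail_times.sum() + n_survivors * T)

# invented warranty data: failures observed in (0, 2] years, plus survivors
rng = np.random.default_rng(6)
T, true_rate = 2.0, 0.15
t_all = rng.exponential(1.0 / true_rate, size=500)
fails, survivors = t_all[t_all <= T], np.sum(t_all > T)
print(exponential_failure_record_mle(fails, survivors, T))
```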

Journal ArticleDOI
TL;DR: In this paper, the authors show that when ridge regression is used to mitigate the effects of collinearity, the influence of some observations can be drastically modified, and propose approximate deletion formulas for the detection of influential points.
Abstract: In regression, it is of interest to detect anomalous observations that exert an unduly large influence on the least squares analysis. Frequently, the existence of influential data is complicated by the presence of collinearity (see, e.g., Lawrence and Marsh 1984). Very little work has been done, however, on the possible effects that collinearity can have on the influence of an observation. In this article, we show that when ridge regression is used to mitigate the effects of collinearity, the influence of some observations can be drastically modified. Approximate deletion formulas for the detection of influential points are proposed for ridge regression.
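
A sketch of case-deletion diagnostics for ridge regression using the ridge analogue of the ordinary deletion formula: with M = X'X + kI, the change in the coefficient vector when case i is removed (holding k fixed) is M⁻¹xᵢeᵢ/(1 − hᵢᵢ), where hᵢᵢ is the ith diagonal of X M⁻¹ X'. The data are invented, and this is a standard formula of the type studied, not necessarily the article's exact proposal.

```python
import numpy as np

def ridge_deletion_influence(X, y, k):
    """Ridge fit plus approximate per-case influence on the coefficients.
    Returns (beta, per-case coefficient changes, leverages)."""
    A = np.linalg.inv(X.T @ X + k * np.eye(X.shape[1]))
    beta = A @ X.T @ y
    h = np.diag(X @ A @ X.T)                 # ridge "hat" diagonal
    e = y - X @ beta                         # residuals
    dbeta = (A @ X.T).T * (e / (1.0 - h))[:, None]   # approx. beta - beta_(i)
    return beta, dbeta, h

# invented collinear data with one influential case
rng = np.random.default_rng(7)
x1 = rng.normal(size=30)
X = np.column_stack([x1, x1 + 0.05 * rng.normal(size=30)])   # collinear columns
y = X @ np.array([1.0, 1.0]) + rng.normal(scale=0.5, size=30)
X[0], y[0] = [3.0, -3.0], 10.0                                # influential point
beta, dbeta, h = ridge_deletion_influence(X, y, k=0.1)
print("most influential case:", np.argmax(np.linalg.norm(dbeta, axis=1)))
```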


Journal ArticleDOI
TL;DR: In this article, a measure that quantifies the amount of rotatability in a given response-surface design is introduced. The measure, which is expressible as a percentage, can be used to repair a nonrotatable design by the addition of experimental runs that maximize the percent rotatability over a spherical region of interest.
Abstract: A measure that quantifies the amount of rotatability in a given response-surface design is introduced in this article. The measure, which is expressible as a percentage, takes the value 100 if and only if the design is rotatable. One of the main advantages of this measure is that it can be used to “repair” a nonrotatable design by the addition of experimental runs that maximize the percent rotatability over a spherical region of interest. Four numerical examples are given to illustrate the applications of this measure.
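
This is not the article's index itself, but a quick numerical check of the property it quantifies: compute the scaled prediction variance f(x)'(X'X)⁻¹f(x) of a second-order model at many points on a sphere of fixed radius; a rotatable design gives a constant value, and the spread of the values shows how far a design departs from rotatability. The central composite designs below are standard examples.

```python
import numpy as np

def second_order_model(x1, x2):
    return np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1*x2])

def prediction_variance_on_circle(design, radius, n_points=360):
    """Scaled prediction variance N * f(x)'(X'X)^{-1} f(x) around a circle."""
    X = second_order_model(design[:, 0], design[:, 1])
    M_inv = np.linalg.inv(X.T @ X)
    theta = np.linspace(0, 2 * np.pi, n_points, endpoint=False)
    F = second_order_model(radius * np.cos(theta), radius * np.sin(theta))
    return len(design) * np.einsum("ij,jk,ik->i", F, M_inv, F)

def ccd(alpha, n_center=3):
    """Two-factor central composite design; alpha = sqrt(2) makes it rotatable."""
    fact = [[-1, -1], [1, -1], [-1, 1], [1, 1]]
    axial = [[alpha, 0], [-alpha, 0], [0, alpha], [0, -alpha]]
    return np.array(fact + axial + [[0, 0]] * n_center, float)

for alpha in (np.sqrt(2.0), 1.0):
    v = prediction_variance_on_circle(ccd(alpha), radius=1.0)
    print(f"alpha={alpha:.3f}: spread of scaled prediction variance = {np.ptp(v):.4f}")
```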




Journal ArticleDOI
TL;DR: This entry reviews Regression: A Second Course in Statistics.
Abstract: (1988). Regression: A Second Course in Statistics. Technometrics: Vol. 30, No. 1, pp. 130-131.

Journal ArticleDOI
TL;DR: In this paper, an index of rotatability for symmetrical second-order designs is proposed, which will enable the experimenter to obtain an immediate appreciation of the overall shape of specified variance contours.
Abstract: An index of rotatability is suggested that will enable the experimenter to obtain an immediate appreciation of the overall shape of specified variance contours for symmetrical second-order designs. Values of the index are tabulated for the central composite designs for two to eight factors. Three designs of Roquemore (1976) are assessed via the index. Comparisons are made with an entirely different index suggested by Khuri (1988). It is concluded that both indexes are useful and sensibly consistent.

Journal ArticleDOI
TL;DR: In this paper, a renewal-process model that accommodates both heterogeneity across units and decreasing hazard rates was developed to estimate the risk following repair of a repairable machine, and the model was illustrated using both Proschan's air-conditioner data and data on a U.S. Navy radar.
Abstract: Different copies of a repairable machine, or units, often exhibit different failure rates. If this heterogeneity is ignored, a statistical model of the time until failure may estimate a spurious decreasing hazard rate, resulting in incorrect inferences concerning the risk following repair. This article develops a renewal-process model that accommodates both heterogeneity across units and decreasing hazard rates. Failure times for each unit are assumed Weibull, and the Weibull scale parameter is assumed to vary across units according to a gamma distribution. The model is illustrated using both Proschan's air-conditioner data and data on a U.S. Navy radar.
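
A short simulation sketch of the model described, with illustrative parameter values: every unit shares a Weibull shape parameter, but each unit's own rate is drawn from a gamma distribution, so pooled failure data can suggest a decreasing hazard even though each unit's hazard is increasing.

```python
import numpy as np

def simulate_mixed_weibull(n_units, n_failures, shape, gamma_a, gamma_b, seed=0):
    """Each unit gets its own Weibull rate lambda_i ~ Gamma(gamma_a, scale=gamma_b);
    given lambda_i, times between failures have survival exp(-lambda_i * t**shape)."""
    rng = np.random.default_rng(seed)
    lam = rng.gamma(gamma_a, gamma_b, size=n_units)
    e = rng.exponential(size=(n_units, n_failures))
    return (e / lam[:, None]) ** (1.0 / shape)       # times between failures

# units with increasing individual hazards (shape > 1) but strong heterogeneity
times = simulate_mixed_weibull(200, 30, shape=1.3, gamma_a=0.8, gamma_b=2.0)
print("median time between failures:", np.median(times).round(2))
```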

Journal ArticleDOI
TL;DR: In this paper, a lower confidence bound is obtained for Pr(Y > X | z1, z2), where X and Y are independent normal variables, with explanatory variables z1 and z2, respectively.
Abstract: A lower confidence bound is obtained for Pr(Y > X | z1, z2), where X and Y are independent normal variables, with explanatory variables z1 and z2, respectively. For equal residual variances, an exact solution is obtained, but for the unequal variance case, an approximate lower confidence bound is developed. Examples of the use of these procedures are given.
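
A sketch of the plug-in estimate behind such a bound, for the equal-residual-variance case: with independent normal X and Y whose means are linear in z1 and z2, Pr(Y > X | z1, z2) = Φ((μY(z2) − μX(z1)) / √(2σ²)). The code computes this point estimate from fitted regressions on invented data; it does not reproduce the article's exact lower confidence bound.

```python
import numpy as np
from scipy.stats import norm

def prob_y_exceeds_x(zx, x, zy, y, z1, z2):
    """Plug-in estimate of Pr(Y > X | z1, z2) for independent normal X, Y with
    simple linear regressions on zx and zy and a common residual variance."""
    bx = np.polyfit(zx, x, 1)                  # slope, intercept for X on zx
    by = np.polyfit(zy, y, 1)                  # slope, intercept for Y on zy
    rx = x - np.polyval(bx, zx)
    ry = y - np.polyval(by, zy)
    s2 = (rx @ rx + ry @ ry) / (len(x) + len(y) - 4)   # pooled residual variance
    mu_x, mu_y = np.polyval(bx, z1), np.polyval(by, z2)
    return norm.cdf((mu_y - mu_x) / np.sqrt(2.0 * s2))

# invented stress (X) and strength (Y) data with their own explanatory variables
rng = np.random.default_rng(8)
zx, zy = rng.uniform(0, 10, 40), rng.uniform(0, 10, 40)
x = 5.0 + 0.5 * zx + rng.normal(scale=1.0, size=40)
y = 8.0 + 0.4 * zy + rng.normal(scale=1.0, size=40)
print(prob_y_exceeds_x(zx, x, zy, y, z1=6.0, z2=6.0))
```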