# Papers in *Technometrics* (1992)

••

Bell Labs^{1}

TL;DR: Zero-inflated Poisson (ZIP) regression is a model for count data with excess zeros, which assumes that with probability p the only possible observation is 0, and with probability 1 − p, a Poisson(λ) random variable is observed.

Abstract: Zero-inflated Poisson (ZIP) regression is a model for count data with excess zeros. It assumes that with probability p the only possible observation is 0, and with probability 1 − p, a Poisson(λ) random variable is observed. For example, when manufacturing equipment is properly aligned, defects may be nearly impossible. But when it is misaligned, defects may occur according to a Poisson(λ) distribution. Both the probability p of the perfect, zero-defect state and the mean number of defects λ in the imperfect state may depend on covariates. Sometimes p and λ are unrelated; other times p is a simple function of λ, such as p = 1/(1 + λ^{τ}) for an unknown constant τ. In either case, ZIP regression models are easy to fit. The maximum likelihood estimates (MLEs) are approximately normal in large samples, and confidence intervals can be constructed by inverting likelihood ratio tests or using the approximate normality of the MLEs. Simulations suggest that the confidence intervals based on likelihood ratio test...

3,205 citations
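The ZIP likelihood described above is simple enough to maximize directly. A minimal sketch in Python (not the authors' implementation; the constant-p, constant-λ case, with illustrative simulated data) fits the model by maximum likelihood:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

# Simulate ZIP data: with probability p the count is 0; otherwise Poisson(lam).
rng = np.random.default_rng(0)
p_true, lam_true, n = 0.3, 2.5, 5000
perfect = rng.random(n) < p_true
y = np.where(perfect, 0, rng.poisson(lam_true, n))

def zip_nll(theta):
    """Negative log-likelihood; theta holds logit(p) and log(lam)."""
    p = 1.0 / (1.0 + np.exp(-theta[0]))
    lam = np.exp(theta[1])
    ll = np.where(y == 0,
                  np.log(p + (1 - p) * np.exp(-lam)),  # structural or Poisson zero
                  np.log(1 - p) - lam + y * np.log(lam) - gammaln(y + 1))
    return -ll.sum()

res = minimize(zip_nll, x0=[0.0, 0.0])
p_hat = 1.0 / (1.0 + np.exp(-res.x[0]))
lam_hat = np.exp(res.x[1])
```

Covariates enter by replacing the two constants with linear predictors, which is the regression version the abstract describes.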

••

TL;DR: In this article, a multivariate extension of the exponentially weighted moving average (EWMA) control chart is presented, and guidelines are given for designing this easy-to-implement multivariate procedure.

Abstract: A multivariate extension of the exponentially weighted moving average (EWMA) control chart is presented, and guidelines are given for designing this easy-to-implement multivariate procedure. A comparison shows that the average run length (ARL) performance of this chart is similar to that of multivariate cumulative sum (CUSUM) control charts in detecting a shift in the mean vector of a multivariate normal distribution. As with Hotelling's χ² and multivariate CUSUM charts, the ARL performance of the multivariate EWMA chart depends on the underlying mean vector and covariance matrix only through the value of the noncentrality parameter. Worst-case scenarios show that Hotelling's χ² charts should always be used in conjunction with multivariate CUSUM and EWMA charts to avoid potential inertia problems. Examples are given to illustrate the use of the proposed procedure.

1,097 citations
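The chart statistic can be sketched directly from this description. A toy implementation in Python (function name and demo data are illustrative, not from the article; the exact-covariance expression is the standard one for a multivariate EWMA):

```python
import numpy as np

def mewma_t2(x, sigma, lam=0.2):
    """T^2 statistics of a multivariate EWMA chart for observations x (n x p)."""
    n, p = x.shape
    z = np.zeros(p)
    t2 = np.empty(n)
    for i in range(n):
        z = lam * x[i] + (1 - lam) * z
        # Exact covariance of Z_i; tends to lam/(2-lam) * sigma as i grows.
        cov_z = lam / (2 - lam) * (1 - (1 - lam) ** (2 * (i + 1))) * sigma
        t2[i] = z @ np.linalg.solve(cov_z, z)
    return t2

# In-control bivariate normal data, then a mean shift halfway through.
rng = np.random.default_rng(1)
x = rng.normal(size=(50, 2))
x[25:] += 1.0
t2 = mewma_t2(x, np.eye(2))
```

In practice a signal is raised when the statistic exceeds a control limit chosen to achieve a target in-control ARL.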

••

TL;DR: This work models the output of the computer code as the realization of a stochastic process, allowing nonlinear and interaction effects to emerge without explicitly modeling such effects.

Abstract: Many scientific phenomena are now investigated by complex computer models or codes. Given the input values, the code produces one or more outputs via a complex mathematical model. Often the code is expensive to run, and it may be necessary to build a computationally cheaper predictor to enable, for example, optimization of the inputs. If there are many input factors, an initial step in building a predictor is identifying (screening) the active factors. We model the output of the computer code as the realization of a stochastic process. This model has a number of advantages. First, it provides a statistical basis, via the likelihood, for a stepwise algorithm to determine the important factors. Second, it is very flexible, allowing nonlinear and interaction effects to emerge without explicitly modeling such effects. Third, the same data are used for screening and building the predictor, so expensive runs are efficiently used. We illustrate the methodology with two examples, both having 20 input variables. I...

641 citations
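A bare-bones version of the "model the code output as a stochastic process" idea is an interpolating kriging predictor. The sketch below is not the authors' algorithm; the Gaussian correlation, the fixed `theta`, and the toy "expensive code" are illustrative assumptions:

```python
import numpy as np

def gp_predict(X, y, Xnew, theta=25.0, nugget=1e-8):
    """Kriging mean prediction with Gaussian correlation exp(-theta * d^2)."""
    def corr(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-theta * d2)
    R = corr(X, X) + nugget * np.eye(len(X))  # small jitter for numerical stability
    return corr(Xnew, X) @ np.linalg.solve(R, y)

# Treat sin(2*pi*x) as an "expensive code" observed at 8 design points.
X = np.linspace(0.0, 1.0, 8)[:, None]
y = np.sin(2 * np.pi * X).ravel()
pred_train = gp_predict(X, y, X)
```

The predictor interpolates the observed runs exactly (up to the jitter), which is what makes it a cheap stand-in for the code between design points.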

••

University of Waterloo^{1}, University of Wisconsin-Madison^{2}, National Institute of Standards and Technology^{3}, General Motors^{4}, DuPont^{5}, Virginia Tech^{6}, Imperial College London^{7}, Georgia Institute of Technology^{8}, Bell Labs^{9}

TL;DR: A group of practitioners and researchers discuss the role of parameter design and Taguchi's methodology for implementing it, and the integration of parameter-design principles with well-established statistical techniques.

Abstract: It is more than a decade since Genichi Taguchi's ideas on quality improvement were introduced in the United States. His parameter-design approach for reducing variation in products and processes has generated a great deal of interest among both quality practitioners and statisticians. The statistical techniques used by Taguchi to implement parameter design have been the subject of much debate, however, and there has been considerable research aimed at integrating the parameter-design principles with well-established statistical techniques. On the other hand, Taguchi and his colleagues feel that these research efforts by statisticians are misguided and reflect a lack of understanding of the engineering principles underlying Taguchi's methodology. This panel discussion provides a forum for a technical discussion of these diverse views. A group of practitioners and researchers discuss the role of parameter design and Taguchi's methodology for implementing it. The topics covered include the importance of vari...

633 citations

••

TL;DR: A review of the book Time Series—A Biostatistical Introduction.

Abstract: (1992). Time Series—A Biostatistical Introduction. Technometrics: Vol. 34, No. 2, pp. 229-230.

551 citations

••

TL;DR: Rationales for process monitoring using some of the techniques of statistical process control and for feedback adjustment using some techniques associated with automatic process control are explored, and issues that sometimes arise are discussed.

Abstract: Rationales for process monitoring using some of the techniques of statistical process control and for feedback adjustment using some techniques associated with automatic process control are explored, and issues that sometimes arise are discussed. The importance of some often unstated assumptions are illustrated. Minimum-cost feedback schemes are discussed for some simple, but practically interesting, models.

414 citations

••

TL;DR: A review of the book Statistical Analysis of Reliability and Life-Testing Models.

Abstract: (1992). Statistical Analysis of Reliability and Life-Testing Models. Technometrics: Vol. 34, No. 4, pp. 486-487.

352 citations

••

TL;DR: In this article, the authors consider a fatigue failure model in which accumulated decay is governed by a continuous Gaussian process W(y) whose distribution changes at certain stress change points t_{0} < t_{1} < ⋯

Abstract: Variable-stress accelerated life testing trials are experiments in which each of the units in a random sample of units of a product is run under increasingly severe conditions to get information quickly on its life distribution. We consider a fatigue failure model in which accumulated decay is governed by a continuous Gaussian process W(y) whose distribution changes at certain stress change points t_{0} < t_{1} < ⋯

••

TL;DR: A review of the book Forecasting, Structural Time Series and the Kalman Filter.

Abstract: (1992). Forecasting, Structural Time Series and the Kalman Filter. Technometrics: Vol. 34, No. 4, pp. 496-497.

••

TL;DR: A review of the book Large Deviation Techniques in Decision, Simulation and Estimation.

Abstract: (1992). Large Deviation Techniques in Decision, Simulation and Estimation. Technometrics: Vol. 34, No. 1, pp. 120-121.

••

TL;DR: This methodology seeks to exploit the strengths of both automatic control and statistical process control, two fields that have developed in relative isolation from one another.

Abstract: The goal of algorithmic statistical process control is to reduce predictable quality variations using feedback and feedforward techniques and then monitor the complete system to detect and remove unexpected root causes of variation. This methodology seeks to exploit the strengths of both automatic control and statistical process control (SPC), two fields that have developed in relative isolation from one another. Recent experience with the control and monitoring of intrinsic viscosity from a particular General Electric polymerization process has led to a better understanding of how SPC and feedback control can be united into a single system. Building on past work by MacGregor, Box, Astrom, and others, the article covers the application from statistical identification and modeling to implementing feedback control and final SPC monitoring. Operational and technical issues that arose are examined, and a general approach is outlined.

••

TL;DR: In this article, case-deletion diagnostics for detecting influential observations in mixed linear models are proposed for both fixed effects and variance components, and the methods are illustrated using examples.

Abstract: Mixed linear models arise in many areas of application. Standard estimation methods for mixed models are sensitive to bizarre observations. Such influential observations can completely distort an analysis and lead to inappropriate actions and conclusions. We develop case-deletion diagnostics for detecting influential observations in mixed linear models. Diagnostics for both fixed effects and variance components are proposed. Computational formulas are given that make the procedures feasible. The methods are illustrated using examples.

••

TL;DR: In this paper, a graph-aided method is proposed for fractional factorial experiment planning, where prior knowledge may suggest that some interactions are potentially important and should therefore be estimated free of the main effects.

Abstract: In planning a fractional factorial experiment prior knowledge may suggest that some interactions are potentially important and should therefore be estimated free of the main effects. In this article, we propose a graph-aided method to solve this problem for two-level experiments. First, we choose the defining relations for a 2^{n–k} design according to a goodness criterion such as the minimum aberration criterion. Then we construct all of the nonisomorphic graphs that represent the solutions to the problem of simultaneous estimation of main effects and two-factor interactions for the given defining relations. In each graph a vertex represents a factor and an edge represents the interaction between the two factors. For the experiment planner, the job is simple: Draw a graph representing the specified interactions and compare it with the list of graphs obtained previously. Our approach is a substantial improvement over Taguchi's linear graphs.
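The defining-relation step can be made concrete with a small example. The sketch below (a hypothetical illustration, not the article's graph algorithm) builds a 2^{4−1} design from the defining relation I = ABCD:

```python
import itertools
import numpy as np

# Full 2^3 design in factors A, B, C, coded -1/+1.
base = np.array(list(itertools.product([-1, 1], repeat=3)))
# Defining relation I = ABCD means column D is generated as the product ABC.
d = base[:, 0] * base[:, 1] * base[:, 2]
design = np.column_stack([base, d])  # 8 runs, 4 factors, resolution IV
```

Under this relation every two-factor interaction is aliased with another (e.g., AB = CD); which interactions can be estimated simultaneously is exactly the structure the article's graphs encode.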

••

TL;DR: A reference record for a work on geostatistics.

Abstract: Keywords: geostatistics; statistics; land-use planning.

••

TL;DR: The Plackett and Burman designs also have interesting projective properties, knowledge of which allows the experimenter to follow up an initial Plackett and Burman design with runs that increase the initial resolution for the factors that appear to matter and thus permit efficient separation of effects of interest.

Abstract: The projection properties of the 2^{q–p}_{R} fractional factorials are well known and have been used effectively in a number of published examples of experimental investigations. The Plackett and Burman designs also have interesting projective properties, knowledge of which allows the experimenter to follow up an initial Plackett and Burman design with runs that increase the initial resolution for the factors that appear to matter and thus permit efficient separation of effects of interest. Projections of designs into 2–5 dimensions are discussed, and the 12-run case is given in detail. A numerical example illustrates the practical uses of these projections.
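The 12-run Plackett and Burman design discussed in detail above is generated by cyclically shifting a single row and appending a row of minus signs. The sketch below (standard construction, not code from the article) builds it and checks the column orthogonality that the projection arguments rely on:

```python
import numpy as np

# Standard generator row for the 12-run Plackett-Burman design.
gen = np.array([1, 1, -1, 1, 1, 1, -1, -1, -1, 1, -1])
rows = [np.roll(gen, i) for i in range(11)]       # 11 cyclic shifts
design = np.vstack(rows + [-np.ones(11, dtype=int)])  # plus a row of -1s: 12 runs, 11 factors
```

Projecting this array onto any small subset of columns (2–5 factors, as in the abstract) gives the follow-up designs the article analyzes.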

••

TL;DR: Some bootstrap resampling methods are reviewed, emphasizing applications through illustrations with some real data, and special attention is given to regression, problems with dependent data, and choosing tuning parameters for optimal performance.

Abstract: Bootstrap resampling methods have emerged as powerful tools for constructing inferential procedures in modern statistical data analysis. Although these methods depend on the availability of fast, inexpensive computing, they offer the potential for highly accurate methods of inference. Moreover, they can even eliminate the need to impose a convenient statistical model that does not have a strong scientific basis. In this article, we review some bootstrap methods, emphasizing applications through illustrations with some real data. Special attention is given to regression, problems with dependent data, and choosing tuning parameters for optimal performance.
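As a minimal concrete instance of the resampling idea (illustrative simulated numbers, not data from the article), a percentile bootstrap interval for a mean:

```python
import numpy as np

rng = np.random.default_rng(42)
x = rng.exponential(scale=2.0, size=200)  # a skewed sample

# Resample with replacement and recompute the statistic each time.
boot_means = np.array([rng.choice(x, size=x.size, replace=True).mean()
                       for _ in range(2000)])
lo, hi = np.percentile(boot_means, [2.5, 97.5])  # 95% percentile interval
```

The same recipe applies to statistics with no convenient formula for a standard error, which is the appeal the abstract describes.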

••

TL;DR: This book examines log-linear models for contingency tables, using previous knowledge of analysis of variance and regression to motivate and explicate their use; it is a textbook that can be used at both higher and lower levels.

Abstract: This book examines log-linear models for contingency tables. It uses previous knowledge of analysis of variance and regression to motivate and explicate the use of log-linear models. It is a textbook primarily directed at advanced Master's degree students in statistics but can be used at both higher and lower levels. Outlines for introductory, intermediate, and advanced courses are given in the preface. All the fundamental statistics for analyzing data using log-linear models are given.

••

TL;DR: In this article, the authors consider situations in which X and Y are independent and have normal distributions or can be transformed to normality, and derive test statistics whose exact p values are represented as one-dimensional integrals.

Abstract: We consider the stress-strength problem in which a unit of strength X is subjected to environmental stress Y. An important problem in stress-strength reliability concerns testing hypotheses about the reliability parameter R = P[X > Y]. In this article, we consider situations in which X and Y are independent and have normal distributions or can be transformed to normality. We do not require the two population variances to be equal. Our approach leads to test statistics which are exact p values that are represented as one-dimensional integrals. On the basis of the p value, one can also construct approximate confidence intervals for the parameter of interest. We also present an extension of the testing procedure to the case in which both strength and stress depend on covariates. For comparative purposes, the Bayesian solution to the problem is also presented. We use data from a rocket-motor experiment to illustrate the procedure.
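For independent normals with known parameters the reliability has the closed form R = Φ((μ_X − μ_Y)/√(σ_X² + σ_Y²)); the article's contribution is exact inference when the parameters are estimated. A sketch of the closed form only (not the paper's p-value integrals; function name is illustrative):

```python
import numpy as np
from scipy.stats import norm

def reliability(mu_x, sd_x, mu_y, sd_y):
    """R = P[X > Y] for independent normal strength X and stress Y."""
    return norm.cdf((mu_x - mu_y) / np.hypot(sd_x, sd_y))

# Strength well above stress gives high reliability.
r = reliability(10.0, 1.0, 8.0, 1.0)
```

When stress and strength have the same mean, R = 0.5 regardless of the variances, which is a quick sanity check on any implementation.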

••

TL;DR: In this paper, several options of bivariate boxplot-type constructions are discussed, including both elliptic and asymmetric plots, and alternative constructions compared in terms of efficiency of the relevant parameters.

Abstract: The boxplot has proven to be a very useful tool for summarizing univariate data. Several options of bivariate boxplot-type constructions are discussed. These include both elliptic and asymmetric plots. An inner region contains 50% of the data, and a fence identifies potential outliers. Such a robust plot shows location, scale, correlation, and a resistant regression line. Alternative constructions are compared in terms of efficiency of the relevant parameters. Additional properties are given and recommendations made. Emphasis is given to the bivariate biweight M estimator. Several practical examples illustrate that standard least squares ellipsoids can give graphically misleading summaries.

••

TL;DR: An alternative to the smoothed linear fitting method of McDonald and Owen is developed, based on detecting discontinuities by comparing, at any given position, three smooth fits.

Abstract: An alternative procedure is developed to the smoothed linear fitting method of McDonald and Owen. The procedure is based on the detection of discontinuities by comparing, at any given position, three smooth fits. Diagnostics are used to detect discontinuities in the regression function itself (edge detection) or in its first derivative (peak detection). An application in electron microscopy is discussed.

••

TL;DR: In this paper, the effects of near orthogonality on estimation efficiency and analysis are studied, and nearly orthogonal arrays with mixed levels are constructed for 12, 18, 20, and 24 runs.

Abstract: In running a factorial experiment, it may be desirable to use an orthogonal array with different (mixed) numbers of factor levels. Because of the orthogonality requirement, such arrays may have a large run size. By slightly sacrificing the orthogonality requirement, we can obtain nearly orthogonal arrays with economic run size. Some general methods for constructing such arrays are given. For 12, 18, 20, and 24 runs, many orthogonal arrays and nearly orthogonal arrays with mixed levels are constructed and tabulated. Effects of near orthogonality on estimation efficiency and analysis are studied.