
Showing papers in "Technometrics in 1986"


Journal ArticleDOI
TL;DR: A review of the textbook An Introduction to Multivariate Statistical Analysis.
Abstract: (1986). An Introduction to Multivariate Statistical Analysis. Technometrics: Vol. 28, No. 2, pp. 180-181.

932 citations


Journal ArticleDOI
TL;DR: The choice of control chart parameters (sample size, sampling interval, and control limits) is considered from an economic point of view: a general process model is posed, the hourly cost function is derived, and numerical techniques are used to minimize it.
Abstract: The choice of control chart parameters—sample size, sampling interval, and control limits—is considered from an economic point of view. A general process model is considered, and the hourly cost function is derived. This cost function simplifies when the recorded statistics are independent. Numerical techniques to minimize this cost function are discussed, and sensitivity analyses are performed. An example illustrates the potential savings of this technique of designing control charts.
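
The abstract does not reproduce the cost function itself; the sketch below, a hedged illustration only, minimizes a simplified Duncan-style hourly cost for an X-bar chart over the three chart parameters by grid search. All cost constants, the shift rate lam, and the shift size delta are hypothetical stand-ins, not values from the paper.

```python
import itertools
import numpy as np
from scipy.stats import norm

def hourly_cost(n, h, k, lam=0.02, delta=1.0,
                a1=1.0, a2=0.1, c_false=50.0, c_off=100.0):
    """Simplified Duncan-style hourly cost of an X-bar chart (illustrative).

    n: sample size, h: hours between samples, k: control-limit width in
    sigma units, lam: process shifts per hour, delta: shift size in sigma.
    """
    alpha = 2 * norm.sf(k)                           # false alarms per sample
    power = norm.sf(k - delta * np.sqrt(n)) + norm.cdf(-k - delta * np.sqrt(n))
    sampling = (a1 + a2 * n) / h                     # sampling cost per hour
    false_alarm = c_false * alpha / h                # false-alarm cost per hour
    off_target = c_off * lam * h / power             # expected off-target loss
    return sampling + false_alarm + off_target

# crude grid search over sample size, sampling interval, and control limits
grid = itertools.product(range(2, 11),               # n
                         np.arange(0.5, 8.01, 0.5),  # h
                         np.arange(2.0, 3.51, 0.1))  # k
best = min(grid, key=lambda p: hourly_cost(*p))
print("n, h, k =", best, "-> cost/hour =", round(hourly_cost(*best), 2))
```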

591 citations


Journal ArticleDOI
TL;DR: A generalized simulated annealing method has been developed and applied to the optimization of functions (possibly constrained) having many local extrema, and is used to solve a problem analyzed by Bates (1983), for which an improved optimum is identified.
Abstract: A generalized simulated annealing method has been developed and applied to the optimization of functions (possibly constrained) having many local extrema. The method is illustrated in some difficult pedagogical examples and used to solve a problem analyzed by Bates (Technometrics, 25, pp. 373–376, 1983) for which we identify an improved optimum. The sensitivity of the solution to changes in the constraints and in other specifications of the problem is analyzed and discussed.
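
As an illustration of the basic annealing idea (the standard algorithm, not the authors' generalized method), the following sketch minimizes a one-dimensional function with many local minima; the test function, cooling schedule, and proposal scale are arbitrary choices.

```python
import math
import random

def f(x):
    # many local minima; the global minimum is near x ~ -0.5
    return x * x + 10 * math.sin(3 * x)

def anneal(x=5.0, temp=10.0, cooling=0.999, steps=20000, scale=0.5):
    fx = f(x)
    best, fbest = x, fx
    for _ in range(steps):
        y = x + random.gauss(0, scale)            # random neighbour
        fy = f(y)
        # accept downhill moves always, uphill moves with Boltzmann probability
        if fy < fx or random.random() < math.exp((fx - fy) / temp):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        temp *= cooling                            # geometric cooling
    return best, fbest

random.seed(1)
print(anneal())
```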

554 citations


Journal ArticleDOI
TL;DR: A more formal analysis is presented to supplement normal-probability plots of effect estimates and hence to facilitate the use of unreplicated experimental arrangements.
Abstract: Loss of markets to Japan has recently caused attention to return to the enormous potential that experimental design possesses for the improvement of product design, for the improvement of the manufacturing process, and hence for improvement of overall product quality. In the screening stage of industrial experimentation it is frequently true that the “Pareto Principle” applies; that is, a large proportion of process variation is associated with a small proportion of the process variables. In such circumstances of “factor sparsity,” unreplicated fractional designs and other orthogonal arrays have frequently been effective when used as a screen for isolating preponderant factors. A useful graphical analysis due to Daniel (1959) employs normal probability plotting. A more formal analysis is presented here, which may be used to supplement such plots and hence to facilitate the use of these unreplicated experimental arrangements.
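
The formal analysis itself is not given in the abstract, but the graphical analysis it supplements, Daniel's normal-probability plotting of effect estimates, can be sketched as follows for a full 2^3 design; the response values are made up for illustration.

```python
import itertools
import numpy as np
from scipy.stats import norm

# full 2^3 design; the response values are made up for illustration
runs = np.array(list(itertools.product([-1, 1], repeat=3)))  # columns A, B, C
y = np.array([60., 72., 54., 68., 52., 83., 45., 80.])

# contrast columns and labels for all 7 effects (mains and interactions)
labels, cols = [], []
for r in (1, 2, 3):
    for subset in itertools.combinations(range(3), r):
        labels.append("".join("ABC"[i] for i in subset))
        cols.append(np.prod(runs[:, list(subset)], axis=1))
effects = np.array([c @ y / 4 for c in cols])   # effect = 2 * contrast / N

# Daniel's plot pairs the ordered |effects| with half-normal quantiles;
# effects far off the line through the bulk are candidates for real effects
order = np.argsort(np.abs(effects))
m = len(effects)
q = norm.ppf(0.5 + 0.5 * (np.arange(1, m + 1) - 0.5) / m)
for lbl, e, qq in zip([labels[i] for i in order], np.abs(effects)[order], q):
    print(f"{lbl:>3}: |effect| = {e:6.2f}  half-normal quantile = {qq:4.2f}")
```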

528 citations


Journal ArticleDOI
Jerry M. Mendel

285 citations


Journal ArticleDOI
Josef Schmee

282 citations


Journal ArticleDOI
TL;DR: This reference describes the methods used to forecast loads on public utility systems for lead times of up to approximately 24 hours, including automatic adaptive, univariate, and multivariate methods.
Abstract: This reference describes the methods used to forecast loads on public utility systems, featuring modern documentation and forecasting techniques for lead times of up to approximately 24 hours. Methods include automatic adaptive, univariate, and multivariate. The volume is divided into three parts: an introduction to the general economic and operational decision-making contexts and methods of short-term forecasting; six contributed insights into a wide range of model developments in load forecasting; and further insights into adjacent fields, including essays on the econometric perspective to load forecasting and the very short-term implications of state-estimation for data validation.

252 citations


Journal ArticleDOI
TL;DR: A discussion of the weaknesses of the economic design of control charts.
Abstract: (1986). Weaknesses of The Economic Design of Control Charts. Technometrics: Vol. 28, No. 4, pp. 408-409.

248 citations


Journal ArticleDOI
TL;DR: A review of the volume Artificial Intelligence and Statistics.
Abstract: (1989). Artificial Intelligence and Statistics. Technometrics: Vol. 31, No. 1, pp. 130-130.

245 citations


Journal ArticleDOI
TL;DR: The book review section generally accepts for review only those books whose content and level reflect the general editorial policy of Technometrics.
Abstract: The book review section generally accepts for review only those books whose content and level reflect the general editorial policy of Technometrics. Publishers are invited to send books for review to Eric R. Ziegel, Amoco Research Center, P.O. Box 400, Naperville, IL 60566. Please include the price of the book. The opinions expressed in this section are those of the reviewers and do not necessarily reflect those of the editorial staff or the sponsoring societies.

216 citations


Journal ArticleDOI
TL;DR: It is shown how it is sometimes possible to use unreplicated fractional designs to identify factors that affect variance in addition to those that affect the mean.
Abstract: A distinguishing feature of Japanese quality improvement techniques is an emphasis on the designing of quality into the product and into the process that makes the product. In particular, experimental design is used to discover conditions that minimize variance and appropriately control the mean level. The direct estimation of variance by replication at each of the design points, however, can be excessively expensive in experimental runs. In this article we show how it is sometimes possible to use unreplicated fractional designs to identify factors that affect variance in addition to those that affect the mean.
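
A minimal sketch of one way to screen for dispersion effects after fitting a location model, in the spirit of the abstract: the design, responses, and the log-ratio-of-residual-variances statistic below are illustrative assumptions, not the authors' exact procedure.

```python
import itertools
import numpy as np

# unreplicated 2^3 design and made-up responses
X = np.array(list(itertools.product([-1, 1], repeat=3)), dtype=float)
y = np.array([47., 60., 51., 58., 49., 73., 45., 71.])

# fit a location model (intercept + main effects) and take residuals
M = np.column_stack([np.ones(len(y)), X])
beta, *_ = np.linalg.lstsq(M, y, rcond=None)
resid = y - M @ beta

# dispersion statistic for each factor: log ratio of residual variances
# at the + and - levels; a large |D| flags a possible dispersion effect
for j, name in enumerate("ABC"):
    s_plus = resid[X[:, j] > 0].var(ddof=1)
    s_minus = resid[X[:, j] < 0].var(ddof=1)
    print(f"{name}: D = {np.log(s_plus / s_minus):+.2f}")
```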

Journal ArticleDOI
TL;DR: In this article, the authors show that principal modes of variation consist of eigenfunctions of the process covariance function C(s, t) for continuous sample curves, and compare their results with principal components analysis of the same data.
Abstract: Analysis of a process with continuous sample curves can be carried out in a manner similar to principal components analysis of vector processes. By appropriate definition of a best linear model in the continuous case, we show that principal modes of variation consist of eigenfunctions of the process covariance function C(s, t). Procedures for estimation of these eigenfunctions from a finite sample of observed curves are given, and results are compared with principal components analysis of the same data.
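
A hedged sketch of the discretized version of this idea: on a fine common grid, the eigenvectors of the sample covariance matrix, suitably rescaled by the grid spacing, approximate the eigenfunctions of C(s, t). The simulated curves and their two planted modes of variation are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 101)                 # common grid for the sample curves
n = 200

# simulated curves: two known modes of variation plus noise (illustrative)
scores1 = rng.normal(0, 2.0, n)
scores2 = rng.normal(0, 0.7, n)
curves = (np.outer(scores1, np.sin(np.pi * t))
          + np.outer(scores2, np.cos(2 * np.pi * t))
          + rng.normal(0, 0.1, (n, len(t))))

# discretize C(s, t) as the sample covariance matrix of the centred curves
centred = curves - curves.mean(axis=0)
C = centred.T @ centred / (n - 1)

# eigenvectors of C, rescaled by the grid spacing, approximate the
# eigenfunctions of the covariance operator
vals, vecs = np.linalg.eigh(C)
dt = t[1] - t[0]
eigvals = vals[::-1] * dt                  # largest first
eigfuns = vecs[:, ::-1].T / np.sqrt(dt)    # unit L2 norm on [0, 1]
print("first four eigenvalues:", np.round(eigvals[:4], 3))
```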

Journal ArticleDOI
TL;DR: A new two-sided cumulative sum quality control scheme, developed specifically so that it can be generalized to a multivariate scheme, is proposed and compared with the conventional two-sided scheme, including fast initial response features; a Markov chain approximation is used to calculate the average run lengths.
Abstract: A new two-sided cumulative sum quality control scheme is proposed. The new scheme was developed specifically to be generalized to a multivariate cumulative sum quality control scheme. The multivariate version will be examined in a subsequent paper; this article evaluates the univariate version. A comparison of the conventional two-sided cumulative sum scheme and the proposed scheme indicates that the new scheme has slightly better properties (ratio of on-aim to off-aim average run lengths) than the conventional scheme. Steady state average run lengths are discussed. The new scheme and the conventional two-sided cumulative sum scheme have equivalent steady state average run lengths. Methods for implementing the fast initial response feature for the new cumulative sum scheme are given. A comparison of average run lengths for the conventional and proposed schemes with fast initial response features is also favorable to the new scheme. A Markov chain approximation is used to calculate the average run lengths ...
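
The paper's average run lengths come from a Markov chain approximation; as a simpler hedged check, the conventional two-sided CUSUM can be simulated directly. The reference value k, decision interval h, and the h/2 head start are standard textbook choices, not parameters taken from the paper.

```python
import numpy as np

def run_length(rng, shift=0.0, k=0.5, h=4.0, fir=False, cap=100000):
    """One run length of a conventional two-sided CUSUM on N(shift, 1) data."""
    hi = lo = h / 2 if fir else 0.0          # fast-initial-response head start
    for n in range(1, cap + 1):
        x = rng.normal(shift, 1.0)
        hi = max(0.0, hi + x - k)            # upper one-sided statistic
        lo = max(0.0, lo - x - k)            # lower one-sided statistic
        if hi > h or lo > h:
            return n
    return cap

rng = np.random.default_rng(2)
for shift in (0.0, 1.0):
    arl = np.mean([run_length(rng, shift) for _ in range(2000)])
    print(f"shift = {shift}:  estimated ARL ~ {arl:.0f}")
```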

Journal ArticleDOI
TL;DR: A coherent, unified set of rank-based statistical methods for analyzing data from various experimental designs, with statistical and stability properties assessed through asymptotic efficiency, influence curves, and tolerance values.
Abstract: A coherent, unified set of statistical methods, based on ranks, for analyzing data resulting from various experimental designs. Uses MINITAB, a statistical computing system for the implementation of the methods. Assesses the statistical and stability properties of the methods through asymptotic efficiency and influence curves and tolerance values. Includes exercises and problems.

Journal ArticleDOI
TL;DR: A procedure for limiting the influence of outliers in the mixed linear model on estimates of the model parameters is described: the model effects are estimated by augmenting the original observations with auxiliary observations that contain the prior information represented by the variances.
Abstract: Outliers may occur with respect to any of the random components in the mixed linear model. A procedure for limiting the influence of these outliers on the estimates of the model parameters is described. Given the variances or estimates of them, the model effects are estimated by augmenting the original observations with auxiliary observations that contain the prior information represented by the variances. Large residuals among either the original or the auxiliary observations are interpreted as outlying random errors or outlying random effects, as appropriate, and Winsorized. The robust estimation of the variances is obtained by modifying the defining equations for the restricted maximum likelihood estimates under normality along the lines of Huber's proposal 2. A numerical example illustrates the use of the methodology, both as a diagnostic and as an estimation tool.
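
The full mixed-model procedure is involved; as a minimal sketch of the Winsorizing step alone, here is a Huber-type iteration for a simple location model. The tuning constant c = 1.345 and the MAD-based scale are common defaults, not details from the paper.

```python
import numpy as np

def winsorized_mean(y, c=1.345, iters=50, tol=1e-8):
    """Huber-type location estimate: large residuals are Winsorized at c*s."""
    mu = np.median(y)
    s = 1.4826 * np.median(np.abs(y - np.median(y)))   # MAD scale estimate
    for _ in range(iters):
        r = y - mu
        r_wins = np.clip(r, -c * s, c * s)             # Winsorize residuals
        step = r_wins.mean()
        mu += step
        if abs(step) < tol:
            break
    return mu

rng = np.random.default_rng(3)
y = np.concatenate([rng.normal(10, 1, 50), [25.0, 31.0]])  # two gross outliers
print("mean =", round(y.mean(), 2),
      " winsorized mean =", round(winsorized_mean(y), 2))
```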


Journal ArticleDOI
Vijayan N. Nair
TL;DR: Taguchi's accumulation analysis method is shown to have reasonable power for detecting important location effects but to be an unnecessarily complicated procedure; for detecting dispersion effects, it need not even be as powerful as Pearson's chi-squared test.
Abstract: This article deals with some techniques for analyzing ordered categorical data from industrial experiments for quality improvement. Taguchi's accumulation analysis method is shown to have reasonable power for detecting important location effects; however, it is an unnecessarily complicated procedure. For detecting dispersion effects, it need not even be as powerful as Pearson's chi-squared test. Instead, two simple and easy-to-use scoring schemes are suggested for identifying the location and dispersion effects separately. The techniques are illustrated on data from an experiment to optimize the process of forming contact windows in complementary metal-oxide semiconductor circuits.
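
The abstract does not specify the scores; as an illustrative sketch only (midrank location scores and absolute-deviation dispersion scores are common choices, not necessarily Nair's), each factor setting can be summarized like this:

```python
import numpy as np

# counts over ordered response categories (say, defect severity 1..4)
# at two settings of a factor; the numbers are made up
counts = {"A-": np.array([10, 25, 40, 25]),
          "A+": np.array([30, 20, 20, 30])}

total = counts["A-"] + counts["A+"]
cum = np.cumsum(total)
midrank = cum - (total - 1) / 2                  # midrank score per category
disp = np.abs(midrank - (total.sum() + 1) / 2)   # distance from the centre

for name, n in counts.items():
    loc = (n * midrank).sum() / n.sum()
    spread = (n * disp).sum() / n.sum()
    print(f"{name}: mean location score = {loc:6.1f}, "
          f"mean dispersion score = {spread:6.1f}")
```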

Journal ArticleDOI
TL;DR: A previously developed distribution (Cook and Johnson 1981) is generalized using Morgenstern's distribution to include more useful dependence structures; the new distribution provides better fits to a uranium survey data set, and the effectiveness of Atkinson's graphical aid for discriminating separate models is demonstrated.
Abstract: A distribution developed previously (Cook and Johnson 1981) has been generalized using Morgenstern's distribution to include more useful dependence structures. The new distribution provides better fits to a uranium survey data set. The effectiveness of Atkinson's (1982) graphical aid for discriminating separate models is also demonstrated in comparing the usual normal model with the new distribution having normal marginal distributions.
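
The generalized family itself is not given in the abstract; as a sketch of the Morgenstern-type dependence it builds on, here is a sampler for the Farlie-Gumbel-Morgenstern copula with normal marginal distributions. The value of theta and the marginals are illustrative.

```python
import numpy as np
from scipy.stats import norm

def fgm_sample(n, theta, rng):
    """Sample from the Farlie-Gumbel-Morgenstern copula, |theta| <= 1."""
    u = rng.uniform(size=n)
    w = rng.uniform(size=n)
    a = 1 + theta * (1 - 2 * u)              # conditional-CDF coefficient
    # invert the conditional CDF  v + theta*(1-2u)*v*(1-v) = w
    v = 2 * w / (a + np.sqrt(a * a - 4 * (a - 1) * w))
    return u, v

rng = np.random.default_rng(4)
u, v = fgm_sample(5000, theta=0.8, rng=rng)
x, y = norm.ppf(u), norm.ppf(v)              # normal marginal distributions
# with normal marginals the FGM correlation is theta/pi (about 0.25 here)
print("sample correlation:", round(np.corrcoef(x, y)[0, 1], 3))
```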

Journal ArticleDOI
TL;DR: In this article, the authors examined statistical inference for Pr(Y < X), where X and Y are independent normal variates with unknown means and variances, and the case of unequal variances is stressed.
Abstract: This article examines statistical inference for Pr(Y < X), where X and Y are independent normal variates with unknown means and variances. The case of unequal variances is stressed. X can be interpreted as the strength of a component subjected to a stress Y, and Pr(Y < X) is the component's reliability. Two approximate methods for obtaining confidence intervals and an approximate Bayesian probability interval are obtained. The actual coverage probabilities of these intervals are examined by simulation.
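
The paper's two approximate intervals are not reproduced in the abstract; a minimal sketch of the plug-in point estimate Pr(Y < X) = Φ((μ_X − μ_Y)/√(σ_X² + σ_Y²)) together with a bootstrap interval (a generic stand-in, not one of the paper's methods):

```python
import numpy as np
from scipy.stats import norm

def reliability(x, y):
    """Plug-in estimate of Pr(Y < X) for independent normal samples."""
    d = x.mean() - y.mean()
    return norm.cdf(d / np.hypot(x.std(ddof=1), y.std(ddof=1)))

rng = np.random.default_rng(5)
x = rng.normal(10.0, 2.0, 30)        # strength measurements
y = rng.normal(7.0, 3.0, 25)         # stress measurements (unequal variance)

boot = [reliability(rng.choice(x, x.size), rng.choice(y, y.size))
        for _ in range(4000)]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"Pr(Y < X) ~ {reliability(x, y):.3f} "
      f"(95% bootstrap CI {lo:.3f}-{hi:.3f})")
```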

Journal ArticleDOI
TL;DR: A family of smoothing algorithms that can produce discontinuous output are introduced that can be used for smoothing with edge detection in image processing and applied to sea surface temperature data where the discontinuities arise from changes in ocean currents.
Abstract: We introduce a family of smoothing algorithms that can produce discontinuous output. Unlike most commonly used smoothers, that tend to blur discontinuities in the data, this smoother can be used for smoothing with edge detection. We cite examples of other approaches to (two-dimensional) smoothing with edge detection in image processing, and apply our one-dimensional smoother to sea surface temperature data where the discontinuities arise from changes in ocean currents.
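
The authors' family of smoothers is not specified in the abstract; one simple smoother with the same flavor, a split-window rule that averages over whichever side of each point looks locally homogeneous, can be sketched as follows. The window half-width and the synthetic step data are illustrative.

```python
import numpy as np

def split_window_smooth(y, half=8):
    """At each point use the mean of the left or right window,
    whichever has the smaller variance, so jumps are not blurred."""
    n = len(y)
    out = np.empty(n)
    for i in range(n):
        left = y[max(0, i - half):i + 1]
        right = y[i:min(n, i + half + 1)]
        out[i] = left.mean() if left.var() <= right.var() else right.mean()
    return out

rng = np.random.default_rng(6)
t = np.arange(200)
truth = np.where(t < 120, 20.0, 23.5)          # a jump, e.g. a current change
y = truth + rng.normal(0, 0.8, t.size)
smooth = split_window_smooth(y)
print("jump preserved:", round(smooth[130] - smooth[110], 2), "(true 3.5)")
```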

Journal ArticleDOI
Colin L. Mallows
TL;DR: In this paper, a simple way of encouraging a nonlinear effect in a regression model to show itself in a partial residual (parres) plot (component-plus-residual plot) is to include a single quadratic term in the regression.
Abstract: A simple way of encouraging a nonlinear effect in a regression model to show itself in a partial residual (parres) plot (component-plus-residual plot) is to include a single quadratic term in the regression. It is computationally cheap to do this for each of the independent variables. Examples show that the resulting set of augmented partial residual plots gives insights that are not available from standard parres or added-variable plots.
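
A minimal sketch of the construction, assuming the augmented partial residual for x_j is the ordinary residual plus the fitted linear and quadratic terms in x_j; the data are made up, and plotting apr against x2 (e.g., with matplotlib) is left to taste.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100
x1 = rng.uniform(-2, 2, n)
x2 = rng.uniform(-2, 2, n)
y = 1.0 + 2.0 * x1 + x2 ** 2 + rng.normal(0, 0.3, n)   # nonlinear in x2

# augment the linear model with a single quadratic term in x2
M = np.column_stack([np.ones(n), x1, x2, x2 ** 2])
b, *_ = np.linalg.lstsq(M, y, rcond=None)
resid = y - M @ b

# augmented partial residual for x2: residual + linear + quadratic parts;
# plotted against x2 this reveals the curvature clearly
apr = resid + b[2] * x2 + b[3] * x2 ** 2
order = np.argsort(x2)
print(np.round(np.column_stack([x2[order], apr[order]])[:5], 2))
```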

Journal ArticleDOI
TL;DR: The binomial failure rate model can be used to estimate the rate of simultaneous failure of more than one component of a system; this review defines the model, shows how quantities of interest are estimated, and presents checks for lack of fit.
Abstract: The binomial failure rate model can be used to estimate the rate of simultaneous failure of more than one component of a system. This review article defines the model, shows how quantities of interest are estimated, and presents checks for lack of fit. Emphasis is on the statistical and computational ideas rather than on the details of the method.
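
As a worked illustration of the model's structure (the shock rate mu, per-component failure probability p, and component count m are hypothetical), a common-cause shock fails each of m components independently with probability p, so the rate of events failing exactly j components is mu * C(m, j) * p^j * (1 - p)^(m - j):

```python
from math import comb

mu, p, m = 0.5, 0.1, 4    # shocks/year, per-component failure prob, components

# binomial failure rate model: a shock fails each component independently
# with probability p, so the rate of events failing exactly j components is
for j in range(m + 1):
    rate = mu * comb(m, j) * p ** j * (1 - p) ** (m - j)
    print(f"rate of shocks failing exactly {j} of {m} components: "
          f"{rate:.4f}/yr")
```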

Journal ArticleDOI
TL;DR: Multivariate Statistical Methods, by Karson, is intended for a one-semester course in a business or applied statistics program and as such could be considered an introductory textbook in multivariate analysis, with numerical examples from the fields of business and economics.
Abstract: It was not too long ago when you could count all of the multivariate analysis textbooks on the fingers of one hand. With a combination of increased awareness of the need to treat multivariate problems correctly and the availability of software to do it, new books have been arriving fairly regularly on the scene; I now have the luxury of reviewing three at one time. As we shall see, these books are different from each other and to some extent are intended for different markets. Table 1 lists the proportion of each book that is allocated to each of a number of topics. (These do not total 100% because of rounding and some overlap of topics. They also do not include tables or references.) Multivariate Statistical Methods, by Karson, is intended for a one-semester course in a business or applied statistics program and as such could be considered an introductory textbook in multivariate analysis. The numerical examples are from the fields of business and economics. Many of these examples are worked in great detail with the neophyte in mind. The first two chapters are fairly short and are concerned with preliminaries, the first dealing with matrix algebra and the second with a review of univariate distributions and the concepts of estimation and hypothesis testing. The next two chapters take up distributions. The first of these sets the stage by discussing probability distributions and partial and multiple correlation coefficients. These are then used in a presentation of the multivariate normal distribution, including the distribution of quadratic forms. The second chapter deals with sampling distributions of the multivariate mean, covariance matrix, and correlation matrix, along with the maximum likelihood estimation of these quantities. Chapter 5 is the longest in the book and deals with tests of significance for means [including multivariate analysis of variance (MANOVA) and regression with multivariate response] and co…



Journal ArticleDOI
TL;DR: The authors compared three biased estimation and four subset selection regression techniques to least squares in a large-scale simulation and found that neither biased estimation nor subset selection demonstrated a consistent superiority over the other, excluding stepwise and principal component regression, both of which performed poorly.
Abstract: This study compared three biased estimation and four subset selection regression techniques to least squares in a large-scale simulation. The parameters relevant to a comparison of the techniques involved were systematically varied over wide ranges. A parameter of importance not used in previous major simulations of subset techniques, the proportion of independent variables in the data that were superfluous, was included. The major result is that neither biased estimation nor subset selection demonstrated a consistent superiority over the other, excluding stepwise and principal component regression, both of which performed poorly.

Journal ArticleDOI
TL;DR: In this article, upper and lower bounds are derived for the distribution of the run length N of both the one-sided and two-sided CUSUM schemes based upon a sequence of iterates.
Abstract: Upper and lower bounds are derived for the distribution of the run length N of both the one-sided and two-sided CUSUM schemes. Based upon a sequence of iterates Pr(N > 0), Pr(N > 1), …, bounds are constructed such that (m_n^-)^i Pr(N > n) ≤ Pr(N > n + i) ≤ (m_n^+)^i Pr(N > n) holds for all n and i = 1, 2, …, with constants 0 ≤ m_n^- ≤ m_n^+ ≤ 1 suitably chosen. The bounds converge monotonically in the sense that (m_n^-)^(i+1) Pr(N > n) ≤ (m_{n+1}^-)^i Pr(N > n + 1) and (m_n^+)^(i+1) Pr(N > n) ≥ (m_{n+1}^+)^i Pr(N > n + 1), and, under some mild and natural conditions, m_n^- and m_n^+ converge to a common limit. As a by-product, bounds are presented for the percentage points of the distribution function of N, for the average run length of the inspection scheme, for the standard deviation of N, and, finally, for the probability mass function of N. Some numerical results are displayed to demonstrate the efficiency of the method.