
Showing papers in "Technometrics in 1969"


Journal ArticleDOI
TL;DR: In this paper, a procedure for determining statistically whether the highest observation, lowest observation, highest and lowest observations, or more of the observations in the sample are statistical outliers is given.
Abstract: Procedures are given for determining statistically whether the highest observation, the lowest observation, the highest and lowest observations, the two highest observations, the two lowest observations, or more of the observations in the sample are statistical outliers. Both the statistical formulae and the application of the procedures to examples are given, thus representing a rather complete treatment of tests for outliers in single samples. This paper has been prepared primarily as an expository and tutorial article on the problem of detecting outlying observations in much experimental work. We cover only tests of significance in this paper.
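For orientation, the single-outlier statistic treated here is simple to compute; the sketch below (assuming SciPy, with a made-up sample) evaluates the studentized extreme deviate for the largest observation against a one-sided t-based critical value, rather than reproducing the paper's tables.

```python
# Sketch of a one-sided test for the largest observation (Grubbs-type statistic).
# Assumes an approximately normal sample; the critical value uses the standard
# t-based formula, not the tables given in the paper.
import numpy as np
from scipy import stats

def grubbs_max(x, alpha=0.05):
    x = np.asarray(x, dtype=float)
    n = len(x)
    g = (x.max() - x.mean()) / x.std(ddof=1)          # test statistic
    t = stats.t.ppf(1 - alpha / n, n - 2)             # one-sided critical t
    g_crit = (n - 1) / np.sqrt(n) * np.sqrt(t**2 / (n - 2 + t**2))
    return g, g_crit, g > g_crit

sample = [2.1, 2.3, 1.9, 2.2, 2.0, 2.4, 4.9]          # illustrative data only
print(grubbs_max(sample))                             # (statistic, critical value, reject?)
```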

3,551 citations


Journal ArticleDOI
Robert W. Kennard, L. A. Stone
TL;DR: A computer oriented method which assists in the construction of response surface type experimental plans takes into account constraints met in practice that standard procedures do not consider explicitly.
Abstract: A computer oriented method which assists in the construction of response surface type experimental plans is described. It takes into account constraints met in practice that standard procedures do not consider explicitly. The method is a sequential one and each step covers the experimental region uniformly. Applications to well-known situations are given to demonstrate the reasonableness of the procedure. Application to a “messy” design situation is given to demonstrate its novelty.
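The sequential, uniform-coverage step can be sketched as a maximin selection over a candidate grid; the code below is an illustrative reconstruction of that idea (the candidate grid and point count are arbitrary), not the authors' original program.

```python
# Maximin ("Kennard-Stone" style) sequential selection over a candidate set:
# start from the two most distant candidates, then repeatedly add the point
# whose minimum distance to the points already chosen is largest.
import numpy as np

def kennard_stone(candidates, n_points):
    X = np.asarray(candidates, dtype=float)
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)   # pairwise distances
    i, j = np.unravel_index(np.argmax(d), d.shape)
    chosen = [i, j]
    while len(chosen) < n_points:
        remaining = [k for k in range(len(X)) if k not in chosen]
        # distance of each remaining candidate to its nearest chosen point
        min_d = d[np.ix_(remaining, chosen)].min(axis=1)
        chosen.append(remaining[int(np.argmax(min_d))])
    return X[chosen]

grid = np.array([(a, b) for a in np.linspace(0, 1, 11) for b in np.linspace(0, 1, 11)])
print(kennard_stone(grid, 6))
```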

2,667 citations


Journal ArticleDOI
TL;DR: The problems of estimation and testing hypotheses regarding the parameters in the Weibull distribution are considered in this paper, where exact confidence intervals for the parameters based upon maximum likelihood estimators are presented.
Abstract: The problems of estimation and testing hypotheses regarding the parameters in the Weibull distribution are considered in this paper. The following results are given: 1. Exact confidence intervals for the parameters based upon maximum likelihood estimators are presented. 2. A table of unbiasing factors (depending upon sample size) for the maximum likelihood estimator of the shape parameter is given. 3. Tests of hypotheses regarding the parameters and the power of the test regarding the shape parameter are developed and presented. 4. Sample sizes at which large sample theory may be useful are presented.
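As a rough companion to these results, a minimal sketch of maximum likelihood estimation for the two-parameter Weibull is shown below; the profile equation for the shape parameter is solved numerically (the sample data are illustrative, and the paper's unbiasing factors and exact intervals are not reproduced).

```python
# Maximum likelihood fit of a two-parameter Weibull: solve the profile
# equation for the shape parameter c, then compute the scale b.
import numpy as np
from scipy.optimize import brentq

def weibull_mle(x):
    x = np.asarray(x, dtype=float)
    logx = np.log(x)
    def profile(c):                      # equals zero at the MLE of the shape
        xc = x**c
        return (xc * logx).sum() / xc.sum() - 1.0 / c - logx.mean()
    c_hat = brentq(profile, 0.01, 100.0)
    b_hat = ((x**c_hat).mean()) ** (1.0 / c_hat)
    return c_hat, b_hat

data = [72., 82., 97., 103., 113., 117., 126., 127., 127., 139., 154., 159.]
print(weibull_mle(data))                 # (shape, scale)
```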

402 citations


Journal ArticleDOI
TL;DR: In this paper, the numerical technique of the maximum likelihood method to estimate the parameters of the Gamma distribution is examined and the bias of the estimates is investigated numerically; the empirical result indicates that the bias of both parameter estimates produced by the maximum likelihood method is positive.
Abstract: The numerical technique of the maximum likelihood method to estimate the parameters of Gamma distribution is examined. A convenient table is obtained to facilitate the maximum likelihood estimation of the parameters and the estimates of the variance-covariance matrix. The bias of the estimates is investigated numerically. The empirical result indicates that the bias of both parameter estimates produced by the maximum likelihood method is positive.
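The numerical step involved can be sketched compactly, assuming SciPy: the shape estimate solves ln(a) − ψ(a) = ln(x̄) − mean(ln x), after which the scale is x̄/a. This is only an illustration and does not reproduce the paper's table or its bias study.

```python
# Gamma(shape=a, scale=b) maximum likelihood: solve ln(a) - digamma(a) = s,
# where s = ln(mean) - mean(ln x), then set b = mean / a.
import numpy as np
from scipy.special import digamma
from scipy.optimize import brentq

def gamma_mle(x):
    x = np.asarray(x, dtype=float)
    s = np.log(x.mean()) - np.log(x).mean()        # > 0 for non-degenerate data
    a_hat = brentq(lambda a: np.log(a) - digamma(a) - s, 1e-3, 1e4)
    return a_hat, x.mean() / a_hat                 # (shape, scale)

rng = np.random.default_rng(1)
sample = rng.gamma(shape=2.5, scale=1.7, size=200)
print(gamma_mle(sample))
```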

271 citations


Journal ArticleDOI
TL;DR: In this paper, a number of formulae expressing E(i, n) in terms of M(j), j ≤ n, are developed for order statistics of a random sample of size n.
Abstract: Let X1n ≤ X2n ≤ … ≤ Xnn be the order statistics of a random sample of size n. For any integrable function g(x) define E(i, n) = E(g(Xin)) and M(n) = E(1, n) = E(g(X1n)). A number of formulae expressing E(i, n) in terms of M(j), j ≤ n, are developed. These results are applied to obtain the means and variances of the order statistics of a log-Weibull distribution (F(x) = 1 − exp(−exp x)). Tables of these means and variances are given for 1 ≤ i ≤ n, n = 1 (1) 50 (5) 100. The computations were made using a set of 100 decimal place logarithms of integers. Examples of the use of these tables in obtaining weighted least squares estimates from censored samples from a Weibull distribution are also given.
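For orientation, the tabulated quantities can also be checked by direct numerical integration of the order-statistic density of the log-Weibull distribution; the short sketch below (assuming SciPy) computes the mean of the i-th order statistic this way rather than through the recurrence formulae developed in the paper.

```python
# Mean of the i-th order statistic (of n) from the log-Weibull distribution
# F(x) = 1 - exp(-e^x), by direct numerical integration of its density.
import numpy as np
from scipy.integrate import quad
from scipy.special import comb

def logweibull_order_mean(i, n):
    F = lambda x: 1.0 - np.exp(-np.exp(x))
    f = lambda x: np.exp(x - np.exp(x))                       # density of one observation
    dens = lambda x: i * comb(n, i) * F(x)**(i - 1) * (1 - F(x))**(n - i) * f(x)
    val, _ = quad(lambda x: x * dens(x), -30, 10)
    return val

print(logweibull_order_mean(1, 10))    # mean of the smallest of 10
print(logweibull_order_mean(10, 10))   # mean of the largest of 10
```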

163 citations


Journal ArticleDOI
I. J. Good
TL;DR: The singular decomposition of a matrix has a variety of uses, especially in statistics, yet it is seldom mentioned in books on either matrices or statistics; some applications are surveyed and some new ones are given.
Abstract: It is emphasized that the singular decomposition of a matrix has a variety of uses, especially in statistics, although it is seldom mentioned in books on either matrices or statistics. Some applications are surveyed and some new ones are given.
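Two of the statistical uses commonly associated with the decomposition, stable least squares and low-rank approximation, are easy to illustrate with modern tools; the snippet below is a generic NumPy sketch with simulated data, not anything specific to the paper.

```python
# Two common statistical uses of the singular value decomposition A = U S V':
# a stable least-squares solution and a best rank-k approximation.
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(20, 3))
y = A @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=20)

U, s, Vt = np.linalg.svd(A, full_matrices=False)
beta = Vt.T @ ((U.T @ y) / s)           # least-squares coefficients via the SVD
print(beta)

k = 2                                    # best rank-k approximation of A
A_k = (U[:, :k] * s[:k]) @ Vt[:k, :]
print(np.linalg.norm(A - A_k))
```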

143 citations




Journal ArticleDOI
TL;DR: This paper considers the particular problem of transforming data from the viewpoint of several aspects of a problem or several criteria by re-examining some specific published examples.
Abstract: Modern computing equipment is extremely fast and can also provide graphical output. Prior to the computer era, problems were often formulated in terms of a single numerical criterion which could be handled conveniently on a desk calculator. Now several aspects of a problem or several criteria can be considered simultaneously and a more flexible attitude adopted. The situation then can often be easily understood by the experimenter, and compromise decisions can be made by him. In this paper we consider the particular problem of transforming data from this viewpoint by re-examining some specific published examples.

110 citations


Journal ArticleDOI
TL;DR: A review of the book Introduction to Stochastic Processes in Biostatistics.
Abstract: (1969). Introduction to Stochastic Processes in Biostatistics. Technometrics: Vol. 11, No. 4, pp. 837-838.

103 citations


Journal ArticleDOI
TL;DR: In this paper, response surface estimators are obtained that first minimize the integrated squared bias due to specified higher order terms omitted from the fitted equation and, subject to that minimum, minimize the integrated variance; because the minimum bias is attained for any design, the design itself can then be chosen to reduce the integrated variance of the fitted equation further.
Abstract: Response surface estimators are obtained which first minimize integrated squared bias. The bias is due to specified higher order terms which may be in the model but are omitted from the fitted equation. The estimator, subject to achieving this minimum bias, minimizes the integrated variance; integration for both bias and variance being over a specified region of interest. Since the minimum integrated squared bias is attained for any design, other criteria may be satisfied by choice of design. One form of design flexibility which can be achieved is the minimization of the integrated variance of the fitted equation by choice of design. Illustrations of the application of this criterion are given for certain simple model and design settings.

102 citations


Journal ArticleDOI
TL;DR: A note on regression methods in calibration.
Abstract: (1969). A Note on Regression Methods in Calibration. Technometrics: Vol. 11, No. 1, pp. 189-192.


Journal ArticleDOI
TL;DR: In this article, the Inverse Method of calibration is compared to the Classical Method by a Monte Carlo technique and found to have a uniformly smaller average squared error in the range of the controlled variable.
Abstract: In an earlier paper (Krutchkoff, 1967) the Inverse Method of calibration is compared to the Classical Method by a Monte Carlo technique and found to have a uniformly smaller average squared error in the range of the controlled variable. This note presents some results obtained when using these procedures for extrapolation. Situations are shown to exist in extrapolation in which the Classical Method is superior to the Inverse Method.
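The kind of comparison referred to is easy to mimic in a small simulation; the sketch below (with purely illustrative parameter values) contrasts the Classical estimator (fit y on x, then invert) with the Inverse estimator (regress x on y directly) at a single unknown x0 inside the calibration range.

```python
# Monte Carlo comparison of classical vs. inverse calibration estimators.
# Model: y = b0 + b1*x + e; calibration data at known x; estimate an unknown x0
# from a new observation y0. All parameter values here are illustrative only.
import numpy as np

rng = np.random.default_rng(42)
b0, b1, sigma = 1.0, 2.0, 0.5
x = np.linspace(0, 10, 11)
x0 = 7.3                                     # true unknown value (inside the range)

def one_trial():
    y = b0 + b1 * x + rng.normal(0, sigma, x.size)
    y0 = b0 + b1 * x0 + rng.normal(0, sigma)
    c1, c0 = np.polyfit(x, y, 1)             # classical: regress y on x, then invert
    x_classical = (y0 - c0) / c1
    d1, d0 = np.polyfit(y, x, 1)             # inverse: regress x on y directly
    x_inverse = d0 + d1 * y0
    return (x_classical - x0)**2, (x_inverse - x0)**2

errs = np.array([one_trial() for _ in range(5000)])
print("mean squared error  classical: %.4f  inverse: %.4f" % tuple(errs.mean(axis=0)))
```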

Journal ArticleDOI
TL;DR: In this paper, the authors investigate the validity of Satterthwaite's procedure when some MSi are subtracted; due to the mathematical complexity of the distribution of MS, the problem is studied by a computer simulation.
Abstract: In calculating the variance of a mean or in constructing an approximate F-test, it is frequently necessary, particularly with unbalanced data, to form a linear function of mean squares, MS = Σ ai MSi, where the ai are known constants. Satterthwaite [1] suggests that MS is approximately distributed as χ²f · E(MS)/f, where the degrees of freedom is estimated by f = (MS)² / Σ[(ai MSi)² / fi], and fi is the degrees of freedom associated with MSi. Satterthwaite remarks that caution should be used in applying this formula when some of the ai are negative. The primary purpose of this paper is to investigate the validity of Satterthwaite's procedure when some MSi are subtracted. Due to the mathematical complexity of the distribution of MS, the problem is studied by a computer simulation.
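The approximation itself is a one-line computation; a small sketch with made-up mean squares is given below for concreteness (the paper's simulation study is not reproduced), including a negative coefficient of the kind the paper examines.

```python
# Satterthwaite's approximate degrees of freedom for a linear combination of
# mean squares MS = sum(a_i * MS_i), each MS_i having f_i degrees of freedom.
import numpy as np

def satterthwaite_df(a, ms, f):
    a, ms, f = map(np.asarray, (a, ms, f))
    MS = np.sum(a * ms)
    return MS**2 / np.sum((a * ms)**2 / f)

# Illustrative values only: MS = MS_treatment - MS_error (a negative a_i is
# the case whose validity the paper studies by simulation).
print(satterthwaite_df(a=[1.0, -1.0], ms=[12.4, 3.1], f=[4, 20]))
```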

Journal ArticleDOI
TL;DR: The U. S. Water Resources Council has proposed standardization of the analysis of peak flood discharges by fitting a Pearson Type III distribution to the logarithms of the data.
Abstract: Recently the U. S. Water Resources Council has proposed standardization of the analysis of peak flood discharges by fitting a Pearson Type III distribution to the logarithms of the data. This action has served to draw attention to the inadequacy of available tables of percentage points of the Pearson Type III distribution and the need for better tables. Many tables of percentage points of the related chi-square distribution are available in the literature, perhaps the most comprehensive being those published by the author in 1964. These could be used to obtain percentage points of the Pearson Type III distribution, but it would be much more convenient to have a table from which percentage points of the latter distribution could be read directly for uniformly spaced values of the skewness coefficient. The author has therefore, by a modification of the programs used to compute his 1964 tables of percentage points of the chi-square distribution, obtained percentage points, corresponding to cumulative probabilities...
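The connection exploited here can be sketched directly with a gamma quantile routine: for skewness coefficient g > 0, the standardized Pearson Type III percentage point is K = (Q − a)/√a with a = 4/g² and Q the corresponding gamma(a, 1) quantile. The snippet below (assuming SciPy) illustrates this relation; it is not the author's table-generation program.

```python
# Standardized Pearson Type III percentage points ("frequency factors") from
# gamma quantiles: with skew g > 0, shape a = 4/g^2 and K_p = (Q_p - a)/sqrt(a).
import numpy as np
from scipy.stats import gamma

def pearson3_frequency_factor(p, skew):
    a = 4.0 / skew**2
    return (gamma.ppf(p, a) - a) / np.sqrt(a)

for g in (0.5, 1.0, 2.0):
    print(g, [round(pearson3_frequency_factor(p, g), 3) for p in (0.5, 0.9, 0.99)])
```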

Journal ArticleDOI
TL;DR: In this paper, a test based on maximum likelihood estimators is given for testing the equality of the shape parameters in two Weibull distributions with the scale parameters unknown, along with a procedure for selecting the process with the larger mean life.
Abstract: A test based on maximum likelihood estimators is given for testing the equality of the shape parameters in two Weibull distributions with the scale parameters unknown. Tests for the equality of the scale parameters are also presented along with a procedure for selecting the Weibull process with the larger mean life.

Journal ArticleDOI
TL;DR: In this article, a process for producing a thermoplastic resin sheet having a band of color by feeding a molten thermoplastic resin by an extruder into a sheet-forming flat die having a manifold and extruding the associated streams from the extrusion opening of said flat die is described.
Abstract: A process for producing a thermoplastic resin sheet having a band of color by feeding a molten thermoplastic resin by an extruder into a sheet-forming flat die having a manifold and extruding said molten resin from an extrusion opening of said flat die, which comprises forcing a stream of a colored molten thermoplastic resin through at least one injection port opened into the manifold of the flat die into a main stream of the molten resin extruded from the extruder and fed to the manifold, associating the main stream of the molten thermoplastic resin and the stream of the colored molten thermoplastic resin within the manifold, and extruding the associated streams from the extrusion opening of said flat die.

Journal ArticleDOI
TL;DR: In this paper, the performance of three rules for dealing with outliers in small samples of size n from the normal distribution N(μ, σ2) is investigated when the primary objective of sampling is to obtain an accurate estimate of μ.
Abstract: The performance of three rules for dealing with outliers in small samples of size n from the normal distribution N(μ, σ²) is investigated when the primary objective of sampling is to obtain an accurate estimate of μ. It is assumed that at most one observation in the sample may be biased, arising from either N(μ + aσ, σ²) or N(μ, (1 + b)σ²). Performance of each rule is measured in terms of “Protection”, the fractional decrease in the Mean Square Error (MSE) obtained by using the rule when a biased observation actually is present in the sample. Although numerical results have been obtained for n ≤ 10 when σ² is known, computational difficulties have prevented evaluation of protections when σ² is unknown except when n = 3.
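The "Protection" criterion is straightforward to probe by simulation when σ² is known; the sketch below uses a simple illustrative rejection rule (discard the most extreme point if it lies more than c·σ from the mean of the others) with made-up values of n, a and c, so it mirrors only the spirit of the study, not its exact rules.

```python
# Monte Carlo estimate of "Protection": the fractional decrease in MSE of the
# estimate of mu obtained by applying an outlier-rejection rule when exactly one
# observation is location-biased by a*sigma, with sigma known. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n, a, c, sigma, trials = 5, 4.0, 2.5, 1.0, 20000

def estimate(x, use_rule):
    if use_rule:
        i = np.argmax(np.abs(x - np.median(x)))
        rest = np.delete(x, i)
        if abs(x[i] - rest.mean()) > c * sigma:    # reject the extreme point
            return rest.mean()
    return x.mean()

err_plain, err_rule = [], []
for _ in range(trials):
    x = rng.normal(0.0, sigma, n)
    x[rng.integers(n)] += a * sigma                # one biased observation
    err_plain.append(estimate(x, False)**2)        # true mu is 0
    err_rule.append(estimate(x, True)**2)

protection = 1.0 - np.mean(err_rule) / np.mean(err_plain)
print("estimated protection: %.3f" % protection)
```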

Journal ArticleDOI
TL;DR: In this article, it was shown that if a minimal resolution IV 2 n design has each factor occurring equally often at its high and low levels, then the design must be a fold-over design.
Abstract: Factorial designs of resolution III are such that all main effects are estimable, ignoring two-factor interactions and all higher order interactions. Designs of resolution IV are such that all main effects are estimable with no two-factor interactions as aliases, ignoring all higher order interactions. The general technique for producing 2^n designs of resolution IV is a specific application of the Box–Wilson “fold-over” theorem. Recent work by Steve Webb on resolution IV designs for two-level factors is discussed and extended. The minimum run requirement for a 2^n resolution IV design is 2n. It is proved that if a minimal resolution IV 2^n design has each factor occurring equally often at its high and low levels, then the design must be a fold-over design. A proof that the minimum run requirement for a resolution IV 2^n 3^m design, m > 0, is 3(n + 2m − 1) also is included. Minimal 2^n resolution IV designs are presented for various values of n. All these designs can be run in n blocks of size 2 each.
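The fold-over construction is simple enough to show directly: appending to a two-level resolution III design its mirror image (all signs reversed) doubles the run count and raises the resolution to IV. The sketch below folds over a saturated 7-factor, 8-run resolution III design; it is a generic construction, not one of the paper's tabulated designs.

```python
# Fold-over: a 7-factor, 8-run resolution III design (columns A, B, C and their
# products) combined with its sign-reversed copy gives a 16-run resolution IV
# design in which main effects are clear of two-factor interactions.
import numpy as np
from itertools import product

base = np.array(list(product([-1, 1], repeat=3)))                 # full 2^3 in A, B, C
A, B, C = base.T
design_III = np.column_stack([A, B, C, A*B, A*C, B*C, A*B*C])     # 8 x 7, resolution III
design_IV = np.vstack([design_III, -design_III])                  # 16 x 7, resolution IV

print(design_IV.shape)                 # (16, 7)
print(design_IV.sum(axis=0))           # each factor balanced: all zeros
```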

Journal ArticleDOI
TL;DR: In this paper, an algorithm involving polynomial approximations for evaluation of the normal distribution function is presented which may be implemented in fast and accurate computer programs of moderate length.
Abstract: An algorithm involving polynomial approximations for evaluation of the normal distribution function is presented which may be implemented in fast and accurate computer programs of moderate length. Using this approximation and simple two-step Newton-Raphson iteration, the evaluation of the inverse normal distribution function is achieved with reasonable accuracy.
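A sketch in the same spirit, though not the paper's specific polynomial, uses the well-known Abramowitz and Stegun five-term approximation 26.2.17 for the normal distribution function and Newton-Raphson iteration for its inverse.

```python
# Normal distribution function via the Abramowitz & Stegun 26.2.17 polynomial
# approximation (absolute error below 7.5e-8), and its inverse obtained by
# Newton-Raphson iteration on Phi(x) - p = 0.
import math

def norm_cdf(x):
    if x < 0.0:
        return 1.0 - norm_cdf(-x)
    t = 1.0 / (1.0 + 0.2316419 * x)
    poly = t * (0.319381530 + t * (-0.356563782 + t * (1.781477937
           + t * (-1.821255978 + t * 1.330274429))))
    return 1.0 - math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi) * poly

def norm_ppf(p, tol=1e-9):
    x = 0.0
    for _ in range(60):                  # Newton-Raphson; converges from x = 0
        pdf = math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)
        step = (norm_cdf(x) - p) / pdf
        x -= step
        if abs(step) < tol:
            break
    return x

print(norm_cdf(1.96), norm_ppf(0.975))   # approximately 0.9750 and 1.9600
```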

Journal ArticleDOI
TL;DR: In this article, the authors focus on calculating the probability that a point target is destroyed by one or more weapons in a salvo; where the target has an extended area, the probability of destruction is replaced by the expected fraction of the target destroyed.
Abstract: At first glance the subject-matter of this paper may appear to be rather trivial. No questions of offense or defense strategies are involved; one is interested solely in calculating the probability that a point target is destroyed by one or more weapons in a salvo. If the target has an extended area, the probability of destruction is replaced by the expected fraction of the target destroyed. One might reasonably conclude that a few simple mathematical arguments involving independent random events are all that is required. However, appearances are deceptive. Since the second world war a large number of authors have dealt with problems of this type and the results of their researches are widely scattered through the mathematical literature under the general name of coverage problems. A few answers can be obtained in closed form, but the majority run into difficulties which can be overcome only by numerical integration or simulation. This paper attempts to classify these researches into a more-or-less logical...
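One of the closed-form cases alluded to is worth writing down: for a point target at the aim point, circular normal aiming errors with standard deviation σ per axis, and a "cookie-cutter" lethal radius R, the single-shot destruction probability is p = 1 − exp(−R²/(2σ²)), and a salvo of n independent weapons destroys the target with probability 1 − (1 − p)^n. The lines below simply evaluate these formulas with arbitrary numbers.

```python
# Closed-form coverage example: circular normal aiming errors (std sigma per axis),
# cookie-cutter lethal radius R, point target at the aim point, independent shots.
import math

def single_shot_kill(R, sigma):
    return 1.0 - math.exp(-R**2 / (2.0 * sigma**2))

def salvo_kill(R, sigma, n):
    return 1.0 - (1.0 - single_shot_kill(R, sigma))**n

print(single_shot_kill(R=30.0, sigma=40.0))      # one weapon
print(salvo_kill(R=30.0, sigma=40.0, n=4))       # salvo of four independent weapons
```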

Journal ArticleDOI
TL;DR: In this article, a sequence of fractional replicate plans that contain distinct treatment combinations is identified by a single family of defining contrasts; the particular family to which a defining contrast belongs determines the alias or confounding pattern in the first block of the sequence.
Abstract: A suggestion frequently made when investigating the effects of a number of factors in an industrial experiment is to partition the treatment combinations into blocks, confounding the higher order interaction effects with the block effects, and to evaluate each block in succession as soon as it is completed. This same suggestion was reiterated in the text edited by Davies (1954). Daniel (1957, 1962) and John (1965) have also discussed the use of sequences of some two-level fractional factorial plans. The main reason for running a sequence of blocks and evaluating the sequence after each block of treatment combinations has been run is to discover large effects quickly and hence terminate the experiment as soon as conclusive results are obtained. The additional blocks are added to the sequence in order to obtain estimates of additional parameters as well as new estimates of previously estimable parameters and to decrease the variance of these estimates. When a full factorial plan is partitioned into blocks, each block is a fractional replicate of the full factorial plan. Thus, a sequence of the blocks that make up a full factorial plan can be considered to be a sequence of fractional replicate plans, the treatment combinations of each being distinct. A complete sequence of fractional replicate plans that contain distinct treatment combinations can be identified by a single family of defining contrasts. The particular family to which a defining contrast belongs determines the alias or confounding pattern in the first block of the sequence. The order in which the individual members of the family of defining contrasts appear in the sequence determines the manner in which the aliased effects become individually estimable. To illustrate the notion of a single family of defining contrasts, consider the...

Journal ArticleDOI
TL;DR: In this article, some general remarks on consulting in statistics are offered.
Abstract: (1969). Some General Remarks on Consulting in Statistics. Technometrics: Vol. 11, No. 2, pp. 241-245.

Journal ArticleDOI
TL;DR: In this paper, sampling plans for truncated life tests based on the exponential, normal, lognormal, gamma, Weibull, etc., distributions are discussed, as are distribution-free life test plans based on increasing or decreasing failure rate distributions.
Abstract: Acceptance sampling plans based on the normal distribution have been available since 1955 and before. Yet reports of potential users indicate a general lack of enthusiasm for their application. There is the uncertainty of the assumption of the normal distribution, but the difficulties users have encountered are attributable more to the translation from the standardized deviate to proportion defective than with the probabilities involved. Some possible ways of adjusting for this are discussed. Sampling plans for truncated life tests based on the exponential, normal, lognormal, gamma, Weibull, etc., distributions are also available, as are distribution-free life test plans based on increasing or decreasing failure rate distributions. These are extremely useful and further extensions of these ideas are in the offing. For the normal distribution, plans have been devised which control each tail of the distribution to separate levels. These plans are useful in the very common situation where defectiveness measu...
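The "translation" difficulty mentioned is the step from a standardized deviate to an estimated proportion defective; a minimal, generic sketch of a known-sigma variables plan with an upper specification limit is given below (the spec limit, sigma and acceptability constant k are made-up numbers, not values from any published plan).

```python
# Variables acceptance sampling with known sigma and an upper specification limit U:
# accept if (U - xbar)/sigma >= k; the implied estimate of the proportion defective
# is Phi(-(U - xbar)/sigma). Values of U, sigma and k here are illustrative only.
from statistics import NormalDist

def variables_plan(xbar, U, sigma, k):
    z = (U - xbar) / sigma
    p_defective = NormalDist().cdf(-z)           # deviate -> proportion defective
    return ("accept" if z >= k else "reject"), p_defective

print(variables_plan(xbar=9.2, U=12.0, sigma=1.1, k=1.9))
```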

Book ChapterDOI
TL;DR: In this paper, a general framework is set in terms of a probabilistic model for the distribution of a discrete number of particle sizes with prescribed shapes which are sampled and observed by processes capable of being modeled.
Abstract: This paper treats the unfolding problem of estimating the density distribution of particles dispersed in a three-dimensional specimen. A general framework is set in terms of a probabilistic model for the distribution of a discrete number of particle sizes with prescribed shapes which are sampled and observed by processes capable of being modeled. The general formulation allows density estimates and standard deviation estimates for each particle size in the distribution to be made with data from observational processes that may distort or truncate the sampled information. The method is related to the earlier works on unfolding or estimating the density distribution of spherical particles sectioned by planar probes. The method is also used to develop a more accurate estimate for the density distribution of spherical voids where the observational process is the indirect microscopy of a replicated surface.


Journal ArticleDOI
TL;DR: In this paper, the authors give procedures and tables for evaluating the operating characteristic curves and associated measures of dependent mixed acceptance sampling plans for the case of single specification limit and known standard deviation, assuming a normal distribution.
Abstract: This paper gives procedures and tables for evaluating the operating characteristic curves and associated measures of dependent mixed acceptance sampling plans for the case of single specification limit and known standard deviation, assuming a normal distribution. Joint probabilities necessary for evaluating these measures are derived and methods to facilitate their computation are provided. A useful generalized dependent plan is also presented, using two attributes acceptance numbers rather than one. Tables of joint probabilities necessary for evaluation of mixed plans are presented for first sample sizes of 4, 5, 8, and 10, acceptance numbers of 0, 1, and 2 and various percentages defective.

Journal ArticleDOI
TL;DR: In this article, the authors present a table of the percentage points of the Inverse Gaussian distribution with parameter t for values of t ranging from 0.1 to 4000.
Abstract: The Inverse Gaussian distribution considered here has parameters μ > 0 and λ > 0 given by μ = E(X) and λ = μ³[Var(X)]⁻¹. Some applications of this distribution and techniques of parametric estimation have been presented by Roy and Wasan [3], and recently Wasan [6] described some of the properties of an Inverse Gaussian process with μ = t and λ = t². In this paper we present a table of the values of the percentage points of this Inverse Gaussian distribution with parameter t for values of t ranging from 0.1 to 4000. The cases of t > 4000 and t < 0.1 are also considered. We will firstly state some interesting properties which we shall find useful in our discussion of the tables.
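Today such percentage points can be checked against a standard library; the snippet below assumes SciPy's parametrization, in which invgauss(mu/lam, scale=lam) has mean mu and shape parameter lam, and evaluates a few quantiles for the process case mu = t, lam = t².

```python
# Percentage points of the Inverse Gaussian distribution with mean mu = t and
# shape parameter lambda = t^2, using SciPy's invgauss (parametrized so that
# invgauss(mu/lam, scale=lam) has mean mu and shape parameter lam).
from scipy.stats import invgauss

def ig_ppf(p, t):
    mu, lam = t, t**2
    return invgauss.ppf(p, mu / lam, scale=lam)

for t in (0.5, 1.0, 10.0):
    print(t, [round(ig_ppf(p, t), 4) for p in (0.05, 0.5, 0.95)])
```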

Journal ArticleDOI
TL;DR: In this article, two randomization tests of the null hypothesis in cloud seeding experiments are compared: the Wilcoxon-Mann-Whitney test and a test based on an average ratio of seeded to non-seeded amounts of precipitation.
Abstract: Two randomization tests of the null hypothesis in cloud seeding experiments are compared–the Wilcoxon-Mann-Whitney test and a test based on an average ratio of seeded to non-seeded amounts of precipitation. Data from the Israeli experiment suggest that the latter test is relatively more sensitive to apparent effects of seeding. The significance level of this test may be estimated by Monte Carlo methods or approximated by using the asymptotic Normal distribution of the average ratio. Sampling trials show that this approximation is adequate only when the experiment is of several years' duration.
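The Monte Carlo estimation of the significance level amounts to re-randomizing the seeded/not-seeded labels; the sketch below uses a generic ratio-of-means statistic and fabricated precipitation amounts purely for illustration.

```python
# Randomization (permutation) test for a cloud-seeding-style comparison: the test
# statistic is the ratio of mean seeded to mean non-seeded precipitation, and its
# null distribution is built by re-randomizing the seeded/unseeded labels.
import numpy as np

rng = np.random.default_rng(7)
seeded = np.array([12.3, 4.1, 30.2, 8.8, 15.5, 22.0])        # fabricated amounts
unseeded = np.array([6.4, 3.9, 10.1, 7.7, 5.2, 12.8])

obs = seeded.mean() / unseeded.mean()
pooled = np.concatenate([seeded, unseeded])
n_seeded, count, B = len(seeded), 0, 10000
for _ in range(B):
    perm = rng.permutation(pooled)
    stat = perm[:n_seeded].mean() / perm[n_seeded:].mean()
    count += stat >= obs
print("observed ratio %.3f, Monte Carlo one-sided p-value %.4f" % (obs, (count + 1) / (B + 1)))
```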

Journal ArticleDOI
TL;DR: This paper introduces chain pooling, a procedure for testing differences in treatment effects in experimental situations where a large portion of the mean squares (including the higher-order interactions) may contain real effects.
Abstract: In many experimental situations, a large portion of mean squares (including the higher-order interactions) may contain real effects. This paper introduces chain pooling, a procedure that, tests for differences in treatment effects. The operating characteristics for various strategies of chain pooling were investigated by Monte Carlo methods for designs of 24 treatment combinations. These computations were performed with real effects whose magnitudes were distributed in a manner unfavorable to chain pooling. The actual type 1 and type 2 error probabilities depend on the magnitudes of the real effects. A method is given for estimating weighted average error probabilities after the real effects are estimated. The procedures are illustrated by an example.