# Showing papers in "Technometrics in 1961"

••

TL;DR: The experimental scientist is frequently faced with the task of determining the functional relation between a response, y, and a number of inputs, x1, x2, …, with the help of empirical data.

Abstract: The experimental scientist is frequently faced with the task of determining the functional relation between a 'response', y, and a number of 'inputs', x1, x2, …, with the help of empirical data. Often the mathematical form of the functional relation is assumed to be known and is written in the form of a 'regression function',

760 citations

••

457 citations

••

TL;DR: In this paper, the authors describe methods for estimating the mean and standard deviation of the normal distribution based on estimates of the mean and standard deviation determined from the folded normal.

Abstract: Measurements are frequently recorded without their algebraic sign. As a consequence, the underlying distribution of measurements is replaced by a distribution of absolute measurements. When the underlying distribution is normal, the resulting distribution is called the “folded normal distribution”. The authors describe methods for estimating the mean and standard deviation of the normal distribution based on estimates of the mean and standard deviation determined from the folded normal. Tables are provided to assist in the estimation procedure and an example is included.

344 citations
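
The folding can be illustrated with a short simulation (a hypothetical sketch with arbitrary parameters, not the authors' tables): the sample mean and standard deviation of the absolute measurements are biased away from the parameters of the underlying normal, which is why a dedicated estimation method is needed.

```python
import random
import statistics

# Measurements X ~ N(mu, sigma^2) recorded without sign become |X|,
# whose sample mean and SD no longer estimate mu and sigma directly.
random.seed(42)
mu, sigma = 1.0, 2.0
folded = [abs(random.gauss(mu, sigma)) for _ in range(100_000)]

m = statistics.fmean(folded)   # noticeably larger than mu
s = statistics.stdev(folded)   # noticeably smaller than sigma
print(f"folded mean {m:.3f} vs mu {mu}")
print(f"folded SD   {s:.3f} vs sigma {sigma}")
```

With these parameters the folded mean is near 1.79 rather than 1.0, and the folded SD near 1.34 rather than 2.0.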

••

TL;DR: The present study deals with general classes of systems which contain two-terminal networks and most other kinds of systems considered previously as special cases, and investigates their combinatorial properties and their reliability.

Abstract: A number of recent publications have dealt with problems of analyzing the performance of multi-component systems and evaluating their reliability. For example, a comprehensive theory of two-terminal networks was presented in [1] by Moore and Shannon who, among other results, have developed methods for obtaining highly reliable systems using components of low reliability; some of their procedures are credited to earlier work by von Neumann [2]. Several of the concepts and results of the present paper are generalizations of the corresponding concepts and results of the Moore-Shannon paper. A discussion of complex systems interpreted as Boolean functions may be found in the paper [3] by Mine. The present study deals with general classes of systems which contain two-terminal networks and most other kinds of systems considered previously as special cases, and investigates their combinatorial properties and their reliability. These classes consist, with several variants, of systems such that the more components...

288 citations

••

TL;DR: This article traces the development of process inspection schemes from the original methods of Shewhart to the new charts using cumulative sums, and surveys the present practice in the operation of schemes based on cumulative sums.

Abstract: This paper, presented orally to the Gordon Research Conference on Statistics in Chemistry in July 1960, traces the development of process inspection schemes from the original methods of Shewhart to the new charts using cumulative sums, and surveys the present practice in the operation of schemes based on cumulative sums. In spite of the completely different appearance of the visual records kept for Shewhart and cumulative sum charts, a continuous sequence of development from the one type of scheme to the other can be established. The criteria by which a particular process inspection scheme is chosen are also developed and the practical choice of schemes is described.

282 citations
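
As a minimal sketch of the cumulative sum idea (illustrative data, not a scheme from the paper): plotting the running sum of deviations from a target makes a sustained shift in the process mean visible as a change of slope of the plotted path.

```python
# Cumulative sum chart: S_i = sum_{j<=i} (x_j - target).
target = 10.0
observations = [10.1, 9.8, 10.0, 9.9, 10.2,    # in control around the target
                10.6, 10.5, 10.7, 10.4, 10.6]  # shifted upward by ~0.5

cusum = []
s = 0.0
for x in observations:
    s += x - target
    cusum.append(round(s, 2))

print(cusum)  # path hovers near zero, then climbs steadily after the shift
```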

••

273 citations

••

TL;DR: In this paper, a classical analysis of variance, based on a pattern involving d observations in each of the r·c cells formed by crossing r rows with c columns, is studied.

Abstract: The case of a function of time which is periodic with known period is considered. A classical analysis of variance, based on a pattern involving d observations in each of the r·c cells formed by crossing r rows with c columns, is studied. Periodic time functions with a fixed period and a stationary joint distribution are also considered.

203 citations

••

TL;DR: This paper presents a simplified account of the motivation behind the spectral analysis of time-series and gives a heuristic discussion of the statistical problems which arise.

Abstract: The object of this paper is to present a simplified account of the motivation behind the spectral analysis of time-series and to give a heuristic discussion of the statistical problems which arise. It is directed mainly at statisticians with little experience of the theory and applications of spectral analysis. It is of necessity short and omits a great deal of important detail. A much lengthier exposition for non-statisticians is in the course of preparation.

174 citations
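
For readers wanting a concrete starting point, a hand-rolled periodogram illustrates the basic object of spectral analysis (an illustrative sketch; the paper itself discusses motivation and statistical issues rather than computation):

```python
import numpy as np

# A sinusoid buried in white noise shows up in the periodogram
# as a peak near its true frequency.
rng = np.random.default_rng(0)
n, f0 = 1024, 0.1                       # series length, true frequency (cycles/sample)
t = np.arange(n)
x = np.sin(2 * np.pi * f0 * t) + 0.5 * rng.standard_normal(n)

freqs = np.fft.rfftfreq(n)              # 0 .. 0.5 cycles/sample
periodogram = np.abs(np.fft.rfft(x - x.mean()))**2 / n

peak = freqs[np.argmax(periodogram)]
print(f"peak at {peak:.3f} (true {f0})")
```

The raw periodogram is a noisy spectrum estimate; smoothing or averaging (one of the statistical problems the paper discusses) is needed for consistency.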

••

TL;DR: In this article, the maximum likelihood estimates of the mean and variance for simply truncated or simply censored samples drawn from a Normal distribution were derived, and a further worked example was provided.

Abstract: In a previous paper in Technometrics, Vol. 1, 1959, the author derived the maximum likelihood estimates of the mean and variance for simply truncated or simply censored samples drawn from a Normal distribution. This paper extends considerably the tables originally published, and contains a further worked example.

168 citations

••

TL;DR: In this article, the problem of group screening methods is discussed, where f factors are sub-divided into groups of k factors each, forming g "group-factors", and the group factors are then studied using a Plackett and Burman design in g + 1 runs.

Abstract: This paper discusses the problem of group screening methods wherein f factors are sub-divided into groups of k factors each, forming g “group-factors”. The group factors are then studied using a Plackett and Burman design in g + 1 runs. The two versions of the group factors are formed by maintaining all component factors at their upper and lower levels respectively. All factors in groups found to have large effects are then studied in a second stage of experiments. The author discusses the problems of detection and false detection of factors, optimum group size, size of program, and the role of costs in this sequential form of experimentation.

••

TL;DR: The authors state that because there has been so much new work published in error-correcting codes, the preparation of this second edition proved to be a much greater task than writing the original book.

Abstract: Error-Correcting Codes, by Professor Peterson, was originally published in 1961. Now, with E. J. Weldon, Jr., as his coauthor, Professor Peterson has extensively rewritten his material. The book contains essentially all of the material of the first edition; however, the authors state that because there has been so much new work published in error-correcting codes, the preparation of this second edition proved to be a much greater task than writing the original book. The major additions are the chapters on majority-logic codes, synchronization, and convolutional codes. Much new material has also been added to the chapters on important linear block codes and cyclic codes. The authors cite some highly regarded books on recent work done in Eastern Europe and an extensive bibliography on coding theory in the Soviet Union [sic]. In its much-expanded form, Error-Correcting Codes may be considered another valuable contribution to computer coding.

••

TL;DR: This paper extends the spectral techniques of transfer function estimation of linear systems to time invariant quadratic systems when a stationary Gaussian process is used as a driving function.

Abstract: In recent years the use of stationary random inputs as a forcing function to experimentally determine the transfer function of linear systems has become widespread. The procedure involves the measurement of spectra and cross-spectra between the input and output and the formation of the proper ratio. There are two basic reasons for using random testing functions. For many situations, particularly mechanical ones, these inputs are easier to generate than, say, steps or impulses, and frequently they can be made to more closely approximate the in-service input. (The latter attribute is an advantage since the linearity of the system may not be complete but may be sufficiently so over the operating range of interest.) A simple measure of linearity is immediately available, i.e., the coherency. Although many experimenters try to generate a Gaussian stationary process for the forcing function, this is not necessary from an expected value viewpoint. Actually the probability structure plays no role in the general logic of the detailed procedure since only second moment characteristics of the process are relevant to the expected value calculations. However, a Gaussian input can be convenient since sometimes the evaluation of the variability of the estimates is simplified. In studying higher order systems by driving them with a random forcing function this use of a Gaussian process makes the calculation of the expected values considerably easier. In this paper we shall extend the spectral techniques of transfer function estimation of linear systems to time invariant quadratic systems when a stationary Gaussian process is used as a driving function.
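
The linear-system baseline that the paper generalizes can be sketched numerically: drive a known one-pole filter with Gaussian noise and form the ratio of the averaged cross-spectrum to the input auto-spectrum (the filter, segment length, and averaging scheme below are arbitrary illustrative choices).

```python
import numpy as np

# Known system: y[n] = a*y[n-1] + x[n], with Gaussian white-noise input x.
rng = np.random.default_rng(1)
a = 0.6
x = rng.standard_normal(2**16)
y = np.empty_like(x)
y[0] = x[0]
for n in range(1, len(x)):
    y[n] = a * y[n - 1] + x[n]

# Segment-averaged spectra and cross-spectra, then the "proper ratio".
seg = 256
X = np.fft.rfft(x.reshape(-1, seg), axis=1)
Y = np.fft.rfft(y.reshape(-1, seg), axis=1)
Sxx = (np.conj(X) * X).mean(axis=0)    # input auto-spectrum
Sxy = (np.conj(X) * Y).mean(axis=0)    # input-output cross-spectrum
H_est = Sxy / Sxx

f = np.fft.rfftfreq(seg)
H_true = 1.0 / (1.0 - a * np.exp(-2j * np.pi * f))
err = np.max(np.abs(H_est - H_true))
print(f"max |H_est - H_true| = {err:.3f}")
```

The small residual error comes from segment-edge effects and finite averaging, the kind of estimation variability the abstract alludes to.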

••

TL;DR: In this article, conditions on the life distribution of the original articles are found which will ensure that the fraction surviving a burn-in period has a longer mean remaining life; the Weibull, gamma, exponential, extreme value and log-normal life distributions are examined in detail.

Abstract: When everything possible has been done to produce articles with long lives, there remains the possibility that a further improvement in the articles may be obtained by running them, for some time, under realistic conditions. The fraction that does not fail may have a longer mean remaining life than the original articles. In this paper conditions on the life distribution of the original articles are found which will insure this. The Weibull, gamma, exponential, extreme value and log-normal life distributions are examined in detail. The most interesting case is the log-normal, for which it is always possible to increase the mean life to any extent desired by continuing to test until a sufficiently large number of articles have failed.
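
The log-normal claim is easy to check by Monte Carlo (arbitrary illustrative parameters, not the paper's analysis): the mean remaining life of the surviving articles grows as the burn-in time increases.

```python
import math
import random

# Log-normal lifetimes: X = exp(Z), Z ~ N(0, 1).
random.seed(7)
lives = [math.exp(random.gauss(0.0, 1.0)) for _ in range(400_000)]

def mean_residual_life(t):
    """Average remaining life of the articles that survive past time t."""
    survivors = [x - t for x in lives if x > t]
    return sum(survivors) / len(survivors)

mrl = [round(mean_residual_life(t), 3) for t in (0.0, 1.0, 2.0)]
print(mrl)  # increases with the burn-in time t
```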

••

TL;DR: A review of "Statistical Theory and Methodology in Science and Engineering" (Technometrics, Vol. 3, No. 4, p. 569).

Abstract: (1961). Statistical Theory and Methodology in Science and Engineering. Technometrics: Vol. 3, No. 4, p. 569.

••

TL;DR: In this article, a general formula for the rth moment of the folded normal distribution is obtained, and formulae for the first four non-central and central moments are calculated explicitly.

Abstract: The general formula for the rth moment of the folded normal distribution is obtained, and formulae for the first four non-central and central moments are calculated explicitly. To illustrate the mode of convergence of the folded normal to the normal distribution, as μ/σ = θ increases, the shape factors βf1 and βf2 were calculated and the relationship between them represented graphically. Two methods, one using first and second moments (Method I) and the other using second and fourth moments (Method II) of estimating the parameters μ and σ of the parent normal distribution are presented and their standard errors calculated. The accuracy of both methods, for various values of θ, is discussed.
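
The r = 1 case of the moment formulae can be written down and checked directly (a sketch using the standard expression for the folded normal mean, with θ = μ/σ as in the abstract):

```python
import math

def Phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def folded_mean(mu, sigma):
    """Mean of |X| for X ~ N(mu, sigma^2), with theta = mu/sigma."""
    theta = mu / sigma
    return (sigma * math.sqrt(2.0 / math.pi) * math.exp(-theta**2 / 2.0)
            + mu * (1.0 - 2.0 * Phi(-theta)))

# As theta increases the folded normal approaches the parent normal,
# so the folded mean approaches mu -- the convergence the abstract describes.
print(round(folded_mean(1.0, 2.0), 4))   # well above mu for small theta
print(round(folded_mean(10.0, 2.0), 4))  # essentially mu for large theta
```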

••

TL;DR: In this paper, average run-lengths are evaluated for V-mask quality control schemes based on cumulative deviation charts when the observations are Normally distributed and either independent or members of a certain serially correlated class.

Abstract: Average run-lengths are evaluated for V-mask quality control schemes based on cumulative deviation charts when the observations are Normally distributed and either independent or members of a certain serially correlated class.
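
Average run lengths of this kind can be approximated by simulation. The sketch below uses the one-sided decision-interval form of a cumulative sum scheme (equivalent in principle to a V-mask; the reference value k and decision interval h are arbitrary illustrative choices, not values from the paper), with independent Normal observations.

```python
import random

def run_length(shift, rng, k=0.5, h=4.0):
    """Samples until the one-sided CUSUM S = max(0, S + x - k) exceeds h."""
    s, n = 0.0, 0
    while True:
        n += 1
        s = max(0.0, s + rng.gauss(shift, 1.0) - k)
        if s > h:
            return n

rng = random.Random(3)
arl_in = sum(run_length(0.0, rng) for _ in range(400)) / 400   # in control
arl_out = sum(run_length(1.0, rng) for _ in range(400)) / 400  # 1-sigma shift
print(f"ARL in control ~{arl_in:.0f}, after a 1-sigma shift ~{arl_out:.1f}")
```

A good scheme has a long in-control run length and a short out-of-control one, which is exactly the trade-off the tabulated ARLs quantify.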

••

TL;DR: In this paper, the behavior of several statistical life testing procedures based on the exponential failure law was studied. It was found that these widely used statistical techniques are very sensitive to departures from initial assumptions, and that applying them to life test data when the exponential failure law is not satisfied may substantially increase the probability of accepting components or equipments having poor mean-time-to-failure.

Abstract: Almost all the statistical procedures in current use for evaluating the reliability of components or equipment rest on the assumption that the failure times follow the exponential distribution. However, in practical situations one rarely has enough data to determine whether failure times are actually exponential. This paper studies the behavior of several statistical life testing procedures based on the exponential failure law if the true failure law is the Weibull distribution. It is found that these statistical techniques, which are widely used, are very sensitive to departures from initial assumptions. Applying these techniques to life test data when the exponential failure law is not satisfied may result in substantially increasing the probability of accepting components or equipments having poor mean-time-to-failure. This paper also develops convenient analytic techniques for approximating (i) the distribution of sums of independent random variables, and (ii) the characteristics of sequential procedure...
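
The sensitivity to the exponential assumption can be illustrated with a deliberately simple acceptance rule (not one of the procedures studied in the paper): a survival test calibrated under the exponential law gives a very different acceptance probability when lives actually follow a Weibull distribution with the same mean.

```python
import math
import random

# Rule: accept a unit if it survives a T = 20 hour demonstration test.
# Compare exponential lives vs Weibull lives (shape 0.5), both with
# the same (poor) mean life of 100 hours.
T, mean_life = 20.0, 100.0

p_exp = math.exp(-T / mean_life)                    # exponential survival
shape = 0.5
scale = mean_life / math.gamma(1.0 + 1.0 / shape)   # Weibull scale giving mean 100
p_wei = math.exp(-((T / scale) ** shape))           # Weibull survival

rng = random.Random(5)
mc = sum(rng.weibullvariate(scale, shape) > T for _ in range(100_000)) / 100_000

print(f"P(pass | exponential) = {p_exp:.3f}")
print(f"P(pass | Weibull)     = {p_wei:.3f} (Monte Carlo {mc:.3f})")
```

The same mean life yields markedly different pass rates (about 0.82 vs 0.53 here), so a risk computed under the exponential law can be badly wrong under a Weibull law.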

••

TL;DR: In this paper, the authors developed fractional replicate plans which consist of irregular fractions of factorial experiments and used them for the estimation of main effects and two-factor interactions with fewer trials than is required with an orthogonal plan.

Abstract: The development of fractional replicate plans which consist of irregular fractions of factorial experiments is presented. The method of constructing irregular fraction plans is developed for the general s^n factorial experiment. However, the plans which are found to be of greatest practical value are plans for the 3/2^m replicate of the 2^n experiment. Although these plans introduce correlations between some of the estimates, the correlations do not affect individual tests on the parameters. Some irregular fraction plans permit the estimation of main effects and two-factor interactions with fewer trials than is required with an orthogonal plan. A straightforward procedure for obtaining estimates of the main effects and “important” interactions and estimates of their variances, covariances and correlation coefficients is presented for the 3/2^m replicate plans.

••

TL;DR: In this article, the authors present a dynamic programming and Markov process based approach to the problem of Markov Processes, which they call DPMs, with a focus on dynamic programming.

Abstract: (1961). RONALD A. HOWARD “Dynamic Programming and Markov Processes,”. Technometrics: Vol. 3, No. 1, pp. 120-121.

••

TL;DR: In this paper, a balanced incomplete block experiment is described in which the nine treatments were quantitative rather than qualitative, being actually two additives each at four levels and a third at one level.

Abstract: The analysis of balanced incomplete block experiments is discussed in most of the standard textbooks on experimental design. These discussions are usually confined to qualitative treatments; it being customary to obtain an adjusted sum of squares for treatments and to give procedures for determining the significance of the observed difference between two treatment totals. This paper describes a balanced incomplete block experiment in which the nine treatments were quantitative rather than qualitative, being actually two additives each at four levels and a third at one level. The unusual feature of the analysis is found in Section 3 where the adjusted sum of squares for treatments is subdivided into individual degrees of freedom, each of which is meaningful and specific to this example, and with which we obtain from the data response curves for the two factors which were used at four levels each.

••

TL;DR: In this paper, the problem of determining the joint prediction interval for the responses at each of K separate settings of the independent variables when all K predictions must be based upon the original fitted model is described.

Abstract: When a linear relationship has been fitted by least squares, the methods for securing a prediction interval for the response at some fixed value of the independent variable are explained in many statistical text books. This paper describes the somewhat more complex problem of determining the joint prediction interval for the responses at each of K separate settings of the independent variables when all K predictions must be based upon the original fitted model.
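
A hedged stand-in for the paper's construction (the paper's own joint intervals may be built differently): fit a straight line by least squares and widen each of the K per-point prediction intervals by a Bonferroni correction (per-interval level α/K) so that they hold jointly.

```python
import math
import statistics

# Illustrative data and settings; everything below is an assumption
# for the sketch, not taken from the paper.
xs = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
ys = [2.1, 4.3, 5.9, 8.2, 9.8, 12.1]

n = len(xs)
xbar, ybar = statistics.fmean(xs), statistics.fmean(ys)
sxx = sum((x - xbar) ** 2 for x in xs)
b1 = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / sxx
b0 = ybar - b1 * xbar
s2 = sum((y - (b0 + b1 * x)) ** 2 for x, y in zip(xs, ys)) / (n - 2)

K = 2            # two future settings to be predicted jointly
t_crit = 3.495   # approx. t quantile, n-2 = 4 df, upper tail alpha/(2K) = 0.0125
for x_new in (2.5, 7.0):
    se = math.sqrt(s2 * (1 + 1 / n + (x_new - xbar) ** 2 / sxx))
    yhat = b0 + b1 * x_new
    print(f"x={x_new}: {yhat:.2f} +/- {t_crit * se:.2f}")
```

Bonferroni intervals are conservative; the interval also widens as x_new moves away from the center of the design, via the (x_new - x̄)²/Sxx term.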

••

TL;DR: In this paper, the authors discuss some sequential multivariate techniques used in testing means for both the one- and two-sample cases: when the population covariance matrix is known a sequential χ²-test is employed, and when it is not known a sequential T²-test is used.

Abstract: The purpose of this paper is to discuss some sequential multivariate techniques used in testing means for both the one- and two-sample cases. When the population covariance matrix is known, a sequential χ²-test is employed. When the population covariance matrix is not known, this is replaced by a sequential T²-test. Some properties of these tests are discussed and certain other procedures are suggested for use in testing the assumption that the covariance matrix is known. These sequential procedures are then illustrated in a problem involving the acceptance sampling of ballistic missiles.

••

TL;DR: In this paper, the estimation of missing values for a general design is described and discussed, and formulae are provided for two well-known, three factor, second order rotatable designs, with zero to six center points.

Abstract: The estimation of missing values for a general design is described and discussed. Formulae are provided for the estimation of missing values for two well-known, three factor, second order rotatable designs, with zero to six center points. A worked example illustrates the use of the formulae in the case of the cube plus octahedron plus one center point design.

••

TL;DR: In this paper, two continuous chemical processes, regarded as input-output systems, were probed with stationary noise, and their frequency response characteristics estimated from spectral analyses of the input and output records.

Abstract: Two continuous chemical processes, regarded as input-output systems, were probed with stationary noise, and their frequency response characteristics estimated from spectral analyses of the input and output records. Statistical confidence bands for the estimated system gain and phase were computed. The noises were superimposed on steady operating levels of the systems, and the analyses conducted, following Goodman [1], on the assumption that the output fluctuation in each case was the sum of a linear operation on the input fluctuation and a corrupting noise uncorrelated with the input. One process was a blending operation realized in bench scale hardware; the other was a digital computer simulation of a continuous stirred tank reactor. The blender was essentially linear; the reactor, highly nonlinear. For each process, the theoretical exact (or linearized) frequency response characteristic was computed beforehand from the appropriate differential equations for comparison with the experimental results. The...

••

TL;DR: The two level fractional factorial designs of resolution five enable the experimenter to estimate independently all main effects and two-factor interactions under the assumption that higher order interaction effects are negligible.

Abstract: The two level fractional factorial designs of resolution five enable the experimenter to estimate independently all main effects and two-factor interactions under the assumptions that higher order interaction effects are negligible. By relaxing, very slightly, the requirement that all two-factor interactions be estimable, or that all estimated effects be orthogonal, the number of runs required for many resolution five designs can be greatly reduced.

••

TL;DR: In this paper, a method of obtaining symmetrical balanced fractions of 3^n and 2^m 3^n factorial designs is proposed, based on an analysis of such designs into a complex of concentric hyperspheres in an n-dimensional factor space.

Abstract: A method of obtaining symmetrical balanced fractions of 3^n and 2^m 3^n factorial designs is proposed, based on an analysis of such designs into a complex of concentric hyperspheres in an n-dimensional factor space. Two examples are constructed, a half-replicate of a 3^4 design and a half-replicate of a 2^3 3^2 design. Analysis shows both designs to have useful properties and to be relatively easy to analyse. Comparison is made with a half-replicate of a 2^3 3^2 design recently published by W. S. Connor.

••

TL;DR: In this article, a system whose components may be divided into two subsystems, each containing components whose lives are exponentially distributed but with different scale parameters, is considered, and a table is given of combinations of spares of each type so as to maximize system life.

Abstract: This paper considers a system whose components may be divided into two subsystems, each containing components whose lives are exponentially distributed but with different scale parameters. Each system can be assigned a store of replacement spares for failed components. Charts are presented for allocating a fixed number of spare components between the two subsystems to provide maximum reliability over some specified interval of time. Expected life is also taken as a maximand, and a table is given of combinations of spares of each type so as to maximize system life. The effect of non-exponential component densities upon these optimum allocations of components is also discussed.
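
The allocation idea can be sketched as follows (illustrative rates and spare counts, not the paper's charts): with exponential lives and immediate replacement, the number of failures each subsystem sees over the mission interval is Poisson, so a subsystem with s spares survives exactly when it sees at most s failures.

```python
import math

def poisson_cdf(s, lam):
    """P(at most s failures) when the failure count is Poisson(lam)."""
    return sum(math.exp(-lam) * lam**k / math.factorial(k) for k in range(s + 1))

lam1, lam2 = 2.0, 5.0   # expected failures of subsystems 1 and 2 over the mission
total_spares = 8

# Try every split of the spares; system reliability is the product of
# the two subsystem survival probabilities.
best = max(range(total_spares + 1),
           key=lambda s1: poisson_cdf(s1, lam1) * poisson_cdf(total_spares - s1, lam2))
rel = poisson_cdf(best, lam1) * poisson_cdf(total_spares - best, lam2)
print(f"give subsystem 1 {best} spares, reliability {rel:.3f}")
```

Exhaustive search is fine at this scale; the paper's charts serve the same purpose without enumeration.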

••

TL;DR: This paper addresses the problem that usual methods of spectrum estimation allow very strong frequencies to affect the spectrum estimates at distant frequencies, a consequence of the shape of the weighting function (spectrum window).

Abstract: Usual methods of spectrum estimation allow very strong frequencies to affect the spectrum estimates at distant frequencies, because the weighting function (spectrum window) cannot be made identical...