
Showing papers in "Technometrics in 1963"


Journal ArticleDOI
TL;DR: This entry is a review of Norbert Wiener's Cybernetics, or Control and Communication in the Animal and the Machine.
Abstract: (1963). Cybernetics, or Control and Communication in the Animal and the Machine. Technometrics: Vol. 5, No. 1, pp. 128-130.

934 citations


Journal ArticleDOI
F. J. Anscombe1, John W. Tukey1
TL;DR: A number of methods for examining the residuals remaining after a conventional analysis of variance or least-squares fitting have been explored during the past few years; this paper makes a variety of these techniques more easily available, so that they can be tried out more widely.
Abstract: A number of methods for examining the residuals remaining after a conventional analysis of variance or least-squares fitting have been explored during the past few years. These give information on various questions of interest, and in particular, aid in assessing the validity or appropriateness of the conventional analysis. The purpose of this paper is to make a variety of these techniques more easily available, so that they can be tried out more widely. Techniques of analysis, some graphical, some wholly numerical, and others mixed, are discussed in terms of the residuals that result from fitting row and column means to entries in a two-way array (or in several two-way arrays). Extensions to more complex situations, and some of the uses of the results of examination, are indicated.

487 citations
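
As a rough illustration of the starting point for these techniques, the sketch below (illustrative only, with invented data, not the authors' code) fits row and column means to a two-way array and extracts the residuals the paper proposes examining:

```python
import numpy as np

# Toy two-way array (rows = treatments, columns = blocks); values invented.
x = np.array([[12.1, 13.4, 11.8],
              [14.0, 15.2, 13.5],
              [11.5, 12.9, 11.0],
              [13.2, 14.8, 12.9]])

grand = x.mean()
row_eff = x.mean(axis=1, keepdims=True) - grand   # row effects
col_eff = x.mean(axis=0, keepdims=True) - grand   # column effects

# Residuals after fitting row and column means (the additive model).
residuals = x - (grand + row_eff + col_eff)
print(residuals.round(3))

# Graphical checks would plot these against fitted values, row or column
# indices, etc., looking for non-additivity or changing variability.
```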


Journal ArticleDOI
TL;DR: In this paper, maximum likelihood estimators of the distribution parameters are derived for the normal and for the exponential distribution when samples are progressively censored, i.e., when the sample specimens remaining after each stage of censoring are continued under observation until failure or until a subsequent stage of censoring.
Abstract: In life and dosage-response studies, progressively censored samples arise when at various stages of an experiment, some though not all of the surviving sample specimens are eliminated from further observation. The sample specimens remaining after each stage of censoring are continued under observation until failure or until a subsequent stage of censoring. In this paper maximum likelihood estimators of the distribution parameters are derived for the normal, and for the exponential distribution when samples are progressively censored.

427 citations
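
For the exponential distribution the resulting maximum likelihood estimate has a simple closed form: the estimated mean life is the total time accumulated by all specimens, failed and censored, divided by the number of observed failures. A minimal sketch with invented data:

```python
import numpy as np

# Invented progressively censored exponential life-test data.
failure_times = np.array([12.0, 30.0, 45.0, 61.0, 79.0])  # observed failures
censor_times = np.array([45.0, 45.0, 90.0, 90.0])         # withdrawn alive

total_time_on_test = failure_times.sum() + censor_times.sum()
r = len(failure_times)                   # number of observed failures

theta_hat = total_time_on_test / r       # MLE of the exponential mean life
print(f"estimated mean life: {theta_hat:.1f}")
```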


Journal ArticleDOI
TL;DR: In this article, Statistical Inference for Markov Processes (SINP) is used for statistical inference for the Markov process, and it is shown that SINP can be used to identify Markov processes.
Abstract: (1963). Statistical Inference for Markov Processes. Technometrics: Vol. 5, No. 3, pp. 413-415.

369 citations


Journal ArticleDOI
Abstract: This paper deals with the problem of making inferences about the mean of an exponential distribution when the sample is “time-censored”. The exact sampling distribution of the maximum likelihood estimate is obtained and used to show that the asymptotic sampling theory is inadequate unless the sample size is very large. An approximation to the distribution is proposed for use in small samples and compared with a method suggested by Bartlett (1953a). An alternative estimate is suggested which is both simple and highly efficient in certain circumstances. The methods are illustrated by examples.

168 citations


Journal ArticleDOI
M. V. Menon1
TL;DR: In this article, estimates of the shape parameter c and the scale parameter b of the Weibull distribution are proposed on the assumption that the location parameter is known; ĉ is obtained by first finding an estimate of 1/c and then taking its reciprocal.
Abstract: Estimates ĉ and b̂ are proposed for the shape parameter c and the scale parameter b of the Weibull distribution, on the assumption that the location parameter is known. Writing d = 1/c, ĉ is obtained by first finding an estimate d̂ of d and then setting ĉ = 1/d̂. When b is unknown, d̂ is a consistent and non-negative estimate of d, with a bias which tends to vanish as the sample size increases and with an asymptotic efficiency of about 55%. When b is known, d̂ is an unbiased, non-negative and consistent estimate of d, and its efficiency is approximately 84%. An estimate of ln b is also found, with an asymptotic efficiency of 95%; it is proposed that b be estimated by exponentiating this estimate.

165 citations
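
Since ln X has mean ln b − γ/c and variance (π²/6)(1/c)² when X is Weibull with shape c and scale b, estimates of this moment-of-logs type, matching the construction the abstract describes, can be computed directly; a sketch with simulated data (sample size and parameters invented):

```python
import numpy as np

rng = np.random.default_rng(1963)

# Simulated Weibull data with known shape c and scale b (location = 0).
c_true, b_true = 2.0, 10.0
x = b_true * rng.weibull(c_true, size=200)

y = np.log(x)                        # logs follow an extreme-value law
euler_gamma = 0.5772156649015329

# Estimate d = 1/c from the sample variance of the logs:
# Var(ln X) = (pi**2 / 6) * d**2 for the Weibull.
d_hat = np.sqrt(6.0 * y.var(ddof=1)) / np.pi
c_hat = 1.0 / d_hat

# E[ln X] = ln b - gamma * d, so estimate ln b and exponentiate.
ln_b_hat = y.mean() + euler_gamma * d_hat
b_hat = np.exp(ln_b_hat)

print(f"c_hat = {c_hat:.2f} (true {c_true}), b_hat = {b_hat:.2f} (true {b_true})")
```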


Journal ArticleDOI
TL;DR: The Moore-Shannon inequality is extended to the case of unequal component reliabilities, permitting a simple demonstration of the S-shapedness properties of system reliability functions.
Abstract: Some general aspects of the reliability of coherent systems whose components are independent, but not necessarily of the same reliability are explored. Upper and lower bounds, which can be computed directly from the minimal paths and minimal cuts of a system, are found for system reliability. The Moore-Shannon inequality is extended to the case of unequal component reliabilities, permitting a simple demonstration of the S-shapedness properties of system reliability functions.

143 citations
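
The bounds computable from minimal paths and cuts can be made concrete; the sketch below (an illustration in the spirit of the abstract, not the authors' code) evaluates them for the classic five-component bridge network with unequal component reliabilities (values invented):

```python
from math import prod

# Component reliabilities (unequal), indexed 1..5; values invented.
p = {1: 0.95, 2: 0.90, 3: 0.80, 4: 0.85, 5: 0.99}

# Minimal paths and minimal cuts of the 5-component bridge network.
min_paths = [{1, 4}, {2, 5}, {1, 3, 5}, {2, 3, 4}]
min_cuts = [{1, 2}, {4, 5}, {1, 3, 5}, {2, 3, 4}]

# Lower bound: probability that every minimal cut has a working component,
# treating the cuts as if they were independent.
lower = prod(1 - prod(1 - p[i] for i in cut) for cut in min_cuts)

# Upper bound: one minus the probability that every minimal path fails,
# treating the paths as if they were independent.
upper = 1 - prod(1 - prod(p[i] for i in path) for path in min_paths)

print(f"{lower:.4f} <= system reliability <= {upper:.4f}")
```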


Journal ArticleDOI
TL;DR: In this paper, a compact matrix solution, expressed in terms of a column basis for the model and a diagonal matrix of subclass (incidence) numbers, is derived for testing a hierarchy of hypotheses in the non-orthogonal case; the construction can be controlled in machine computation by a symbolic representation of each degree of freedom for hypothesis.
Abstract: A formulation of analysis of variance based on a model for the subclass means is presented. The deficiency of rank in the model matrix is handled, not by restricting the parameters, but by factoring the matrix as a product of two matrices, one providing a column basis for the model and the other representing linear functions of the parameters. In terms of the column basis and a diagonal matrix of subclass or incidence numbers, a compact matrix solution is derived which provides for testing a hierarchy of hypotheses in the non-orthogonal case. Two theorems are given showing that a column basis for crossed and/or nested designs can be constructed from Kronecker products of equi-angular vectors, contrast matrices, and identity matrices. This construction can be controlled in machine computation by a symbolic representation of each degree of freedom for hypothesis in the analysis. Provision for a multivariate analysis of variance procedure for multiple response data is described. Analysis of covariance, both ...

136 citations
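
The Kronecker-product construction in the two theorems can be sketched for a 2 × 3 crossed design; the basis below (contrast choices invented) combines equi-angular vectors and contrast matrices into a full-rank column basis for the six subclass means:

```python
import numpy as np

# Equi-angular (unit) vectors carry means; contrast matrices carry effects.
one2 = np.ones((2, 1)) / np.sqrt(2)          # equi-angular vector, factor A
one3 = np.ones((3, 1)) / np.sqrt(3)          # equi-angular vector, factor B
cA = np.array([[1.0], [-1.0]]) / np.sqrt(2)  # contrast for A (1 df)
cB = np.array([[1.0, 1.0],
               [-1.0, 0.0],
               [0.0, -1.0]])                 # contrasts for B (2 df)

K = np.hstack([
    np.kron(one2, one3),   # grand mean
    np.kron(cA, one3),     # A main effect
    np.kron(one2, cB),     # B main effects
    np.kron(cA, cB),       # A x B interaction
])
print(K.shape)   # (6, 6): a full-rank basis for the 6 subclass means
```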


Journal ArticleDOI
TL;DR: In this article, a mathematically simpler derivation is given of Hoerl's method for examining a second order response surface, together with proofs of some of its stated properties.
Abstract: In a 1959 paper, A. E. Hoerl discussed a method for examining a second order response surface. This paper provides a mathematically simpler derivation of the technique and proofs of some stated properties.

115 citations


Journal ArticleDOI
TL;DR: In this paper, the authors compare four process inspection schemes for controlling the standard deviation of a normal distribution (warning-line and cusum schemes based on sample ranges, applied to measured or to gauged observations) and provide a short table of cusum schemes using the range.
Abstract: The usual process inspection scheme for controlling the standard deviation (S.D.) of a normal distribution is to plot a Shewhart chart on which are recorded the ranges of small samples; a common size is 5 items. Action is taken if any sample range exceeds a previously determined amount. The rapidity with which the scheme detects a departure from the accepted value of σ can be increased by considering the results of several of the latest samples together. A fixed number of samples can be taken into account by Warning Lines on the Shewhart-type chart (4); a more sensitive scheme uses cumulative sums (cusums) of the sample ranges together with a rule for deciding what change of direction of the cusum path should require action. Both these approaches can be applied to gauged observations. In this paper we compare these four schemes and provide a short table of cusum schemes using the range.

111 citations
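
A one-sided cusum rule of the general kind compared here can be written in a few lines; the reference value k and decision interval h below are invented for illustration, not the paper's tabulated values:

```python
import numpy as np

rng = np.random.default_rng(7)

def sample_ranges(sigma_per_sample, n=5):
    """Range of each sample of n items from N(0, sigma)."""
    for s in sigma_per_sample:
        x = rng.normal(0.0, s, size=n)
        yield x.max() - x.min()

# Process in control (sigma = 1) for 20 samples, then sigma doubles.
sigmas = [1.0] * 20 + [2.0] * 20

k, h = 2.6, 4.0      # reference value and decision interval (invented)
cusum = 0.0
for i, r in enumerate(sample_ranges(sigmas), start=1):
    cusum = max(0.0, cusum + (r - k))   # one-sided upper cusum of ranges
    if cusum > h:
        print(f"action signalled at sample {i}")
        break
```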


Journal ArticleDOI
TL;DR: This paper reviews and describes cumulative sum charts, with special emphasis on the practical problems of successful application.
Abstract: This paper reviews and describes cumulative sum charts with special emphasis on the practical problems of successful application.

Journal ArticleDOI
TL;DR: A series expansion for the renewal function associated with the Weibull distribution was developed in this article for all values of the time t, and the coefficients of the powers of t are easily calculated numerically by a recursive procedure.
Abstract: A series expansion is developed for the renewal function associated with the Weibull distribution. The expansion is valid for all values of the time t, and the coefficients of the powers of t are easily calculated numerically by a recursive procedure.
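
The series itself is not reproduced here, but the renewal function it targets satisfies the renewal equation M(t) = F(t) + ∫₀ᵗ M(t − x) dF(x), and a crude numerical discretization (a different method from the paper's expansion; step size and parameters invented) gives values against which a series can be checked:

```python
import numpy as np

beta, eta = 1.5, 1.0                   # Weibull shape and scale (invented)
dt = 0.001
t = np.arange(0.0, 5.0 + dt, dt)

F = 1.0 - np.exp(-(t / eta) ** beta)   # Weibull CDF
f = np.gradient(F, dt)                 # density via finite differences

# Discretized renewal equation: M(t) = F(t) + integral of M(t - x) f(x) dx.
M = np.zeros_like(t)
for i in range(1, len(t)):
    M[i] = F[i] + dt * np.dot(M[i - 1::-1], f[1:i + 1])

print(f"M({t[-1]:.0f}) ~= {M[-1]:.3f}")   # roughly t / (mean life) for large t
```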

Journal ArticleDOI
TL;DR: A class of models for system reliability is presented which introduces the notion of random variability of environment, and hence of instantaneous failure rate or “hazard,” but does not lead to a prediction of series system failure rate based on the usual procedure of adding component failure rates.
Abstract: A class of models for system reliability is presented which (a) introduces the notion of random variability of environment, and hence of instantaneous failure rate or “hazard,” (b) leads to exponentially distributed system time to failure, and to exponentially distributed component time to failure when components are exposed to the environment in isolation, but (c) does not lead to a prediction of series system failure rate based on the usual procedure of adding component failure rates. If the usual procedure is followed, it is shown that underestimates of system reliability are obtained. A simple spares provisioning problem is investigated when such a model is assumed to hold.
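
A toy calculation in the same spirit (not the paper's exact model class; parameters invented) makes point (c) concrete: when a Gamma-distributed environment S multiplies every component hazard, the true series reliability E[exp(−S(λ₁ + λ₂)t)] exceeds the product obtained by the usual add-the-failure-rates procedure:

```python
# Shared random environment S acting multiplicatively on component hazards.
a, rate = 2.0, 2.0                 # Gamma environment parameters (invented)
lam1, lam2, t = 0.05, 0.03, 10.0   # nominal component rates and mission time

def survival(u):
    """E[exp(-u * S)] for S ~ Gamma(shape=a, rate=rate)."""
    return (1.0 + u / rate) ** (-a)

r_series = survival((lam1 + lam2) * t)              # true series reliability
r_added = survival(lam1 * t) * survival(lam2 * t)   # add-the-rates procedure
print(r_series, r_added)   # r_added < r_series: adding rates is pessimistic
```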

Journal ArticleDOI
TL;DR: In many analyses of variance it is appropriate to consider the effects as randomly chosen from a population, providing a model II analysis in the terminology of Eisenhart; interest then centers on the variances of the effects and on the components of variance.
Abstract: In many analyses of variance it is appropriate to consider the effects as randomly chosen from a population; thus providing a model II analysis in the terminology of Eisenhart. Interest then centers on the variances of the effects and on the components of variance. The usefulness of this technique is limited by the frequent occurrence of negative estimates of the variance components which are by definition non-negative quantities. This paper presents a method of data analysis, applicable to many of the more popular models, which always yields non-negative estimates. Recently a simple algorithm, the "pool-the-minimum-violator" algorithm, has been developed [11] as a result of maximizing the likelihood function of the mean squares subject to a set of constraints. This paper has been prepared as a less mathematical companion article to [11] in order to treat the more applied aspects of the problem.

Journal ArticleDOI
Lloyd S. Nelson1
TL;DR: In this article, a distribution-free, two-sample life test based on the order of early failures is presented, with tables covering all combinations of sample sizes up to twenty at one-sided significance levels of 0.05, 0.025, and 0.005, and an empirical approximation for larger samples in the significance-level range 0.005 to 0.10 (one-sided).
Abstract: A distribution-free, two-sample life test based on the order of early failures is presented. Tables are provided which cover all combinations of sample sizes up to twenty for one-sided (two-sided) significance levels of 0.05 (0.10), 0.025 (0.05), and 0.005 (0.01). An empirical approximation is given which yields critical values for any combination of sample sizes above five in the significance-level range of 0.005 to 0.10 (one-sided). The procedure is illustrated by worked examples.

Journal ArticleDOI
TL;DR: In this article, a method of extending Moran's theory of finite reservoirs to take account of serial correlation in the sequence of inflows is presented, where the authors assume that the structure of this sequence can be adequately approximated by a Markov chain, and then work with the bivariate Markov process describing the joint distribution of levels and inflows, from which the marginal limiting distribution of the levels may be derived.
Abstract: This paper outlines a method of extending Moran's theory of finite reservoirs so as to take account of serial correlation in the sequence of inflows. The technique developed is to assume that the structure of this sequence can be adequately approximated by a Markov chain, and then to work with the bivariate Markov process describing the joint distribution of levels and inflows, from which the marginal limiting distribution of levels may be derived. Various withdrawal policies can be used, including policies involving a stochastic element. Explicit results are obtained for a reservoir of general capacity with a 3-value Markov input.
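
A toy version of this construction (capacity, inflow chain, and withdrawal policy all invented) builds the bivariate chain on (level, inflow) pairs and reads off the marginal limiting distribution of levels:

```python
import numpy as np
from itertools import product

K = 4                                   # reservoir capacity (invented)
inflows = [0, 1, 2]                     # a 3-value inflow process
P_in = np.array([[0.6, 0.3, 0.1],       # inflow transition matrix, giving
                 [0.3, 0.4, 0.3],       # serial correlation in the inflows
                 [0.1, 0.3, 0.6]])
draw = 1                                # withdraw one unit per period

states = list(product(range(K + 1), range(len(inflows))))
P = np.zeros((len(states), len(states)))
for s, (level, i) in enumerate(states):
    # Inflow arrives (overflow spills at capacity K), then one unit is drawn.
    new_level = max(min(level + inflows[i], K) - draw, 0)
    for j, pj in enumerate(P_in[i]):
        P[s, states.index((new_level, j))] += pj

# Stationary distribution: left eigenvector of P for eigenvalue 1.
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
pi /= pi.sum()

# Marginal limiting distribution of the reservoir level.
levels = np.zeros(K + 1)
for (level, _), p in zip(states, pi):
    levels[level] += p
print(np.round(levels, 3))
```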

Journal ArticleDOI
TL;DR: In this article, a simple sufficient condition is given for a system to have an increasing failure rate when the identical components comprising it have an increasing failure rate, and upper and lower bounds on failure rate are obtained in terms of component failure rates.
Abstract: A simple sufficient condition is given for a system to have an increasing failure rate when the identical components comprising it have an increasing failure rate. Systems which function if and only if at least k of the n components function (“k out of n” systems) satisfy this condition. For systems of non-identical components, upper and lower bounds on failure rate are obtained in terms of component failure rates. These bounds are increasing functions of time for “k out of n” structures having components with increasing failure rates.
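
For identical components the reliability of a "k out of n" structure is a binomial tail probability, which makes these systems easy to experiment with numerically; a minimal sketch:

```python
from math import comb

def k_out_of_n_reliability(k: int, n: int, p: float) -> float:
    """System works iff at least k of n independent components work."""
    return sum(comb(n, j) * p**j * (1 - p) ** (n - j) for j in range(k, n + 1))

# 2-out-of-3 system with component reliability 0.9.
print(k_out_of_n_reliability(2, 3, 0.9))   # 0.972
```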

Journal ArticleDOI
TL;DR: In this article, an analytic procedure for obtaining variances of estimates of variance components in a general multi-way classification is described, and the results are presented in tabular form for selected sets of parameter values.
Abstract: An analytic procedure for obtaining variances of estimates of variance components in a general multi-way classification is described. As an application, three methods for estimating variance components are compared for a two-way classification. Since the variances of the estimates are affected by the magnitude of the true variance components (parameters) as well as the arrangement of the subclass numbers (the n_ij's), a numerical tabulation is necessary in order to make a comparison. Using a UNIVAC 1105, the variances of the estimates of variance components are evaluated over a substantial range of parameters and n_ij's. These results are presented in tabular form for selected sets of parameter values.

Journal ArticleDOI
TL;DR: In this article, it was shown that the two-level factorials and fractionals and also particular partially duplicated fractionals are remarkably robust with respect to errors in the x's.
Abstract: For the case of one variable Berkson has shown that a standard statistical analysis can be justified even when the levels of the independent variable x are subject to error, provided that the response relationship is linear in x. A parallel result is true for any number of independent variables. For a non-linear relationship the effect depends on the design chosen. The two-level factorials and fractionals and also particular partially duplicated fractionals are remarkably robust with respect to errors in the x's. This paper, which appeared originally in the Bulletin of the International Statistical Institute, Vol. XXXVIII: Part III, Tokyo, 1961, pp. 339–355, is reprinted in Technometrics with the kind permission of the author and the editors of the Bulletin of the International Statistical Institute.


Journal ArticleDOI
TL;DR: In this paper, it was shown that the moments of order statistics in samples drawn from a population symmetric about zero can be expressed in terms of the moments in samples obtained from the population obtained by folding the symmetric population at zero.
Abstract: It is shown that the moments of order statistics in samples drawn from a population symmetric about zero can be expressed in terms of the moments of order statistics in samples drawn from the population obtained by folding the symmetric population at zero. Also, odd moments of the largest order statistic in samples of even sizes and even moments of the largest order statistic in samples of odd sizes drawn from the folded population can be expressed purely in terms of the moments of the order statistics in samples drawn from the symmetric population. The cumulative round-off error involved in numerical evaluation of the moments of order statistics from the symmetric population, using a table of the moments of the order statistics from the folded population, is not serious except for the mixed moments, in which case a bound on the maximum error is available. An application is also considered.

Journal ArticleDOI
TL;DR: In this paper, the authors investigate some of the consequences that occur when the assumptions concerning the prior distribution must be modified, and compare the efficiency of optimum double sampling procedures with that of optimum single sampling procedures.
Abstract: Knowledge of the costs of sampling and of the prior distribution of the number of defectives plays an important role in the determination of an optimum sampling plan. This paper investigates some of the consequences that occur when the assumptions concerning the prior distribution must be modified. The efficiency of optimum double sampling procedures is compared with that of optimum single sampling procedures.

Journal ArticleDOI
TL;DR: In this paper, it is shown that the choice of non-linear transformations in the analysis of data can frequently be simplified by restricting the possible transformations to a particular family, such as Tukey's "simple family", which can be represented as the set of solutions to a third order differential equation.
Abstract: The choice of non-linear transformations in the analysis of data can frequently be simplified by restricting the possible transformations to a particular family. Tukey has shown that the "simple family" has many desirable properties from this point of view. This family can be represented as the set of solutions to a third order differential equation and the constant of this equation provides a convenient index of the family. This index may be approximated by substituting the given data into the corresponding difference equation. The resulting approximation can then be used for rough solutions or as a starting value for the iterative solution of the maximum likelihood equations given by Turner. Two examples are provided to demonstrate the procedure.

Journal ArticleDOI
TL;DR: In this paper, a polynomial regression model was proposed to approximate the time integral of the retirement rate, where the covariance structure for the regression of y(t) on t was obtained from the multinomial distribution when the data are grouped.
Abstract: The nature of the process of retirement of groups of industrial properties is so complex that it is difficult to postulate adequate mathematical models such as those employed in life-testing, etc. Assuming that there exists a survivor function, M(t), representing the proportion of a group remaining in service at time t, such function is given by exp [–y(t)], where y(t) is the time integral of the retirement rate, r(t) = –d(log M(t))/dt. Rather than hazard a guess as to the parametric form of any of these functions, it is the intent of this paper to approximate the integral, y(t), by a polynomial, whereupon M(t) may be graduated by the descending exponential function. For large samples it is found that the covariance structure for the polynomial regression of y(t) on t may be obtained from the multinomial distribution when the data are grouped. Thus the method of weighted least squares may be employed in fitting y(t). "Censored" data in no way vitiate the method.
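
The fitting step amounts to weighted polynomial least squares on y(t) = −ln M(t); the sketch below uses invented grouped data and a crude delta-method weight in place of the full multinomial covariance structure derived in the paper:

```python
import numpy as np

# Invented grouped retirement data: age (years) and proportion surviving.
t = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0])
M = np.array([0.98, 0.93, 0.85, 0.74, 0.60, 0.45, 0.31, 0.19])

y = -np.log(M)               # y(t), the time integral of the retirement rate

# Delta-method variance of y is roughly (1 - M) / (n * M); weight by its
# inverse square root (the constant n drops out of the weighted fit).
w = np.sqrt(M / (1.0 - M))
coef = np.polyfit(t, y, deg=2, w=w)          # weighted quadratic fit to y(t)

M_graduated = np.exp(-np.polyval(coef, t))   # graduated survivor function
print(np.round(M_graduated, 3))
```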

Journal ArticleDOI
TL;DR: In this article, the authors generalized the Cornish-Fisher representation of a probability density function as a differential operator involving its cumulants applied to a normal density function by means of a cumulant adjustment generating function, which leads to a relation between values of the variates that bound regions of equal probability for the two distributions.
Abstract: The Cornish-Fisher representation of a probability density function, as a differential operator involving its cumulants applied to a normal density function, has been generalized so as to relate two arbitrary density functions by means of a cumulant adjustment generating function. This leads to a relation between values of the variates that bound regions of equal probability for the two distributions. If one distribution is normal or of other simple form, this relation can be used to obtain values of the variate for the other distribution in terms of those for the first at any specified probability. For a pair of univariate distributions, an expansion in series has been obtained. Similar methods enable formal expression to be given to the moment generating function for an arbitrarily truncated portion of the second distribution, in terms of the same truncation of the first distribution and operators based upon the cumulant adjustment generating function.

Journal ArticleDOI
TL;DR: This entry is a review of Mathematical Models in the Social Sciences.
Abstract: (1963). Mathematical Models in the Social Sciences. Technometrics: Vol. 5, No. 2, pp. 288-288.

Journal ArticleDOI
TL;DR: In this article, a cumulative sum control chart for folded normal variates is described, which is useful when the sign of an approximately normally distributed quantity is lost in measurement, and an assessment is given of the information lost by omission of the sign.
Abstract: Methods of construction of cumulative sum control charts for folded normal variates are described. These charts are likely to be useful when the sign of an approximately normally distributed quantity is lost in measurement. Some assessment is given of the information lost by omission of the sign.


Journal ArticleDOI
TL;DR: The significance points for the 2 × 3 contingency table are tabulated, assuming fixed marginal totals, for the cases where the three groups have equal numbers (= A), for A = 3 (1) 20.
Abstract: The significance points for the 2 × 3 contingency table are tabulated in the cases where the three groups have equal numbers (=A) for A = 3 (1) 20 assuming fixed marginal totals. Four significance levels .05, .025, .01, .001 are tabulated, using the randomized test principle of Freeman and Halton. Examples of applications are given.
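
The exact test being tabulated can be carried out directly by enumerating all 2 × 3 tables with the observed margins and summing the probabilities of those no more probable than the observed table; a sketch (the example table is invented, not from the paper):

```python
from math import comb

def freeman_halton_2x3(table):
    """Exact p-value for a 2 x 3 table with fixed margins (Freeman-Halton)."""
    (a, b, c), (d, e, f) = table
    row1 = a + b + c
    n = row1 + d + e + f
    cols = (a + d, b + e, c + f)

    def prob(x, y, z):
        # Multivariate hypergeometric probability that row one is (x, y, z).
        return (comb(cols[0], x) * comb(cols[1], y) * comb(cols[2], z)
                / comb(n, row1))

    p_obs = prob(a, b, c)
    p_value = 0.0
    for x in range(min(row1, cols[0]) + 1):
        for y in range(min(row1 - x, cols[1]) + 1):
            z = row1 - x - y
            if z <= cols[2]:
                p = prob(x, y, z)
                if p <= p_obs + 1e-12:   # tables as or less probable
                    p_value += p
    return p_value

# Three equal groups of six (A = 6), two outcome rows.
print(freeman_halton_2x3([[5, 1, 0], [1, 5, 6]]))
```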
