
Showing papers in "Technometrics in 1965"


Journal ArticleDOI
TL;DR: A review of the Handbook of Mathematical Functions with Formulas, one of the most widely used reference works for mathematical functions.
Abstract: (1965). Handbook of Mathematical Functions with Formulas. Technometrics: Vol. 7, No. 1, pp. 78-79.

7,538 citations


Journal ArticleDOI
TL;DR: A textbook treatment of the design and analysis of experiments, covering randomized blocks, Latin squares, factorial, nested, split-plot, and confounded designs, with SAS programs throughout.
Abstract: 1. The Experiment, the Design, and the Analysis 1.1 Introduction 1.2 The Experiment 1.3 The Design 1.4 The Analysis 1.5 Examples 1.6 Summary in Outline Further Reading Problems 2. Review of Statistical Inference 2.1 Introduction 2.2 Estimation 2.3 Tests of Hypotheses 2.4 The Operating Characteristic Curve 2.5 How Large a Sample? 2.6 Application to Tests on Variances 2.7 Application to Tests on Means 2.8 Assessing Normality 2.9 Applications to Tests on Proportions 2.10 Analysis of Experiments with SAS Further Reading Problems 3. Single-Factor Experiments with No Restrictions on Randomization 3.1 Introduction 3.2 Analysis of Variance Rationale 3.3 After ANOVA-What? 3.4 Tests of Means 3.5 Confidence Limits on Means 3.6 Components of Variance 3.7 Checking the Model 3.8 SAS Programs for ANOVA and Tests after ANOVA 3.9 Summary Further Reading Problems 4. Single-Factor Experiments -- Randomized Block and Latin Square Designs 4.1 Introduction 4.2 Randomized Complete Block Design 4.3 ANOVA Rationale 4.4 Missing Values 4.5 Latin Squares 4.6 Interpretations 4.7 Assessing the Model 4.8 Graeco-Latin Squares 4.9 Extensions 4.10 SAS Programs for Randomized Blocks and Latin Squares 4.11 Summary Further Reading Problems 5. Factorial Experiments 5.1 Introduction 5.2 Factorial Experiments: An Example 5.3 Interpretations 5.4 The Model and Its Assessment 5.5 ANOVA Rationale 5.6 One Observation Per Treatment 5.7 SAS Programs for Factorial Experiments 5.8 Summary Further Reading Problems 6. Fixed, Random, and Mixed Models 6.1 Introduction 6.2 Single-Factor Models 6.3 Two-Factor Models 6.4 EMS Rule 6.5 EMS Derivations 6.6 The Pseudo-F Test 6.7 Expected Mean Squares Via Statistical Computing Packages 6.8 Remarks 6.9 Repeatability and Reproducibility for a Measurement System Further Reading Problems 7.
Nested and Nested-Factorial Experiments 7.1 Introduction 7.2 Nested Experiments 7.3 ANOVA Rationale 7.4 Nested-Factorial Experiments 7.5 Repeated-Measures Design and Nested-Factorial Experiments 7.6 SAS Programs for Nested and Nested-Factorial Experiments 7.7 Summary Further Reading Problems 8. Experiments of Two or More Factors -- Restrictions and Randomization 8.1 Introduction 8.2 Factorial Experiment in a Randomized Block Design 8.3 Factorial Experiment in a Latin Square Design 8.4 Remarks 8.5 SAS Programs 8.6 Summary Further Reading Problems 9. 2^f Factorial Experiments 9.1 Introduction 9.2 2² Factorial 9.3 2³ Factorial 9.4 2^f Factorial 9.5 The Yates Method 9.6 Analysis of 2^f Factorials When n=1 9.8 Summary Further Reading Problems 10. 3^f Factorial Experiments 10.1 Introduction 10.2 3² Factorial 10.3 3³ Factorial 10.4 Computer Programs 10.5 Summary Further Reading Problems 11. Factorial Experiment -- Split-Plot Design 11.1 Introduction 11.2 A Split-Plot Design 11.3 A Split-Split-Plot Design 11.4 Using SAS to Analyze a Split-Plot Experiment 11.5 Summary Further Reading Problems 12. Factorial Experiment -- Confounding in Blocks 12.1 Introduction 12.2 Confounding Systems 12.3 Block Confounding -- No Replication 12.4 Block Confounding with Replication 12.5 Confounding in 3^f Factorials 12.6 SAS Programs 12.7 Summary Further Reading Problems 13. Fractional Replication 13.1 Introduction 13.2 Aliases 13.3 2^f Fractional Replication 13.4 Plackett-Burman Designs 14. Taguchi Approach to the Design of Experiments 14.1 Introduction 14.2 The L4 (2³) Orthogonal Array 14.3 Outer Arrays 14.4 Signal-to-Noise Ratio 14.5 The L8 (2⁷) Orthogonal Array 14.6 The L16 (2¹⁵) Orthogonal Array 14.7 The L9 (3⁴) Orthogonal Array 14.8 Some Other Taguchi Designs 14.9 Summary Further Reading Problems 15. Regression 15.1 Introduction 15.2 Linear Regression 15.3 Curvilinear Regression 15.4 Orthogonal Polynomials 15.5 Multiple Regression 15.6 Summary Further Reading Problems 16.
Miscellaneous Topics 16.1 Introduction 16.2 Covariance Analysis 16.3 Response-Surface Experimentation 16.4 Evolutionary Operation (EVOP) 16.5 Analysis of Attribute Data 16.6 Randomized Incomplete Blocks -- Restriction On Experimentation 16.7 Youden Squares Further Reading Problems SUMMARY AND SPECIAL PROBLEMS GLOSSARY OF TERMS REFERENCES STATISTICAL TABLES Table A Areas Under the Normal Curve Table B Student's t Distribution Table C Cumulative Chi-Square Distribution Table D Cumulative F Distribution Table E.1 Upper 5 Percent of Studentized Range q Table E.2 Upper 1 Percent of Studentized Range q Table F Coefficients of Orthogonal Polynomials ANSWERS TO SELECTED PROBLEMS INDEX

1,256 citations


Journal ArticleDOI
TL;DR: In this paper, maximum likelihood equations are derived for estimating the distribution parameters from (i) complete samples, (ii) singly censored samples and (iii) progressively (multiple) censored samples.
Abstract: This paper is concerned with the two-parameter Weibull distribution which is widely employed as a model in life testing. Maximum likelihood equations are derived for estimating the distribution parameters from (i) complete samples, (ii) singly censored samples and (iii) progressively (multiple) censored samples. Asymptotic variance-covariance matrices are given for each of these sample types. An illustrative example is included.

611 citations
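For the complete-sample case, the Weibull likelihood equations sketched in this abstract reduce to a one-dimensional root-finding problem for the shape parameter. A minimal pure-Python sketch (the function name, bisection bracket, and simulated data are illustrative, not from the paper):

```python
import math, random

def weibull_mle(x, lo=0.01, hi=50.0, tol=1e-9):
    """MLE of (shape k, scale c) from a complete Weibull sample.

    Solves the profile-likelihood equation for the shape by bisection,
        sum(x^k ln x)/sum(x^k) - 1/k - mean(ln x) = 0,
    then recovers the scale as c = (mean(x^k))^(1/k).
    """
    n = len(x)
    mean_log = sum(math.log(v) for v in x) / n

    def g(k):
        s1 = sum(v ** k for v in x)
        s2 = sum(v ** k * math.log(v) for v in x)
        return s2 / s1 - 1.0 / k - mean_log

    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(mid) < 0:        # g increases in k, so the root lies above mid
            lo = mid
        else:
            hi = mid
    k = 0.5 * (lo + hi)
    c = (sum(v ** k for v in x) / n) ** (1.0 / k)
    return k, c

random.seed(1)
# Simulated Weibull(shape=2, scale=1.5) data via the inverse CDF.
data = [1.5 * (-math.log(1 - random.random())) ** 0.5 for _ in range(4000)]
k_hat, c_hat = weibull_mle(data)
```

Censored samples, as in the paper, add extra terms to the likelihood equations but leave the same iterative structure.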


Journal ArticleDOI
TL;DR: In this article, the authors presented a method for obtaining simultaneous confidence intervals for the parameters of a multinomial distribution, and compared this method with the one suggested recently by Quesenberry and Hurst (1964).
Abstract: In this article we present a method for obtaining simultaneous confidence intervals for the parameters of a multinomial distribution, and we compare this method with the one suggested recently by Quesenberry and Hurst (1964). For the usual probability levels, we find, for example, that the confidence intervals introduced here have the desirable property that they are shorter than the corresponding intervals obtained by the Quesenberry-Hurst method. We also present methods for obtaining simultaneous confidence intervals for the differences among the parameters of the multinomial distribution, and we compare these methods with the one suggested earlier by Gold (1963) for studying linear functions of the multinomial parameters. For the usual probability levels, we find that the confidence intervals introduced in the present article have the desirable property that they are shorter than the corresponding intervals obtained by the Gold method applied to the differences among the multinomial parameters. In addi...

495 citations
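One common reading of this construction combines the Quesenberry-Hurst interval form with a Bonferroni-adjusted chi-square(1) quantile, which is what shortens the intervals. A minimal sketch under that reading (function name and counts are illustrative):

```python
from statistics import NormalDist

def goodman_intervals(counts, alpha=0.05):
    """Simultaneous confidence intervals for multinomial cell probabilities.

    Uses the Bonferroni-adjusted quantile A = z_{1-alpha/(2k)}^2 (a chi-square
    with 1 df) inside the Quesenberry-Hurst interval formula.
    """
    k = len(counts)
    N = sum(counts)
    A = NormalDist().inv_cdf(1 - alpha / (2 * k)) ** 2
    intervals = []
    for n_i in counts:
        half = (A * (A + 4 * n_i * (N - n_i) / N)) ** 0.5
        lower = (A + 2 * n_i - half) / (2 * (N + A))
        upper = (A + 2 * n_i + half) / (2 * (N + A))
        intervals.append((lower, upper))
    return intervals

counts = [42, 121, 187, 50]        # hypothetical multinomial counts, N = 400
ivals = goodman_intervals(counts)
```

Each interval contains its sample proportion n_i/N by construction, since the quadratic being inverted vanishes there.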


Journal ArticleDOI
TL;DR: In this article, relevant distribution theory is given for multiple decision (ranking and selection) procedures, associated confidence bounds are derived for the differences (ratios) between the parameters, and desirable properties of these procedures are studied and proved.
Abstract: This paper deals with some multiple decision (ranking and selection) problems. Some relevant distribution theory is given and the associated confidence bounds are derived for the differences (ratios) between the parameters. The selection procedures select a non-empty, small, best subset such that the probability is at least equal to a specified value P* that the best population is selected in the subset. General results are given both for the unknown location and scale parameters of the k populations. Some desirable properties of these procedures are studied and proved. Selection of a subset to contain all populations better than a standard is also discussed. Performance characteristics of some procedures for the normal means problem are studied and tables are given for the probabilities of selecting the ith ranked population and for the expected proportion and the expected average rank in the selected subset. A brief review of work by other authors in the problems of selection and ranking and in other re...

419 citations


Journal ArticleDOI
TL;DR: A brief review of The Statistical Treatment of Fatigue Experiments.
Abstract: (1965). The Statistical Treatment of Fatigue Experiments. Technometrics: Vol. 7, No. 3, pp. 455-455.

357 citations


Journal ArticleDOI
E. W. Stacy, G. A. Mihram
TL;DR: In this paper, a three-parameter generalization of the gamma distribution is examined; for the special case in which only the scale parameter is unknown, three unbiased estimators are derived along with their variance formulas, and minimum variance considerations are discussed by applying the Cramer-Rao theorem.
Abstract: It is fairly commonplace in reliability analyses to encounter data which is incompatible with the exponential, Weibull, and other familiar probability models. Such data motivates research to enlarge the group of probability distributions which are useful to the reliability analyst. In this paper, we examine a three-parameter generalization of the gamma distribution and derive parameter estimation techniques for that distribution. Those techniques, in the general case, depend upon method of moments considerations which lead to simultaneous equations for which closed form solutions are not available. Graphic solution is proposed and aids to the computations are provided. Major concepts in the paper are summarized by means of a numerical example. Details are given for the special case in which only the scale parameter is unknown. Three unbiased estimators for that parameter are derived along with their variance formulas. Minimum variance considerations are discussed by application of the Cramer-Rao Theorem.

272 citations


Journal ArticleDOI
TL;DR: In this paper, iterative procedures are given for joint maximum-likelihood estimation of the three parameters of Gamma and of Weibull populations, from complete and censored samples, with numerical examples based on the first m failure times in simulated life tests of n items.
Abstract: Iterative procedures are given for joint maximum-likelihood estimation, from complete and censored samples, of the three parameters of Gamma and of Weibull populations. For each of these populations, the likelihood function is written down, and the three maximum-likelihood equations are obtained. In each case, simultaneous solution of these three equations would yield joint maximum-likelihood estimators for the three parameters. The iterative procedures proposed to solve the equations are applicable to the most general case, in which all three parameters are unknown, and also to special cases in which any one or any two of the parameters are known. Numerical examples are worked out in which the parameters are estimated from the first m failure times in simulated life tests of n items (m ≤ n), using data drawn from Gamma and Weibull populations, each with two different values of the shape parameter.

241 citations


Journal ArticleDOI

186 citations


Journal ArticleDOI
TL;DR: In this article, a sequential design procedure is proposed for discriminating between two rival models, where the basic idea is to select for the next experimental point that at which the models differ the most.
Abstract: In most statistical literature on the design of experiments it is assumed that the correct form of the mathematical model is known and the problem is to select the experimental conditions so that some criterion is satisfied, for example, the parameters are estimated with maximum precision. Such an approach, however, ignores one important question that often confronts experimenters who, instead of having only one model known to be correct, have a number of rival candidate models to consider. Such situations can arise, for example, at the outset of investigations on the kinetics of solid-catalyzed gas reactions in chemical engineering. Often the immediate question in these circumstances is: how should experiments be planned so that the inadequate models can be detected and hence eliminated most efficiently? In this paper a sequential design procedure is proposed for discriminating between two rival models. The basic idea is to select for the next experimental point that at which the models differ the most. ...

183 citations
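The selection rule — take the next point where the rival models, fitted to the data so far, disagree the most — can be illustrated with two toy models fitted by least squares. The models, data, and candidate grid below are invented for illustration, not taken from the paper:

```python
def next_design_point(xs, ys, candidates):
    """Pick the candidate x where two rival least-squares fits differ most.

    Illustrative rival models: y = a*x versus y = b*x**2, each with a
    closed-form least-squares coefficient.
    """
    a = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)
    b = sum(x * x * y for x, y in zip(xs, ys)) / sum(x ** 4 for x in xs)
    return max(candidates, key=lambda x: (a * x - b * x * x) ** 2)

xs = [1.0, 2.0, 3.0]
ys = [1.1, 2.1, 2.8]                         # roughly linear observations
x_next = next_design_point(xs, ys, [0.5 * i for i in range(1, 11)])
```

With roughly linear data the two fits diverge fastest at the edge of the candidate range, so the procedure pushes the next run to where discrimination is sharpest.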


Journal ArticleDOI
TL;DR: A new acceptance sampling plan which has a simple design and operation procedure, and which is intermediate in sample size efficiency between the single-sample plan and the sequential probability ratio sampling plan is introduced.
Abstract: This paper introduces a new acceptance sampling plan which has a simple design and operation procedure, and which is intermediate in sample size efficiency between the single-sample plan and the sequential probability ratio sampling plan.

Journal ArticleDOI
TL;DR: This paper is concerned with the dual problem of generating and analyzing data in experimental investigations in which the goal is to develop a suitable mechanistic model.
Abstract: This paper is concerned with the dual problem of generating and analyzing data in experimental investigations in which the goal is to develop a suitable mechanistic model. The problem is first distinguished from that of response surface methodology. With regard to the analysis of data, topics that are discussed include the behavior of estimated constants with an inadequate model, a diagnostic technique for model-building, and the importance of visual scrutiny of data. With regard to the generation of data, the concept of placing a model in jeopardy is discussed. Designs for model discrimination and for parameter estimation are considered.


Journal ArticleDOI
TL;DR: In this article, the authors deal with the determination of the prior distribution of the proportion of defectives in a batch selected at random from a fixed population of batches, using prior knowledge obtained from batches examined in the past.
Abstract: Let θ be the proportion of defectives in a batch selected at random from a fixed population of batches. The paper deals with the determination of the prior distribution of θ, using prior knowledge obtained from batches examined in the past. Assuming a beta type prior distribution, the following three situations are investigated: a. Past records provide the numbers r 1, r 2, …, r N of defectives found in N samples of size n from N previously inspected batches; b. The expected fraction defective in a batch, E(θ), and the probability, P, of a batch exceeding twice the expected fraction defective are approximately known; c. The probabilities of θ exceeding a certain value θ1 and of θ falling below another value θ2 are approximately known. Methods of estimating the parameters of the prior distribution are given, and charts are provided to facilitate their determination. The effects of errors in the parameters on the posterior distribution are discussed in relation to sample size and number of defectives fo...
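For situation (a), one natural sketch matches beta moments after subtracting the binomial sampling variance from the variance of the observed fractions r/n. This is a hypothetical method-of-moments reading, not the paper's charts; all names and numbers are illustrative:

```python
import random

def beta_prior_moments(rs, n):
    """Moment estimates (a, b) of a beta prior for the batch fraction
    defective theta, from counts rs of defectives in past samples of size n.

    Uses Var(r/n) = m(1-m)/n + (1 - 1/n) * Var(theta) to strip the binomial
    sampling variance before matching beta moments.
    """
    N = len(rs)
    phat = [r / n for r in rs]
    m = sum(phat) / N
    s2 = sum((p - m) ** 2 for p in phat) / (N - 1)
    var_theta = (s2 - m * (1 - m) / n) / (1 - 1 / n)
    total = m * (1 - m) / var_theta - 1          # a + b from the beta variance
    return m * total, (1 - m) * total

random.seed(3)
# Simulate 400 past batches with theta ~ Beta(2, 8) and samples of n = 60.
n = 60
rs = []
for _ in range(400):
    theta = random.betavariate(2, 8)
    rs.append(sum(random.random() < theta for _ in range(n)))
a_hat, b_hat = beta_prior_moments(rs, n)
```

The estimated prior mean a_hat/(a_hat + b_hat) equals the average observed fraction by construction; the shape total a + b is the noisier quantity.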

Journal ArticleDOI
Albert Madansky
TL;DR: In this article, a method for determining approximate confidence limits for the reliability of series, parallel, and seriesparallel systems is given, based on observed failures of the individual components, where failures of a given component follow a binomial distribution with unknown parameter, the component reliability.
Abstract: Suppose a complex mechanism, e.g., a missile, is built up from a number of different types of components, where the reliability of each of the components has been estimated by means of separate tests on each of the components. This paper gives a method for combining such data to determine approximate confidence limits for the reliability of the complete mechanism. More precisely, a method of determining approximate confidence limits for the reliability of “series,” “parallel,” and “seriesparallel” systems is given, based on observed failures of the individual components. It is assumed that the failures are independent, and that failures of a given component follow a binomial distribution with unknown parameter, the component reliability. The large-sample properties of the likelihood-ratio test are then used to construct the appropriate confidence limits for the system reliability.
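For a two-component series system, the likelihood-ratio limit can be approximated numerically by profiling over p1 with p2 = R0/p1 and scanning R0 downward from the point estimate. A rough sketch under those simplifications (grid search, one-sided chi-square cutoff; the test counts are invented):

```python
import math
from statistics import NormalDist

def loglik(p, s, n):
    """Binomial log-likelihood of s successes in n trials, 0 < p < 1."""
    return s * math.log(p) + (n - s) * math.log(1 - p)

def profile_loglik(R0, tests, grid=400):
    """Max log-likelihood for two components subject to p1 * p2 = R0."""
    (s1, n1), (s2, n2) = tests
    best = -math.inf
    for j in range(1, grid):
        p1 = R0 + (1 - R0) * j / grid    # p1 in (R0, 1), so p2 = R0/p1 in (R0, 1)
        best = max(best, loglik(p1, s1, n1) + loglik(R0 / p1, s2, n2))
    return best

def series_lr_lower_limit(tests, alpha=0.05, grid=500):
    """Approximate one-sided lower confidence limit on system reliability
    R = p1 * p2, using the large-sample likelihood-ratio statistic."""
    lmax = sum(loglik(s / n, s, n) for s, n in tests)
    crit = NormalDist().inv_cdf(1 - alpha) ** 2   # one-sided chi-square(1) cutoff
    r_hat = 1.0
    for s, n in tests:
        r_hat *= s / n
    lower = r_hat
    for i in range(1, grid):                      # scan down from the point estimate
        R0 = r_hat * (1 - i / grid)
        if 2 * (lmax - profile_loglik(R0, tests)) > crit:
            break
        lower = R0
    return lower

tests = [(48, 50), (45, 50)]    # hypothetical (successes, trials) per component
lim = series_lr_lower_limit(tests)
```

Parallel and series-parallel systems change only the constraint linking the component reliabilities, not the likelihood-ratio machinery.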

Journal ArticleDOI
TL;DR: In this article, the maximum likelihood estimation for the three parameters of the generalized gamma distribution with known location parameter is indicated, and it is noted that these estimators are asymptotically multivariate normally distributed.
Abstract: Unless sufficient evidence to the contrary exists, the exponential distribution is often assumed as a model for the failure density function in reliability predictions. The generalized gamma distribution, with known location parameter, is a three parameter distribution which encompasses the exponential, Weibull, gamma and many others. In this paper, (i) maximum likelihood estimation for the three parameters is indicated, (ii) it is noted that these estimators are asymptotically multivariate normally distributed, and (iii) using the distribution of the estimators, probability regions for the estimators of the parameters of the generalized gamma distribution are established for large sample situations. In situations where the generalized gamma can be assumed as the correct density function, the exponential and the Weibull are special cases. A method is presented using experimental or life data for rejecting (with a known probability of false rejection) the Weibull and (or) the exponential functions when the...

Journal ArticleDOI
TL;DR: In this article, a derivation is given of the maximum likelihood estimator, based on the first m out of n ordered observations, of the scale parameter θ of a Weibull population with known shape parameter K.
Abstract: A derivation is given of the maximum likelihood estimator θ̂, based on the first m out of n ordered observations, of the scale parameter θ of a Weibull population with known shape parameter K. It is shown that 2mθ̂^K/θ^K has a chi-square distribution with 2m degrees of freedom (independent of n). Use is made of this fact to set upper confidence bounds with confidence level 1 – P (lower confidence bounds with confidence level P) on the scale parameter θ. Formulas are given for the mean squared deviations of the upper and lower confidence bounds from the true value of the parameter. From these one can obtain expressions for the efficiency of confidence bounds and confidence intervals. The expected value of θ̂ is also determined, and from it the unbiasing factor by which θ̂ must be multiplied to obtain an unbiased estimator. An expression for the variance of the unbiased estimator is found. Values of the unbiasing factor and of the variance of the unbiased estimator, both of which are independent of n, are tabled fo...
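The chi-square pivot described here gives closed-form bounds once a chi-square quantile is available; the sketch below substitutes the Wilson-Hilferty approximation for exact quantiles (an assumption of this sketch, not part of the paper), and simulates hypothetical censored data:

```python
import math, random
from statistics import NormalDist

def chi2_quantile(p, df):
    """Wilson-Hilferty approximation to the chi-square quantile."""
    z = NormalDist().inv_cdf(p)
    return df * (1 - 2 / (9 * df) + z * math.sqrt(2 / (9 * df))) ** 3

def weibull_scale_bounds(x, n, K, conf=0.95):
    """Estimate and confidence bounds for the Weibull scale theta, shape K
    known, from the first m of n ordered failure times x.

    Uses thetahat^K = [sum(x_i^K) + (n - m) * x_m^K] / m and the pivot
    2 * m * thetahat^K / theta^K ~ chi-square(2m).
    """
    m = len(x)
    t = (sum(v ** K for v in x) + (n - m) * x[-1] ** K) / m   # thetahat^K
    theta_hat = t ** (1 / K)
    lower = (2 * m * t / chi2_quantile((1 + conf) / 2, 2 * m)) ** (1 / K)
    upper = (2 * m * t / chi2_quantile((1 - conf) / 2, 2 * m)) ** (1 / K)
    return theta_hat, lower, upper

random.seed(7)
K, theta, n, m = 2.0, 2.0, 15, 10
sample = sorted(theta * (-math.log(1 - random.random())) ** (1 / K)
                for _ in range(n))
th, lo, hi = weibull_scale_bounds(sample[:m], n, K)
```

Note the bounds invert the pivot, so the lower limit uses the upper chi-square quantile and vice versa.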

Journal ArticleDOI
TL;DR: In this article, the smoothed periodogram is considered from the point of view of its spectral window, and the results are compared with two standard spectral windows by a method which avoids defining a bandwidth.
Abstract: The purpose of this paper is to revive interest in the periodogram approach to time series analysis which, at present, is only of historical interest and is seldom used. During the late 1940's, when it was realized that the smoothed periodogram could be used to estimate the spectral density of a stationary time series, the method was impractical because of the amount of computations. This is no longer the case, but not realized by many applied workers. In this paper the smoothed periodogram is considered from the point of view of its spectral window. The results are compared with two standard spectral windows by a method which avoids defining a bandwidth. The spectral windows are normalized so that they have the same variance, and plotted. The user can then choose the window which best suits his needs. Rejection filtering, trigonometric regression and cross-spectral analysis are discussed. An example is given in which the spectra and cross-spectrum of a bivariate time series are estimated.
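The smoothed-periodogram estimate itself is easy to sketch: compute the periodogram at the Fourier frequencies and average neighboring ordinates with a moving (Daniell) window. A toy example, with a deliberately naive O(n²) transform and an invented test signal:

```python
import cmath, math, random

def periodogram(x):
    """Raw periodogram |DFT_k|^2 / n at frequencies k/n, k = 1..n//2."""
    n = len(x)
    m = sum(x) / n
    xc = [v - m for v in x]                  # remove the mean first
    pg = []
    for k in range(1, n // 2 + 1):
        d = sum(xc[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
        pg.append(abs(d) ** 2 / n)
    return pg

def daniell_smooth(pg, span=3):
    """Smooth the periodogram with a simple moving-average (Daniell) window."""
    half = span // 2
    out = []
    for i in range(len(pg)):
        window = pg[max(0, i - half): i + half + 1]
        out.append(sum(window) / len(window))
    return out

random.seed(5)
n = 128
# Test signal: sinusoid at the Fourier frequency 10/128 plus white noise.
x = [math.sin(2 * math.pi * 10 * t / n) + 0.3 * random.gauss(0, 1)
     for t in range(n)]
sm = daniell_smooth(periodogram(x))
peak = max(range(len(sm)), key=lambda i: sm[i])   # index k-1 of the peak
```

Widening the Daniell span lowers the variance of the estimate at the cost of smearing sharp spectral peaks, which is the window trade-off the paper plots.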

Journal ArticleDOI
TL;DR: In this paper, Beale's measures of nonlinearity, intended to indicate when the degree of nonlinearity in a nonlinear estimation problem is small enough to justify using the usual linear model theory results as approximations, are examined by means of numerical examples.
Abstract: Measures of non-linearity that have been developed by Beale (1960a) are designed to indicate when the degree of non-linearity in a non-linear estimation problem is small enough to justify using the usual linear model theory results as approximations. The validity and usefulness of the measures are examined by means of numerical examples. Explanations for the behaviour of the measures are offered and the results are discussed.

Journal ArticleDOI
TL;DR: In this article, a general discussion of multiple comparison procedures is presented, including when and why they should and should not be used, the importance of confidence procedures and their advantages over significance procedures, choice among multiple comparisons confidence procedures, and choice and description of error rates.
Abstract: Methods are provided for the rapid analysis of data in balanced single and double classifications. Sums of ranges provide short-cut measures of variability with only slight loss of efficiency. Tables of factors are provided to convert these sums directly into widths of simultaneous confidence intervals for simple comparisons of group totals. Two illustrative examples are treated in detail. The proposed procedures are recommended for routine use in initial analyses, though not to the exclusion of more refined procedures. The paper includes a general discussion of multiple comparison procedures: when they should and should not be used, the importance of confidence procedures and their advantages over significance procedures, choice among multiple comparisons confidence procedures, choice and description of error rates. It does not attempt to compare multiple comparison and multiple decision procedures.


Journal ArticleDOI
Satya D. Dubey
TL;DR: In this article, several estimators for the scale and shape parameters of the Weibull law are obtained by postulating a stochastic model aimed at studying the aging process of certain inexpensive industrial products.
Abstract: In this paper several estimators for the scale and the shape parameters of the Weibull law are obtained by postulating a stochastic model aimed at studying the aging process of certain inexpensive industrial products. These estimators are consistent and asymptotically multi-normal.


Journal ArticleDOI
TL;DR: In this paper, minimum variance unbiased estimates of the cumulative distribution function for the normal, binomial, Poisson, and negative exponential distributions are given, assuming that one is interested in the fraction of product meeting fixed specification limits.
Abstract: Minimum variance unbiased estimates are given of the cumulative distribution function for the normal, binomial, Poisson, and negative exponential distributions. Although diverse areas of application might lead to interest in estimating the cumulative distribution function, most of the examples in this paper suppose that one is interested in the fraction of product meeting fixed specification limits.


Journal ArticleDOI
Joane Ilbott, Jack Nadler
TL;DR: In this paper, properties of one-sided precedence life tests, originally proposed under the assumption that the underlying distributions in both populations are normal, are investigated under the assumption that both underlying distributions are exponential; the analysis supports the use of these procedures in some circumstances, although it is not especially favorable to Nelson's test plan.
Abstract: In a recent paper Nelson [8] recommends that the basis for distinguishing between two populations of lifetimes be the number of values in a sample from one population that are less than (i.e., precede) the smallest value in a sample from the other. His proposal is based on an empirical study conducted by Epstein [4] which assumes that the underlying distributions in both populations are normal. Some properties of general one-sided precedence life tests are investigated here under the assumption that both underlying distributions are exponential. Our analysis supports the use of these procedures in some circumstances, although it is not especially favorable to Nelson's test plan.

Journal ArticleDOI
TL;DR: In this article, a multiple comparisons sign test comparing each treatment with a control is presented, the tables are also used to construct non-parametric joint confidence intervals, and it is shown how to find the per-comparison error rate for a given experimentwise error rate and vice versa.
Abstract: Tables are presented for a multiple comparisons sign test comparing each treatment with a control. An illustration of the test procedure is provided, and the tables are also used to construct non-parametric joint confidence intervals. It is also shown how t.o find the per-comparison error rate for a given experimentwise error rate and vice versa.