# Showing papers in "Technometrics in 1966"

••

4,028 citations

••

TL;DR: Rao's Linear Statistical Inference and Its Applications, as discussed by the authors, is one of the foremost works on statistical inference in the literature and has been translated into six major languages of the world.

Abstract: "C. R. Rao would be found in almost any statistician's list of five outstanding workers in the world of Mathematical Statistics today. His book represents a comprehensive account of the main body of results that comprise modern statistical theory." -W. G. Cochran "[C. R. Rao is] one of the pioneers who laid the foundations of statistics which grew from ad hoc origins into a firmly grounded mathematical science." -B. Efron Translated into six major languages of the world, C. R. Rao's Linear Statistical Inference and Its Applications is one of the foremost works in statistical inference in the literature. Incorporating the important developments in the subject that have taken place in the last three decades, this paperback reprint of his classic work on statistical inference remains highly applicable to statistical analysis. Presenting the theory and techniques of statistical inference in a logically integrated and practical form, it covers: * The algebra of vectors and matrices * Probability theory, tools, and techniques * Continuous probability models * The theory of least squares and the analysis of variance * Criteria and methods of estimation * Large sample theory and methods * The theory of statistical inference * Multivariate normal distribution Written for the student and professional with a basic knowledge of statistics, this practical paperback edition gives this industry standard new life as a key resource for practicing statisticians and statisticians-in-training.

1,669 citations

••

TL;DR: In this article, the authors unify and extend previously published characterizations of moving average, geometric moving average, and cumulative sum control chart procedures, and present comparable characterizations of two procedures based on tests described but not evaluated in earlier papers.

Abstract: This paper unifies and extends previously published characterizations of moving average, geometric moving average, and cumulative sum control chart procedures. It presents comparable characterizations of two procedures based on tests described but not evaluated in earlier papers. One of these procedures is based on a test devised by Girshick and Rubin that is optimal under a particular set of idealized conditions. The other procedure is based on run sum tests, which are generalizations of the type of run test that counts the number of consecutive points that exceed a limit, the generalization taking into account the extent that points in such a run exceed the limit.
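The geometric moving average referred to above is the exponentially weighted recursion now commonly called the EWMA; a minimal sketch of the control statistic (the weight `lam` and the starting value are illustrative choices, not values from the paper):

```python
def gma(xs, lam=0.25, start=0.0):
    """Geometric (exponentially weighted) moving average:
    z_t = lam * x_t + (1 - lam) * z_{t-1}, started at `start`.
    In a control chart, each z_t is compared against control limits."""
    zs, z = [], start
    for x in xs:
        z = lam * x + (1 - lam) * z
        zs.append(z)
    return zs
```

Recent observations dominate, with weights decaying geometrically, which is what makes this chart responsive to small sustained shifts in the mean.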

534 citations

••

TL;DR: In this article, initial estimates are obtained from the maximum likelihood estimates for a single truncated normal population as derived by Hald; an approximation to the likelihood function of the entire sample is then maximized, yielding two iteration formulas.

Abstract: n observations are taken from a mixture of K normal subpopulations, where the value of K is known. It is assumed that these n observations are given as N frequencies from equally spaced intervals. Initial guesses of the K means, K variances, and K − 1 proportions are made using the maximum likelihood estimates for a single truncated normal population as derived by Hald. Then an approximation to the likelihood function of the entire sample is used, and attempts to maximize this yield two iteration formulas. In practice, the method of steepest descent always converged, although the rate was not always fast. Special cases of equal variances and variances proportional to the square of the mean are also considered.
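The binned-data likelihood at the heart of this setup can be sketched directly; the two-component parameters, bin edges, and frequencies in the test below are made up, and the paper's iteration formulas are not reproduced:

```python
import math

def norm_cdf(x, mu, sigma):
    """Standard normal CDF evaluated via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def binned_log_likelihood(freqs, edges, components):
    """Log likelihood of interval frequencies under a normal mixture.
    `components` is a list of (weight, mu, sigma) with weights summing to 1;
    each interval's probability is the mixture mass between its edges."""
    ll = 0.0
    for n_i, a, b in zip(freqs, edges, edges[1:]):
        p = sum(w * (norm_cdf(b, mu, s) - norm_cdf(a, mu, s))
                for w, mu, s in components)
        ll += n_i * math.log(p)
    return ll
```

Maximizing this function over the K means, K variances, and K − 1 proportions is what the paper's iteration formulas approximate.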

518 citations

••

TL;DR: A review of the literature of response surface methodology, emphasizing the practical applications of the method, especially in chemistry and chemical engineering, can be found in this article; a comprehensive bibliography is also provided.

Abstract: Response surface methodology, an experimental strategy initially developed and described by Box and Wilson, has been employed with considerable success in a wide variety of situations, especially in the fields of chemistry and chemical engineering. It is the purpose of this paper to review the literature of response surface methodology, emphasizing especially the practical applications of the method. A comprehensive bibliography is included.

402 citations

••

TL;DR: In this article, the authors propose fractional factorial designs for sampling the 2^k possibilities, together with a new statistic proposed by C. Mallows, to simplify the search for the best candidate equation.

Abstract: Selecting a suitable equation to represent a set of multifactor data that was collected for other purposes in a plant, pilot-plant, or laboratory can be troublesome. If there are k independent variables, there are 2^k possible linear equations to be examined; one equation using none of the variables, k using one variable, k(k − 1)/2 using two variables, etc. Often there are several equally good candidates. Selection depends on whether one needs a simple interpolation formula or estimates of the effects of individual independent variables. Fractional factorial designs for sampling the 2^k possibilities and a new statistic proposed by C. Mallows simplify the search for the best candidate. With the new statistic, regression equations can be compared graphically with respect to both bias and random error.
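The 2^k enumeration and the comparison by Mallows' statistic can be sketched in a few lines. The Cp form used here, Cp = SSE_p/s^2 − (n − 2p) with s^2 taken from the full model, is the standard one; the data in the test are synthetic:

```python
import itertools
import numpy as np

def mallows_cp(X, y):
    """Mallows' Cp for every subset of the columns of X.
    p counts the fitted parameters (intercept plus chosen columns)."""
    n, k = X.shape
    ones = np.ones((n, 1))
    # s^2 from the full model (all k variables plus an intercept)
    full = np.hstack([ones, X])
    res_full = y - full @ np.linalg.lstsq(full, y, rcond=None)[0]
    s2 = res_full @ res_full / (n - k - 1)
    out = {}
    for r in range(k + 1):
        for cols in itertools.combinations(range(k), r):
            Xp = np.hstack([ones, X[:, cols]]) if cols else ones
            res = y - Xp @ np.linalg.lstsq(Xp, y, rcond=None)[0]
            p = len(cols) + 1
            out[cols] = (res @ res) / s2 - (n - 2 * p)
    return out
```

Subsets whose Cp is close to p are the low-bias candidates; the full model always satisfies Cp = p exactly, which makes a handy check.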

270 citations

••

TL;DR: The extreme vertices design as mentioned in this paper was developed as a procedure for conducting experiments with mixtures when several factors have constraints placed on them, and the constraints so imposed reduce the size of the factor space which would result had the factor levels been restricted to only 0 to 100 percent.

Abstract: The extreme vertices design is developed as a procedure for conducting experiments with mixtures when several factors have constraints placed on them. The constraints so imposed reduce the size of the factor space which would result had the factor levels been restricted to only 0 to 100 per cent. The selection of the vertices and the various centroids of the resulting hyper-polyhedron as the design is a method of determining a unique set of treatment combinations. This selection is motivated by the desire to explore the extremes as well as the center of the factor space. A non-linear programming procedure is used in determining the optimum treatment combination.
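One standard way to generate the candidate extreme vertices is to fix all but one component at a lower or upper bound and solve the mixture constraint for the remaining one; the sketch below illustrates that idea and is not necessarily the authors' exact selection procedure:

```python
import itertools

def extreme_vertices(lows, highs, tol=1e-9):
    """Candidate extreme vertices of {x : sum(x) = 1, lows[i] <= x[i] <= highs[i]}.
    Fix every component but one at a lower or upper bound, solve the
    mixture constraint for the free component, and keep feasible points."""
    q = len(lows)
    verts = set()
    for free in range(q):
        others = [i for i in range(q) if i != free]
        for choice in itertools.product(*[(lows[i], highs[i]) for i in others]):
            x = [0.0] * q
            for i, v in zip(others, choice):
                x[i] = v
            x[free] = 1.0 - sum(choice)
            if lows[free] - tol <= x[free] <= highs[free] + tol:
                verts.add(tuple(round(v, 9) for v in x))
    return sorted(verts)
```

For three components each constrained to [0.1, 0.8], this yields the three permutations of (0.8, 0.1, 0.1); the centroids of the resulting hyper-polyhedron's faces would then be added to complete the design.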

223 citations

••

TL;DR: In this paper, it is argued that the tacit assumption that the requirements for the validity of least squares analysis are satisfied for unplanned data is what produces a great deal of trouble in regression analysis.

Abstract: If y = b0x0 + b1x1 + b2x2 + … + bkxk + e, where the b's stand for unknown parameters, the x's for known constants, and the e's for random variables uncorrelated and having the same variance and zero expectation, then the estimates b0, b1, …, bk obtained by minimizing the sum of squares of the residuals y − ŷ, with ŷ = b0x0 + b1x1 + b2x2 + … + bkxk, are unbiased and have smallest variance among all linear unbiased estimates. The method of least squares is used in the analysis of data from planned experiments and also in the analysis of data from unplanned happenings. The word "regression" is most often used to describe analysis of unplanned data. It is the tacit assumption that the requirements for the validity of least squares analysis are satisfied for unplanned data that produces a great deal of trouble. Whether the data are planned or unplanned, the quantity e, which is usually quickly dismissed as a random variable having the very specific properties mentioned above, really describes the effect of a large number of "latent" variables xk+1, xk+2, …, xm which we know nothing about. If we suppose that it is enough to consider the linear effects of these latent variables (which would often be realistic for small variations in xk+1, …, xm) we should have
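The role of the latent variables can be made concrete with a tiny numerical example (the data are invented): when a latent variable moves in lockstep with an included one, the least squares coefficient on the included variable silently absorbs the latent variable's effect.

```python
import numpy as np

# True model: y = 1.0*x1 + 1.0*x2, but x2 is latent and happens to track x1.
x1 = np.array([0.0, 1.0, 2.0, 3.0])
x2 = 0.5 * x1                      # latent variable, proportional to x1
y = 1.0 * x1 + 1.0 * x2            # so y = 1.5 * x1 exactly

# Least squares of y on x1 alone (regression through the origin):
b = (x1 @ y) / (x1 @ x1)
```

Here b comes out as 1.5, not the true coefficient 1.0: the "error" term is not an innocuous random variable but the footprint of a latent variable.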

161 citations

••

112 citations

••

TL;DR: In this article, the binomial group-testing problem is extended to the case in which the common probability p of a unit being defective is unknown, and a Bayes "non-mixing" procedure R(1) is derived and compared with other procedures, in particular with the corresponding procedure R1 that requires the knowledge of p and with another procedure based on continually revising the maximum likelihood estimate of p; the latter is called an empirical Bayes solution.

Abstract: The binomial group-testing problem is extended to the case in which the common probability p of a unit being defective is unknown. A Bayes "non-mixing" procedure R(1) is derived and compared with other procedures, in particular with the corresponding procedure R1 that requires the knowledge of p and with another procedure based on continually revising the maximum likelihood estimate of p; the latter is called an empirical Bayes solution. Finally, a Bayes procedure derived in the Appendix allows "mixing" and this is conjectured to be the unrestricted Bayes solution; the improvements due to "mixing" are shown to be small in Table III. Several applications of the general problem of group-testing are discussed in the introduction and this virtually constitutes a general review of the subject. After the procedure R(1) is defined a detailed illustration is given in section 2.1 showing how to carry out this procedure with the use of Tables I and II. Lower bounds for the Bayes risk associated with any group te...
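For intuition about why group testing pays off when p is small, the classical fixed-group-size (Dorfman-style) calculation is easy to reproduce; this is only a baseline sketch, not the paper's Bayes procedure R(1):

```python
def expected_tests_per_item(p, k):
    """Dorfman group testing: pool k items and test the pool; if the pool
    fails, test each item individually. Expected number of tests per item."""
    if k == 1:
        return 1.0
    return 1.0 / k + (1.0 - (1.0 - p) ** k)

def best_group_size(p, kmax=100):
    """Group size minimizing the expected tests per item for defect prob p."""
    return min(range(1, kmax + 1), key=lambda k: expected_tests_per_item(p, k))
```

At p = 0.01 the best pool size is 11 and fewer than 0.2 tests per item are needed on average; at p = 0.5 pooling no longer pays and the best "group" is a single item.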

••

TL;DR: This paper presents two definitions of maximum cluster that have sometimes been used to test for non-random clustering, and shows that when the clustering interval is sufficiently small, one of the tests is more powerful for a wide class of alternative hypotheses.

Abstract: This paper presents two definitions of maximum cluster that have sometimes been used to test for non-random clustering. We compare the power of the tests based on these statistics, and show that when the clustering interval is sufficiently small, one of the tests is more powerful for a wide class of alternative hypotheses. We show that this test is the generalized likelihood ratio test for an alternative hypothesis related to a particular type of non-random clustering.
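One natural reading of "maximum cluster" (the largest number of points captured by any interval of a given length w) can be computed with a sliding window; a minimal sketch:

```python
def max_cluster(points, w):
    """Largest number of points falling in any interval of length w.
    Anchoring the window at each point suffices to find the maximum."""
    pts = sorted(points)
    best, j = 0, 0
    for i, left in enumerate(pts):
        while j < len(pts) and pts[j] <= left + w:
            j += 1
        best = max(best, j - i)
    return best
```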

••

TL;DR: In this paper, a discriminant function is based on samples which are assumed to be correctly classified, and it is argued that if some members of the original samples are incorrectly classified, the utility of the discriminant function may not be seriously affected.

Abstract: Discriminant functions are based on samples which are assumed to be correctly classified. If some members of the original samples are incorrectly classified the utility of the discriminant function may not be seriously affected.

••

RAND Corporation

TL;DR: In this article, the problem of estimating reliability of a system undergoing development testing is examined, where the test program is conducted in K stages and similar items are tested within each stage.

Abstract: The problem of estimating reliability of a system undergoing development testing is examined. It is assumed that the test program is conducted in K stages and that similar items are tested within each stage. In addition, it is assumed that the probability of an inherent failure, q0, remains constant throughout the test program while the probability of an assignable cause failure in the i-th stage, qi, does not increase with i. The number of inherent failures, of assignable cause failures, and of successes is recorded in each stage. Maximum likelihood estimates of q0, qi (i = 1, 2, …, K) and a conservative confidence bound for the reliability in the K-th stage are obtained. Numerical examples to illustrate the methods are given.

••

TL;DR: In this article, a simple method for obtaining exact lower confidence bounds for reliabilities (tail probabilities) for items whose life times follow a Weibull distribution where both the shape and scale parameters are unknown is presented.

Abstract: This paper presents a simple method for obtaining exact lower confidence bounds for reliabilities (tail probabilities) for items whose life times follow a Weibull distribution where both the “shape” and “scale” parameters are unknown. These confidence bounds are obtained both for the censored and non-censored cases and are asymptotically efficient. They are exact even for small sample sizes in that they attain the desired confidence level precisely. The case of an additional unknown “location” or “shift” parameter is also discussed in the large sample case. Tables are given of exact and asymptotic lower confidence bounds for the reliability for sample sizes of 10, 15, 20, 30, 50 and 100 for various censoring fractions.
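The tail probability in question, and the log transform that reduces the Weibull to a location-scale (extreme-value) family, can be written down directly; the exact small-sample bounds in the paper come from its tables, which this sketch does not reproduce:

```python
import math

def weibull_reliability(t, shape, scale):
    """R(t) = P(T > t) for a two-parameter Weibull life distribution."""
    return math.exp(-((t / scale) ** shape))

# log T = log(scale) + (1/shape) * G, with G a standard smallest-extreme-value
# variate: this location-scale structure in the log-lifetimes is what makes
# exact confidence bounds on R(t) possible with shape and scale both unknown.
```

With shape = 1 the formula reduces to the exponential reliability exp(-t/scale), the special case whose sensitivity motivated interest in the Weibull model.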

••

TL;DR: When the substrate of a 2^(p−q) factorial or fractional factorial experiment may be expected to show trends representable by linear and quadratic terms in time, certain run orderings spaced at equal time-intervals permit better estimation of the effects than others.

Abstract: When the substrate of a 2^(p−q) factorial or fractional factorial experiment may be expected to show trends representable by linear and quadratic terms in time, then certain orderings spaced at equal time-intervals permit better estimation of the effects than do others. Some of these ordered plans are given for p − q = 2, 3, 4, 5. Simple methods are given for computing effects, trends, and efficiencies.
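The criterion behind such orderings can be checked numerically: an effect estimate is unbiased by a linear time trend exactly when its contrast column is orthogonal to the centered time index. A sketch for a full 2^3 factorial in standard order (an illustration, not one of the paper's plans):

```python
import itertools

def trend_correlations(order):
    """For a run order of a 2^3 factorial (runs coded as +/-1 triples),
    the inner product of each main-effect column with the centered linear
    time index; zero means that effect is unbiased by a linear trend."""
    n = len(order)
    time = [i - (n - 1) / 2 for i in range(n)]   # centered 0, 1, ..., n-1
    return [sum(run[f] * t for run, t in zip(order, time)) for f in range(3)]

standard = list(itertools.product([-1, 1], repeat=3))
```

For the standard order none of the main effects is trend-free (the inner products are 16, 8, and 4); the ordered plans in the paper rearrange the runs so that inner products of this kind are driven toward zero.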

••

TL;DR: Introduction to Probability and Statistics from a Bayesian Viewpoint, Part 2: Inference, as discussed by the authors.

Abstract: Introduction to Probability and Statistics from a Bayesian Viewpoint, Part 2: Inference.

••

TL;DR: How to Gamble if You Must: Inequalities for Stochastic Processes, as discussed by the authors, is a book treating gambling strategies through inequalities for stochastic processes; it was reviewed in 1966.

Abstract: (1966). How to Gamble if You Must: Inequalities for Stochastic Processes. Technometrics: Vol. 8, No. 4, pp. 713-713.

••

TL;DR: In this article, a randomisation procedure for making tests on the coefficients of regression equations developed from data which arise from a general multivariate distribution, not necessarily Normal, is given.

Abstract: A randomisation procedure is given for making tests on the coefficients of regression equations developed from data which arise from a general multivariate distribution, not necessarily Normal. The procedure makes use of a six-card computer subroutine which randomly permutes an array of numbers. This subroutine is extremely useful and can be employed in many other types of problems.
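A randomisation test of this general kind (shuffle the responses, refit, and see how extreme the observed coefficient is among the permuted ones) is short to write in modern terms; this generic sketch stands in for, and is not, the paper's six-card subroutine:

```python
import random

def perm_test_slope(x, y, n_perm=2000, seed=0):
    """Randomisation test for a simple regression slope: permute y,
    refit, and count how often the permuted |slope| meets or beats
    the observed one. Returns an approximate two-sided p-value."""
    def slope(xs, ys):
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        sxy = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
        sxx = sum((a - mx) ** 2 for a in xs)
        return sxy / sxx
    obs = abs(slope(x, y))
    rng = random.Random(seed)
    yy, hits = list(y), 0
    for _ in range(n_perm):
        rng.shuffle(yy)
        if abs(slope(x, yy)) >= obs:
            hits += 1
    return (hits + 1) / (n_perm + 1)
```

No Normality assumption is needed: the reference distribution comes entirely from the permutations.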

••

TL;DR: In this article, the mathematical structure of nested designs is presented using, as a general case, a four-stage nested design; when the variables at each stage are normally distributed, analytical results can be obtained.

Abstract: A development of the mathematical structure of nested designs is presented using, for a general case, a four-stage nested design. For the case where the variables at each stage are normally distributed, analytical results can be obtained. This is done for the unbalanced "staggered" and "inverted" designs. Empirical estimates of variance for the non-normal case are obtained. These are compared with the analytical solutions. Also considered is the probability of negative variance estimates. It is interesting to note that for these alternatives to balanced nested designs, one can decrease the probabilities of negative estimates of some variances at the cost of increasing them for others.

••

TL;DR: In this article, a method is presented for reducing a curvilinear response to a set of numbers which describe the curve, along with an analysis of such numbers, including reconstruction of the curves.

Abstract: A method is presented for reducing a curvilinear response to a set of numbers which describe the curve. An analysis of such numbers including reconstruction of the curves is presented. A numerical example is used to illustrate the entire procedure.
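A minimal modern analogue of the reduction step: describe each curve by a few polynomial coefficients, analyze those numbers, and reconstruct the curve from them. The quadratic "response" below is invented, and polynomial fitting is only one of several possible reductions:

```python
import numpy as np

# Reduce a response curve to a small set of describing numbers,
# then reconstruct the curve from those numbers.
t = np.linspace(0.0, 1.0, 11)
curve = 2.0 + 3.0 * t - 1.5 * t ** 2        # an exactly quadratic "response"

coeffs = np.polyfit(t, curve, deg=2)        # the reducing set of numbers
rebuilt = np.polyval(coeffs, t)             # reconstruction from the numbers
```

The three coefficients fully summarize this curve, so downstream analysis (comparisons across experimental runs, say) can operate on them instead of the raw curves.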

••

TL;DR: In this article, an iterative procedure is given for the estimation of a vector of parameters some of which may be common to more than one of a set of nonlinear regression equations in the model presented.

Abstract: An iterative procedure is given for the estimation of a vector of parameters some of which may be common to more than one of a set of nonlinear regression equations in the model presented. The procedure is given for the situation when the covariance matrix in the model is known and is extended to include the model when the covariance matrix is unknown. The covariance matrix allows for correlation between observations made on different regression equations but for the same value of the independent vector of variables. Under certain assumptions the procedure is shown to converge to a consistent estimator of the vector of parameters with probability one. The procedure is illustrated with data from a compartmental tracer experiment.

••

TL;DR: Recently, interest in the Weibull distribution has been stimulated by the realisation that life testing procedures based on the exponential distribution are very sensitive to departures from this distribution as discussed by the authors.

Abstract: have been much discussed in the statistical literature, notably in a series of papers by Gumbel, which culminated in Gumbel (1958). The context in which such a discussion has taken place has usually been that of exceptional phenomena such as floods, droughts, etc. and the distribution has been little used as a model for the description of ordinary events. Recently, however, interest in this distribution has been stimulated by the realisation that life testing procedures based on the exponential distribution are very sensitive to departures from this distribution (see, for example, Zelen and Dannemiller, 1961) and that the Weibull distribution

••

TL;DR: In this article, the results of a numerical investigation of six approximations to the cumulative negative binomial distribution were presented, including two Poissons, Poisson Gram-Ch...

Abstract: This paper presents the results of a numerical investigation of six approximations to the cumulative negative binomial distribution. The approximations studied include two Poissons, Poisson Gram-Ch...
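The simplest approximation of this kind (a Poisson with matched mean) is easy to check numerically; the parameterization below (failures before the r-th success, success probability p) is an assumption, since the abstract does not state one:

```python
from math import comb, exp, factorial

def negbin_cdf(x, r, p):
    """P(X <= x), X = number of failures before the r-th success,
    each independent trial succeeding with probability p."""
    return sum(comb(k + r - 1, k) * p ** r * (1 - p) ** k for k in range(x + 1))

def poisson_cdf(x, lam):
    """Cumulative Poisson probability P(X <= x) with mean lam."""
    return sum(exp(-lam) * lam ** k / factorial(k) for k in range(x + 1))

# Matched-mean Poisson approximation uses lam = r * (1 - p) / p.
```

As r grows with the mean held fixed, the negative binomial's extra dispersion vanishes and its CDF approaches the matched-mean Poisson CDF.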

••

TL;DR: In this paper, a method for construction of cumulative sum control charts for controlling the mean of a Weibull distribution is described, and the results of using such charts when a non-exponential Webull distribution would be more appropriate are investigated.

Abstract: A method for construction of cumulative sum control charts for controlling the mean of a Weibull distribution is described. As a special case, charts appropriate to exponentially distributed variables can be constructed. There is some investigation of the results of using such charts when a non-exponential Weibull distribution would be more appropriate. The paper concludes with a discussion of certain formulas in the related analysis of sequential probability ratio tests for Weibull distributions.
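The generic one-sided CUSUM recursion that such charts are built on can be sketched as follows; the reference value k and the Weibull-specific scoring of observations in the paper are not reproduced here:

```python
def cusum(xs, target, k):
    """One-sided upper CUSUM: S_0 = 0, S_t = max(0, S_{t-1} + x_t - target - k).
    A signal is raised when S_t exceeds a decision interval h (not shown).
    This is the generic mean-monitoring recursion, not the paper's
    Weibull-specific construction."""
    path, s = [], 0.0
    for x in xs:
        s = max(0.0, s + x - target - k)
        path.append(s)
    return path
```

Observations near the target leave the statistic at zero, while a sustained upward shift accumulates steadily, which is the behavior the charts in the paper exploit.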

••

TL;DR: Evolutionary operation, as originally presented by G. E. P. Box in 1957, is now an accepted means of improving the performance of industrial processes as mentioned in this paper, and numerous articles have been published relating to the successful application of this procedure.

Abstract: Evolutionary operation, as originally presented by G. E. P. Box in 1957, is now an accepted means of improving the performance of industrial processes. Numerous articles have since been published relating to the successful application of this procedure and they indicate that the technique is one of general industrial importance. It is the purpose of this article to review briefly these applications, thus providing a source of references useful both to those familiar with EVOP and those wishing to examine the potential applicability of the method for an existing process.

••

TL;DR: The ratio of expected counts for the continuous and sampled processes is calculated as a function of sampling rate for two classes of spectra, and it is shown that the rule of thumb of sampling at twice the Nyquist frequency is a good one.

Abstract: The expected number of maxima and level crossings of a continuous stationary Gaussian process and the discrete process obtained by sampling the continuous one are evaluated and compared. The ratio of these two values as a function of sampling rate for two classes of spectra is calculated. It is shown that the rule of thumb of sampling at twice the Nyquist frequency is a good one.
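The expected-crossings quantities compared in the paper come from Rice's formula for a stationary Gaussian process; a sketch, using an assumed Gaussian-shaped correlation function r(tau) = exp(-tau^2/2), for which r(0) = 1 and -r''(0) = 1:

```python
import math

def rice_upcrossing_rate(u, var, neg_r2):
    """Rice's formula: expected rate of upcrossings of level u for a
    stationary Gaussian process with variance var = r(0) and
    neg_r2 = -r''(0) (the second spectral moment)."""
    return (math.sqrt(neg_r2 / var) / (2.0 * math.pi)) * math.exp(-u * u / (2.0 * var))

# For r(tau) = exp(-tau^2 / 2): r(0) = 1 and r''(0) = -1,
# so the mean-level upcrossing rate is 1 / (2*pi).
rate_at_mean = rice_upcrossing_rate(0.0, 1.0, 1.0)
```

Comparing such continuous-time rates with the counts observed in the sampled process is what yields the ratios studied in the paper.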