
Showing papers in "Technometrics in 1983"


Journal ArticleDOI
TL;DR: In this article, a generalized ESD many-outlier procedure is proposed for detecting from 1 to k outliers in a data set, and a t-distribution-based approximation to its percentiles is shown to be adequately accurate by Monte Carlo simulation.
Abstract: A generalized extreme Studentized deviate (ESD) many-outlier procedure is given for detecting from 1 to k outliers in a data set. This procedure has an advantage over the original ESD many-outlier procedure (Rosner 1975) in that it controls the type I error both under the hypothesis of no outliers and under the alternative hypotheses of 1, 2, …, k−1 outliers. A method is given for approximating percentiles for this procedure based on the t distribution. This method is shown, using Monte Carlo simulation, to be adequately accurate for detecting up to 10 outliers in samples as small as 25. Tables are given for implementing this method for n = 25(1)50(10)100(50)500; k = 10; α = .05, .01, .005.

1,056 citations
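
The generalized ESD procedure lends itself to a short implementation. The sketch below is a minimal Python version of the iterative test (the function name and defaults are illustrative, not from the paper): at each step the most extreme Studentized value is removed, each extreme statistic R_i is compared with a t-based critical value lambda_i, and the declared number of outliers is the largest i with R_i > lambda_i.

```python
# Minimal sketch of the generalized ESD many-outlier procedure; the percentile
# tables in the paper are replaced by the standard t-based approximation to the
# critical values lambda_i.
import numpy as np
from scipy import stats

def generalized_esd(x, k, alpha=0.05):
    """Estimate the number of outliers (0..k) in the 1-D sample x."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    work = x.copy()
    R, lam = [], []
    for i in range(1, k + 1):
        dev = np.abs(work - work.mean())
        j = int(np.argmax(dev))
        R.append(dev[j] / work.std(ddof=1))            # extreme Studentized deviate
        work = np.delete(work, j)                      # drop the most extreme point
        p = 1 - alpha / (2 * (n - i + 1))
        t = stats.t.ppf(p, n - i - 1)
        lam.append((n - i) * t / np.sqrt((n - i - 1 + t**2) * (n - i + 1)))
    # declared number of outliers: the largest i with R_i exceeding its critical value
    return max((i for i in range(1, k + 1) if R[i - 1] > lam[i - 1]), default=0)
```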


Journal ArticleDOI
TL;DR: Genesis and historical background; basic properties; expansions and algorithms; characterizations; sampling distributions; limit theorems and expansions; normal approximations to distributions; order statistics from normal samples; the bivariate normal distribution; bivariate normal sampling distributions; point estimation; statistical intervals.
Abstract: Genesis: an historical background. Basic Properties. Expansions and Algorithms. Characterizations. Sampling Distributions. Limit Theorems and Expansions. Normal Approximations to Distributions. Order Statistics from Normal Samples. The Bivariate Normal Distribution. Bivariate Normal Sampling Distributions. Point Estimation. Statistical Intervals.

339 citations



Journal ArticleDOI
TL;DR: Some of the advances made during the past 25 years in the statistical treatment of reliability problems are reviewed, and some areas where work is needed are suggested.
Abstract: Some of the advances made during the past 25 years in the statistical treatment of reliability problems are reviewed. The impact of statistical methods on reliability is discussed, and some areas where work is needed are suggested.

250 citations


Journal ArticleDOI

178 citations


Journal ArticleDOI
TL;DR: A text covering survival measurements and concepts, life tables for complete and incomplete (follow-up) mortality data, competing risks and multiple decrement life tables, and more advanced topics such as concomitant variables in lifetime distribution models, age-of-onset distributions, and models of aging and chronic diseases.
Abstract: SURVIVAL MEASUREMENTS AND CONCEPTS. Survival Data. Measures of Mortality and Morbidity. Ratios, Proportions, and Means. Survival Distributions. MORTALITY EXPERIENCES AND LIFE TABLES. Life Tables: Fundamentals and Construction. Complete Mortality Data. Estimation of Survival Function. Incomplete Mortality Data: Follow-Up Studies. Fitting Parametric Survival Distributions. Comparison of Mortality Experiences. MULTIPLE TYPES OF FAILURE. Theory of Competing Causes: Probabilistic Approach. Multiple Decrement Life Tables. Single Decrement Life Tables Associated with Multiple Decrement Life Tables: Their Interpretation and Meaning. Estimation and Testing Hypotheses in Competing Risk Analysis. SOME MORE ADVANCED TOPICS. Concomitant Variables in Lifetime Distributions Models. Age of Onset Distributions. Models of Aging and Chronic Diseases. Indexes.

166 citations


Journal ArticleDOI
TL;DR: The developments in linear regression methodology that have taken place during the 25-year history of Technometrics are summarized in this paper, where the major topics covered are variable selection, biased estimation, robust estimation, and regression diagnostics.
Abstract: The developments in linear regression methodology that have taken place during the 25-year history of Technometrics are summarized. Major topics covered are variable selection, biased estimation, robust estimation, and regression diagnostics.

165 citations


Journal ArticleDOI
TL;DR: In this article, a method for approximating the moments and percentage points of the run length distribution of one-sided CUSUM procedures for continuous random variables is given, where run length probabilities are calculated recursively using numerical quadrature until the ratio of successive probabilities stabilizes.
Abstract: A method is given for approximating the moments and percentage points of the run length distribution of one-sided CUSUM procedures for continuous random variables. The run length probabilities are calculated recursively using numerical quadrature until the ratio of successive probabilities stabilizes. This ratio and the probabilities of low run length values are then used to approximate parameters of the run length distribution. The accuracy of this method is compared with that of previous methods in examples involving normally distributed random variables.

155 citations
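
The quadrature recursion itself is not reproduced here, but the quantity it targets can be illustrated with a related, simpler approximation: discretizing the one-sided CUSUM into a Markov chain (in the spirit of Brook and Evans, 1972) and reading the average run length off the fundamental matrix. The sketch below assumes i.i.d. normal observations; the function name and parameter values are illustrative.

```python
# Minimal sketch of a Markov-chain approximation (not the paper's quadrature
# recursion) to the run length behaviour of a one-sided CUSUM
# S_t = max(0, S_{t-1} + X_t - k), which signals when S_t >= h.
import numpy as np
from scipy import stats

def cusum_arl(k, h, m=200, mu=0.0, sigma=1.0):
    """Approximate ARL of the one-sided CUSUM for i.i.d. N(mu, sigma^2) data,
    using an m-state discretization of the interval [0, h)."""
    w = h / m                                     # width of each transient state
    mid = (np.arange(m) + 0.5) * w                # state midpoints in [0, h)
    Q = np.empty((m, m))
    for i, s in enumerate(mid):
        # P(move from s to bin j): s + X - k lands in [j*w, (j+1)*w)
        upper = stats.norm.cdf((np.arange(1, m + 1) * w - s + k - mu) / sigma)
        lower = stats.norm.cdf((np.arange(m) * w - s + k - mu) / sigma)
        Q[i] = upper - lower
        Q[i, 0] += lower[0]                       # mass at or below zero resets to state 0
    arl = np.linalg.solve(np.eye(m) - Q, np.ones(m))
    return arl[0]                                 # start from S_0 = 0 (first bin)

# Example: in-control ARL for a standard normal process with k = 0.5, h = 4.
print(round(cusum_arl(k=0.5, h=4.0), 1))
```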


Journal ArticleDOI
TL;DR: In this article, single-point augmentation and exchange procedures are shown to be less costly and more efficient than multiple-point ones, recommendations for improving single-point methods are given, and empirical evidence supports the Galil and Kiefer option for constructing initial designs and Powell's optimization method for design augmentation.
Abstract: Some problems unique to the construction of N-point D-optimal designs on convex design spaces are considered. Multiple-point augmentation and exchange algorithms are shown to be more costly and less efficient than the analogous single-point procedures. Moreover, some recommendations for improving single-point methods are given. Finally, empirical evidence is found that supports the Galil and Kiefer option for constructing initial designs and Powell's optimization method for design augmentation.

139 citations
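
As a rough illustration of the single-point procedures being compared, the sketch below implements a generic single-point exchange for D-optimality over a finite candidate list: repeatedly swap the design-point/candidate pair that most increases det(X'X). It is not the specific algorithms, nor the Galil-Kiefer and Powell options, evaluated in the article; the function name, stopping rule, and example are illustrative.

```python
# Minimal sketch of a single-point exchange algorithm for N-point D-optimal
# designs on a finite candidate list (brute-force determinant recomputation).
import numpy as np

def d_optimal_exchange(candidates, N, n_iter=100, seed=0):
    """candidates: (M, p) model-expanded candidate matrix; returns design row indices."""
    rng = np.random.default_rng(seed)
    M = len(candidates)
    design = list(rng.choice(M, size=N, replace=False))

    def logdet(idx):
        X = candidates[idx]
        sign, val = np.linalg.slogdet(X.T @ X)
        return val if sign > 0 else -np.inf

    best = logdet(design)
    for _ in range(n_iter):
        improved = False
        for i in range(N):                      # try replacing each design point
            for j in range(M):                  # ... by each candidate point
                trial = design.copy()
                trial[i] = j
                val = logdet(trial)
                if val > best + 1e-10:
                    design, best, improved = trial, val, True
        if not improved:
            break
    return design

# Example: an 8-run design for a first-order model in two factors on a 5x5 grid.
g = np.linspace(-1, 1, 5)
xx, yy = np.meshgrid(g, g)
cand = np.column_stack([np.ones(xx.size), xx.ravel(), yy.ravel()])
print(cand[d_optimal_exchange(cand, N=8)][:, 1:])
```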


Journal ArticleDOI
TL;DR: A review of the book A Course in the Theory of Stochastic Processes.
Abstract: (1983). A Course in the Theory of Stochastic Processes. Technometrics: Vol. 25, No. 1, pp. 116-116.

130 citations



Journal ArticleDOI
TL;DR: In this article, a method for constructing confidence intervals for a binomial parameter upon termination of a sequential or multistage test is described for use with the MIL-STD 105D multiple sampling plans for acceptance sampling.
Abstract: This paper describes a method for constructing confidence intervals for a binomial parameter upon termination of a sequential or multistage test. Tables are presented for use with the MIL-STD 105D multiple sampling plans for acceptance sampling. Also given are tables for use with some three-stage schemes that have been proposed in connection with biomedical trials. The results are compared with confidence intervals calculated as if the sampling plan had been one with a fixed sample size.
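
The closing comparison, intervals computed as if the sample size had been fixed, can be illustrated by simulation. The sketch below uses a hypothetical two-stage plan (not a MIL-STD 105D plan, and not the paper's interval construction) to estimate the true coverage of a naive Clopper-Pearson interval that ignores the stopping rule; all stage sizes and acceptance numbers are illustrative.

```python
# Minimal sketch: coverage of a fixed-sample-size (Clopper-Pearson) interval when
# the data actually come from a two-stage attribute sampling plan.
import numpy as np
from scipy import stats

def naive_ci(x, n, alpha=0.05):
    """Clopper-Pearson interval that ignores the sequential stopping rule."""
    lo = 0.0 if x == 0 else stats.beta.ppf(alpha / 2, x, n - x + 1)
    hi = 1.0 if x == n else stats.beta.ppf(1 - alpha / 2, x + 1, n - x)
    return lo, hi

def coverage_two_stage(p, n1=50, n2=50, accept1=1, reject1=4, reps=20000, seed=1):
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(reps):
        x1 = rng.binomial(n1, p)
        if x1 <= accept1 or x1 >= reject1:        # decision reached after the first stage
            x, n = x1, n1
        else:                                     # otherwise draw the second sample
            x, n = x1 + rng.binomial(n2, p), n1 + n2
        lo, hi = naive_ci(x, n)
        hits += lo <= p <= hi
    return hits / reps

# Coverage may drift from the nominal 95% because the stopping rule is ignored.
print(coverage_two_stage(p=0.04))
```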


Journal ArticleDOI
TL;DR: In this article, maximum likelihood methods are used to estimate the parameters of two separate multiple regressions that switch at an unknown point in the data and a conservative bound on the null distribution function of the test statistic is derived based on an improved Bonferroni inequality.
Abstract: Maximum likelihood methods are used to estimate the parameters of two separate multiple regressions that switch at an unknown point in the data. Normal errors with constant variance are assumed and likelihood ratio statistics are used to test for the presence of two separate regressions. Our main result is a conservative bound on the null distribution function of the test statistic. This bound is based on an improved Bonferroni inequality, and a simple power-series approximation is provided. Similar bounds are given for likelihood ratio statistics that test for a shift in the constant term of the regression only. The accuracies of the bounds and approximations are evaluated on a number of examples.
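
A minimal sketch of the estimation side of this setup: fit separate least squares regressions on either side of every admissible switch point and form the likelihood ratio statistic against a single regression. The conservative bound on the null distribution derived in the paper is not reproduced; the function names and the simulated example are illustrative.

```python
# Minimal sketch of two-phase (switching) regression estimation with a
# likelihood ratio statistic, assuming normal errors with a common variance.
import numpy as np

def switching_lr(X, y, min_seg=None):
    """Return (LR statistic, estimated switch index) for y = X b + e with one switch."""
    n, p = X.shape
    min_seg = min_seg or (p + 1)

    def sse(Xs, ys):
        beta = np.linalg.lstsq(Xs, ys, rcond=None)[0]
        r = ys - Xs @ beta
        return float(r @ r)

    sse0 = sse(X, y)                               # single-regression fit
    best = (np.inf, None)
    for m in range(min_seg, n - min_seg + 1):      # switch after observation m
        s = sse(X[:m], y[:m]) + sse(X[m:], y[m:])
        if s < best[0]:
            best = (s, m)
    lr = n * np.log(sse0 / best[0])                # -2 log likelihood ratio
    return lr, best[1]

# Illustrative use on simulated data with a change in slope halfway through.
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 80)
y = np.where(x < 5, 1 + 0.5 * x, 6 - 0.5 * x) + rng.normal(0, 0.3, 80)
X = np.column_stack([np.ones_like(x), x])
print(switching_lr(X, y))
```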

Journal ArticleDOI
TL;DR: In this article, a method based on maximum likelihood estimation of the parameters is proposed for constructing confidence bands for cumulative distribution functions, which is based on the classical Kolmogorov-Smirnov test for an empirical distribution function.
Abstract: Previously suggested methods for constructing confidence bands for cumulative distribution functions have been based on the classical Kolmogorov-Smirnov test for an empirical distribution function. This paper gives a method based on maximum likelihood estimation of the parameters. The method is described for a general continuous distribution. Detailed results are given for a location-scale parameter model, which includes the normal and extreme-value distributions as special cases. Results are also given for the related lognormal and Weibull distributions. The formulas derived for these distributions give a band with exact confidence coefficient. A chi-squared approximation, which avoids the use of special tables, is also described. An example is used to compare the resulting bands with those obtained by previously published methods.
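
The chi-squared approximation mentioned at the end can be illustrated by brute force for a normal sample: sweep (mu, sigma) pairs whose log likelihood lies within half the chi-squared(2) critical value of its maximum and take, at each x, the extreme values of the fitted CDF. This grid search only sketches the construction; the exact location-scale formulas in the paper are not reproduced, and the grid ranges below are assumptions.

```python
# Minimal sketch of a likelihood-region confidence band for a normal CDF via the
# chi-squared approximation; grid ranges and resolution are illustrative.
import numpy as np
from scipy import stats

def ml_cdf_band(data, xs, alpha=0.05, grid=120):
    data = np.asarray(data, float)
    n = len(data)
    mu_hat, sig_hat = data.mean(), data.std(ddof=0)    # MLEs under the normal model

    def loglik(mu, sig):
        return stats.norm.logpdf(data, mu, sig).sum()

    cut = loglik(mu_hat, sig_hat) - 0.5 * stats.chi2.ppf(1 - alpha, df=2)
    mus = np.linspace(mu_hat - 4 * sig_hat / np.sqrt(n),
                      mu_hat + 4 * sig_hat / np.sqrt(n), grid)
    sigs = np.linspace(0.5 * sig_hat, 2.0 * sig_hat, grid)
    lower = np.ones_like(xs, dtype=float)
    upper = np.zeros_like(xs, dtype=float)
    for mu in mus:
        for sig in sigs:
            if loglik(mu, sig) >= cut:                 # (mu, sig) inside the LR region
                F = stats.norm.cdf(xs, mu, sig)
                lower = np.minimum(lower, F)
                upper = np.maximum(upper, F)
    return lower, upper

# Illustrative use on a simulated sample of size 30.
sample = np.random.default_rng(2).normal(10, 2, size=30)
grid_x = np.linspace(4, 16, 25)
lo, hi = ml_cdf_band(sample, grid_x)
```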

Journal ArticleDOI
TL;DR: In this paper, a diagnostic method for assessing the degree to which individual cases and groups of cases influence the Box-Cox likelihood estimate of the transformation parameter for the response variable in linear regression models is described.
Abstract: We describe a diagnostic method for assessing the degree to which individual cases and groups of cases influence the Box-Cox likelihood estimate of the transformation parameter for the response variable in linear regression models. We compare the method to a method proposed by Atkinson (1982) and sketch the extension to explanatory variables. We present two examples.
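
A brute-force version of this kind of diagnostic is easy to state: re-estimate the Box-Cox lambda by profile likelihood with each case deleted and examine the shift from the full-data estimate. The sketch below does exactly that; it omits the one-step approximations and group-deletion measures a practical method (including the one in this article) would use, the function names are illustrative, and y is assumed positive.

```python
# Minimal sketch of a case-deletion influence measure for the Box-Cox
# transformation parameter in a linear regression (y > 0 assumed).
import numpy as np
from scipy.optimize import minimize_scalar

def boxcox_lambda(X, y):
    """Profile-likelihood estimate of the Box-Cox lambda for y regressed on X."""
    n = len(y)
    logy = np.log(y)

    def neg_profile(lam):
        z = logy if abs(lam) < 1e-8 else (y**lam - 1.0) / lam
        beta = np.linalg.lstsq(X, z, rcond=None)[0]
        rss = np.sum((z - X @ beta) ** 2)
        # negative profile log likelihood (constants dropped)
        return 0.5 * n * np.log(rss / n) - (lam - 1.0) * logy.sum()

    return minimize_scalar(neg_profile, bounds=(-2, 2), method="bounded").x

def lambda_influence(X, y):
    """Shift in the lambda estimate when each observation is deleted in turn."""
    lam_full = boxcox_lambda(X, y)
    shifts = [boxcox_lambda(np.delete(X, i, axis=0), np.delete(y, i)) - lam_full
              for i in range(len(y))]
    return lam_full, np.array(shifts)      # a large |shift| flags an influential case
```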




Journal ArticleDOI
TL;DR: In this article, two new tests for the two-parameter exponential distribution are presented, which can be used with doubly censored samples, are easy to compute, need no special constants, and have high power compared with several competing tests.
Abstract: Two new tests for the two-parameter exponential distribution are presented. The test statistics can be used with doubly censored samples, are easy to compute, need no special constants, and have high power compared with several competing tests. The first test statistic is sensitive to monotone hazard functions, and its percentage points can be closely approximated by the standard normal distribution. The second test statistic is sensitive to nonmonotone hazard functions. The chi-squared (2 degrees of freedom) distribution can be used as an approximation to the distribution of this statistic for moderate and large sample sizes. Monte Carlo power estimates and an example are given.


Journal ArticleDOI
TL;DR: In this article, an approximate F-test for the equality of two gamma distribution scale parameters, given equal but unknown shape parameters, is proposed and investigated; the test is obtained by replacing the shape parameter by its maximum likelihood estimate.
Abstract: An approximate F-test for the equality of two gamma distribution scale parameters, given equal but unknown shape parameters, is proposed and investigated. The test is obtained by replacing the shape parameter by its maximum likelihood estimate. Monte Carlo and asymptotic results show that in many cases this substitution does not seriously affect the nominal test size. Additionally, these results can be used to modify the test to more closely achieve the desired size. An example from a cloud seeding experiment is given.
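
The construction described in the abstract can be sketched directly: estimate the common shape k by maximum likelihood with the two scales profiled out, then refer the ratio of sample means to an F distribution with 2*n1*k_hat and 2*n2*k_hat degrees of freedom (a sum of n independent Gamma(k) variables is Gamma(n*k)). The two-sided p-value and the function names below are illustrative choices, not taken from the paper.

```python
# Minimal sketch of an approximate F-test for equal gamma scale parameters with a
# common, unknown shape parameter estimated by maximum likelihood.
import numpy as np
from scipy import stats
from scipy.special import gammaln
from scipy.optimize import minimize_scalar

def common_shape_mle(samples):
    """ML estimate of a gamma shape parameter shared by several samples (scales free)."""
    def neg_loglik(log_k):
        k = np.exp(log_k)
        ll = 0.0
        for s in samples:
            n, xbar = len(s), s.mean()
            theta = xbar / k                    # profile MLE of this sample's scale given k
            ll += ((k - 1) * np.log(s).sum() - s.sum() / theta
                   - n * k * np.log(theta) - n * gammaln(k))
        return -ll
    return float(np.exp(minimize_scalar(neg_loglik, bounds=(-4, 6), method="bounded").x))

def gamma_scale_ftest(x, y):
    """Approximate F-test of equal gamma scales with a common, unknown shape."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n1, n2 = len(x), len(y)
    k_hat = common_shape_mle([x, y])
    F = x.mean() / y.mean()                     # ratio of sample means
    df1, df2 = 2 * n1 * k_hat, 2 * n2 * k_hat   # sum of n iid Gamma(k) variables is Gamma(n*k)
    p = 2 * min(stats.f.cdf(F, df1, df2), stats.f.sf(F, df1, df2))
    return F, k_hat, p
```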

Journal ArticleDOI
TL;DR: In this article, a power transformation of the data is described, and diagnostic methods for detecting the influence of individual observations on the transformation are presented. The emphasis is on plots as diagnostic tools.
Abstract: Diagnostic displays for outlying and influential observations are reviewed. In some examples apparent outliers vanish after a power transformation of the data. Interpretation of the score statistic for transformations as regression on a constructed variable makes diagnostic methods available for detection of the influence of individual observations on the transformation. Methods for the power transformation are exemplified and extended to the power transformation after a shift in location, for which there are two constructed variables. The emphasis is on plots as diagnostic tools.
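
A minimal sketch of the constructed-variable idea for the power transformation, assuming positive responses: the constructed variable at lambda = 1 is w = y*(log(y/g) - 1), with g the geometric mean of y, and an added-variable plot of the residuals of y on X against the residuals of w on X displays both the evidence for a transformation and the cases that drive it. The shifted-power case with its two constructed variables is not covered here, and the function name is illustrative.

```python
# Minimal sketch of the data behind an added-variable (constructed-variable) plot
# for the Box-Cox power transformation evaluated at lambda = 1 (y > 0 assumed).
import numpy as np

def constructed_variable_plot_data(X, y):
    """Return (residuals of y on X, residuals of w on X) for the added-variable plot."""
    g = np.exp(np.mean(np.log(y)))                   # geometric mean of the response
    w = y * (np.log(y / g) - 1.0)                    # constructed variable at lambda = 1
    H = X @ np.linalg.solve(X.T @ X, X.T)            # hat matrix (X assumed full rank)
    return y - H @ y, w - H @ w

# Regressing the first set of residuals on the second gives a quick indication of
# whether a transformation is needed; points far from that trend are the cases
# with most influence on the choice of transformation.
```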


Journal ArticleDOI
TL;DR: A class of continuous sampling plans (CSP's) that switch between full and partial inspection of items in a production line is formulated in terms of discrete renewal processes, finding that AOQ greatly overestimates AOQ(t) for short runs, while the approximation AOQ*(t) is found to be sufficiently accurate in situations corresponding to actual practice.
Abstract: A class of continuous sampling plans (CSP's) that switch between full and partial inspection of items in a production line is formulated in terms of discrete renewal processes. The renewal-theory framework facilitates studying both the long-run average outgoing quality (AOQ) and the average outgoing quality in a short production run of length t, AOQ(t). Renewal theory also leads to a computable approximation, AOQ*(t), to AOQ(t). By simulation it is found that AOQ greatly overestimates AOQ(t), for short runs, while the approximation AOQ*(t) is found to be sufficiently accurate in situations corresponding to actual practice. Formulas are derived enabling one to compute AOQ and AOQ*(t) for the Dodge sampling plans CSP-1 through CSP-5. Numerical illustrations for selected CSP's are presented.
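
A Monte Carlo sketch of the short-run quantity AOQ(t) for the simplest plan in the family, CSP-1: 100% inspection until i consecutive conforming items are found, then inspection of a random fraction f of items, reverting to 100% inspection when an inspected item is defective. Inspected defectives are assumed removed and not replaced, all parameter values are illustrative, and the renewal-theory formulas and the AOQ*(t) approximation from the paper are not reproduced.

```python
# Minimal sketch: simulated average outgoing quality of CSP-1 over a run of t items,
# counted as the defective fraction among the t items produced.
import numpy as np

def csp1_aoq(p, i=20, f=0.1, t=500, reps=2000, seed=3):
    """Monte Carlo estimate of AOQ(t) for CSP-1 with incoming fraction defective p."""
    rng = np.random.default_rng(seed)
    shipped_defectives = 0
    for _ in range(reps):
        streak, sampling, bad_out = 0, False, 0
        for _ in range(t):
            defective = rng.random() < p
            inspected = (not sampling) or (rng.random() < f)
            if inspected:
                if defective:
                    streak, sampling = 0, False      # defect found: back to 100% inspection
                else:
                    streak += 1
                    if not sampling and streak >= i:
                        sampling = True              # clearance number reached
            elif defective:
                bad_out += 1                         # uninspected defective ships
        shipped_defectives += bad_out
    return shipped_defectives / (reps * t)

# For large t this estimate approaches the long-run AOQ of CSP-1; for short runs it is
# typically smaller, which is the overestimation effect noted in the abstract.
print(csp1_aoq(p=0.02))
```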

Journal ArticleDOI
TL;DR: In this article, the authors compared the OLS estimator with the corrected least squares estimator (CLS) using the minimum asymptotic mean squared error criterion, and concluded that the CLS estimator should be preferred.
Abstract: The ordinary least squares (OLS) estimator of the regression parameter in the simple errors-in-variables model with known variance of the observational error is compared with a well-known consistent estimator—the corrected least squares (CLS) estimator—using the minimum asymptotic mean squared error criterion. Conditions for one estimator to dominate the other are derived and illustrated graphically. For many empirical situations, the consequence of an erroneous choice in favor of OLS appears to be more serious than that of an erroneous choice of CLS. Therefore, CLS estimation should be recommended in general. Next, we reexamine this conclusion, assuming that an incorrect value for the variance of the measurement error is used to compute the CLS estimate. Our conclusion is then also that CLS estimation should be preferred, except when the error variance is highly overestimated.
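
The comparison is easy to reproduce in miniature. The sketch below simulates the simple errors-in-variables model with known measurement-error variance and contrasts the OLS slope with the corrected least squares slope b_CLS = S_xy / (S_xx - sigma_u^2); the sample sizes and variances are illustrative, and only the mean squared errors (not the asymptotic dominance conditions from the paper) are computed.

```python
# Minimal sketch: finite-sample MSE of OLS versus corrected least squares (CLS)
# in a simple errors-in-variables regression with known error variance sigma_u^2.
import numpy as np

def ols_vs_cls(beta=1.0, sigma_u2=0.5, sigma_e=0.5, n=50, reps=5000, seed=4):
    rng = np.random.default_rng(seed)
    ols, cls = [], []
    for _ in range(reps):
        xi = rng.normal(0, 1, n)                       # true regressor
        x = xi + rng.normal(0, np.sqrt(sigma_u2), n)   # observed with measurement error
        y = beta * xi + rng.normal(0, sigma_e, n)
        sxx = np.var(x, ddof=1)
        sxy = np.cov(x, y, ddof=1)[0, 1]
        ols.append(sxy / sxx)                          # attenuated ordinary least squares
        cls.append(sxy / (sxx - sigma_u2))             # corrected least squares
    ols, cls = np.array(ols), np.array(cls)
    return {"OLS mse": np.mean((ols - beta) ** 2), "CLS mse": np.mean((cls - beta) ** 2)}

print(ols_vs_cls())
```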

Journal ArticleDOI
TL;DR: Before the last decade, the most common tools for regression criticism were probability plots and plots of residuals against various quantities such as fitted values; each of these was intended to serve a number of purposes, providing information on outliers, linearity, heteroscedasticity, the need to transform, and perhaps some notion of influence, depending on the pattern in the plot.
Abstract: Ron Hocking's own work on regression has played such an important role in the development of regression methods that it is very fitting for him to have written this review of the last quarter-century of advances. His review paper on selection methods in Biometrics (Hocking 1976) got me interested in that problem and possibly in regression in general, and that paper, like the current one, is exemplary. Both survey an area, but still leave many questions unanswered. In fitting linear regression models we make many assumptions such as linearity, constant variance, and perhaps normality. We assume that relevant variables are measured, and that these do not need to be transformed. As Hocking has pointed out, the precomputer approach of taking the assumptions as given and correct is no longer accepted statistical practice. Methods for criticism of assumptions and of influence analysis have now become standard, as clearly indicated by the proportion of Hocking's review that is dedicated to such methods. Hocking does note, however, that the array of such techniques that are available to the analyst is large, so the choice of appropriate and useful measures is not always clear. The confusion has several sources. The whole methodology of regression criticism has developed very quickly. Before the last decade, the most common tools for criticism were plots of residuals against various quantities such as fitted values, and probability plots. Each of these was intended to serve a number of purposes, providing information on outliers, linearity, heteroscedasticity, the need to transform, and perhaps some notion of influence, depending on the pattern in the plot. On the other hand, the recently developed or rediscovered methods for criticism seem to address specific issues, and each of these methods may require computation of statistics useful for that one method only. At the same time, methods have been developed that are probably not generally useful, but many regression analysts are not sufficiently knowledgeable to tell the good ones from the not-so-good ones. My purpose in these remarks is to present some guidelines for developing, and using, methods for regression criticism. I address separately what I call diagnostics (model criticism) and influence analysis (data criticism). Before proceeding, it may be well to point out that not everyone agrees with the importance of these methods. Some think that the methods do little more than allow analysts to make quick but superficial decisions concerning data. I obviously do not agree with this view and have found that intelligent application of these methods can be very useful in practice. In any case, the discussion following Atkinson (1982) is illuminating on this issue.

Journal ArticleDOI
TL;DR: The centroid of a constraint region has traditionally been defined as the average of all extreme vertices of the region as mentioned in this paper, which differs from the classical physics definition of a centroid as the center of mass (or volume) of a region.
Abstract: In constrained mixture experiments the centroid of a constraint region has traditionally been defined as the average of all extreme vertices of the region. This differs from the classical physics definition of a centroid as the center of mass (or volume) of a region. An algorithm for calculating a centroid based on the center of mass definition is discussed and illustrated with an example. This centroid calculation technique can be used to calculate centroids of various dimensional faces and edges of the constraint region as well as of the overall centroid. Results of the center-of-mass and averaged-extreme-vertices centroid computation techniques are compared using examples from the literature.
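
The distinction between the two centroids is easy to see in two dimensions. The sketch below takes the extreme vertices of a convex planar region and returns both the vertex average and the center-of-mass centroid from the standard polygon formulas; it does not implement the article's algorithm for general mixture regions and their lower-dimensional faces, and the function name and example are illustrative.

```python
# Minimal sketch: vertex-average centroid versus center-of-mass (area) centroid
# of a convex planar region given by its extreme vertices.
import numpy as np

def polygon_centroids(vertices):
    """vertices: (m, 2) extreme points of a convex region, in any order."""
    v = np.asarray(vertices, float)
    vertex_average = v.mean(axis=0)
    # order the vertices counterclockwise so the polygon formulas apply
    c = vertex_average
    v = v[np.argsort(np.arctan2(v[:, 1] - c[1], v[:, 0] - c[0]))]
    x, y = v[:, 0], v[:, 1]
    xn, yn = np.roll(x, -1), np.roll(y, -1)
    cross = x * yn - xn * y                     # shoelace terms
    area = cross.sum() / 2.0
    mass_centroid = np.array([((x + xn) * cross).sum(),
                              ((y + yn) * cross).sum()]) / (6 * area)
    return vertex_average, mass_centroid

# Example: a quadrilateral where the two notions of centroid clearly differ.
print(polygon_centroids([[0, 0], [4, 0], [1, 1], [0, 1]]))
```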

Journal ArticleDOI
TL;DR: In this article, the authors discuss several guidelines for developing mixture constraints, present techniques for checking the consistency of the constraints developed, and illustrate the guidelines and techniques with several examples.
Abstract: Physical, theoretical, and economic considerations in mixture experimentation often impose constraints on the levels of components in the mixture. These constraints define the region of interest and hence play an important role in the design and analysis of the mixture experiment. Because of this important role, sufficient care must be taken in developing the set of constraints. This article discusses several guidelines for developing mixture constraints and presents techniques for checking the consistency of the constraints developed. The guidelines and techniques are illustrated using several examples.
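
One elementary consistency check of the kind discussed: single-component bounds L_i <= x_i <= U_i with sum(x) = 1 are feasible only if sum(L) <= 1 <= sum(U), and any bound that can never be attained can be tightened to its implied value. The sketch below performs that check and tightening; it covers only a fragment of the checks in the article, which also handles multi-component constraints, and the function name is illustrative.

```python
# Minimal sketch: feasibility check and implied-bound tightening for
# single-component constraints on mixture components that sum to one.
import numpy as np

def check_component_bounds(L, U):
    """Check and tighten lower/upper bounds on mixture components with sum(x) = 1."""
    L, U = np.asarray(L, float), np.asarray(U, float)
    if np.any(L > U) or L.sum() > 1 or U.sum() < 1:
        raise ValueError("bounds admit no mixture summing to one")
    # implied bounds: the other components use at least sum(L)-L_i and at most sum(U)-U_i
    L_star = np.maximum(L, 1 - (U.sum() - U))
    U_star = np.minimum(U, 1 - (L.sum() - L))
    inconsistent = np.flatnonzero((L_star > L + 1e-12) | (U_star < U - 1e-12))
    return L_star, U_star, inconsistent        # indices whose stated bounds were unattainable

# Example: the upper bound on the third component is unattainable and gets tightened.
print(check_component_bounds([0.2, 0.3, 0.0], [0.4, 0.6, 0.9]))
```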

Journal ArticleDOI
TL;DR: In this paper, a design optimality criterion, tr (L)-optimality, is applied to the problem of designing two-level multifactor experiments to detect the presence of interactions among the controlled variables.
Abstract: A design optimality criterion, tr (L)-optimality, is applied to the problem of designing two-level multifactor experiments to detect the presence of interactions among the controlled variables. We give rules for constructing tr (L)-optimal foldover designs and tr (L)-optimal fractional factorial designs. Some results are given on the power of these designs for testing the hypothesis that there are no two-factor interactions. Augmentation of the tr (L)-optimal designs produces designs that achieve a compromise between the criteria of D-optimality (for parameter estimation in a first-order model) and tr (L)-optimality (for detecting lack of fit). We give an example to demonstrate an application to the sensitivity analysis of a computer model.
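
The foldover construction referred to above is simple to state: append the sign-reversed runs of a two-level fraction, which frees main effects from two-factor interactions. The sketch below builds a foldover and measures main-effect versus two-factor-interaction aliasing; it does not compute the tr(L) criterion or the optimal fractions given in the paper, and the helper names and example fraction are illustrative.

```python
# Minimal sketch: foldover of a two-level fractional factorial and a simple
# alias measure between main effects and two-factor interactions.
import numpy as np
from itertools import combinations

def foldover(design):
    """Append the sign-reversed runs to a +/-1 design matrix."""
    design = np.asarray(design)
    return np.vstack([design, -design])

def worst_me_2fi_alias(design):
    """Largest absolute normalized inner product between a main-effect column
    and a two-factor interaction column."""
    tfis = [design[:, i] * design[:, j]
            for i, j in combinations(range(design.shape[1]), 2)]
    return max(abs(me @ tfi) / len(me) for me in design.T for tfi in tfis)

# Example: a resolution III 2^(3-1) fraction (C = AB) and its foldover.
base = np.array([[ 1,  1,  1],
                 [ 1, -1, -1],
                 [-1,  1, -1],
                 [-1, -1,  1]])
print(worst_me_2fi_alias(base))            # 1.0: main effects aliased with 2-factor interactions
print(worst_me_2fi_alias(foldover(base)))  # 0.0: the foldover breaks those aliases
```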