
Showing papers in "Technometrics in 1985"


Journal ArticleDOI
TL;DR: In this paper, the authors use the method of probability-weighted moments to derive estimators of the parameters and quantiles of the generalized extreme-value distribution, and investigate the properties of these estimators in large samples via asymptotic theory, and in small and moderate samples, via computer simulation.
Abstract: We use the method of probability-weighted moments to derive estimators of the parameters and quantiles of the generalized extreme-value distribution. We investigate the properties of these estimators in large samples, via asymptotic theory, and in small and moderate samples, via computer simulation. Probability-weighted moment estimators have low variance and no severe bias, and they compare favorably with estimators obtained by the methods of maximum likelihood or sextiles. The method of probability-weighted moments also yields a convenient and powerful test of whether an extreme-value distribution is of Fisher-Tippett Type I, II, or III.
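The probability-weighted moments referred to above can be made concrete with a short sketch. The sample PWMs b_r estimate E[X F(X)^r], and the closed-form GEV parameter approximations below follow the widely reproduced Hosking-Wallis-Wood formulas; the numerical constants and the sign convention for the shape parameter k are assumptions to be checked against the article itself, and this is a minimal illustration rather than the authors' implementation.

```python
import math

def sample_pwms(x):
    """Unbiased sample probability-weighted moments b0, b1, b2,
    where b_r estimates E[X F(X)^r]; xs holds the order statistics."""
    xs = sorted(x)
    n = len(xs)
    b0 = sum(xs) / n
    b1 = sum(i * xs[i] for i in range(n)) / (n * (n - 1))
    b2 = sum(i * (i - 1) * xs[i] for i in range(n)) / (n * (n - 1) * (n - 2))
    return b0, b1, b2

def gev_pwm_fit(x):
    """GEV location xi, scale alpha, and shape k from the sample PWMs,
    using the closed-form approximation of Hosking, Wallis, and Wood."""
    b0, b1, b2 = sample_pwms(x)
    c = (2 * b1 - b0) / (3 * b2 - b0) - math.log(2) / math.log(3)
    k = 7.8590 * c + 2.9554 * c * c
    g = math.gamma(1 + k)
    alpha = (2 * b1 - b0) * k / (g * (1 - 2 ** (-k)))
    xi = b0 + alpha * (g - 1) / k
    return xi, alpha, k

def gev_quantile(p, xi, alpha, k):
    """Quantile x_p with F(x_p) = p for the fitted generalized extreme-value cdf
    F(x) = exp{-[1 - k(x - xi)/alpha]^(1/k)}."""
    return xi + alpha * (1 - (-math.log(p)) ** k) / k
```

A test of the Type I (Gumbel) hypothesis can then be based on whether the fitted shape parameter differs significantly from zero, which is the spirit of the test mentioned in the abstract.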

1,275 citations


Journal ArticleDOI
Peter McCullagh
TL;DR: Analysis of ordinal categorical data.
Abstract: Analysis of Ordinal Categorical Data. Technometrics, 1985.

683 citations


Journal ArticleDOI

638 citations


Journal ArticleDOI
TL;DR: In this article, it is shown that an MCUSUM procedure is often preferable to Hotelling's T² procedure for the case in which the quality characteristics are bivariate normal random variables.
Abstract: It is a common practice to use, simultaneously, several one-sided or two-sided CUSUM procedures of the type proposed by Page (1954). In this article, this method of control is considered to be a single multivariate CUSUM (MCUSUM) procedure. Methods are given for approximating parameters of the distribution of the minimum of the run lengths of the univariate CUSUM charts. Using a new method of comparing multivariate control charts, it is shown that an MCUSUM procedure is often preferable to Hotelling's T² procedure for the case in which the quality characteristics are bivariate normal random variables.
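The multivariate scheme described above is built from simultaneous univariate CUSUMs, and its run length is the minimum of the component run lengths. A minimal sketch under simplifying assumptions (each coordinate already expressed as a standardized deviation from its target; reference value k and decision interval h chosen by the user) is given below; it illustrates the construction and is not the authors' code.

```python
def two_sided_cusum_run_length(xs, k, h):
    """Run length of a two-sided tabular CUSUM of the Page (1954) type:
    signal at the first i with S+_i > h or S-_i > h, else return None.
    xs holds deviations of the observations from their target value."""
    s_hi = s_lo = 0.0
    for i, x in enumerate(xs, start=1):
        s_hi = max(0.0, s_hi + x - k)
        s_lo = max(0.0, s_lo - x - k)
        if s_hi > h or s_lo > h:
            return i
    return None

def mcusum_run_length(series, k, h):
    """Treat simultaneous univariate CUSUMs, one per coordinate, as a single
    multivariate CUSUM: the scheme signals as soon as any chart signals, so its
    run length is the minimum of the component run lengths."""
    run_lengths = [two_sided_cusum_run_length(coord, k, h) for coord in zip(*series)]
    run_lengths = [r for r in run_lengths if r is not None]
    return min(run_lengths) if run_lengths else None
```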

394 citations


Journal ArticleDOI
TL;DR: Design and implementation procedures are described for counted-data CUSUM's (sometimes called CUSUM's for attributes), which are easy to design and implement and can be used to detect both increases and decreases in the count level.
Abstract: Cumulative Sum (CUSUM) control schemes are widely used in industry for process and measurement control. Most CUSUM applications have been for continuous variables. There have been fewer uses of CUSUM control schemes when the response is a count such as the number of defects per unit or the occurrence of an accident. This article describes design and implementation procedures for counted data CUSUM's (these are sometimes called CUSUM's for attributes). These CUSUM's are easy to design and implement; they can be used to detect both increases and decreases in the count level. Enhancements to the CUSUM scheme, including the fast initial response (FIR) feature and the robust CUSUM are discussed. These enhancements speed up the detection of changes in the count level and guard against the effects of atypical or outlier observations.
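For a Poisson count with acceptable mean mu_ok and rejectable mean mu_bad, the usual likelihood-ratio reference value lies between the two means, and the fast initial response (FIR) feature amounts to starting the CUSUM at a nonzero headstart (often half the decision interval). The sketch below illustrates these two ingredients under those standard conventions; the article's specific design recommendations (choice of h, robustification against outliers) are not reproduced here.

```python
import math

def poisson_cusum_reference_value(mu_ok, mu_bad):
    """Reference value k for a counted-data CUSUM detecting an increase in a
    Poisson mean from mu_ok to mu_bad (standard likelihood-ratio choice)."""
    return (mu_bad - mu_ok) / (math.log(mu_bad) - math.log(mu_ok))

def upper_count_cusum(counts, k, h, headstart=0.0):
    """One-sided upward CUSUM for counts with decision interval h.
    A nonzero headstart (e.g. h / 2) gives the fast-initial-response feature;
    returns the index of the first signal, or None if no signal occurs."""
    s = headstart
    for i, c in enumerate(counts, start=1):
        s = max(0.0, s + c - k)
        if s > h:
            return i
    return None
```

A decrease in the count level is monitored analogously with a downward CUSUM that accumulates k minus the observed counts.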

372 citations


Journal ArticleDOI
TL;DR: A review of Modern Multivariate Statistical Analysis: A Graduate Course and Handbook.
Abstract: (1987). Modern Multivariate Statistical Analysis: A Graduate Course and Handbook. Technometrics: Vol. 29, No. 2, pp. 242-243.

200 citations


Journal ArticleDOI
TL;DR: A review of Analysis of Messy Data, Volume I: Designed Experiments.
Abstract: (1985). Analysis of Messy Data, Volume I: Designed Experiments. Technometrics: Vol. 27, No. 4, pp. 440-440.

172 citations


Journal ArticleDOI
TL;DR: In this article, estimation of a location parameter in the potential presence of outliers is studied by means of a Monte Carlo study that yields Monte Carlo variances of the arithmetic mean after rejection of outliers according to several classical and recent formal rules.
Abstract: In the past, methods for rejection of outliers have been investigated mostly without regard to the quantitative consequences for subsequent estimation or testing procedures. Moreover, although rejection of outliers with subsequent application of least squares methods is one of the oldest and most widespread classes of robust procedures, until recently no comparison was made with other robust methods. In this article the simplest situation, namely estimation of a location parameter in the potential presence of outliers, is treated by means of a Monte Carlo study. This study yields Monte Carlo variances of the “arithmetic mean” after rejection of outliers according to several classical and recent formal rules. The results are also compared with those for other robust estimators of location parameters. It turns out that a simple summary and theoretical explanation of the Monte Carlo results is provided by the breakdown points of the combined rejection-estimation procedures. As a by-product, the concept of br...
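As an illustration of the kind of rejection-then-estimation procedure being compared, the sketch below applies one simple rejection rule (points more than a few robust standard deviations from the median) and estimates the Monte Carlo variance of the resulting mean under a contaminated normal model. The rule, the contamination model, and all constants are illustrative assumptions, not the classical and recent rules actually studied in the article.

```python
import random
import statistics

def reject_then_mean(x, c=3.0):
    """Illustrative rejection rule: drop points more than c robust standard
    deviations (1.4826 * MAD) from the median, then average what remains."""
    med = statistics.median(x)
    mad = statistics.median(abs(v - med) for v in x)
    scale = 1.4826 * mad if mad > 0 else 1.0
    kept = [v for v in x if abs(v - med) <= c * scale]
    return statistics.mean(kept) if kept else med

def monte_carlo_variance(n=20, eps=0.10, reps=5000, seed=1):
    """Monte Carlo variance of the post-rejection mean under the contaminated
    normal model (1 - eps) N(0, 1) + eps N(0, 3^2)."""
    rng = random.Random(seed)
    estimates = []
    for _ in range(reps):
        sample = [rng.gauss(0.0, 3.0 if rng.random() < eps else 1.0) for _ in range(n)]
        estimates.append(reject_then_mean(sample))
    return statistics.pvariance(estimates)
```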

166 citations


Journal ArticleDOI
TL;DR: In this article, it was shown that for the case of exponentially distributed observations, the Page equation can be solved without resorting to approximations, and the authors provided some tables of average run lengths for the exponential case and comment on an application of exponential CUSUM charts to controlling the intensity of a Poisson process.
Abstract: Page (1954) originally noted that it is possible to find an integral equation whose solution gives average run lengths for one-sided CUSUM schemes. Lucas and Crosier (1982), for the case of normally distributed observations, have obtained numerical solutions to Page's integral equation and used these in their study of so-called fast-initial-response CUSUM charts. In this article we show that for the case of exponentially distributed observations, the Page equation can be solved without resorting to approximations. We then provide some tables of average run lengths for the exponential case and comment on an application of exponential CUSUM charts to controlling the intensity of a Poisson process.
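The article's contribution is an exact solution of Page's integral equation for exponential observations; the average run lengths it tabulates can be cross-checked by simulation. The sketch below is such a Monte Carlo check for a one-sided exponential CUSUM, with parameter names and defaults chosen for illustration only.

```python
import random

def exponential_cusum_arl(rate, k, h, reps=20000, max_n=1_000_000, seed=1):
    """Monte Carlo estimate of the average run length of the one-sided CUSUM
    S_i = max(0, S_{i-1} + X_i - k), S_0 = 0, signalling when S_i > h,
    for independent X_i ~ Exponential(rate)."""
    rng = random.Random(seed)
    total = 0
    for _ in range(reps):
        s, n = 0.0, 0
        while n < max_n:
            n += 1
            s = max(0.0, s + rng.expovariate(rate) - k)
            if s > h:
                break
        total += n
    return total / reps
```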

157 citations


Journal ArticleDOI
TL;DR: A review of The User's Guide to Multidimensional Scaling.
Abstract: (1985). The User's Guide to Multidimensional Scaling. Technometrics: Vol. 27, No. 1, pp. 87-88.

155 citations


Journal ArticleDOI
TL;DR: In this paper, two uncertainty analysis techniques were applied to a mathematical model that estimates the dose-equivalent to man from the concentration of radioactivity in air, water, and food.
Abstract: Two techniques of uncertainty analysis were applied to a mathematical model that estimates the dose-equivalent to man from the concentration of radioactivity in air, water, and food. The response-surface method involved screening of the model to determine the important parameters, development of the response-surface equation, calculation of the moments from the response-surface model, and fitting of a Pearson or Johnson distribution using the calculated moments. The second method sampled model inputs by Latin hypercube methods and iteratively simulated the model to obtain an empirical estimate of the cdf. Comparison of the two methods indicates that it is often difficult to ascertain the adequacy or reliability of the response-surface method. The empirical method is simpler to implement and, because all model inputs are included in the analysis, it is also a more reliable estimator of the cumulative distribution function of the model output than the response-surface method.
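The empirical approach described above rests on Latin hypercube sampling of the model inputs. A minimal sketch for independent inputs follows; the inverse-cdf callables stand in for whatever input distributions the assessment uses, and the function illustrates the sampling idea rather than the study's actual code.

```python
import random

def latin_hypercube(n, inverse_cdfs, seed=1):
    """Latin hypercube sample of size n for independent model inputs.
    For each input, its probability range is split into n equiprobable strata,
    one uniform draw is made within each stratum, the strata are randomly
    permuted, and the draws are mapped through that input's inverse cdf."""
    rng = random.Random(seed)
    columns = []
    for inv_cdf in inverse_cdfs:
        strata = list(range(n))
        rng.shuffle(strata)
        columns.append([inv_cdf((s + rng.random()) / n) for s in strata])
    return list(zip(*columns))  # n input vectors, one value per input

# Running the model at each sampled input vector and sorting the outputs
# yields the empirical cumulative distribution function of the model output.
```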

Journal ArticleDOI
TL;DR: An approximation to the exact likelihood for Gaussian PARMA processes is developed, and a straightforward algorithm for its maximization is presented.
Abstract: A useful class of models for seasonal time series that cannot be filtered or standardized to achieve second-order stationarity is that of periodic autoregressive moving average (PARMA) models, which are extensions of ARMA models that allow periodic (seasonal) parameters. An approximation to the exact likelihood for Gaussian PARMA processes is developed, and a straightforward algorithm for its maximization is presented. The algorithm is tested on several periodic ARMA(1, 1) models through simulation studies and is compared to moment estimation via the seasonal Yule–Walker equations. Applicability of the technique is demonstrated through an analysis of a seasonal stream-flow series from the Rio Caroni River in Venezuela.

Journal ArticleDOI
TL;DR: The problems of how to reconcile the measurements so that they satisfy the constraints and how to use the reconciled values to detect gross errors are considered in this article.
Abstract: Measurements made on stream flows in a chemical process network are expected to satisfy mass and energy balance equations in the steady state. Because of the presence of random and possibly gross errors, these balance equations are not generally satisfied. The problems of how to reconcile the measurements so that they satisfy the constraints and how to use the reconciled values to detect gross errors are considered in this article. Reconciliation of measurements is usually based on weighted least squares estimation under constraints, and detection of gross errors is based on the residuals obtained in the reconciliation step. The constraints resulting from the network structure introduce certain identifiability problems in gross error detection. A thorough review of such methodologies proposed in the chemical engineering literature is given, and those methodologies are illustrated by examples. A number of research problems of potential interest to statisticians are outlined.
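Reconciliation by weighted least squares under linear balance constraints has a standard closed form, and gross errors are commonly screened using the standardized constraint residuals. The numpy sketch below shows that generic construction for constraints of the form A x = 0; it is a textbook-style illustration, not the specific methodologies reviewed in the article.

```python
import numpy as np

def reconcile(x, sigma, A):
    """Weighted least squares reconciliation: adjust measurements x as little as
    possible (in the metric of the error covariance sigma) so that the
    reconciled values satisfy the linear balance constraints A x_hat = 0.
    Also returns standardized constraint residuals for gross-error screening."""
    r = A @ x                              # constraint residuals of the raw data
    V = A @ sigma @ A.T                    # covariance of those residuals
    x_hat = x - sigma @ A.T @ np.linalg.solve(V, r)
    z = r / np.sqrt(np.diag(V))            # large |z| suggests a gross error
    return x_hat, z
```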

Journal ArticleDOI
TL;DR: In this article, simple second-order composite designs for k = 5, 7, and 9 factors were proposed for the first time, with one run fewer than Westlake's design for K = 5 and 7 and three fewer for 9 factors.
Abstract: Small second-order composite designs were suggested by Hartley (1959). Westlake (1965) provided even smaller designs for k = 5, 7, and 9 factors, for which intricate construction methods were needed. Here, simple designs formed using Plackett and Burman (1946) designs are offered for k = 5,7, and 9. Designs with one run fewer than Westlake's for k = 5 and 7 and three fewer for k = 9 are feasible by deleting repeat points that occur in some of the designs.

Journal ArticleDOI
TL;DR: In this paper, a numerically stable implementation of the Gauss-Newton method for computing least squares estimates of parameters and variables in explicit nonlinear models with errors in the variables is proposed.
Abstract: A numerically stable implementation of the Gauss-Newton method for computing least squares estimates of parameters and variables in explicit nonlinear models with errors in the variables is proposed. The algorithm uses only orthogonal transformations and exploits the special structure of the problem. Moreover, a partially regularized Marquardt-like version is described that works with a reasonable overhead of arithmetic operations and storage compared to the error-free case.

Journal ArticleDOI
TL;DR: In this paper, nonlinear least squares estimation procedures are proposed for estimating the parameters of the generalized lambda distribution, which are compared with other methods by making Monte Carlo experiments and a numerical example is also given to illustrate the proposed method.
Abstract: Nonlinear least squares estimation procedures are proposed for estimating the parameters of the generalized lambda distribution. The procedures are compared with other methods by making Monte Carlo experiments. A numerical example is also given to illustrate the proposed method.
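The generalized lambda distribution is defined through its quantile function, so one natural least squares formulation matches that quantile function to the sample order statistics. The sketch below uses the Ramberg-Schmeiser parameterization and plotting positions (i - 0.5)/n; the article's exact estimation criterion may differ, and the starting values here are arbitrary.

```python
import numpy as np
from scipy.optimize import least_squares

def gld_quantile(u, lam):
    """Quantile function of the generalized lambda distribution,
    Ramberg-Schmeiser form: Q(u) = l1 + (u**l3 - (1 - u)**l4) / l2."""
    l1, l2, l3, l4 = lam
    return l1 + (u ** l3 - (1.0 - u) ** l4) / l2

def fit_gld_least_squares(x, start=(0.0, 1.0, 0.1, 0.1)):
    """Nonlinear least squares fit: match Q(p_i) to the sorted sample at the
    plotting positions p_i = (i - 0.5) / n."""
    xs = np.sort(np.asarray(x, dtype=float))
    p = (np.arange(1, xs.size + 1) - 0.5) / xs.size
    residuals = lambda lam: gld_quantile(p, lam) - xs
    return least_squares(residuals, x0=np.asarray(start, dtype=float)).x
```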

Journal ArticleDOI
TL;DR: In this article, two approaches to robust estimation for the Box-Cox power-transformation model were considered, one approach maximizes weighted, modified likelihoods, and the other approach bounds a measure of gross-error sensitivity.
Abstract: We consider two approaches to robust estimation for the Box–Cox power-transformation model. One approach maximizes weighted, modified likelihoods. A second approach bounds a measure of gross-error sensitivity. Among our primary concerns is the performance of these estimators on actual data. In examples that we study, there seem to be only minor differences between these two robust estimators, but they behave rather differently than the maximum likelihood estimator or estimators that bound only the influence of the residuals. These examples show that model selection, determination of the transformation parameter, and outlier identification are fundamentally interconnected.

Journal ArticleDOI
TL;DR: A consistent estimator is derived that can be applied to any censoring distribution and is based on a Weibull and an exponential lifetime distribution under the random censoring model.
Abstract: When automobile failures occur within the automotive warranty period, a manufacturer can develop a record of mileages to failure from owners' requests for repair. When no failures occur during the warranty period the owner naturally will not report mileages, and it may be inferred that “no record of failures” means “no failures.” By using a follow-up survey or postal reply cards, data can be acquired to include a partial record of nonfailures. A method of estimating lifetime parameters is proposed for analyzing this kind of data. Assuming a Weibull and an exponential lifetime distribution under the random censoring model, I derive a consistent estimator that can be applied to any censoring distribution.

Journal ArticleDOI
TL;DR: In this paper, the authors investigate experimental designs that minimize a second-order volume approximation; unlike D-optimal designs, these depend on the noise and confidence levels and on the parameterization used, and when used sequentially, quadratic designs also depend on the residuals from previous experiments.
Abstract: D-optimal experimental designs for precise estimation in nonlinear regression models are obtained by minimizing the determinant of the approximate variance–covariance matrix of the parameter estimates. This determinant may not give a true indication of the volume of a joint inference region for the parameters, however, because of intrinsic and parameter-effects nonlinearity. In this article, we investigate experimental designs that minimize a second-order volume approximation. Unlike D-optimal designs, these designs depend on the noise and confidence levels, and on the parameterization used, and when used sequentially, quadratic designs depend on the residuals from previous experiments and on the type of inference. Quadratic designs appear to be less sensitive to variations in initial parameter values used for design.

Journal ArticleDOI
TL;DR: In this paper, a new approach to multivariate calibration incorporating distributional properties of the observations of the calibration set is proposed, based on estimating the parameters in the best linear predictor in the assumed model.
Abstract: This article concerns multivariate calibration in linear models. The error covariance matrix is assumed to have linear factor structure. A new approach to calibration incorporating distributional properties of the observations of the calibration set is proposed. The proposed predictor is based on estimating the parameters in the best linear predictor in the assumed model. The predictor is tested on two data sets: meat data and fish data.

Journal ArticleDOI
TL;DR: In this article, the authors consider mixture experiments in which the response also depends on the total amount, and develop mixture-amount models appropriate for such situations; models in the component amounts are also considered and are shown to be reduced forms of the mixture-amount models.
Abstract: The usual definition of a mixture experiment requires that the response depend only on the proportions of the mixture components and not on the total amount of the mixture. We consider mixture experiments in which the response also depends on the total amount, and we develop mixture-amount models appropriate for such situations. Models in the component amounts are also considered and are shown to be reduced forms of the mixture-amount models. Examples are used to illustrate the development, interpretation, and comparison of the models.
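One common way to write a mixture-amount model, consistent with the description above, is to let the Scheffé mixture coefficients depend on the total amount A; the particular form below is an assumed illustration, and the article's models may use a different order or parameterization.

```latex
% Quadratic Scheffé mixture model in the proportions x_1, ..., x_q whose
% coefficients are themselves low-order polynomials in the total amount A:
\eta(x, A) = \sum_{i=1}^{q} \beta_i(A)\, x_i + \sum_{i<j} \beta_{ij}(A)\, x_i x_j,
\qquad \beta_i(A) = \beta_{i0} + \beta_{i1} A, \qquad
       \beta_{ij}(A) = \beta_{ij0} + \beta_{ij1} A.
% Setting the coefficients on A to zero recovers the usual mixture model.
```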

Journal ArticleDOI
TL;DR: In this paper, a new statistic, Fk, is proposed for detecting multiple outliers in linear regression, which is incorporated into the following multistage procedure: Initially, a subset of k observations is selected to be tested.
Abstract: A new statistic, Fk, is proposed for detecting multiple outliers in linear regression. This statistic is incorporated into the following multistage procedure: Initially, a subset of k observations is selected to be tested. If Fk is found to be significant, the most extreme observation in the subset as determined by the largest studentized residual is deleted and the test repeated for the (k – 1) observations in the subset using the remaining sample. The procedure is stopped when a test fails to reject the no-outlier hypothesis. A Monte Carlo study is used to evaluate the performance of this procedure.
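As a sketch of the multistage flavor of this procedure, the code below repeatedly tests the current sample, deletes the observation with the largest studentized residual when the test rejects, and stops at the first non-rejection. The test used here (maximum externally studentized residual with a Bonferroni cut-off) is only a generic stand-in for the article's Fk statistic, and the significance level is illustrative.

```python
import numpy as np
from scipy import stats

def externally_studentized_residuals(X, y):
    """Externally studentized residuals for ordinary least squares y = X b + e."""
    n, p = X.shape
    H = X @ np.linalg.pinv(X.T @ X) @ X.T
    e = y - H @ y
    h = np.diag(H)
    s2 = e @ e / (n - p)
    s2_del = (s2 * (n - p) - e ** 2 / (1 - h)) / (n - p - 1)   # leave-one-out variance
    return e / np.sqrt(s2_del * (1 - h))

def backward_outlier_deletion(X, y, k, alpha=0.05):
    """Multistage deletion: test, drop the most extreme point if the test
    rejects, and stop at the first failure to reject; at most k deletions."""
    keep = np.arange(len(y))
    flagged = []
    for _ in range(k):
        t = externally_studentized_residuals(X[keep], y[keep])
        n, p = X[keep].shape
        cutoff = stats.t.ppf(1 - alpha / (2 * n), n - p - 1)   # Bonferroni stand-in for Fk
        worst = int(np.argmax(np.abs(t)))
        if abs(t[worst]) <= cutoff:
            break
        flagged.append(int(keep[worst]))
        keep = np.delete(keep, worst)
    return flagged
```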

Journal ArticleDOI
TL;DR: A systematic procedure is presented for selecting defining contrasts and confounded effects in p^(n-m) factorial designs, which is useful when the fractions in standard tables are unsuitable because some factors are more likely than others to interact.
Abstract: Fractional factorial designs presented in standard tables are not always suitable when some factors are more likely than others to interact. If the fraction is small compared to the total number of treatments, the selection of defining contrasts to generate a design meeting specified requirements is not straightforward. A systematic procedure is given for selecting defining contrasts and confounded effects in p^(n-m) factorial designs (where p ≥ 2 is prime).

Journal ArticleDOI
TL;DR: In this article, the authors examined the properties of smoothed estimators of the probabilities of misclassification in linear discriminant analysis and compared them with those of the resubstitution, leave-one-out, and bootstrap estimators.
Abstract: This article examines the properties of smoothed estimators of the probabilities of misclassification in linear discriminant analysis and compares them with those of the resubstitution, leave-one-out, and bootstrap estimators. Smoothed estimators are found to have smaller variance than the other estimators and bias that is a function of the amount of smoothing. An algorithm is presented for determining a reasonable level of smoothing as a function of the training sample sizes and the number of dimensions in the observation vector. Using the criterion of unconditional mean squared error, this particular smoothed estimator, called the NS method, appears to offer a reasonable alternative to existing nonparametric estimators.

Journal ArticleDOI
TL;DR: This article considers the problem of selecting the most profitable target value for a continuous production process in which there is a shift in the mean value of the quality characteristic, and a heuristic is proposed that yields near-optimal solutions at substantial computational savings.
Abstract: This article considers the problem of selecting the most profitable target value for a continuous production process in which there is a shift in the mean value of the quality characteristic. If the characteristic of a given item falls in value below a given specification level, the item is sold as scrap. Otherwise, it is sold at its regular price. The objective is to select the initial setting and the run size that will maximize the unit profit. Profit per unit is defined as the expected profit from a given run minus the setup cost divided by the run size. Unique optimal solutions are derived. In addition, a heuristic is proposed that yields near-optimal solutions at substantial computational savings.

Journal ArticleDOI
TL;DR: In this paper, a multiresponse estimation procedure for parameters of systems described by As is presented, which features a generalized Gauss-Newton algorithm for optimizing the determinant criterion, efficient evaluation of the expectation function and its derivatives directly from the reaction network or compartment diagram, automatic determination of starting values, and efficient computational procedures for handling linear constraints on the responses.
Abstract: We present a multiresponse estimation procedure for parameters of systems described by ⋅ = As. The procedure features a generalized Gauss–Newton algorithm for optimizing the determinant criterion, efficient evaluation of the expectation function and its derivatives directly from the reaction network or compartment diagram, automatic determination of starting values, and eficient computational procedures for handling linear constraints on the responses.

Journal ArticleDOI
TL;DR: The authors explored the effects of outlier-induced collinearities on the estimation of regression coefficients and showed that these effects can be similar in many respects to those resulting from approximate linear dependencies among the columns of predictor-variable values.
Abstract: When an observation in a regression analysis has very large values on two or more predictor variables, artificial collinearities can be induced. The effects of such collinearities on a regression analysis are not well documented, although they can be shown to be similar in many respects to those resulting from approximate linear dependencies among the columns of predictor-variable values. The purpose of this article is to explore the effects of outlier-induced collinearities on the estimation of regression coefficients.
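A tiny numerical illustration of the phenomenon: two predictors generated independently are nearly uncorrelated, but adding a single observation that is very large on both of them drives their sample correlation toward one, which is the outlier-induced collinearity discussed above. The numbers are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 2))             # two essentially independent predictors
print(np.corrcoef(X.T)[0, 1])            # close to 0

X_out = np.vstack([X, [50.0, 50.0]])     # one point extreme on both predictors
print(np.corrcoef(X_out.T)[0, 1])        # close to 1: outlier-induced collinearity
```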

Journal ArticleDOI
TL;DR: In this paper, relatively simple procedures are developed to help the analyst select an appropriate model and to detect the effects of influential or outlying observations on adding a variable into any model; two examples are given for illustration.
Abstract: The likelihood ratio statistic can be used to determine the significance of an explanatory variable in a generalized linear model. In order to obtain such a statistic, however, we need two sets of iterations for two maximum likelihoods. Moreover, the statistic is not directed to detect influential or outlying observations that affect the importance of the variable considered. Therefore we develop relatively simple procedures to help the analyst select an appropriate model and detect the effects of such observations on adding a variable into any model. Two examples are given for illustration.

Journal ArticleDOI
TL;DR: In this paper, a new derivation is presented for a method to estimate error rates in discriminant analysis. Known as the Shrunken D or DS method, the technique is evaluated in a sampling experiment, along with seven other parametric methods, as a means of estimating both the optimal and conditional error rates.
Abstract: A new derivation for a method to estimate error rates in discriminant analysis is presented. Known as the Shrunken D or DS method, the technique is evaluated in a sampling experiment, along with seven other parametric methods, as a means for estimating both the optimal and conditional error rates. The results show that the “best” estimators are not the same for the two types of error rates and that sample size should influence the choice of an estimator.

Journal ArticleDOI
TL;DR: In this paper, the authors propose a Bayesian method for determining a cutoff limit on X so that, with a guaranteed probability, Y meets its specification limit, where X and Y are two correlated variables and the measurement of X is easier or less expensive to make than the one on Y.
Abstract: Let X and Y be two correlated variables, where the measurement of X is easier or less expensive to make than the one on Y. Suppose that the Y measurement is required to meet a certain specification limit. Given that the joint distribution of (X, Y) is bivariate normal and given the sufficient statistics from a random sample from this distribution, we propose a Bayesian method for determining a cutoff limit on X so that, with a guaranteed probability, Y meets its specification limit. Monte Carlo simulations are used to evaluate the Bayesian procedure. It is seen to be less conservative than the method of Owen, Li, and Chou (1981), and it is fairly robust under nonnormality. An example using data from one time point to predict results at another time point illustrates the application of the method.