
Showing papers in "Technometrics in 1990"


Journal ArticleDOI
TL;DR: This introductory statistics text covers looking at data (distributions and relationships), producing data, probability and inference, and further topics including two-way tables, regression, analysis of variance, logistic regression, nonparametric tests, and bootstrap methods.
Abstract: PART I: LOOKING AT DATA. Looking at Data-Distributions. Looking at Data-Relationships. Producing Data. PART II: PROBABILITY AND INFERENCE. Probability: The Study of Randomness. Sampling Distributions. Introduction to Inference. Inference for Distributions. Inference for Proportions. PART III: TOPICS ON INFERENCE. Analysis of Two-Way Tables. Inference for Regression. Multiple Regression. One-Way Analysis of Variance. Two-Way Analysis of Variance. Additional chapters available on the CD-ROM and the website: Logistic Regression. Nonparametric Tests. Bootstrap Methods and Permutation Tests. Statistics for Quality: Control and Capability.

2,778 citations


Journal ArticleDOI
TL;DR: In this article, the authors evaluate the properties of an exponentially weighted moving average (EWMA) control scheme used to monitor the mean of a normally distributed process that may experience shifts away from the target value.
Abstract: Roberts (1959) first introduced the exponentially weighted moving average (EWMA) control scheme. Using simulation to evaluate its properties, he showed that the EWMA is useful for detecting small shifts in the mean of a process. The recognition that an EWMA control scheme can be represented as a Markov chain allows its properties to be evaluated more easily and completely than has previously been done. In this article, we evaluate the properties of an EWMA control scheme used to monitor the mean of a normally distributed process that may experience shifts away from the target value. A design procedure for EWMA control schemes is given. Parameter values not commonly used in the literature are shown to be useful for detecting small shifts in a process. In addition, several enhancements to EWMA control schemes are considered. These include a fast initial response feature that makes the EWMA control scheme more sensitive to start-up problems, a combined Shewhart EWMA that provides protection against both larg...
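The EWMA recursion and its exact time-varying control limits described above can be sketched in a few lines (λ = 0.2 and limit width L = 3 are illustrative choices, not the article's design recommendations):

```python
import math

def ewma_chart(samples, target, sigma, lam=0.2, L=3.0):
    """EWMA statistic z_i = lam*x_i + (1 - lam)*z_{i-1}, z_0 = target,
    with exact limits target +/- L*sigma*sqrt(lam/(2-lam)*(1-(1-lam)**(2i)))."""
    z, limits = [], []
    prev = target
    for i, x in enumerate(samples, start=1):
        prev = lam * x + (1 - lam) * prev
        half = L * sigma * math.sqrt(lam / (2 - lam) * (1 - (1 - lam) ** (2 * i)))
        z.append(prev)
        limits.append((target - half, target + half))
    return z, limits

def first_signal(z, limits):
    """Index of the first point outside its control limits, or None."""
    for i, (zi, (lo, hi)) in enumerate(zip(z, limits)):
        if zi < lo or zi > hi:
            return i
    return None
```

With these settings a persistent two-sigma mean shift signals within a few samples, while an on-target process stays inside the limits.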

1,380 citations


Journal ArticleDOI

518 citations


Journal ArticleDOI
TL;DR: This book introduces examples, concepts, and tools including random line segments, queues and counters, vacancy, counting and clumping, and elements of inference.
Abstract: Examples, concepts and tools; random line segments, queues and counters; vacancy; counting and clumping; elements of inference. Appendices: direct Radon-Nikodym theorem; central limit theorem, Poisson limit theorem, ergodic theorem and law of large numbers; Shepp's coverage theorem; mean content and mean square content of cells formed by a Poisson field of random planes; multitype branching processes; lattice percolation.

451 citations


Journal ArticleDOI

382 citations



Journal ArticleDOI

296 citations


Journal ArticleDOI
TL;DR: The variable sampling interval (VSI) CUSUM chart uses short sampling intervals when there is an indication that the process mean may have shifted and long sampling intervals when there is no indication of a change in the mean.
Abstract: A standard cumulative sum (CUSUM) chart for controlling the process mean takes samples from the process at fixed-length sampling intervals and uses a control statistic based on a cumulative sum of differences between the sample means and the target value. This article proposes a modification of the standard CUSUM scheme that varies the time intervals between samples depending on the value of the CUSUM control statistic. The variable sampling interval (VSI) CUSUM chart uses short sampling intervals if there is an indication that the process mean may have shifted and long sampling intervals if there is no indication of a change in the mean. If the CUSUM statistic actually enters the signal region, then the VSI CUSUM chart signals in the same manner as the standard CUSUM chart. A Markov-chain approach is used to evaluate properties such as the average time to signal and the average number of samples to signal. Results show that the proposed VSI CUSUM chart is considerably more efficient than the standard CUS...
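The variable-sampling-interval idea can be illustrated with a minimal one-sided CUSUM; the warning threshold and the two interval lengths below are hypothetical values, not the article's recommendations:

```python
def vsi_cusum(samples, target, k=0.5, h=4.0, warn=1.0,
              short=0.25, long=1.0):
    """One-sided upper CUSUM with a variable sampling interval:
    after each sample, wait `short` time units if the statistic
    hints at a shift (s > warn), otherwise wait `long`.
    Returns (elapsed time, signaled?)."""
    s, t = 0.0, 0.0
    for x in samples:
        s = max(0.0, s + (x - target) - k)
        if s > h:
            return t, True   # statistic entered the signal region
        t += short if s > warn else long
    return t, False
```

A shifted process drives the statistic up and shortens the intervals, so the chart signals sooner in clock time than a fixed-interval scheme sampling at the long interval.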

229 citations


Journal ArticleDOI
TL;DR: This book offers a balanced presentation of the theoretical, practical, and computational aspects of nonlinear regression and provides background material on linear regression, including the geometrical development for linear and nonlinear least squares.
Abstract: Wiley-Interscience Paperback Series The Wiley-Interscience Paperback Series consists of selected books that have been made more accessible to consumers in an effort to increase global appeal and general circulation. With these new unabridged softcover volumes, Wiley hopes to extend the lives of these works by making them available to future generations of statisticians, mathematicians, and scientists. "The authors have put together an extraordinary presentation of concepts and methods concerning the use and analysis of nonlinear regression models ...highly recommend[ed] ...for anyone needing to use and/or understand issues concerning the analysis of nonlinear regression models." -Technometrics "[This book] provides a good balance of relevant theory and application with many examples ...[and it] provides the most balanced approach to theory and application appropriate for a first course in nonlinear regression modeling for graduate statistics students." -Mathematical Reviews "[This book] joins a distinguished list of publications with a reputation for balancing technical rigor with readability, and theory with application. [It] upholds tradition ...[and is] a worthwhile reference for the marketing researcher with a serious interest in linear models. " -Journal of Marketing Research This book offers a balanced presentation of the theoretical, practical, and computational aspects of nonlinear regression and provides background material on linear regression, including the geometrical development for linear and nonlinear least squares. The authors employ real data sets throughout, and their extensive use of geometric constructs and continuing examples makes the progression of ideas appear very natural. The book also includes pseudocode for computing algorithms.

229 citations


Journal ArticleDOI

226 citations



Journal ArticleDOI
TL;DR: A review of Bayesian Statistics: Principles, Models, and Applications.
Abstract: (1990). Bayesian Statistics: Principles, Models, and Applications. Technometrics: Vol. 32, No. 4, pp. 453-454.




Journal ArticleDOI
TL;DR: Small composite designs for fitting second-order response surfaces are proposed that reduce the number of runs as much as possible while maintaining the ability to estimate all of the terms in the model.
Abstract: Standard composite designs for fitting second-order response surfaces typically have a fairly large number of points, especially when k is large. In some circumstances, it is desirable to reduce the number of runs as much as possible while maintaining the ability to estimate all of the terms in the model. We first review prior work on small composite designs and then suggest some alternatives for k ≤ 10 factors. In some cases, even minimal-point designs are possible.
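For reference, a standard (full) central composite design can be generated as below; the small composite designs discussed above shrink the 2^k cube portion to a fraction. The function name and defaults are illustrative:

```python
from itertools import product

def central_composite(k, alpha=None, n_center=1):
    """Standard central composite design in coded units: a full 2**k
    factorial 'cube', 2*k axial points at distance alpha, and centre
    points. Small composite designs replace the cube with a fraction."""
    if alpha is None:
        alpha = k ** 0.5  # a common spherical choice of axial distance
    cube = [list(p) for p in product([-1.0, 1.0], repeat=k)]
    axial = []
    for i in range(k):
        for a in (-alpha, alpha):
            pt = [0.0] * k
            pt[i] = a
            axial.append(pt)
    centre = [[0.0] * k for _ in range(n_center)]
    return cube + axial + centre
```

For k = 3 this gives 8 + 6 + 1 = 15 runs against 10 terms in the full second-order model, which shows why the cube portion is the natural place to economize.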

Journal ArticleDOI
TL;DR: Contributors present a multidisciplinary overview covering non-Gaussian models, time and magnitude of changes in dynamic models, image processing based on satellite radiometer measurements, and stationary and nonstationary linear time series models, bridging statistics and economics.
Abstract: Contributors present a multidisciplinary overview of the subject covering non-Gaussian models, time and magnitude of changes in dynamic models, image processing based on satellite radiometer measurements, and stationary and nonstationary linear time series models. The topic bridges statistics, econo

Journal ArticleDOI
TL;DR: In this article, two goodness-of-fit statistics, based on measures of linearity for standardized P-P plots, are proposed and simple approximations for percentage points of these statistics are presented for testing the fit of exponential, Gumbel (Weibull), and normal (lognormal) probability models with unknown parameters.
Abstract: Percentage-percentage (P-P) probability plots constructed from standardized observations possess some attractive features that are not shared by more commonly used quantile-quantile (Q-Q) plots. In particular, the identification of viable alternatives to a proposed probability model can be greatly facilitated by displaying curves on P-P plots to represent families of alternative models. A single curve can represent an entire family of alternatives indexed by both location and scale parameters. Two goodness-of-fit statistics, based on measures of linearity for standardized P-P plots, are proposed and simple approximations for percentage points of these statistics are presented for testing the fit of exponential, Gumbel (Weibull), and normal (lognormal) probability models with unknown parameters. Results of extensive Monte Carlo power comparisons with other goodness-of-fit tests are summarized. The proposed tests are shown to have superior power for detecting light-tailed and moderate-tailed alternatives to...
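The kind of linearity measure involved can be illustrated for the normal model: correlate fitted CDF values with uniform plotting positions on a standardized P-P plot. This is a sketch of the general idea, not the authors' exact statistics or percentage points:

```python
import math

def pp_linearity(x):
    """Correlation between uniform plotting positions (i - 0.5)/n and
    fitted normal CDF values on a standardized P-P plot; values near 1
    indicate a good fit of the normal model."""
    n = len(x)
    xs = sorted(x)
    mu = sum(xs) / n
    sd = math.sqrt(sum((v - mu) ** 2 for v in xs) / (n - 1))
    norm_cdf = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    p_fit = [norm_cdf((v - mu) / sd) for v in xs]        # fitted probabilities
    p_emp = [(i + 0.5) / n for i in range(n)]            # plotting positions
    mf, me = sum(p_fit) / n, sum(p_emp) / n
    cov = sum((a - mf) * (b - me) for a, b in zip(p_fit, p_emp))
    var_f = sum((a - mf) ** 2 for a in p_fit)
    var_e = sum((b - me) ** 2 for b in p_emp)
    return cov / math.sqrt(var_f * var_e)
```

A formal test would compare this statistic against tabulated percentage points, as the article does for the exponential, Gumbel, and normal families.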

Journal ArticleDOI
TL;DR: A review of Computer Intensive Methods for Testing Hypotheses: An Introduction.
Abstract: (1990). Computer Intensive Methods for Testing Hypotheses: An Introduction. Technometrics: Vol. 32, No. 4, pp. 453-453.

Journal ArticleDOI
TL;DR: An edited volume presenting an overview of optimal experimental design, with chapters on combinatorial designs, algorithms, nearest neighbour and cross-over designs, linear and nonlinear models, spatial and correlated error models, and quality control.
Abstract: Optimal Design of Experiments: An Overview (Y. Dodge, V.V. Fedorov, H.P. Wynn). Optimal Combinatorial Designs. (Contributors: D. Majumdar C.P. Ting, W.I. Notz D. Collombier A. Hedayat, J. Stufken). Algorithms. (Contributors: A.C. Atkinson, A.N. Donev H. Yonchev A.A. Zhigljavsky). Nearest Neighbour and Cross-Over Designs. (Contributors: K. Afsarinejad, P. Seeger C.-S. Cheng J. Kunert S.M. Lewis, D.J. Fletcher, J.N.S. Matthews). Linear and Nonlinear Models. (Contributors: J. Kleffe M.Y. ElBassiouni, J.F. Seely Y. Dodge, J. Jureckova A. Pazman A.M. Herzberg L. Pronzato, E. Walter K. Chaloner, K. Larntz C.F.J. Wu Y. Dodge, H.P. Wynn). Spatial and Correlated Error Models. (Contributors: V.V. Fedorov, W. Mueller Y. Kettunen, H. Sirvio, O. Varis D. Ylvisaker). Quality Control. (Contributors: F. Pukelsheim I.N. Vuchkov, L.N. Boyadjieva L.L. Pesotchinsky I. Verdinelli, H.P. Wynn). General. (Contributors: V.V. Fedorov, A.C. Atkinson I. Vajda B. Tornsey).

Journal ArticleDOI
TL;DR: In this paper, a Bayesian procedure is presented for estimating the reliability (or availability) of a complex system of independent binomial series or parallel subsystems and components using either test or prior data (perhaps both or neither) at the system, subsystem, and component levels.
Abstract: A Bayesian procedure is presented for estimating the reliability (or availability) of a complex system of independent binomial series or parallel subsystems and components. Repeated identical components or subsystems are also permitted. The method uses either test or prior data (perhaps both or neither) at the system, subsystem, and component levels. Beta prior distributions are assumed throughout. The method is motivated and illustrated by the following problem. It is required to estimate the unavailability on demand of the low-pressure coolant injection system in a certain U.S. commercial nuclear-power boiling-water reactor. Three data sources are used to calculate the posterior distribution of the overall system demand unavailability from which the required estimates are obtained. The sensitivity of the results to the three data sources is examined. A FORTRAN computer program for implementing the procedure is available.
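The Beta-binomial backbone of such a procedure can be sketched with Monte Carlo draws. This is a generic illustration of combining independent component posteriors for a series system, not the paper's algorithm or its FORTRAN program:

```python
import random

def series_system_reliability(components, n_draws=10000, seed=1):
    """Posterior mean of series-system reliability via Monte Carlo.
    components: list of (successes, trials, a, b), with a Beta(a, b)
    prior per component; each draw multiplies independent draws from
    the Beta(a + s, b + n - s) posteriors."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_draws):
        r = 1.0
        for s, n, a, b in components:
            r *= rng.betavariate(a + s, b + (n - s))
        total += r
    return total / n_draws
```

Because a series system works only if every component works, adding components can only lower the estimated system reliability; parallel subsystems would instead combine component unreliabilities.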


Journal ArticleDOI
TL;DR: In this paper, the authors apply the local influence method of Cook (1986) to assess the effect of small perturbations of continuous data on a specified point prediction from a generalized linear model.
Abstract: Influence diagnostics for predictions from a normal linear model examine the effect of deleting a single case on either the point prediction or the predictive density function. Instead of deleting cases, we apply the local influence method of Cook (1986) to assess the effect of small perturbations of continuous data on a specified point prediction from a generalized linear model. Based on local perturbations of the vector of responses, case weights, explanatory variables, or the components of one case, the diagnostics can detect different kinds of influence. Some of the diagnostics are illustrated with an example and compared to standard diagnostic methods.

Journal ArticleDOI
TL;DR: This book presents 28 bar diagrams illustrating the versatility of the generalized Poisson model, discusses stochastic processes leading to the generalized Poisson distribution, and includes proofs of numerous theorems and confidence intervals.
Abstract: Presents 28 bar diagrams that illustrate the versatility of the generalized Poisson model and discusses stochastic processes leading to the generalized Poisson distribution. Examines theoretical properties that vary in difficulty, includes proofs for numerous theorems, explores confidence intervals
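For reference, the generalized Poisson pmf in Consul's parameterization (assuming this is the parameterization the book treats) reduces to the ordinary Poisson when λ = 0:

```python
import math

def gen_poisson_pmf(x, theta, lam):
    """Generalized Poisson pmf in Consul's parameterization:
    P(X = x) = theta * (theta + x*lam)**(x - 1) * exp(-theta - x*lam) / x!
    for theta > 0 and 0 <= lam < 1; lam = 0 gives Poisson(theta)."""
    return (theta * (theta + x * lam) ** (x - 1)
            * math.exp(-theta - x * lam) / math.factorial(x))
```

The extra parameter λ inflates (λ > 0) the variance relative to the mean, which is what makes the model more flexible than the ordinary Poisson.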

Journal ArticleDOI
TL;DR: This book explains the concepts underlying the use of statistics, covers basic principles and assumptions, and shows how to select and apply statistical techniques and properly interpret the results.
Abstract: Now you can get a better understanding of how to use and apply statistics to get the maximum amount of information from your data with this new "how-to" book. Learn the concepts underlying the use of statistics, cover basic principles and assumptions, and find out how to select and apply statistical techniques--and properly interpret your results. A comprehensive reference, this book includes worked-out examples as well as commonly-used formulas and statistical tables. A useful and practical approach to using statistics; requires no previous knowledge of statistics.

Journal ArticleDOI
TL;DR: This article provides a criterion that is easy to compute and is invariant under design rotation, and also easily extends to higher degree models.
Abstract: Rotatability is one of many desirable characteristics of a response-surface design. Recent work (Draper and Guttman 1988; Khuri 1988) has, for the first time, provided ways to measure “how rotatable” a design may be when it is not perfectly rotatable. This had previously been assessed by the viewing of tediously obtained contour diagrams. This article provides a criterion that is easy to compute and is invariant under design rotation. It also easily extends to higher degree models.

Journal ArticleDOI
TL;DR: An estimator based on Pearson's divergence is developed for the case in which data are grouped, where the quantity λ2 is defined as the discrepancy between an actual and an assumed statistical model.
Abstract: A quantity, λ2 is defined as the discrepancy between an actual and an assumed statistical model. An estimator is developed for the case in which data are grouped. The estimator is based on Pearson'...

Journal ArticleDOI
TL;DR: A text on the statistical design of experiments, covering blocking (including the randomised block design), mathematical theory for confounding and fractional replication, and quantitative factors and response functions.
Abstract: Preface Part I. Overture: 1. Introduction 2. Elementary ideas of blocking: the randomised block design 3. Elementary ideas of treatment structure 4. General principles of linear models for the analysis of experimental data 5. Computers for analysing experimental data Part II. First Subject: 6. Replication 7. Blocking 8. Multiple blocking systems and cross-over designs 9. Randomisation 10. Covariance - extension of linear models 11. Model assumptions and more general models Part III. Second Subject: 12. Experimental objectives, treatments and treatment structures 13. Factorial structure and particular forms of effects 14. Split unit designs and repeated measurements 15. Incomplete block size for factorial experiments 16. Some mathematical theory for confounding and fractional replication 17. Quantitative factors and response functions 18. Response surface exploration Part IV. Coda: 19. Designing useful experiments References Index.

Journal ArticleDOI
TL;DR: A review of Model Discrimination for Nonlinear Regression Models.
Abstract: (1990). Model Discrimination for Nonlinear Regression Models. Technometrics: Vol. 32, No. 4, pp. 448-450.