
Showing papers in "Technometrics in 1991"


Journal ArticleDOI
TL;DR: Categorical Data Analysis.
Abstract: Categorical Data Analysis.

10,964 citations


Journal ArticleDOI
Ali S. Hadi
TL;DR: This book gives an understandable, modern treatment of cluster analysis, presenting methods that efficiently find accurate clusters in data and discussing the types of studies to which each method applies.
Abstract: The Wiley-Interscience Paperback Series consists of selected books that have been made more accessible to consumers in an effort to increase their availability. This well-written book gives a modern treatment of cluster analysis, presenting methods that efficiently find accurate clusters in data and that can deal with most applications. It discusses the various types of studies to which the methods apply, appropriate visualization techniques, and preprocessing tasks of applied value, such as discontinuity-preserving smoothing.

7,423 citations


Journal ArticleDOI
TL;DR: A review of An Introduction to Applied Geostatistics.
Abstract: (1991). An Introduction to Applied Geostatistics. Technometrics: Vol. 33, No. 4, pp. 483-485.

4,911 citations


Journal ArticleDOI
TL;DR: In this article, the problem of designing computational experiments to determine which inputs have important effects on an output is considered, and experimental plans are composed of individually randomized one-factor-at-a-time designs, and data analysis is based on the resulting random sample of observed elementary effects.
Abstract: A computational model is a representation of some physical or other system of interest, first expressed mathematically and then implemented in the form of a computer program; it may be viewed as a function of inputs that, when evaluated, produces outputs. Motivation for this article comes from computational models that are deterministic, complicated enough to make classical mathematical analysis impractical, and that have a moderate-to-large number of inputs. The problem of designing computational experiments to determine which inputs have important effects on an output is considered. The proposed experimental plans are composed of individually randomized one-factor-at-a-time designs, and data analysis is based on the resulting random sample of observed elementary effects, those changes in an output due solely to changes in a particular input. Advantages of this approach include a lack of reliance on assumptions of relative sparsity of important inputs, monotonicity of outputs with respect to inputs, or ad...
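As a concrete illustration of the kind of design described here, the sketch below draws randomized one-factor-at-a-time samples (a simple radial variant) and computes elementary effects; the model f, the number of levels, and all names are illustrative assumptions, not taken from the article.

```python
# A minimal sketch of randomized one-factor-at-a-time elementary effects,
# assuming a deterministic model f with k inputs scaled to [0, 1].
import numpy as np

def elementary_effects(f, k, r=10, levels=4, rng=None):
    """Sample r elementary effects per input by perturbing one factor at a time."""
    rng = np.random.default_rng(rng)
    delta = levels / (2.0 * (levels - 1))          # standard step size
    grid = np.arange(levels) / (levels - 1)        # admissible factor levels
    effects = np.zeros((r, k))
    for i in range(r):
        base = rng.choice(grid[grid + delta <= 1.0], size=k)  # random base point
        y0 = f(base)
        for j in range(k):                         # one-at-a-time moves
            x = base.copy()
            x[j] += delta
            effects[i, j] = (f(x) - y0) / delta    # elementary effect of input j
    return effects

# Toy model: inputs 0 and 1 matter, input 2 is inert.
f = lambda x: 3 * x[0] + x[1] ** 2
ee = elementary_effects(f, k=3, r=20, rng=1)
print(ee.mean(axis=0), ee.std(axis=0))  # location and spread of effects per input
```

The mean and standard deviation of the sampled effects per input indicate, respectively, overall influence and nonlinearity or interaction.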

2,446 citations


Journal ArticleDOI
TL;DR: The role of statistical methods in this powerful technology as applied to speech recognition is addressed and a range of theoretical and practical issues that are as yet unsolved in terms of their importance and their effect on performance for different system implementations are discussed.
Abstract: The use of hidden Markov models for speech recognition has become predominant in the last several years, as evidenced by the number of published papers and talks at major speech conferences. The reasons this method has become so popular are the inherent statistical (mathematically precise) framework; the ease and availability of training algorithms for estimating the parameters of the models from finite training sets of speech data; the flexibility of the resulting recognition system in which one can easily change the size, type, or architecture of the models to suit particular words, sounds, and so forth; and the ease of implementation of the overall recognition system. In this expository article, we address the role of statistical methods in this powerful technology as applied to speech recognition and discuss a range of theoretical and practical issues that are as yet unsolved in terms of their importance and their effect on performance for different system implementations.
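The "mathematically precise framework" referred to here rests on recursions such as the forward algorithm, which computes the likelihood of an observation sequence under a hidden Markov model. A minimal sketch with toy parameters (all values illustrative, not from the article):

```python
# Scaled forward recursion for a discrete HMM: returns log P(obs | pi, A, B).
import numpy as np

def forward_log_likelihood(pi, A, B, obs):
    alpha = pi * B[:, obs[0]]                 # P(state, first observation)
    log_like = np.log(alpha.sum())
    alpha /= alpha.sum()                      # scale to avoid underflow
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]         # propagate, then absorb evidence
        c = alpha.sum()
        log_like += np.log(c)                 # accumulate scaling factors
        alpha /= c
    return log_like

pi = np.array([0.6, 0.4])                     # initial state probabilities
A = np.array([[0.7, 0.3], [0.4, 0.6]])        # state transition matrix
B = np.array([[0.9, 0.1], [0.2, 0.8]])        # emission probabilities (2 symbols)
print(forward_log_likelihood(pi, A, B, obs=[0, 1, 1, 0]))
```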

1,480 citations


Journal ArticleDOI
TL;DR: A review of Accelerated Testing: Statistical Models, Test Plans, and Data Analyses.
Abstract: (1991). Accelerated Testing: Statistical Models, Test Plans, and Data Analyses. Technometrics: Vol. 33, No. 2, pp. 236-238.

1,414 citations


Journal ArticleDOI

811 citations


Journal ArticleDOI
TL;DR: In this article, the authors proposed Shewhart and cumulative sum (CUSUM) controls based on the vector Z of scaled residuals from the regression of each varia...
Abstract: When performing quality control in a situation in which measures are made of several possibly related variables, it is desirable to use methods that capitalize on the relationship between the variables to provide controls more sensitive than those that may be made on the variables individually. The most common methods of multivariate quality control that assess the vector of variables as a whole are those based on the Hotelling T² between the variables and the specification vector. Although T² is the optimal single-test statistic for a general multivariate shift in the mean vector, it is not optimal for more structured mean shifts, for example, shifts in only some of the variables. Measures based on quadratic forms (like T²) also confound mean shifts with variance shifts and require quite extensive analysis following a signal to determine the nature of the shift. This article proposes Shewhart and cumulative sum (CUSUM) controls based on the vector Z of scaled residuals from the regression of each varia...
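A minimal sketch contrasting the single omnibus T² statistic with a vector of scaled residuals. The precision-matrix construction below is one standard way to form regression-adjusted variables and is an assumption here, as are the known in-control mu and Sigma:

```python
# Hotelling T^2 versus scaled residuals from regressing each variable on the others.
import numpy as np

def hotelling_t2(x, mu, Sigma):
    d = x - mu
    return d @ np.linalg.solve(Sigma, d)     # single omnibus test statistic

def scaled_residuals(x, mu, Sigma):
    """Z_j: residual of regressing variable j on the others, scaled to unit sd."""
    P = np.linalg.inv(Sigma)                 # precision matrix
    return (P @ (x - mu)) / np.sqrt(np.diag(P))

mu = np.zeros(3)
Sigma = np.array([[1.0, 0.8, 0.3], [0.8, 1.0, 0.3], [0.3, 0.3, 1.0]])
x = np.array([1.2, -0.4, 0.1])               # shift in variable 1 only
print(hotelling_t2(x, mu, Sigma), scaled_residuals(x, mu, Sigma))
```

Each component of Z is N(0, 1) in control, so Shewhart or CUSUM charts can be run on the components individually, pointing directly at the shifted variable.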

426 citations


Journal ArticleDOI
TL;DR: A review of Applied Nonparametric Statistics (2nd ed.).
Abstract: (1991). Applied Nonparametric Statistics (2nd ed.) Technometrics: Vol. 33, No. 3, pp. 364-365.

306 citations


Journal ArticleDOI
Eric R. Ziegel

303 citations


Journal ArticleDOI
TL;DR: This article further develops and strengthens the response-model/combined-array approach and recommends examination of control-by-noise interaction plots suggested by the fitted-response model, which can reveal control-factor settings that dampen the effects of individual noise factors.
Abstract: Taguchi's robust-design technique, also known as parameter design, focuses on making product and process designs insensitive (i.e., robust) to hard-to-control variations. In some applications, however, his approach of modeling expected loss and the resulting "product array" experimental format leads to unnecessarily expensive experiments. As an alternative to Taguchi's "loss model/product array" formulation, Welch, Yu, Kang, and Sacks proposed combining control and noise factors in a single array, modeling the response itself rather than expected loss, and then approximating a prediction model for loss based on the fitted-response model. In this article, we further develop and strengthen this response-model/combined-array approach. We recommend examination of control-by-noise interaction plots suggested by the fitted-response model. These plots can reveal control-factor settings that dampen the effects of individual noise factors. We also show that the run savings from using combined arrays are due to t...
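To illustrate what a control-by-noise interaction plot can reveal, the sketch below plots a hypothetical fitted response model y = b0 + b1·C + b2·N + b12·C·N at several control settings; all coefficients are invented for illustration and are not from the article:

```python
# Control-by-noise interaction plot from a hypothetical fitted response model.
import numpy as np
import matplotlib.pyplot as plt

b0, b1, b2, b12 = 10.0, 1.5, 2.0, -2.0       # illustrative fitted coefficients

noise = np.linspace(-1, 1, 50)               # coded noise-factor range
for c in (-1.0, 0.0, 1.0):                   # candidate control settings
    plt.plot(noise, b0 + b1 * c + (b2 + b12 * c) * noise, label=f"C = {c:+.0f}")

# At C = +1 the noise slope b2 + b12*C vanishes: a robust setting.
plt.xlabel("noise factor N")
plt.ylabel("predicted response")
plt.legend()
plt.show()
```

The flat line identifies the control setting at which the noise factor's effect is dampened, which is exactly what such plots are used to find.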

Journal ArticleDOI
Eric R. Ziegel

Journal ArticleDOI
TL;DR: For two Gaussian intrinsic random-field models, the authors compare, by a Monte Carlo simulation study, the performance of several proposed estimators of the semivariogram's parameters in the context of ordinary kriging.
Abstract: Predicting values of a spatially distributed variable, such as the concentration of a mineral throughout an ore body or the level of contamination around a toxic-waste dump, can be accomplished by a regression procedure known as kriging. Kriging and other types of statistical inference for spatially distributed variables are based on models of stochastic processes {Y(t) : t ∈ D} called random-field models. A commonly used class of random-field models are the intrinsic models, for which the mean is constant and half of the variance of Y(t) − Y(s) is a function, called the semivariogram, of the difference t − s. The type of kriging corresponding to an intrinsic model is called ordinary kriging. The semivariogram, which typically is taken to depend on one or more unknown parameters, must be estimated prior to ordinary kriging. Various estimators of the semivariogram's parameters have been proposed. For two Gaussian intrinsic random-field models, we compare, by a Monte Carlo simulation study, the performance o...
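Since the semivariogram must be estimated before ordinary kriging can proceed, a natural starting point is the empirical semivariogram, which averages 0.5·(Y(t) − Y(s))² within distance bins. A minimal one-dimensional sketch; the data, names, and binning scheme are illustrative assumptions:

```python
# Empirical semivariogram for observations along a 1-D transect.
import numpy as np

def empirical_semivariogram(t, y, n_bins=10):
    """Average 0.5*(y_t - y_s)^2 within distance bins."""
    dt = np.abs(t[:, None] - t[None, :])
    dy2 = 0.5 * (y[:, None] - y[None, :]) ** 2
    iu = np.triu_indices(len(t), k=1)              # each pair once
    h, g = dt[iu], dy2[iu]
    edges = np.linspace(0, h.max(), n_bins + 1)
    idx = np.clip(np.digitize(h, edges) - 1, 0, n_bins - 1)
    gamma = np.array([g[idx == b].mean() for b in range(n_bins)])
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, gamma

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 10, 200))
y = np.cumsum(rng.normal(size=200)) * 0.1          # rough intrinsic-type process
centers, gamma = empirical_semivariogram(t, y)
print(np.column_stack([centers, gamma]).round(3))  # distance lag vs. semivariance
```

Parametric semivariogram models are then fitted to such binned values, which is where the estimators compared in the article come in.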

Journal ArticleDOI
TL;DR: In this article, the authors developed a log-linear model for the Birnbaum-Saunders distribution, which can be used for accelerated life testing or to compare the median lives of several populations.
Abstract: The Birnbaum–Saunders distribution was derived to model times to failure for metals subject to fatigue. In this article, we formulate and develop a log-linear model for the Birnbaum–Saunders distribution. The model may be used for accelerated life testing or to compare the median lives of several populations. Methods of analyzing data for this log-linear model are discussed, with maximum likelihood and least squares methods being compared. It is found that, for commonly occurring conditions, the notorious intractability of the Birnbaum–Saunders distribution is not a serious problem because least squares and normal-theory procedures provide a reasonable alternative.
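A minimal sketch of the log-linear structure under illustrative parameter values: failure times are simulated from a Birnbaum-Saunders distribution whose median is exp(theta0 + theta1·x), and ordinary least squares on log(T) recovers the coefficients, consistent with the article's point that least squares provides a reasonable alternative. All names and values here are assumptions:

```python
# Simulating Birnbaum-Saunders lifetimes with a log-linear median model.
import numpy as np

def rbirnbaum_saunders(alpha, beta, size, rng):
    """T = beta * (alpha*Z/2 + sqrt((alpha*Z/2)^2 + 1))^2, Z ~ N(0, 1)."""
    z = rng.normal(size=size)
    w = alpha * z / 2.0
    return beta * (w + np.sqrt(w ** 2 + 1.0)) ** 2

rng = np.random.default_rng(0)
theta0, theta1, alpha = 2.0, -1.0, 0.4       # hypothetical coefficients
x = rng.uniform(0, 1, 500)                   # e.g. a stress covariate
t = rbirnbaum_saunders(alpha, np.exp(theta0 + theta1 * x), 500, rng)

# log(T) = theta0 + theta1*x + symmetric error, so least squares is natural:
print(np.polyfit(x, np.log(t), 1))           # approx [theta1, theta0]
```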

Journal ArticleDOI
TL;DR: Exponentially weighted moving average (EWMA) quality-monitoring schemes, referred to as omnibus EWMA's, are based on the exponentiation of the absolute value of the standardized sample mean of the observations.
Abstract: Exponentially weighted moving average (EWMA) quality-monitoring schemes capable of detecting changes in both location and spread, referred to as omnibus EWMA's, are proposed. Omnibus EWMA's are based on the exponentiation of the absolute value of the standardized sample mean of the observations. The process target mean and standard deviation are used for standardizing. Design procedures and considerations, including a fast initial response feature, are discussed. Average run lengths for exponents of .5 and 2 and various values of the weighting constant are presented. The proposed schemes are compared with each other and with a number of selected, well-known omnibus schemes and a proposed omnibus cumulative sum scheme. Schemes with different sample sizes are used, and off-target models include both static and dynamic alternatives. Comparisons are based on average number of observations to detection and expected loss due to poor quality. One of the omnibus EWMA schemes is shown to be best with regard to exp...
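A minimal sketch of the statistic described: an EWMA of |standardized sample mean|^gamma, here with exponent gamma = 2 and a start value at its in-control expectation. The sample sizes, weight, and data are illustrative assumptions:

```python
# Omnibus EWMA of |standardized sample mean|^gamma; reacts to mean AND spread shifts.
import numpy as np

def omnibus_ewma(samples, mu0, sigma0, lam=0.2, gamma=2.0, q0=1.0):
    """q0 = 1.0 is the in-control mean of |Zbar|^2; a headstart value would give FIR."""
    n = samples.shape[1]
    zbar = (samples.mean(axis=1) - mu0) / (sigma0 / np.sqrt(n))
    y = np.abs(zbar) ** gamma
    q = np.empty_like(y)
    prev = q0
    for t, yt in enumerate(y):
        prev = (1 - lam) * prev + lam * yt   # standard EWMA recursion
        q[t] = prev
    return q

rng = np.random.default_rng(1)
clean = rng.normal(0.0, 1.0, size=(40, 5))     # in control, n = 5 per sample
shifted = rng.normal(0.0, 2.0, size=(40, 5))   # spread doubles after sample 40
q = omnibus_ewma(np.vstack([clean, shifted]), mu0=0.0, sigma0=1.0)
print(q[:5].round(2), q[-5:].round(2))         # drifts upward after the change
```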



Journal ArticleDOI
TL;DR: In this paper, the authors used a log-linear Poisson model to estimate the expected number of warranty claims per unit in service as a function of the time in service and provided estimates that are adjusted for delays or lags corresponding to the time from the claim until it is entered into the data base used for analysis.
Abstract: This article discusses methods whereby reports of warranty claims can be used to estimate the expected number of warranty claims per unit in service as a function of the time in service. These methods provide estimates that are adjusted for delays or lags corresponding to the time from the claim until it is entered into the data base used for analysis. Forecasts of the number and cost of claims on the population of all units in service are also developed, along with standard errors for these forecasts. The methods are based on a log-linear Poisson model for numbers of warranty claims. Both the case of a known distribution of reporting lag and simultaneous estimation of that distribution are considered. The use of residuals for model checking, extensions to allow for extra-Poisson variation, and the estimation of warranty costs are also considered.
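A minimal sketch of the lag adjustment for the case of a known reporting-lag distribution: observed counts are divided by the probability that a claim from each month has already reached the database. All numbers below are hypothetical:

```python
# Adjusting observed warranty claim counts for reporting lag (known lag distribution).
import numpy as np

# p_lag[l] = P(claim is entered into the database l months after it occurs)
p_lag = np.array([0.50, 0.30, 0.15, 0.05])

claims = np.array([40, 35, 20, 8])       # claims observed so far, by claim month
months_since = np.array([3, 2, 1, 0])    # months from each claim month to the data freeze

# Probability that a claim from each month has already been reported:
reported_prob = np.array([p_lag[: k + 1].sum() for k in months_since])
adjusted = claims / reported_prob        # lag-adjusted expected claim counts
print(adjusted.round(1))                 # the most recent months are pulled up the most
```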

Journal ArticleDOI
TL;DR: In this paper, the authors compare strategies for the development of a linear regression model and the subsequent assessment of its predictive ability, and recommend that the entire sample usually be used for model development and assessment.
Abstract: Strategies are compared for development of a linear regression model and the subsequent assessment of its predictive ability. Simulations were performed as a designed experiment over a range of data structures. Approaches using a forward selection of variables resulted in slightly smaller prediction errors and less biased estimators of predictive accuracy than all possible subsets selection but often did not improve on the full model. Random and balanced data splitting resulted in increased prediction errors and estimators with large mean squared error. To maximize predictive accuracy while retaining a reliable estimate of that accuracy, it is recommended that the entire sample usually be used for model development and assessment.
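A minimal simulation in the spirit of this comparison: fitting on a random half versus the full sample, judged on independent data. The design below is illustrative and far smaller than the article's experiment:

```python
# Data splitting vs. full-sample fitting for a linear regression model.
import numpy as np

rng = np.random.default_rng(0)
n, p = 60, 4
beta = np.array([1.0, 0.5, 0.0, 0.0])

def fit_and_test(X, y, Xnew, ynew, rows):
    b, *_ = np.linalg.lstsq(X[rows], y[rows], rcond=None)
    return np.mean((ynew - Xnew @ b) ** 2)   # prediction error on fresh data

err_split, err_full = [], []
for _ in range(200):
    X, Xnew = rng.normal(size=(n, p)), rng.normal(size=(n, p))
    y = X @ beta + rng.normal(size=n)
    ynew = Xnew @ beta + rng.normal(size=n)
    half = rng.permutation(n)[: n // 2]      # random 50/50 split
    err_split.append(fit_and_test(X, y, Xnew, ynew, half))
    err_full.append(fit_and_test(X, y, Xnew, ynew, np.arange(n)))

print(np.mean(err_split), np.mean(err_full))  # the full-sample fit predicts better
```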

Journal ArticleDOI

Journal ArticleDOI
TL;DR: The Π method for estimating an underlying smooth function of M variables, (x_1, …, x_M), using noisy data is based on approximating it by a sum of products of the form Π_m φ_m(x_m).
Abstract: The Π method for estimating an underlying smooth function of M variables, (x_1, …, x_M), using noisy data is based on approximating it by a sum of products of the form Π_m φ_m(x_m). The problem is then reduced to estimating the univariate functions in the products. A convergent algorithm is described. The method keeps tight control on the degrees of freedom used in the fit. Many examples are given. The quality of fit given by the Π method is excellent. Usually, only a few products are enough to fit even fairly complicated functions. The coding into products of univariate functions allows a relatively understandable interpretation of the multivariate fit.
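As a toy illustration of approximating a function by a sum of a few products of univariate functions: on a full grid, the best such sum of k products in the least squares sense is given by a truncated SVD. This is only an analogy, not the article's Π algorithm, which handles scattered noisy data and controls degrees of freedom:

```python
# Sum-of-products approximation on a grid via truncated SVD (illustrative analogy).
import numpy as np

x = np.linspace(0, 2, 40)
y = np.linspace(0, 2, 50)
rng = np.random.default_rng(0)
F = np.exp(-np.outer(x, y)) + 0.01 * rng.normal(size=(40, 50))  # noisy f(x, y)

U, s, Vt = np.linalg.svd(F, full_matrices=False)
for k in (1, 2, 3):                      # sum of k products of univariate functions
    Fk = (U[:, :k] * s[:k]) @ Vt[:k]     # Σ_m φ_m(x) ψ_m(y), evaluated on the grid
    print(k, np.sqrt(np.mean((Fk - F) ** 2)))  # fit improves rapidly with few products
```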


Journal ArticleDOI
TL;DR: This monograph presents an analysis of surface wind data, covering robust filtering, estimation of the geostrophic component, the land and sea breeze cycle, and time series models for directional data.
Abstract: Contents:
I: Wind Data Analysis.
1. Introduction: 1.1. Surface Wind Observation. 1.2. General Weather Pattern. 1.3. Outline of this Monograph.
2. The Initial Decomposition: 2.1. General Background. 2.2. Robust Filtering. 2.3. Univariate Filter Study. 2.4. Multivariate Filter Study. 2.5. Application to Wind Series. 2.6. Appendix: Mathematical Details.
3. The Geostrophic Component: 3.1. The Geostrophic Wind. 3.2. Estimation of the Geostrophic Wind. 3.3. Comparison with the Geostrophic Component. 3.4. Synoptic States. 3.5. Appendix: Derivation of the Geostrophic Wind Equation.
4. The Land and Sea Breeze Cycle: 4.1. The Nature of the Circulation. 4.2. Statistical Approach. 4.3. Land and Sea Breeze Pattern.
5. Short-Term Events: 5.1. Meteorological Patterns. 5.2. Wind Classification. 5.3. Characteristics of Short-Term Events. 5.4. Appendix: Removal of Storms.
II: Time Series of Directional Data.
6. Time Series Models for Directional Data: 6.1. Circular Variables. 6.2. The von Mises Process. 6.3. The Wrapped Autoregressive Process.
7. Measures of Angular Association: 7.1. Desirable Properties. 7.2. Bivariate Angular Distributions. 7.3. Review of Measures of Association. 7.4. A Proposal for Vector Valued Time Series. 7.5. Appendix: Non-von Mises Marginals.
8. Comparison of Different Measures of Association: 8.1. Independent Bivariate Directional Data. 8.2. Time Series of Directional Data.
9. Inference from the Wrapped Autoregressive Process: 9.1. Introduction. 9.2. Equating Theoretical and Empirical Circular Variance (EQ). 9.3. Corrected EQ-Estimation (EC). 9.4. Bayes Estimation (BA). 9.5. Maximum Likelihood Estimation (ML). 9.6. Characteristic Function Estimation (CF). 9.7. Numerical Comparison of Estimators.
10. Application to Series of Residual Wind Directions.
11. Conclusions and Summary of Results.
List of Symbols.

Journal ArticleDOI
TL;DR: A review of The Analysis of Time Series: An Introduction (4th ed.).
Abstract: (1991). The Analysis of Time Series: An Introduction (4th ed.) Technometrics: Vol. 33, No. 3, pp. 363-364.

Journal ArticleDOI
TL;DR: This edited volume collects applications and methods chapters on experimental design, ranging from Taguchi's experimental design for product design and four R&D case studies to surveys of mixture experiments, response surface designs, orthogonal arrays, and factorial design theory.
Abstract: "Applications Experimental Design for Product Design, Genichi Taguchi Designing Experiments in Research and Development: Four Case Studies, Karen Kafadar Biotechnology Experimental Design, Perry D. Haaland Expert Systems for the Design of Experiments, Chistopher J. Nachisheim, Paul E. Johnson, Kenneth D. Kotnour, Ruth K. Meyer, and Imran A. Zualkernan The Effect of Ozone on Asthmatics and Normals: An Unbalanced ANOVA Example, Thomas J. Lorenzen Sensitivity of an Air Pollution and Health Study to the Choice of a Mortality Index, Diane I. Gibbons and Gary C. McDonald Methods Mixture Experiments, John A. Cornell Response Surface Designs and the Prediction Variance Function, Raymond H. Myers The Analysis of Multiresponse Experiments: A Review, AndrE I. Khuri The Role of Experimentation in Quality Engineering: A Review of Taguchi's Contributions, Vijay N. Nair and Anne C. Shoemaker SEL: A Search Method Based on Orthogonal Arrays, C. F. Jeff Wu, S. S. Mao, and F. S. Ma Modern Factorial Design Theory for Experimenters and Statisticians, Jagdish N. Srivastava New Properties of Orthogonal Arrays and Their Statistical Applications, A. Sam Hedayat Construction of Run Orders of Factorial Designs, Ching-Shui Cheng Methods for Constructing Trend-Resistant Run Orders of 2-Level Factorial Experiments, Mike Jacroux Measuring Dispersion Effects of Factors in Factorial Experiments, Subir Ghosh and Eric S. Lagergren Designing Factorial Experiments: A Survey of the Use of Generalized Cyclic Designs, Angela M. Dean Crossover Designs in Industry, Damaraju Raghavarao "


Journal ArticleDOI
TL;DR: It is shown that a simple, flexible spatial-modeling approach to the analysis of industrial experiments (e.g., wafer fabrication) can yield more efficient estimators of the treatment contrasts than the classical approach.
Abstract: Classical experimental design is based on the three concepts of randomization, blocking, and replication. Randomization endeavors to neutralize the effects of (spatial) correlation and yields valid tests for the hypothesis of equal treatment effects. More recently, attempts have been made to use the spatial location of treatments to improve the efficiencies of estimators of treatment contrasts. In this article, we show that a simple, flexible spatial-modeling approach to the analysis of industrial experiments (e.g., wafer fabrication) can yield more efficient estimators of the treatment contrasts than the classical approach. We base the analysis on empirical generalized least squares estimation, in which the spatial-dependence parameters are estimated from resistantly detrended response data.
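A minimal sketch of generalized least squares for a treatment contrast under spatial correlation, assuming the dependence parameter rho has already been estimated (the article estimates such parameters from resistantly detrended responses). The AR(1)-in-distance covariance and all values below are illustrative assumptions:

```python
# OLS vs. GLS for a treatment contrast with spatially correlated errors.
import numpy as np

rng = np.random.default_rng(0)
n = 50
sites = np.arange(n)                        # equally spaced plot locations
rho = 0.6                                   # assumed pre-estimated dependence
V = rho ** np.abs(sites[:, None] - sites[None, :])   # spatial covariance

X = np.zeros((n, 2))
X[:, 0] = 1.0                               # intercept
X[rng.permutation(n)[: n // 2], 1] = 1.0    # randomized binary treatment
L = np.linalg.cholesky(V)
y = X @ np.array([5.0, 1.0]) + L @ rng.normal(size=n)  # true contrast = 1.0

# OLS ignores the correlation; GLS whitens with the estimated covariance.
ols = np.linalg.lstsq(X, y, rcond=None)[0]
Xw, yw = np.linalg.solve(L, X), np.linalg.solve(L, y)
gls = np.linalg.lstsq(Xw, yw, rcond=None)[0]
print(ols, gls)                             # GLS contrast estimate is more efficient
```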


Journal ArticleDOI
TL;DR: In this paper, a multivariate survival distribution derived from an inverse Gaussian mixture of exponential distributions is considered, and a general formula for joint moments and the monotonicity properties of hazard rates are described.
Abstract: We consider a multivariate survival distribution derived from an inverse Gaussian mixture of exponential distributions. The variables of this multivariate distribution are shown to exhibit total positive dependence of order 2. A general formula for joint moments and the monotonicity properties of hazard rates are described. Inference methods—including derivation of a posterior distribution for the unknown exponential hazard rate, maximum likelihood estimation of the mixture distribution parameters, and derivation of a posterior predictive distribution for a new observation—are developed. Procedures for assessing model adequacy are also presented. The computational requirements of the methods are modest, and censored data are handled with ease. The inference methods and model assessment procedures are illustrated with several case examples, one of which exploits the connection between this multivariate distribution and the inverse Gaussian mixture of Poisson distributions.