
Showing papers in "Technometrics in 2003"


Journal ArticleDOI
TL;DR: Chapter 11 includes more case studies in other areas, ranging from manufacturing to marketing research, and a detailed comparison with other diagnostic tools, such as logistic regression and tree-based methods.
Abstract: Chapter 11 includes more case studies in other areas, ranging from manufacturing to marketing research. Chapter 12 concludes the book with some commentary about the scientific contributions of MTS. The Taguchi method for design of experiments has generated considerable controversy in the statistical community over the past few decades. The MTS/MTGS method seems likely to generate a further round of discussion about the methodology it advocates (Montgomery 2003). As pointed out by Woodall et al. (2003), the MTS/MTGS methods are considered ad hoc in the sense that they have not been developed using any underlying statistical theory. Because the “normal” and “abnormal” groups form the basis of the theory, some sampling restrictions are fundamental to the applications. First, it is essential that the “normal” sample be uniform, unbiased, and/or complete so that a reliable measurement scale is obtained. Second, the selection of “abnormal” samples is crucial to the success of dimensionality reduction when OAs are used. For example, if each abnormal item is really unique in the medical example, then it is unclear how the statistical distance MD can be guaranteed to give a consistent diagnostic measure of severity on a continuous scale when the larger-the-better type S/N ratio is used. Multivariate diagnosis is not new to Technometrics readers and is becoming increasingly popular in statistical analysis and data mining for knowledge discovery. As a promising alternative that assumes no underlying data model, The Mahalanobis–Taguchi Strategy does not provide sufficient evidence of the gains achieved by using the proposed method over existing tools. Readers may be very interested in a detailed comparison with other diagnostic tools, such as logistic regression and tree-based methods. Overall, although the idea of MTS/MTGS is intriguing, this book would be more valuable had it been written in a rigorous fashion as a technical reference. 
There is some lack of precision even in several mathematical notations. Perhaps a follow-up with additional theoretical justification and careful case studies would answer some of the lingering questions.

11,507 citations


Journal ArticleDOI
TL;DR: Generalized Estimating Equations is a good introductory book for analyzing continuous and discrete correlated data using GEE methods and provides good guidance for analyzing correlated data in biomedical studies and survey studies.
Abstract: (2003). Statistical Analysis With Missing Data. Technometrics: Vol. 45, No. 4, pp. 364-365.

6,960 citations


Journal ArticleDOI
TL;DR: This book complements the other references well, and merits a place on the bookshelf of anyone concerned with the analysis of lifetime data from any field.
Abstract: (2003). The Statistical Analysis of Failure Time Data. Technometrics: Vol. 45, No. 3, pp. 265-266.

4,600 citations


Journal ArticleDOI
TL;DR: This book describes and illustrates how to compute a simple “naive” variance estimate and confidence intervals that would be correct under the assumption of an underlying nonhomogeneous Poisson process model.
Abstract: (2003). Statistical Models and Methods for Lifetime Data. Technometrics: Vol. 45, No. 3, pp. 264-265.

2,583 citations


Journal ArticleDOI
TL;DR: Although Testing Statistical Hypotheses of Equivalence has some weaknesses, it is a useful reference for those interested in the question of equivalence testing, particularly in biological applications.
Abstract: The writing is not uniformly polished and is scattered with long, awkward sentences that require some effort to unravel. I wonder if this is the result of infelicitous translation from the original German version (Wellek 1994). There are also numerous small typographical errors. More careful editing could have solved these problems before publication. There are no exercises, and so I would hesitate to use the book as a text (although it should be noted that this is not one of the author’s stated aims). Although Testing Statistical Hypotheses of Equivalence has some weaknesses, it is a useful reference for those interested in the question of equivalence testing, particularly in biological applications.

2,485 citations


Journal ArticleDOI
TL;DR: In this article, an Introduction to Data Analysis Using S-PLUS is presented. But the authors do not discuss the use of statistical computing for data analysis in the context of data mining.
Abstract: (2003). Statistical Computing: An Introduction to Data Analysis Using S-PLUS. Technometrics: Vol. 45, No. 4, pp. 369-369.

1,118 citations


Journal ArticleDOI

1,020 citations


Journal ArticleDOI
TL;DR: Applied Regression Analysis Bibliography Update 2000–2001,” Communications in Statistics: Theory and Methods, 2051– 2075.
Abstract: Christensen, R. (2002), Plane Answers to Complex Questions: The Theory of Linear Models (3rd ed.), New York: Springer-Verlag. Crocker, D. C. (1980), Review of Linear Regression Analysis, by G. A. F. Seber, Technometrics, 22, 130. Datta, B. N. (1995), Numerical Linear Algebra and Applications, Pacific Grove, CA: Brooks/Cole. Draper, N. R. (2002), “Applied Regression Analysis Bibliography Update 2000–2001,” Communications in Statistics: Theory and Methods, 2051–2075. Golub, G. H., and Van Loan, C. F. (1996), Matrix Computations (3rd ed.), Baltimore, MD: Johns Hopkins University Press. Graybill, F. A. (2000), Theory and Application of the Linear Model, Pacific Grove, CA: Brooks/Cole. Hocking, R. R. (2003), Methods and Applications of Linear Models: Regression and the Analysis of Variance (2nd ed.), New York: Wiley. Porat, B. (1993), Digital Processing of Random Signals, Englewood Cliffs, NJ: Prentice-Hall. Ravishanker, N., and Dey, D. K. (2002), A First Course in Linear Model Theory, Boca Raton, FL: Chapman and Hall/CRC. White, H. (1984), Asymptotic Theory for Econometricians, Orlando, FL: Academic Press.

862 citations


Journal ArticleDOI
TL;DR: The ozone study introduces the INTCK function to simplify time and date processing and the MOD function to create group indexes, and the library study uses the _TYPE_ variable to conveniently generate categories for PROC CHART and introduces the attractive tabular output format of PROC TABULATE.
Abstract: “look ahead” from the current observation to the next one, without actually processing it, by using multiple SET statements and the FIRSTOBS option. Unfortunately, the clinical study relegates the PROC FORMAT statements to the appendix. Because the author does not stress that this information is pertinent to the discussion, it might be overlooked. By altering the previous convention of putting everything needed within the chapter, the reader who unwittingly bypasses the appendix references could easily become confused. The ozone study introduces the INTCK function to simplify time and date processing and the MOD function to create group indexes. The library study uses the _TYPE_ variable to conveniently generate categories for PROC CHART and introduces the attractive tabular output format of PROC TABULATE. Chapter 12, “Useful Macros,” the crown jewel of the book, provides a summation of all of the functions, detailing their execution as compact macro code. It illustrates automatic differencing between successive observations and uses the %SYSFUNC and TRANWRD functions to automate variable prefixing and suffixing. I highly recommend this book to its target audience of SAS programmers and data analysts. It speaks to all levels of programming ability.
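The review's examples are in SAS, but the two idioms it highlights, looking ahead to the next observation and differencing successive observations, translate directly to other tools. The NumPy sketch below is an illustrative analogue, not code from the book:

```python
import numpy as np

x = np.array([3.0, 7.0, 4.0, 9.0])

# "Look ahead": pair each observation with the one that follows it,
# padding the final position with NaN since it has no successor.
next_obs = np.append(x[1:], np.nan)

# Differencing between successive observations.
deltas = np.diff(x)
```

Here `next_obs` plays the role of the multiple-SET/FIRSTOBS look-ahead, and `deltas` the role of the automatic differencing macro.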

631 citations


Journal ArticleDOI
TL;DR: This book is a great book for a first graduate course in multivariate analysis, because it covers the standard topics in the “classical” normal theory approach to multivariate analysis.
Abstract: Chapter 17, “Bootstrapping,” provides an overview of the nonparametric bootstrap method as a way to obtain precision and the distribution (and thus the significance point of a test statistic), under the assumptions of independence of observation vectors and continuity of the cumulative distribution function. Bootstrapping from residuals, confidence intervals for parametric functions, bootstrapping in the multiple regression model, and testing about the variance are discussed for the univariate case. Testing for the mean vector, testing for equality of two mean vectors, bootstrapping in multivariate regression, testing that the covariance is a specified matrix, and testing the equality of two covariance matrices are discussed for the multivariate case. Chapter 18, “Imputing Missing Data,” proposes single stochastic imputation for missing data and use of the “completed” dataset for inference. Cases of the multiple regression model and multivariate data are considered. For multivariate data, imputed values are obtained by first obtaining the best linear predictors of the missing observations, then adding random errors to them to obtain imputed values. Estimates of the mean vector and covariance matrix are obtained from the “completed” dataset, which has imputed values in place of missing observations. Bootstrap methods are given to obtain the estimate of the covariances of these estimators. Significance levels from the bootstrap distribution are suggested for improving inference about the mean. The author does not discuss multiple imputation, which has the advantages of single imputation, attains large-sample optimality of ML estimation, and is robust to model misspecification (Rubin 2002). 
I would have liked to see (a) a more detailed discussion on transformations to achieve normality for multivariate data, and more on graphical methods for assessing multivariate normality and detecting multivariate outliers; (b) in-depth discussion on the interpretation of test results other than significance levels, including adjustment of significance levels in the case of multiple tests (Hsu 1996); and (c) comparisons of the performance of the proposed tests and methodologies with other existing ones. In conclusion, this is a great book for a first graduate course in multivariate analysis, because it covers the standard topics in the “classical” normal theory approach to multivariate analysis. The “customized” SAS code provides nice support for the implementation of the suggested methods and makes this book a great reference for practitioners. This is a book with a data analytic approach and one of the best references for its level.
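The nonparametric bootstrap that Chapter 17 builds on can be sketched generically in a few lines of Python. This illustrates the resampling idea only, not the book's SAS implementation; the function name and default settings are assumptions:

```python
import numpy as np

def bootstrap_ci(data, stat=np.mean, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for a statistic.

    Resamples the data with replacement, recomputes the statistic on
    each resample, and reads off the empirical alpha/2 and 1-alpha/2
    quantiles of the bootstrap distribution.
    """
    rng = np.random.default_rng(seed)
    n = len(data)
    reps = np.array([stat(rng.choice(data, size=n, replace=True))
                     for _ in range(n_boot)])
    lo, hi = np.quantile(reps, [alpha / 2, 1 - alpha / 2])
    return lo, hi

x = np.array([4.1, 5.0, 3.8, 4.7, 5.2, 4.4, 4.9, 5.1])
lo, hi = bootstrap_ci(x)   # 95% percentile interval for the mean
```

The same resampling loop works for any statistic (a variance, a regression coefficient) by swapping the `stat` argument, which is what makes the method attractive when no underlying data model is assumed.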

612 citations


Journal ArticleDOI
TL;DR: The author does an admirable job of explaining the differences between Bayesian probability and the frequentist notion of probability, showing that, philosophically, only the Bayesian makes sense.
Abstract: (2003). Comparison Methods for Stochastic Models and Risks. Technometrics: Vol. 45, No. 4, pp. 370-371.

Journal ArticleDOI
TL;DR: This book ties together standard mixed-model results with advances made in the last decade, placing those advances in the context of earlier results and collecting them into a single coherent package.
Abstract: (2003). Statistical Methods for the Analysis of Repeated Measurements. Technometrics: Vol. 45, No. 1, pp. 99-100.

Journal ArticleDOI
TL;DR: Practical Reliability Engineering provides a nice overview for a student of reliability (or an engineer transferred into it) who wants to visit the entire waterfront and includes practical, numerical examples that illustrate some of the methods and problems encountered.
Abstract: This book was written to provide an introduction to and an overview of reliability engineering and management for students and practicing engineers. In its 500C pages, the book touches most aspects of reliability engineering. The book comprises 15 chapters. Chapter 1 gives basic deŽ nitions and concepts. Chapter 2 quickly overviews most of the basic statistical methods used in reliability. Chapter 3 discusses probability plotting and gives many detailed examples. Chapter 4 covers load-strength interference. Chapter 5 quickly reviews parametric, nonparametric, and Taguchi methods of experimental design. Chapter 6 covers reliability prediction and modeling methods, including Markov, simulation, availability, redundancy, fault, and event trees. Chapter 7 reviews reliability design, including FMECA. Chapter 8 covers reliability of mechanical components and systems, including stress, strength, fatigue, fracture, and wear. Chapter 9 reviews electrical systems; Chapter 10, software reliability; Chapter 11, reliability testing methods; Chapter 12, analysis of reliability data, including accelerated testing and reliability growth; and Chapter 13, methods that increase reliability in manufacturing, including process capability, quality control charts, and acceptance sampling. Chapter 14 presents an extensive discussion on maintainability. Finally, Chapter 15 discusses management issues in reliability. Some statistical tables are included at the end of the book. Most chapters include practical, numerical examples that brie y illustrate some of the methods and problems encountered, along with tables of useful formulas and of work forms. At the end of each chapter is a short bibliography with suggestions for further reading, as well as some problems and questions (whose solutions are given in a separate Instructor’s Manual). As a reliability statistician and college instructor, I have some comments and observations. 
First, the book covers so much that it is necessarily thin in several areas. The statistics chapter (Chap. 2, “Reliability Mathematics”) is an example of this. O’Connor discusses t tests, hypothesis testing, nonparametrics, and related topics in only a few paragraphs and skips other aspects (e.g., test size, power). However, this book is not about statistics, but about reliability engineering (which uses lots of statistics). And, to be fair, this chapter may serve the initiated as a quick refresher. On the other hand, O’Connor does an excellent job on the plotting chapter (Chap. 3). If he had done likewise with all of the other subjects he covers, then he probably would have ended up with a book of several volumes. This seems to be a catch-22 situation. Some of the most useful features of the book are the long digressions on reliability topics. Reading these is like having the opportunity to talk with an “old hand” and listen to his experiences. This is very valuable. Finally, to make this book usable as a textbook, the author may want to include the solutions for some (say, the odd-numbered) exercises. Relegating all of the solutions to the Instructor’s Manual is a problem in a textbook and does not help the student. My overall assessment is that Practical Reliability Engineering provides a nice overview for a student of reliability (or an engineer transferred into it) who wants to visit the entire waterfront. It also serves as a reference where the reader can find most formulas and short, worked-out illustrative examples (which he or she probably has to fill in with personal experience, many times).

Journal ArticleDOI
TL;DR: Comparisons of the standard and worst-case average run length profiles of the new scheme with those of different control charts show that AEWMA schemes offer a more balanced protection against shifts of different sizes.
Abstract: Lucas and Saccucci showed that exponentially weighted moving average (EWMA) control charts can be designed to quickly detect either small or large shifts in the mean of a sequence of independent observations. But a single EWMA chart cannot perform well for small and large shifts simultaneously. Furthermore, in the worst-case situation, this scheme requires a few observations to overcome its initial inertia. The main goal of this article is to suggest an adaptive EWMA (AEWMA) chart that weights the past observations of the monitored process using a suitable function of the current “error.” The resulting scheme can be viewed as a smooth combination of a Shewhart chart and an EWMA chart. A design procedure for the new control schemes is suggested. Comparisons of the standard and worst-case average run length profiles of the new scheme with those of different control charts show that AEWMA schemes offer a more balanced protection against shifts of different sizes.
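The scheme described above updates the EWMA statistic through a score function of the current error, so that small errors are damped as in an EWMA while large errors pass through almost unchanged, as in a Shewhart chart. A Huber-type score is one common choice; the sketch below uses it with arbitrary parameter values as an illustration of the mechanism, not the authors' recommended design:

```python
import numpy as np

def huber_score(e, lam, k):
    """Huber-type score: EWMA-like damping for small errors,
    nearly identity (Shewhart-like) for errors beyond +/- k."""
    if e > k:
        return e - (1 - lam) * k
    if e < -k:
        return e + (1 - lam) * k
    return lam * e

def aewma(x, lam=0.1, k=3.0, z0=0.0):
    """Adaptive EWMA statistic: z_t = z_{t-1} + phi(x_t - z_{t-1})."""
    z, path = z0, []
    for xt in x:
        z = z + huber_score(xt - z, lam, k)
        path.append(z)
    return np.array(path)
```

For an in-control error of 1, the update moves the statistic by only `lam * 1 = 0.1`, as an ordinary EWMA would; for a gross error of 10, it moves by 7.3, nearly the full shift, which is what reduces the worst-case inertia the abstract mentions.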

Journal ArticleDOI
TL;DR: After Part I, the text becomes mainly an application guide for SIMCA-P and MODDE, and the basic message is that when one has a multivariate problem, do either a PCA analysis or a PLS analysis, or both.
Abstract: This text is essentially an application of principal component analysis (PCA) and partial least squares (PLS) to multivariate (X′X matrix full rank) and megavariate (X′X not full rank) problems in the physical sciences. The software used is SIMCA-P and MODDE from Umetrics. Also included is a CD with all of the data files used as examples in the text. Chapter 0 is a Preface that includes no mention of the software version used. I guess because SIMCA-P and MODDE are mentioned as registered trademarks on the cover page, the reader should know this fact. However, I had SIMCA-P, version 8 on my computer, but could not read any of the data files until I upgraded to version 9. Part I comprises four introductory chapters on the concepts and principles of projections along with discussions and derivations of PCA and PLS. Graphs are used extensively throughout the discussion, and overall this is an excellent introduction to PCA and PLS. In addition, examples are used to explain the output/terminology from SIMCA-P. Part II, Chapters 5–8, covers the following applications: multivariate characterization (mixture of qualitative and quantitative variables, with the goal of quantifying discrete changes in the qualitative variables), multivariate calibration, multivariate process modeling and monitoring, multivariate classification, and discriminant analysis. Part III, Chapters 9–11, discusses preprocessing of the data with respect to transformation, scaling, and signal correction and compression. Part IV, Chapters 12–15, discusses multivariate statistical process control charts for batch and continuous data, multivariate time series, and design of experiments. Part V, Chapters 16–20, covers QSAR modeling, combinatorial chemistry, bioinformatics, and cheminformatics. Finally, Part VI, Chapters 21–23, discusses nonlinear PLS modeling, analysis of preference (sensory) data, and validation. 
I liked a number of features of this book, including (1) illustration of the concepts using real data, (2) clear and straightforward discussion of the different application types, (3) thorough discussion/explanation of the SIMCA-P and MODDE output, and (4) the broad range of application topics. What I disliked about the book is that after Part I, the text becomes mainly an application guide for SIMCA-P and MODDE. There is no discussion on alternative methods of analysis or, for that matter, when PCA and PLS might not be appropriate. The basic message is that when one has a multivariate problem, do either a PCA analysis or a PLS analysis, or both. The chapters became very predictable—a few pages of discussion of the application, presentation of one or more datasets, and analysis using SIMCA-P.
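For readers who want a software-neutral starting point, the core PCA computation that tools like SIMCA-P perform can be sketched with an SVD in a few lines of NumPy. This is a generic illustration of the projection idea; it reproduces none of SIMCA-P's output or terminology:

```python
import numpy as np

def pca(X, n_components=2):
    """Principal component scores via SVD of the mean-centered data matrix."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = Xc @ Vt[:n_components].T          # projections onto the PCs
    explained = s[:n_components] ** 2 / np.sum(s ** 2)  # variance ratios
    return scores, explained

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 4))
X[:, 1] = 2 * X[:, 0] + 0.1 * rng.normal(size=50)  # two correlated columns
scores, explained = pca(X)
```

Because columns 0 and 1 are strongly correlated, the first component captures most of the variance, which is exactly the dimension-reduction behavior the book's score plots are built on.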

Journal ArticleDOI
TL;DR: This book describes the background of the case studies very well and gives details on the statistical methods applied, and more than half of the papers apply different types of hierarchical regression models, there are still a variety of other Bayesian models.
Abstract: Most papers describe the background of the case studies very well and give details on the statistical methods applied. Although more than half of the papers apply different types of hierarchical regression models, there are still a variety of other Bayesian models. As expected, most of these papers use Markov chain Monte Carlo methods in obtaining a posterior. However, I was a little surprised that many of them implement their algorithms using BUGS, a software package designed to simplify Bayesian data analysis. In general, I found many of the papers collected in this book interesting. Technometrics readers who are interested in applying Bayesian analysis in real case studies may find this book useful. One might consider using this book as supplementary material in graduate-level seminar courses on Bayesian data analysis. However, one serious deficiency for this purpose is that only a few papers provide complete datasets.

Journal ArticleDOI
TL;DR: Generalized Estimating Equations is a good introductory book for analyzing continuous and discrete correlated data using GEE methods and provides good guidance for analyzing correlated data in biomedical studies and survey studies.
Abstract: authors briefly review various methods and refer readers to works such as Little (1995) for details. The analyses presented are based on certain assumptions, such that the available GEE software can be applied. Chapter 4 gives a thorough discussion on model selection and testing and graphical methods for residual diagnostics. Overall, Generalized Estimating Equations is a good introductory book for analyzing continuous and discrete correlated data using GEE methods. The authors discuss the differences among the four commercial software programs and provide suggestions and cautions for users. This book is easy to read, and it assumes that the reader has some background in GLM. Many examples are drawn from biomedical studies and survey studies, and so it provides good guidance for analyzing correlated data in these and other areas.

Journal ArticleDOI
TL;DR: This 4th edition (4E) retains the general structure of the 3rd edition (3E), but covers a somewhat broader range of topics, with more detailed examples and updated software features.
Abstract: This 4th edition (4E) retains the general structure of the 3rd edition (3E), but covers a somewhat broader range of topics, with more detailed examples and updated software features. The greatest enhancement is in terms of the discussion and examples of linear and mixed-model methods throughout the book. Examples in most chapters focus on the GLM and/or the MIXED procedures, how they compare, how they can be used to complement each other, and the limits of each. Other enhancements include the use of version 8 and ODS, discussions and examples on fixed versus random block effects, analysis of multilocation data, a new chapter on unbalanced data analysis, and a new chapter on generalized linear models and PROC GENMOD. Additional graphics have been included in certain sections, helping a great deal. The SAS Books by Users companion website (http://www.sas.com/service/doc/bbu/companion_site/56655.html) contains the datasets and SAS code (release 8.2) to perform the analyses described throughout the book. An introductory chapter describes the scope of the book and summarizes the chapters where updates and enhancements have occurred. Chapter 2, “Regression,” reviews the basics of linear models in terms of simple linear and multiple regression, similar to the 3E. Discussion focuses on testing hypotheses and estimating linear combinations of parameters. Type I, II, and III sums of squares are introduced along with the concept of partitioning sums of squares and fitting full versus reduced models. The REG procedure is used to demonstrate the concepts, and GLM is introduced toward the end of the chapter. The contents of Chapter 3, “Analysis of Variance for Balanced Data,” are similar to the 3E, covering the basic designs of one-way ANOVA, randomized block designs with fixed blocks, two-way factorial designs, and a Latin square design. However, emphasis is placed on the GLM rather than on the ANOVA procedure in the examples. The MIXED procedure is also introduced here. 
The chapter contains a good review of multiple comparison of means methods, and an excellent discussion on model parameterization and how to go from a “means” model to an “effects” model. There are some minor errors in the equations in the “Simple Effect Comparisons” section. Chapter 4, “Analyzing Data With Random Effects,” focuses on the issues pertaining to mixed-model inferences. A good introduction section has been added that discusses fixed versus random effects. The chapter focuses on the MIXED and GLM procedures and how they compare, rather than on the NESTED and GLM procedures of the 3E. More discussion on calculating standard errors from ESTIMATE and LSMEANS statements, and on the differences between GLM and MIXED, has been added. Also, discussion and examples on the likelihood ratio test and the Wald test options in MIXED are included. There is an additional section that explores blocked designs with random blocks and how the results/interpretations compare to the fixed-block designs discussed in Chapter 3. The comparisons made between GLM and MIXED are excellent in terms of why the outputs from the two procedures differ, what information one can and cannot get from each procedure, and whether the information obtained from each is correct (i.e., what are the limits for GLM). There are some minor typos throughout the chapter in the reference tables and some of the equations. Chapter 5, “Unbalanced Data Analysis: Basic Methods,” is a new chapter to the 4E. The chapter discusses the issues that arise with unbalanced data and missing or empty cells. The examples and explanations are clear in terms of what to be careful of in each case and when to use either GLM or MIXED over the other. An excellent discussion is included about how estimability is treated under both GLM and MIXED and how they compare when data have missing cells. 
Also included is a good discussion on the differences in the four types of sums of squares and where each would or would not be applicable. Chapter 6, “Understanding Linear Models Concepts,” is similar in content to the previous edition; however, a section at the end has been added introducing generalized least squares and the methodology used by MIXED. Chapter 6 is the most demanding chapter in the book. It goes into more depth than previous chapters about how GLM and MIXED parameterize models and how this is affected by fixed versus random effects. How to generate the estimable functions of an analysis, and how to understand the output that GLM provides, are also discussed. Again, there are minor typos in some of the equations. Chapter 7, “Analysis of Covariance,” provides a good overview of the different models and analyses involved in ANCOVA and has been expanded somewhat from the 3E. Additional discussion on adjusted and unadjusted means and how to estimate them has been added, with graphics. Also, more details are included on the analysis under an unequal slopes model. The example with multiple error terms from the previous edition has been updated with discussion and analysis to cover both GLM and MIXED output results. An additional section at the end of the chapter discusses orthogonal polynomials and how they relate to analysis of covariance. The example very effectively demonstrates the link between traditional ANOVA methods for multilevel factorial experiments and ANCOVA methods. Chapter 8, “Repeated-Measures Analysis,” underwent major revisions due to the inclusion of MIXED. A substantial section on mixed-model analysis of repeated measures has been added. The chapter begins with a good discussion of the different analysis methods used for repeated measures, in which mixed-model methods have been included. It also introduces the covariance structure of repeated measures and how it differs from traditional split-plot models. 
A large part of the chapter focuses on the different covariance models that are common for repeated measures and how to fit them using PROC MIXED. There is a good review on defining the basic error covariance structure of a model, covering the different common structures and how they differ. Covariance structures from the simplest to the most complex are discussed, along with examples showing how to fit these structures using MIXED. The graphics are most helpful. Details on determining which covariance structure best fits one’s data using the diagnostic tools available in MIXED are also included. Chapter 9, “Multivariate Linear Models,” gives a brief overview and introduction to MANOVA. It is similar to the chapter in the 3E; however, emphasis is placed on the GLM procedure rather than on the ANOVA procedure. Chapter 10, “Generalized Linear Models,” is another chapter new to the 4E. This chapter is an excellent overview of the basic differences between “standard” linear models and generalized linear models. The chapter gives a good overview of the capabilities of the GENMOD procedure in fitting a variety of models. It begins with a binary response variable example, reviewing both the logit and probit regression models. Model goodness of fit and how to assess fit are addressed. Applications of using the inverse link and delta rule are shown. The problem of overdispersion and correction for it is addressed with a third example using count data. The last example in this chapter is a repeated-measures scenario and is used to introduce generalized estimating equations, how to specify the covariance structure in GENMOD, and how it compares to MIXED. Chapter 11, “Examples of Special Applications,” has been updated from the 3E to include additional examples, with discussion focusing on the GLM and MIXED procedures. A fractional factorial example has been added; the balanced incomplete-block design example is analyzed in both GLM and MIXED. 
The crossover design has been enhanced with an example from Cochran and Cox (1957) and is analyzed with both GLM and MIXED and compared. A large section on the analysis of multilocation data has been added at the end of this chapter. One example dataset is used here to demonstrate the current issues involved with this type of dataset and several alternative analyses, using both linear and mixed-model methods. Unlike for the previous chapters, however, there does not appear to be any of the SAS code recreated on the website for this chapter. Similar to the 3E, the 4E covers a very broad range of topics, which is the authors’ intent. It is an excellent reference for specific linear models procedures available in SAS under a broad range of scenarios. As mentioned previously, the companion website gives the data and SAS code used in the examples; however, it must be noted that some minor changes in the code are required for it to run without errors (specifically, Chaps. 6, 7, 8, and 10). SAS for Linear Models is an excellent resource for the intermediate to advanced SAS statistical user with a basic understanding of linear models. It would also serve well as a supplemental textbook for an applied linear models course where SAS is the software package for analysis. The authors have done an excellent job incorporating the latest analysis methods and latest software.

Journal ArticleDOI
TL;DR: State-of-the-art survey chapters by leading researchers covering geometric algebra, a powerful mathematical tool for solving problems in computer science, engineering, physics, and mathematics.
Abstract: State-of-the-art survey chapters by leading researchers covering geometric algebra, a powerful mathematical tool for solving problems in computer science, engineering, physics, and mathematics. Focus is on interdisciplinary applications and techniques. Self-contained assuming only a knowledge of li... Accurate and efficient computer algorithms for factoring matrices, solving linear systems of equations, and extracting eigenvalues and eigenvectors. Regardless of the software system used, the book describes and gives examples of the use of modern computer software for numerical linear algebra. It b...
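The second blurb's three core tasks, factoring matrices, solving linear systems, and extracting eigenvalues and eigenvectors, map directly onto standard NumPy routines. The snippet below is a generic sketch of those operations, unrelated to whichever software the book itself covers:

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])   # a small symmetric matrix
b = np.array([1.0, 2.0])

x = np.linalg.solve(A, b)             # solve the linear system A x = b
Q, R = np.linalg.qr(A)                # QR factorization of A
eigvals, eigvecs = np.linalg.eigh(A)  # eigen-decomposition (symmetric case)
```

In practice one calls `solve` rather than forming an explicit inverse, and `eigh` rather than the general `eig` for symmetric matrices, for both accuracy and speed; these are the kinds of algorithmic choices such books discuss.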

Journal ArticleDOI
TL;DR: The Mahalanobis–Taguchi system (MTS) is a relatively new collection of methods proposed for diagnosis and forecasting using multivariate data; it produces a Mahalanobis distance scale used to measure the level of abnormality of "abnormal" items compared to a group of "normal" items.
Abstract: The Mahalanobis–Taguchi system (MTS) is a relatively new collection of methods proposed for diagnosis and forecasting using multivariate data. The primary proponent of the MTS is Genichi Taguchi, who is very well known for his controversial ideas and methods for using designed experiments. The MTS results in a Mahalanobis distance scale used to measure the level of abnormality of “abnormal” items compared to a group of “normal” items. First, it must be demonstrated that a Mahalanobis distance measure based on all available variables on the items is able to separate the abnormal items from the normal items. If this is the case, then orthogonal arrays and signal-to-noise ratios are used to select an “optimal” combination of variables for calculating the Mahalanobis distances. Optimality is defined in terms of the ability of the Mahalanobis distance scale to match a prespecified or estimated scale that measures the severity of the abnormalities. In this expository article, we review the methods of the MTS an...
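As a concrete illustration of the distance scale described above, the following sketch (not from the article; the function name and toy data are invented for illustration) computes scaled Mahalanobis distances for items relative to a reference group of "normal" observations, which is the first step of the MTS:

```python
import numpy as np

def mahalanobis_distances(normal, items):
    """Scaled Mahalanobis distances of `items` relative to a 'normal' group.

    Variables are standardized by the normal group's means and standard
    deviations; the squared distance is divided by the number of variables
    k so that normal items have a distance near 1.
    """
    mu = normal.mean(axis=0)
    sd = normal.std(axis=0, ddof=1)
    z_normal = (normal - mu) / sd
    corr = np.corrcoef(z_normal, rowvar=False)   # correlation matrix of normal group
    corr_inv = np.linalg.inv(corr)
    z = (items - mu) / sd                        # standardize by the NORMAL group
    k = normal.shape[1]
    # quadratic form z_i' C^{-1} z_i for each row i, scaled by k
    return np.einsum('ij,jk,ik->i', z, corr_inv, z) / k

# Toy data: 200 'normal' items, 5 clearly shifted 'abnormal' items
rng = np.random.default_rng(0)
normal = rng.normal(size=(200, 3))
abnormal = rng.normal(loc=4.0, size=(5, 3))
md_normal = mahalanobis_distances(normal, normal)
md_abnormal = mahalanobis_distances(normal, abnormal)
```

On this toy data the normal items cluster near a scaled distance of 1 while the abnormal items land far above it, which is the separation the MTS requires before any variable selection with orthogonal arrays takes place.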

Journal ArticleDOI
TL;DR: The next topic covered (Chap. 7) is discrete-state, continuous-time Markov processes, which is predominantly on Poisson processes and birth and death processes.
Abstract: (2003). Testing Statistical Hypotheses of Equivalence. Technometrics: Vol. 45, No. 3, pp. 271-272.

Journal ArticleDOI
TL;DR: The trend-renewal process (TRP) is a time-transformed renewal process having both the ordinary renewal process and the nonhomogeneous Poisson process as special cases.
Abstract: The most commonly used models for the failure process of a repairable system are nonhomogeneous Poisson processes, corresponding to minimal repairs, and renewal processes, corresponding to perfect repairs. This article introduces and studies a more general model for recurrent events, the trend-renewal process (TRP). The TRP is a time-transformed renewal process having both the ordinary renewal process and the nonhomogeneous Poisson process as special cases. Parametric inference in the TRP model is studied, with emphasis on the case in which several systems are observed in the presence of a possible unobserved heterogeneity between systems.
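The time-transformation idea can be made concrete with a small simulation sketch (illustrative only; the power-law trend function and Weibull renewal distribution below are assumptions, not taken from the article). Events are generated as an ordinary renewal process on the transformed time scale and mapped back through the inverse trend function:

```python
import numpy as np

def simulate_trp(n_events, draw_gaps, trend_inverse):
    """Simulate a trend-renewal process: the transformed times Lambda(T_j)
    form an ordinary renewal process, so cumulate renewal gaps on the
    transformed scale and map back through Lambda^{-1}."""
    s = np.cumsum(draw_gaps(n_events))   # renewal process on transformed scale
    return trend_inverse(s)              # event times on the original scale

rng = np.random.default_rng(1)
b = 1.5                                   # power-law trend Lambda(t) = t**b
gaps = lambda n: rng.weibull(2.0, size=n) # renewal distribution F
times = simulate_trp(50, gaps, lambda s: s ** (1.0 / b))
# Special cases: exponential(1) gaps recover an NHPP with intensity
# Lambda'(t); Lambda(t) = t recovers an ordinary renewal process.
```

With b > 1 the back-transformation compresses later gaps, mimicking a deteriorating repairable system whose failures arrive increasingly often.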

Journal ArticleDOI
TL;DR: This article examines the diagnosability of the process faults in a multistage manufacturing process using a linear mixed-effects model and proposes a minimal diagnosable class to expose the “aliasing” structure among process fault structure in a partiallydiagnosable system.
Abstract: Automatic in-process data collection techniques have been widely used in complicated manufacturing processes in recent years. The huge amounts of product measurement data have created great opportunities for process monitoring and diagnosis. Given such product quality measurements, this article examines the diagnosability of the process faults in a multistage manufacturing process using a linear mixed-effects model. Fault diagnosability is defined in a general way that does not depend on specific diagnosis algorithms. The concept of a minimal diagnosable class is proposed to expose the “aliasing” structure among process faults in a partially diagnosable system. The algorithms and procedures needed to obtain the minimal diagnosable class and to evaluate the system-level diagnosability are presented. The methodology, which can be used for any general linear input–output system, is illustrated using a panel assembly process and an engine head machining process.
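The rank-based intuition behind diagnosability and "aliasing" can be sketched for a simple fixed-effects analogue (this illustrates the idea only, not the article's mixed-model formulation; the system matrix below is hypothetical):

```python
import numpy as np

def diagnosable(gamma):
    """In a linear input-output system y = gamma @ f + noise, the fault
    vector f is uniquely identifiable (all faults diagnosable) iff gamma
    has full column rank."""
    return np.linalg.matrix_rank(gamma) == gamma.shape[1]

# Hypothetical 3-sensor, 3-fault system in which fault 3's signature
# equals the sum of the signatures of faults 1 and 2 -- an aliased set,
# so the three faults cannot be distinguished from the measurements.
gamma = np.array([[1.0, 0.0, 1.0],
                  [0.0, 1.0, 1.0],
                  [1.0, 1.0, 2.0]])
full = diagnosable(gamma[:, :2])   # faults 1 and 2 alone: diagnosable
alias = diagnosable(gamma)         # all three: rank-deficient, not diagnosable
```

A minimal diagnosable class, in this simplified picture, corresponds to a smallest set of columns whose linear dependence causes such rank deficiency.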


Journal ArticleDOI
TL;DR: By using simple, familiar problems, Bayesian Networks and Decision Graphs provides broad topic coverage without distracting the reader with the messy details that always accompany real problems.
Abstract: By using simple, familiar problems, the book does not distract the reader with the messy details that always accompany real problems. Overall I found this book to be an excellent introduction to the topic. It is well written, provides broad topic coverage, and is quite accessible to the nonexpert. I would have liked to have seen the summary sections continued into the second part, and to have seen a bit more carry-over of the examples from the first part. However, with that said, I think Bayesian Networks and Decision Graphs would make a fine text for an introductory class in Bayesian networks or a useful reference for anyone interested in learning about the field.

Journal ArticleDOI
TL;DR: Given the somewhat questionable organization of the book's chapters, as well as the lack of smooth transitions between sections, the book is recommended for the experienced reader rather than the novice, even though a novice might find much of the advice valuable.
Abstract: Despite the fact that I have some misgivings about the ad hoc nature of the chapter definitions, I found this book enjoyable and deem it worthwhile reading for practitioners, consultants, and applied academic statisticians. I discovered the real value of this book in the Illustration portion of each section. The vignettes are interesting reading that reinforces the Rule of Thumb mentioned in the previous portion of that same section. The rules of thumb themselves represent sets of guidelines and common sense for practice, and the vignettes sometimes point out the humorous, though serious, consequences that can result when the use of statistics runs amok. Given the somewhat questionable organization of the book's chapters, as well as the lack of smooth transitions between sections of this book, I recommend it for the experienced reader rather than the novice, even though a novice might find much of the advice valuable. In fact, I liken the vignettes to a higher-level version of the examples found in How to Lie With Statistics (Huff 1954). I further commend van Belle for this undertaking because, despite my concerns, I cannot imagine a better organization than that presented.

Journal ArticleDOI
TL;DR: A review of Plane Answers to Complex Questions: Theory of Linear Models.
Abstract: (2003). Plane Answers to Complex Questions: Theory of Linear Models. Technometrics: Vol. 45, No. 2, pp. 174-175.

Journal ArticleDOI
TL;DR: It is shown that two-level factorial and fractional factorial designs are D-optimal for estimating first-order response surface models for specific numbers and sizes of whole plots.
Abstract: The design of split-plot experiments has received considerable attention during the last few years. The goal of this article is to provide an efficient algorithm to compute D-optimal split-plot designs with given numbers of whole plots and given whole-plot sizes. The algorithm is evaluated and applied to a protein extraction experiment. In addition, it is shown that two-level factorial and fractional factorial designs are D-optimal for estimating first-order response surface models for specific numbers and sizes of whole plots.
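The D-criterion at the heart of this abstract is easy to state in code. The sketch below (illustrative only, and completely randomized rather than split-plot) shows that a full 2^2 factorial maximizes det(X'X) for a first-order model compared with a random four-run design on the same region:

```python
import numpy as np
from itertools import product

def d_criterion(X):
    """D-criterion: determinant of the information matrix X'X."""
    return np.linalg.det(X.T @ X)

def model_matrix(design):
    """First-order (main-effects) model matrix: intercept column + factors."""
    return np.column_stack([np.ones(len(design)), design])

# Full 2^2 factorial with factor levels coded +/-1
factorial = np.array(list(product([-1.0, 1.0], repeat=2)))

# A random four-run competitor on the same [-1, 1]^2 region
rng = np.random.default_rng(2)
random_design = rng.uniform(-1, 1, size=(4, 2))

d_fact = d_criterion(model_matrix(factorial))   # X'X = 4*I, so det = 64
d_rand = d_criterion(model_matrix(random_design))
```

The factorial's orthogonal, full-range columns give the largest possible information determinant; the split-plot algorithm in the article optimizes the analogous criterion under the whole-plot error structure.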

Journal ArticleDOI
TL;DR: It is demonstrated that any foldover plan of a 2^(k−p) fractional factorial design is equivalent to a core foldover plan consisting only of the p out of k factors, and it is proved that there are exactly 2^(k−p) foldover plans that are equivalent to any core foldover plan of a 2^(k−p) design.
Abstract: A commonly used follow-up experiment strategy involves the use of a foldover design, obtained by reversing the signs of one or more columns of the initial design. Defining a foldover plan as the collection of columns whose signs are to be reversed in the foldover design, this article answers the following question: Given a 2^(k−p) design with k factors and p generators, what is its optimal foldover plan? We obtain optimal foldover plans for 16 and 32 runs and tabulate the results for practical use. Most of these plans differ from traditional foldover plans that involve reversing the signs of one or all columns. There are several equivalent ways to generate a particular foldover design. We demonstrate that any foldover plan of a 2^(k−p) fractional factorial design is equivalent to a core foldover plan consisting only of the p out of k factors. Furthermore, we prove that there are exactly 2^(k−p) foldover plans that are equivalent to any core foldover plan of a 2^(k−p) design and demonstrate how these foldover plans can be const...
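The sign-reversal operation itself is simple to demonstrate. In the sketch below (a standard textbook example, not one of the article's tabulated optimal plans), a full foldover of a 2^(3−1) design with generator C = AB recovers the complete 2^3 factorial and breaks the alias between C and AB:

```python
import numpy as np
from itertools import product

def foldover(design, plan):
    """Fold over a two-level design: reverse the signs of the columns
    listed in `plan`, then append the new runs to the original ones."""
    fold = design.copy()
    fold[:, plan] *= -1
    return np.vstack([design, fold])

# 2^(3-1) design with generator C = AB (levels coded +/-1)
base = np.array(list(product([-1, 1], repeat=2)))
design = np.column_stack([base, base[:, 0] * base[:, 1]])

# Full foldover plan (reverse all three columns)
combined = foldover(design, plan=[0, 1, 2])
```

In the folded half C = −AB, so the eight combined runs cover the full 2^3 factorial and the main effect C is no longer confounded with the AB interaction.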

Journal ArticleDOI
TL;DR: This text is useful not only with regard to the statistical methods, but also for the real data examples that can be explored with the various models and methods under study.
Abstract: methods, this was accomplished at a cost: limited text coverage of the nonmainstream topics while directing those interested in more details about the specific method to the appropriate literature. To some, this might seem like a drawback. I find it a plus to be made aware of techniques that I might find useful for some specialized need, but that might not otherwise fit into the main discussion in a textbook. I can then follow the list of references to obtain more details. Because my work with repeated measurements includes both linear and nonlinear models, I found this book a good supplement to the text that I regularly use (Davidian and Giltinan 1995) as well as both a supplement and an update to portions of the text by Vonesh and Chinchilli (1997). I anticipate including this book when referencing material on repeated measurement methods. For use in a course, I would use it for an applied graduate-level statistics course on linear models for analysis of repeated measurements. This text is useful not only with regard to the statistical methods, but also for the real data examples that can be explored with the various models and methods under study.