
Showing papers in "Technometrics in 1982"


Journal ArticleDOI
TL;DR: A comprehensive treatment of statistical methods for failure time data, covering parametric and relative risk (Cox) regression models, counting-process asymptotics, competing risks, recurrent events, and correlated failure time data.
Abstract: (Table of contents)
Preface.
1. Introduction: 1.1 Failure Time Data. 1.2 Failure Time Distributions. 1.3 Time Origins, Censoring, and Truncation. 1.4 Estimation of the Survivor Function. 1.5 Comparison of Survival Curves. 1.6 Generalizations to Accommodate Delayed Entry. 1.7 Counting Process Notation. Bibliographic Notes. Exercises and Complements.
2. Failure Time Models: 2.1 Introduction. 2.2 Some Continuous Parametric Failure Time Models. 2.3 Regression Models. 2.4 Discrete Failure Time Models. Bibliographic Notes. Exercises and Complements.
3. Inference in Parametric Models and Related Topics: 3.1 Introduction. 3.2 Censoring Mechanisms. 3.3 Censored Samples from an Exponential Distribution. 3.4 Large-Sample Likelihood Theory. 3.5 Exponential Regression. 3.6 Estimation in Log-Linear Regression Models. 3.7 Illustrations in More Complex Data Sets. 3.8 Discrimination Among Parametric Models. 3.9 Inference with Interval Censoring. 3.10 Discussion. Bibliographic Notes. Exercises and Complements.
4. Relative Risk (Cox) Regression Models: 4.1 Introduction. 4.2 Estimation of beta. 4.3 Estimation of the Baseline Hazard or Survivor Function. 4.4 Inclusion of Strata. 4.5 Illustrations. 4.6 Counting Process Formulas. 4.7 Related Topics on the Cox Model. 4.8 Sampling from Discrete Models. Bibliographic Notes. Exercises and Complements.
5. Counting Processes and Asymptotic Theory: 5.1 Introduction. 5.2 Counting Processes and Intensity Functions. 5.3 Martingales. 5.4 Vector-Valued Martingales. 5.5 Martingale Central Limit Theorem. 5.6 Asymptotics Associated with Chapter 1. 5.7 Asymptotic Results for the Cox Model. 5.8 Asymptotic Results for Parametric Models. 5.9 Efficiency of the Cox Model Estimator. 5.10 Partial Likelihood Filtration. Bibliographic Notes. Exercises and Complements.
6. Likelihood Construction and Further Results: 6.1 Introduction. 6.2 Likelihood Construction in Parametric Models. 6.3 Time-Dependent Covariates and Further Remarks on Likelihood Construction. 6.4 Time Dependence in the Relative Risk Model. 6.5 Nonnested Conditioning Events. 6.6 Residuals and Model Checking for the Cox Model. Bibliographic Notes. Exercises and Complements.
7. Rank Regression and the Accelerated Failure Time Model: 7.1 Introduction. 7.2 Linear Rank Tests. 7.3 Development and Properties of Linear Rank Tests. 7.4 Estimation in the Accelerated Failure Time Model. 7.5 Some Related Regression Models. Bibliographic Notes. Exercises and Complements.
8. Competing Risks and Multistate Models: 8.1 Introduction. 8.2 Competing Risks. 8.3 Life-History Processes. Bibliographic Notes. Exercises and Complements.
9. Modeling and Analysis of Recurrent Event Data: 9.1 Introduction. 9.2 Intensity Processes for Recurrent Events. 9.3 Overall Intensity Process Modeling and Estimation. 9.4 Mean Process Modeling and Estimation. 9.5 Conditioning on Aspects of the Counting Process History. Bibliographic Notes. Exercises and Complements.
10. Analysis of Correlated Failure Time Data: 10.1 Introduction. 10.2 Regression Models for Correlated Failure Time Data. 10.3 Representation and Estimation of the Bivariate Survivor Function. 10.4 Pairwise Dependency Estimation. 10.5 Illustration: Australian Twin Data. 10.6 Approaches to Nonparametric Estimation of the Bivariate Survivor Function. 10.7 Survivor Function Estimation in Higher Dimensions. Bibliographic Notes. Exercises and Complements.
11. Additional Failure Time Data Topics: 11.1 Introduction. 11.2 Stratified Bivariate Failure Time Analysis. 11.3 Fixed Study Period Survival Studies. 11.4 Cohort Sampling and Case-Control Studies. 11.5 Missing Covariate Data. 11.6 Mismeasured Covariate Data. 11.7 Sequential Testing with Failure Time Endpoints. 11.8 Bayesian Analysis of the Proportional Hazards Model. 11.9 Some Analyses of a Particular Data Set. Bibliographic Notes. Exercises and Complements.
Glossary of Notation. Appendix A: Some Sets of Data. Appendix B: Supporting Technical Material. Bibliography. Author Index. Subject Index.

3,596 citations


Journal ArticleDOI

538 citations


Journal ArticleDOI
TL;DR: The method is based on successively predicting each element in the data matrix after deleting the corresponding row and column of the matrix, and makes use of recently published algorithms for updating a singular value decomposition.
Abstract: A method is described for choosing the number of components to retain in a principal component analysis when the aim is dimensionality reduction. The correspondence between principal component analysis and the singular value decomposition of the data matrix is used. The method is based on successively predicting each element in the data matrix after deleting the corresponding row and column of the matrix, and makes use of recently published algorithms for updating a singular value decomposition. These are very fast, which renders the proposed technique a practicable one for routine data analysis.
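The fast SVD-updating formulas the article relies on are not reproduced here. The Python sketch below only illustrates the cross-validatory idea of predicting each held-out matrix element from a rank-m fit and accumulating a PRESS-type criterion; it uses deliberately naive single-entry imputation in place of the updating algorithms, and the toy data are invented for illustration.

```python
import numpy as np

def press_for_rank(X, m, n_iter=25):
    """Leave-one-element-out PRESS for a rank-m PCA approximation.

    Each entry (i, j) is treated as missing and imputed by iterating:
    compute the rank-m SVD reconstruction of the working matrix, then
    copy its (i, j) value back in.  The squared gap between the true
    entry and its converged imputation is accumulated into PRESS(m).
    """
    n, p = X.shape
    col_means = X.mean(axis=0)
    press = 0.0
    for i in range(n):
        for j in range(p):
            Xw = X.copy()
            Xw[i, j] = col_means[j]          # crude starting value
            for _ in range(n_iter):
                U, s, Vt = np.linalg.svd(Xw, full_matrices=False)
                approx = (U[:, :m] * s[:m]) @ Vt[:m, :]
                Xw[i, j] = approx[i, j]      # re-impute the held-out entry
            press += (X[i, j] - Xw[i, j]) ** 2
    return press

rng = np.random.default_rng(0)
# toy data with two real dimensions plus noise
scores = rng.normal(size=(30, 2))
loadings = rng.normal(size=(2, 6))
X = scores @ loadings + 0.1 * rng.normal(size=(30, 6))
X -= X.mean(axis=0)                          # centre, as in ordinary PCA

for m in range(1, 5):
    print(m, press_for_rank(X, m))
```

The rank at which the PRESS value stops dropping appreciably would then be the number of components retained.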

364 citations


Journal ArticleDOI

318 citations


Journal ArticleDOI
TL;DR: A review of the book Engineering Applications of Correlation and Spectral Analysis.
Abstract: (1994). Engineering Applications of Correlation and Spectral Analysis. Technometrics: Vol. 36, No. 2, pp. 220-221.

286 citations


Journal ArticleDOI
TL;DR: A review of the book The Theory of Linear Models and Multivariate Analysis.
Abstract: (1982). The Theory of Linear Models and Multivariate Analysis. Technometrics: Vol. 24, No. 3, pp. 250-250.

265 citations


Journal ArticleDOI
TL;DR: In this paper, a branch-and-bound algorithm is presented to construct a catalog of all D-optimal n-point designs for a specified design region, linear model, and number of observations, n. While the primary design criterion is D optimality, the algorithm may also be used to find designs performing well by other secondary criteria, if a small sacrifice in efficiency as measured by D optimality is accepted.
Abstract: This article presents a branch-and-bound algorithm that constructs a catalog of all D-optimal n-point designs for a specified design region, linear model, and number of observations, n. While the primary design criterion is D optimality, the algorithm may also be used to find designs performing well by other secondary criteria, if a small sacrifice in efficiency as measured by D optimality is accepted. Finally, some designs are supplied for a quadratic response surface model.
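The branch-and-bound catalog construction is not reproduced below. As a hedged illustration only, a brute-force enumeration over an invented candidate grid shows what the D criterion, det(X'X) for the model matrix X, measures for a one-factor quadratic model.

```python
import itertools
import numpy as np

def model_matrix(points):
    """Rows (1, x, x^2) of a quadratic response model for the chosen points."""
    x = np.asarray(points, dtype=float)
    return np.column_stack([np.ones_like(x), x, x ** 2])

candidates = np.linspace(-1.0, 1.0, 5)     # hypothetical grid over the design region
n = 4                                      # number of observations

# exhaustive search: a stand-in for the article's branch-and-bound algorithm
best = max(
    itertools.combinations_with_replacement(candidates, n),
    key=lambda pts: np.linalg.det(model_matrix(pts).T @ model_matrix(pts)),
)
F = model_matrix(best)
print("best 4-point design:", best)
print("D criterion |F'F|  :", np.linalg.det(F.T @ F))
```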

163 citations


Journal ArticleDOI
TL;DR: In this paper, the authors considered the Weibull Process (a nonhomogeneous Poisson process with intensity r(t) = λβt^(β−1)) as a stochastic model for the Duane (1964) reliability growth postulate.
Abstract: The Weibull Process (a nonhomogeneous Poisson process with intensity r(t) = λβt^(β−1)) is considered as a stochastic model for the Duane (1964) reliability growth postulate. Under this model the mean time between failures (MTBF) for the system at time t is given by M(t) = [r(t)]^(−1). Small-sample and asymptotic confidence intervals on M(t) are discussed for failure- and time-truncated testing. Tabled values to compute the confidence intervals and numerical examples illustrating these procedures are presented.
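For orientation, here is a small Python sketch of the point estimates underlying such intervals. It uses the standard maximum likelihood formulas for a failure-truncated power-law (Weibull) process; these are a stand-in and may differ in detail from the small-sample procedures and tables in the article, and the failure times are invented.

```python
import numpy as np

def weibull_process_mtbf(failure_times):
    """MLEs of the power-law (Weibull) process and the MTBF at the last failure.

    failure_times: ordered cumulative failure times t_1 < ... < t_n from a
    failure-truncated test.  Intensity r(t) = lam * beta * t**(beta - 1);
    instantaneous MTBF M(t) = 1 / r(t).
    """
    t = np.sort(np.asarray(failure_times, dtype=float))
    n, tn = len(t), t[-1]
    beta = n / np.sum(np.log(tn / t[:-1]))   # MLE of the growth parameter
    lam = n / tn ** beta                     # MLE of the scale parameter
    mtbf = 1.0 / (lam * beta * tn ** (beta - 1.0))
    return beta, lam, mtbf

times = [5.0, 40.0, 92.0, 170.0, 290.0, 450.0]   # illustrative failure times
print(weibull_process_mtbf(times))
```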

153 citations


Journal ArticleDOI
TL;DR: The Enterprise of Knowledge as mentioned in this paper is a major conceptual and speculative philosophic investigation of knowledge, belief, and decision, which offers a distinctive approach to the improvement of knowledge where knowledge is construed as a resource for deliberation and inquiry.
Abstract: This book presents a major conceptual and speculative philosophic investigation of knowledge, belief, and decision. It offers a distinctive approach to the improvement of knowledge where knowledge is construed as a resource for deliberation and inquiry. The first three chapters of the book address the question of the revision of knowledge from a highly original point of view, one that takes issue with the fallibilist doctrines of Peirce and Popper, and with the views of Dewey, Quine, and Kuhn as well. The next ten chapters are more technical in nature but require relatively little background in mathematical technique. Among the topics discussed are inductive logic and inductive probability, utility theory, rational decision making, value conflict, chance (statistical probability), direct inference, and inverse inference. Chapters 14-17 review alternative approaches to the topic of inverse statistical inference. Much of the discussion focuses on contrasting Bayesian and anti-Bayesian reactions to R. A. Fisher's fiducial argument. This section of the book concludes with a discussion of the Neyman-Pearson-Wald approach to the foundations of statistical inference. The final chapter returns to the epistemological themes with which the book opened, emphasizing the question of the objectivity of human inquiry. An appendix provides a real-world application of Levi's theories of knowledge and probability, offering a critique of some of the methodological procedures employed in the Rasmussen Report to assess risks of major accidents in nuclear power plants. There are also references and an index. "The Enterprise of Knowledge" will interest professionals and students in epistemology, philosophy of science, decision theory, probability theory, and statistical inference.

129 citations


Journal ArticleDOI
TL;DR: In this article, a new design criterion is developed that generalizes linear optimality to a situation in which, a priori, the exact form of the regression model need not be known.
Abstract: Motivated by a problem concerning the estimation of uranium content in calibration standards, a new design criterion is developed that generalizes linear optimality to a situation in which, a priori, the exact form of the regression model need not be known. An equivalence theorem is given, some properties of the new designs are delineated, and iterative methods for design construction are presented. Finally, by using methods developed within, optimal designs are computed in a number of situations, including that of the motivating example.

99 citations


Journal ArticleDOI
TL;DR: A review of Statistical Analysis: A Computer-Oriented Approach (2nd Ed.).
Abstract: (1982). Statistical Analysis: A Computer Oriented Approach (2nd Ed.) Technometrics: Vol. 24, No. 3, pp. 249-250.

Journal ArticleDOI
TL;DR: In this article, a method for selecting the member of a collection of families of distributions that best fit a set of observations is given, which is essentially the value of the density function of a scale transformation maximal invariant.
Abstract: A method is given for selecting the member of a collection of families of distributions that best fits a set of observations. This method requires a noncensored set of observations. The families considered include the exponential, gamma, Weibull, and lognormal. A selection statistic is proposed that is essentially the value of the density function of a scale transformation maximal invariant. Some properties of the selection procedures based on these statistics are stated, and results of a simulation study are reported. A set of time-to-failure data from a textile experiment is used as an example to illustrate the procedure, which is implemented by a computer program.
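The selection statistic based on the density of a scale-transformation maximal invariant is not reproduced here. The sketch below substitutes a much simpler criterion, the maximized log-likelihood of each candidate family, just to illustrate the flavor of choosing among exponential, gamma, Weibull, and lognormal fits. The synthetic data and the scipy-based fitting are assumptions of the example, not the article's procedure.

```python
import numpy as np
from scipy import stats

def select_family(x):
    """Pick the family with the largest maximized log-likelihood (a simple
    stand-in for the article's maximal-invariant selection statistic)."""
    x = np.asarray(x, dtype=float)
    dists = {"exponential": stats.expon, "gamma": stats.gamma,
             "weibull": stats.weibull_min, "lognormal": stats.lognorm}
    loglik = {}
    for name, dist in dists.items():
        params = dist.fit(x, floc=0)              # fit with location fixed at 0
        loglik[name] = np.sum(dist.logpdf(x, *params))
    return max(loglik, key=loglik.get), loglik

rng = np.random.default_rng(1)
sample = rng.weibull(1.8, size=200) * 100.0       # synthetic time-to-failure data
print(select_family(sample))
```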

Journal ArticleDOI
TL;DR: In this article, the number of center points in a second-order response surface design is discussed and an integrated variance criterion is offered, and the results are compared and discussed and indicate that fewer center points than initially recommended are generally appropriate.
Abstract: Previous approaches for selecting the number of center points in a second-order response surface design are explained, and an integrated variance criterion is offered. Calculations extending results in the literature are made for various composite and Box-Behnken designs for 2 ≤ k ≤ 8. The results are compared and discussed and indicate that fewer center points than initially recommended are generally appropriate.



Journal ArticleDOI
TL;DR: In this article, a generalized cubic splining algorithm was used to evaluate recursively defined convolution integrals for a wide variety of distribution functions, including the renewal function, the variance function, and the integral of the renewal functions for five distributions (gamma, inverse Gaussian, lognormal, truncated normal, and Weibull).
Abstract: A generalized cubic splining algorithm enables us to evaluate recursively-defined convolution integrals for a wide variety of distribution functions. This algorithm has been used to evaluate the renewal function, the variance function, and the integral of the renewal function for five distributions (gamma, inverse Gaussian, lognormal, truncated normal, and Weibull) for a wide range of values of the shape parameter of each. The results of the computations are described and a comparison is made with previous tabulations.
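The cubic-splining algorithm itself is not shown here. The following sketch solves the same renewal equation, M(t) = F(t) + ∫₀ᵗ M(t−u) dF(u), by a crude uniform-grid discretization, which conveys the recursive convolution structure the article exploits. The grid size, the gamma example, and the use of scipy are illustrative assumptions.

```python
import numpy as np
from scipy import stats

def renewal_function(cdf, t_max, n_steps=2000):
    """Crude numerical solution of M(t) = F(t) + integral_0^t M(t - u) dF(u)
    on a uniform grid (a simple discretized convolution, not the article's
    cubic-spline algorithm)."""
    t = np.linspace(0.0, t_max, n_steps + 1)
    F = cdf(t)
    dF = np.diff(F)                          # increments F(t_j) - F(t_{j-1})
    M = np.zeros_like(t)
    for k in range(1, n_steps + 1):
        # sum over j = 1..k of M(t_k - t_j) * dF_j on the uniform grid
        M[k] = F[k] + np.sum(M[k - 1::-1] * dF[:k])
    return t, M

# Example: renewal function of a gamma(shape=2) inter-renewal distribution
t, M = renewal_function(stats.gamma(a=2.0).cdf, t_max=10.0)
print(f"M(10) ~ {M[-1]:.3f}   (asymptotic t/mu = {10.0 / 2.0})")
```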

Journal ArticleDOI
TL;DR: In this paper, the problem of the identification of two unknown chemical compounds and the estimation of their proportions in a set of unknown mixtures of the two compounds, given data that are vectors of measurements on their mixtures is considered.
Abstract: The problem considered is the identification of two unknown chemical compounds and the estimation of their proportions in a set of unknown mixtures of the two compounds, given data that are vectors of measurements on their mixtures. It is assumed that the expected value of a mixture vector is an unknown convex linear combination of two unknown component vectors and least squares estimation is used to obtain a set of possible solutions of the mixing proportions and the component vectors. Obtaining a unique solution requires additional constraints or information. The solution set is interpreted geometrically and examples involving amino acids and light absorbance data are given.

Journal ArticleDOI
TL;DR: In this article, a method of comparing the performance of estimators of location is developed and applied to a series of historical data sets in the physical sciences and to a collection of modern analytical-chemistry data sets.
Abstract: Although there is substantial literature on robust estimation, most scientists continue to employ traditional methods. They remain skeptical about the practical benefit of employing robust techniques and doubt the realism of the long-tailed error distributions commonly employed by their proponents in Monte Carlo studies. In this article a method of comparing the performance of estimators of location is developed and applied to a series of historical data sets in the physical sciences and to a collection of modern analytical-chemistry data sets. Both sets of results suggest that either severely trimmed means or modern robust estimators are required for optimal efficiency.
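A minimal illustration of the kind of location estimators being compared, on an invented long-tailed sample (the article itself uses historical physical-science and analytical-chemistry data sets):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
x = np.concatenate([rng.normal(10.0, 1.0, 95),       # bulk of the measurements
                    rng.normal(10.0, 10.0, 5)])      # occasional wild values

print("sample mean      :", np.mean(x))
print("median           :", np.median(x))
print("20% trimmed mean :", stats.trim_mean(x, 0.20))  # trims 20% from each tail
```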

Journal ArticleDOI
TL;DR: In this paper, a new mixture component effect measure is presented and compared to previously suggested measures, which incorporates more information concerning the size, shape, and location of the constraint region than do the previous suggestions.
Abstract: In a mixture experiment, the response to a mixture of q components is a function of the proportions x_1, x_2, …, x_q of components in the mixture. The proportions satisfy the constraint Σx_i = 1, and the nature of a particular situation may impose other restrictions. The problem considered is the measurement of the effect each component has on the response. A new mixture component effect measure is presented and compared to previously suggested measures. This new measure incorporates more information concerning the size, shape, and location of the constraint region than do the previous suggestions. A distinction between partial and total effects is made, and the uses of these effects in modifying and interpreting mixture response prediction equations are considered. The methods of the article are illustrated in an example from a glass development study in a waste vitrification program.



Journal ArticleDOI
TL;DR: In this article, a modeling procedure for multiple linear regression is proposed, which is based on preliminary interior and global analyses, and uses information from that analysis as a guide in the selection of methods to achieve the objective.
Abstract: A modeling procedure for multiple linear regression is proposed. This procedure begins with preliminary interior and global analyses. The global analysis is based on a form of canonical analysis of the sample correlation matrix of all variables, and, depending on the regression objective, the procedure uses information from that analysis as a guide in the selection of methods to achieve the objective. The two objectives discussed are prediction and exploration of structure. The dependence of the choice of methods on the regression objective is illustrated on a “benchmark” data set, and the results obtained by our approach are compared with published results obtained by other methods. The procedure suggested is particularly useful for data sets with large numbers of explanatory variables that render more conventional methods more expensive, less flexible, or less informative concerning relationships among variables.

Journal ArticleDOI
TL;DR: In this article, the classical and inverse point estimators in statistical calibration can be obtained by direct or inverse regression and are to some extent supported by the maximum likelihood and Bayesian approaches, respectively.
Abstract: The competing classical and inverse point estimators in statistical calibration can be obtained by direct or inverse regression and are to some extent supported by the maximum likelihood and Bayesian approaches, respectively. Both of these approaches depend on specific distributional assumptions, but these assumptions confuse the main issue because both estimators can be justified without reference to them. By using a compound estimation approach, it is shown that the classical estimator can be derived as a linear compound estimator satisfying the criterion of asymptotic unbiasedness, while the inverse estimator is a linear compound estimator without the unbiasedness constraint. This formulation requires neither specific distributional assumptions nor reference to direct or inverse regression. Assessments of the two estimators are made in terms of their performance in estimating the current x value. It is shown that superiority of the inverse estimator can only be guaranteed if the current x value is samp...
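A small numeric sketch of the two competing point estimators, on invented calibration data. The compound-estimation derivation in the article is not reproduced, only the classical estimate (invert the fitted regression of y on x) and the inverse estimate (regress x on y directly):

```python
import numpy as np

def classical_and_inverse_estimates(x, y, y_new):
    """Point estimates of x for a newly observed y, by the two calibration routes.

    classical: fit y = a + b*x by least squares, then invert, x_hat = (y_new - a) / b
    inverse  : regress x on y directly, x_hat = c + d * y_new
    """
    b, a = np.polyfit(x, y, 1)          # slope, intercept of y on x
    d, c = np.polyfit(y, x, 1)          # slope, intercept of x on y
    return (y_new - a) / b, c + d * y_new

rng = np.random.default_rng(3)
x = np.linspace(0.0, 10.0, 25)                     # known standards
y = 2.0 + 0.5 * x + rng.normal(0.0, 0.2, x.size)   # measured responses
print(classical_and_inverse_estimates(x, y, y_new=5.1))
```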

Journal ArticleDOI
TL;DR: In this article, the authors describe situations in which such inferences are meaningful, give examples of their use, and provide a table of constants needed to implement such multiple comparison procedures, which can also be used for statistically legitimate "data snooping" to help decide which contrasts within a specified set warrant further study.
Abstract: In many experimental situations the pertinent inferences are made on the basis of orthogonal contrasts among the treatment means (as in 2^n factorial experiments). In this setting a particularly useful form of inference is one involving multiple comparisons. The present article describes situations in which such inferences are meaningful, gives examples of their use, and provides a table of constants needed to implement such multiple comparison procedures. The procedures can also be used for statistically legitimate "data snooping" (in the sense of Scheffé 1959, p. 80) to help decide which contrasts within a specified set warrant further study.
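The article's tabled constants are not reproduced. The sketch below computes orthogonal contrast estimates for an invented replicated 2^2 factorial and forms simultaneous intervals using a Bonferroni multiplier as a stand-in for those constants.

```python
import numpy as np
from scipy import stats

# Orthogonal contrasts among the four treatment means of a replicated 2^2 factorial.
y = np.array([[8.1, 8.4],      # (1): A low,  B low
              [9.6, 9.9],      # a  : A high, B low
              [7.8, 8.0],      # b  : A low,  B high
              [11.2, 11.6]])   # ab : A high, B high
means = y.mean(axis=1)
r = y.shape[1]                                   # replicates per treatment
contrasts = {"A":  np.array([-1,  1, -1,  1]),
             "B":  np.array([-1, -1,  1,  1]),
             "AB": np.array([ 1, -1, -1,  1])}

s2 = y.var(axis=1, ddof=1).mean()                # pooled within-cell variance
df = y.size - y.shape[0]                         # error degrees of freedom
# Bonferroni critical value, standing in for the article's tabled constants
tcrit = stats.t.ppf(1 - 0.05 / (2 * len(contrasts)), df)

for name, c in contrasts.items():
    est = c @ means
    se = np.sqrt(s2 * np.sum(c ** 2) / r)
    print(f"{name}: estimate {est:.2f}, interval ({est - tcrit*se:.2f}, {est + tcrit*se:.2f})")
```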


Journal ArticleDOI
TL;DR: Prediction intervals for the inverse Gaussian distribution are obtained from both a frequentist viewpoint, by inverting pivotal quantities, and a Bayesian viewpoint, via a predictive density based on a vague prior.
Abstract: Prediction intervals for the inverse Gaussian are obtained from both a frequentist and a Bayesian viewpoint. The frequentist intervals are obtained by constructing pivotals that have the χ² and F distributions. The method involves inversion of probability statements, which results in two-sided prediction intervals. A Bayesian predictive density is obtained using a vague prior, from which one- or two-sided Bayesian prediction intervals can be determined. An example for which the Bayesian prediction limits are narrower than the frequentist limits is given.

Journal ArticleDOI
TL;DR: It is shown that the method can often overcome some difficulties inherent in the traditional smoothed periodogram and autoregressive spectral-estimation methods and that additional insights into the structure of a multiple time series can be obtained by using periodic autoregressions.
Abstract: A new method of estimating the spectral density of a multiple time series based on the concept of periodically stationary autoregressive processes is described and illustrated. It is shown that the method can often overcome some difficulties inherent in the traditional smoothed periodogram and autoregressive spectral-estimation methods and that additional insights into the structure of a multiple time series can be obtained by using periodic autoregressions.

Journal ArticleDOI
TL;DR: In this paper, first-order and second-order models are compared from the point of view of their ability to detect certain likely kinds of lack of fit of degree one higher than has been fitted.
Abstract: Some first-order (2^(k−p) two-level factorials and fractional factorials plus center points) and second-order (cube plus star plus center-point composite) response surface designs are discussed from the point of view of their ability to detect certain likely kinds of lack of fit of degree one higher than has been fitted. This leads to consideration of conditions for representational adequacy of first- and second-order models in transformed predictor variables. It is shown how to use the estimated regression coefficients from the higher-degree model to check whether power transformations of the predictor variables could eliminate the lack of fit, and also actually to estimate the transformations.

Journal ArticleDOI
TL;DR: A new diagnostic is proposed: the median of the tetrads associated with a cell, which can efficiently identify multiple outliers in a single step in a two-way table.
Abstract: Using residuals to isolate multiple outliers in a two-way table may lead to very poor results. A new diagnostic is proposed: the median of the tetrads associated with a cell. This procedure can efficiently identify multiple outliers in a single step. The use of a half-normal plot provides a good indicator of the actual number of outliers in the table.
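A sketch of one plausible reading of the diagnostic, on an invented additive table with a single planted outlier: for each cell, take the median of all tetrads (two-row, two-column differences) involving it. The exact definition used in the article and its half-normal plotting step are not reproduced.

```python
import numpy as np

def tetrad_medians(Y):
    """For each cell (i, j) of a two-way table, the median of the tetrads
        Y[i, j] - Y[i, l] - Y[k, j] + Y[k, l]   over all k != i, l != j.
    A cell whose tetrad median is far from zero is a candidate outlier.
    (One plausible reading of the diagnostic described above.)"""
    n, p = Y.shape
    D = np.zeros_like(Y, dtype=float)
    for i in range(n):
        for j in range(p):
            tets = [Y[i, j] - Y[i, l] - Y[k, j] + Y[k, l]
                    for k in range(n) if k != i
                    for l in range(p) if l != j]
            D[i, j] = np.median(tets)
    return D

# additive table (row effect + column effect) with one planted outlier
rows = np.array([0.0, 1.0, 2.0, 3.0])
cols = np.array([10.0, 12.0, 14.0])
Y = rows[:, None] + cols[None, :]
Y[1, 2] += 5.0                      # planted outlier
print(np.round(tetrad_medians(Y), 2))
```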

Journal ArticleDOI
TL;DR: In this article, statistics are proposed to help identify the orders of autoregressive-moving average models, and large-sample variances of the statistics are derived and small-sample properties are explored by simulation.
Abstract: Statistics are proposed to help identify the orders of autoregressive-moving average models. These are functions of the sample autocorrelations and include partial autocorrelations as special cases. The large-sample variances of the statistics are derived and small-sample properties are explored by simulation. Their interpretation is illustrated using Wolfer's annual sunspot numbers and wind velocity data (Cleveland 1972).
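The general family of statistics proposed in the article is not reproduced here. The sketch below computes only the familiar special case it mentions, sample autocorrelations and partial autocorrelations (via the Durbin-Levinson recursion), on a simulated AR(2) series whose PACF should cut off after lag 2.

```python
import numpy as np

def sample_acf(x, max_lag):
    """Sample autocorrelations r_1, ..., r_max_lag."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    denom = np.sum(x ** 2)
    return np.array([np.sum(x[k:] * x[:-k]) / denom for k in range(1, max_lag + 1)])

def sample_pacf(x, max_lag):
    """Partial autocorrelations via the Durbin-Levinson recursion."""
    r = np.concatenate([[1.0], sample_acf(x, max_lag)])
    phi = np.zeros((max_lag + 1, max_lag + 1))
    pacf = np.zeros(max_lag)
    for k in range(1, max_lag + 1):
        if k == 1:
            phi[1, 1] = r[1]
        else:
            num = r[k] - np.sum(phi[k - 1, 1:k] * r[k - 1:0:-1])
            den = 1.0 - np.sum(phi[k - 1, 1:k] * r[1:k])
            phi[k, k] = num / den
            phi[k, 1:k] = phi[k - 1, 1:k] - phi[k, k] * phi[k - 1, k - 1:0:-1]
        pacf[k - 1] = phi[k, k]
    return pacf

# simulated AR(2) series: x_t = 0.6 x_{t-1} - 0.3 x_{t-2} + noise
rng = np.random.default_rng(4)
x = np.zeros(500)
for t in range(2, 500):
    x[t] = 0.6 * x[t - 1] - 0.3 * x[t - 2] + rng.normal()

print("ACF :", np.round(sample_acf(x, 5), 2))
print("PACF:", np.round(sample_pacf(x, 5), 2))
```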