
Showing papers in "Technometrics in 1976"


Journal ArticleDOI
TL;DR: A brief review of the book Survival Distributions: Reliability Applications in the Biomedical Sciences.
Abstract: (1976). Survival Distributions: Reliability Applications in the Biomedical Sciences. Technometrics: Vol. 18, No. 4, pp. 501-501.

513 citations


Journal ArticleDOI
TL;DR: The restricted maximum likelihood (REML) estimators as discussed by the authors have the property of invariance under translation and the additional property of reducing to the analysis of variance estimators for many, if not all, cases of balanced data (equal subclass numbers).
Abstract: The maximum likelihood (ML) procedure of Hartley and Rao [2] is modified by adapting a transformation from Patterson and Thompson [7] which partitions the likelihood under normality into two parts, one being free of the fixed effects. Maximizing this part yields what are called restricted maximum likelihood (REML) estimators. As well as retaining the property of invariance under translation that ML estimators have, the REML estimators have the additional property of reducing to the analysis of variance (ANOVA) estimators for many, if not all, cases of balanced data (equal subclass numbers). A computing algorithm is developed, adapting a transformation from Hemmerle and Hartley [6], which reduces computing requirements to dealing with matrices having order equal to the dimension of the parameter space rather than that of the sample space. These same matrices also occur in the asymptotic sampling variances of the estimators.

401 citations
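A minimal numerical sketch of the REML criterion described above, for an illustrative balanced one-way random-effects layout. The function and variable names and the simulated data are ours, and the sketch maximizes the criterion numerically rather than using the paper's transformation-based computing scheme.

```python
import numpy as np
from scipy.optimize import minimize

def reml_neg_loglik(log_sig2, y, X, Z):
    """Negative REML log-likelihood for y = X b + Z u + e.
    log_sig2 holds log(sigma_u^2), log(sigma_e^2); working on the log
    scale keeps the variance components positive."""
    s2u, s2e = np.exp(log_sig2)
    n = len(y)
    V = s2u * (Z @ Z.T) + s2e * np.eye(n)
    Vinv = np.linalg.inv(V)
    XtVinvX = X.T @ Vinv @ X
    # Projection that sweeps out the fixed effects (the "restricted" part).
    P = Vinv - Vinv @ X @ np.linalg.solve(XtVinvX, X.T @ Vinv)
    _, logdetV = np.linalg.slogdet(V)
    _, logdetXVX = np.linalg.slogdet(XtVinvX)
    return 0.5 * (logdetV + logdetXVX + y @ P @ y)

# Balanced one-way layout: a groups, m observations per group.
rng = np.random.default_rng(0)
a, m = 6, 5
Z = np.kron(np.eye(a), np.ones((m, 1)))   # group incidence matrix
X = np.ones((a * m, 1))                   # overall mean only
y = Z @ rng.normal(0, 2.0, a) + rng.normal(0, 1.0, a * m)

fit = minimize(reml_neg_loglik, x0=np.zeros(2), args=(y, X, Z))
print("REML estimates:", np.exp(fit.x))   # (sigma_u^2, sigma_e^2)
```

For balanced data such as this, the maximizer should agree with the ANOVA estimators, which is the reduction property the abstract describes.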


Journal ArticleDOI

330 citations


Journal ArticleDOI
TL;DR: In this paper, a unified approach to the study of biased estimators in an effort to determine their relative merits is provided, including the simple and generalized ridge estimators proposed by Hoerl and Kennard [9], the principal component estimator with extensions such as that proposed by Marquardt [19], and the shrunken estimator proposed by Stein [23].
Abstract: Biased estimators of the coefficients in the linear regression model have been the subject of considerable discussion in the recent literature. The purpose of this paper is to provide a unified approach to the study of biased estimators in an effort to determine their relative merits. The class of estimators includes the simple and the generalized ridge estimators proposed by Hoerl and Kennard [9], the principal component estimator with extensions such as that proposed by Marquardt [19], and the shrunken estimator proposed by Stein [23]. The problem of estimating the biasing parameters is considered and illustrated with two examples.

182 citations
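For reference, a minimal sketch of the simplest member of the class discussed, the ordinary ridge estimator of Hoerl and Kennard; the near-collinear data and the grid of biasing values k are illustrative assumptions.

```python
import numpy as np

def ridge(X, y, k):
    """Ordinary ridge estimator (X'X + kI)^{-1} X'y for biasing
    parameter k; k = 0 recovers ordinary least squares."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + k * np.eye(p), X.T @ y)

# Illustrative data with two near-collinear columns.
rng = np.random.default_rng(1)
x1 = rng.normal(size=100)
X = np.column_stack([x1, x1 + 0.01 * rng.normal(size=100)])
y = X @ np.array([1.0, 2.0]) + rng.normal(size=100)

for k in (0.0, 0.1, 1.0):
    print(k, ridge(X, y, k))   # coefficients shrink as k grows
```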


Journal ArticleDOI
Abstract: Three general purpose algorithms for maximum likelihood estimation of mean and variance components in mixed analysis of variance models are discussed. These are the Newton-Raphson algorithm, the Fisher scoring algorithm, and the Hemmerle and Hartley algorithm. Derivations for the first two are given and a unified presentation of all three makes some theoretical and practical comparisons possible. In addition the results of applying all three to a sequence of five problems are presented. The W transform of Hemmerle and Hartley is used throughout to reduce the computational burden associated with maximum likelihood variance component algorithms. The algorithms provide a unified approach to estimation and testing in the general mixed analysis of variance model.

161 citations
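A compact sketch of one of the three algorithms, Fisher scoring, for a model with one random factor: the update is theta <- theta + I(theta)^{-1} s(theta), with the expected information in place of the observed Hessian. This is a generic textbook version under an assumed one-way layout, not the paper's W-transform implementation.

```python
import numpy as np

def fisher_scoring(y, X, Z, theta, n_iter=50):
    """ML estimation of (sigma_u^2, sigma_e^2) in y = X b + Z u + e."""
    n = len(y)
    Vs = [Z @ Z.T, np.eye(n)]   # dV/d(sigma_u^2), dV/d(sigma_e^2)
    for _ in range(n_iter):
        V = theta[0] * Vs[0] + theta[1] * Vs[1]
        Vinv = np.linalg.inv(V)
        beta = np.linalg.solve(X.T @ Vinv @ X, X.T @ Vinv @ y)
        r = y - X @ beta
        score = np.array([-0.5 * np.trace(Vinv @ Vi)
                          + 0.5 * r @ Vinv @ Vi @ Vinv @ r for Vi in Vs])
        info = np.array([[0.5 * np.trace(Vinv @ Vi @ Vinv @ Vj)
                          for Vj in Vs] for Vi in Vs])
        step = np.linalg.solve(info, score)
        theta = np.maximum(theta + step, 1e-8)  # keep variances positive
        if np.max(np.abs(step)) < 1e-10:
            break
    return theta, beta

rng = np.random.default_rng(7)
a, m = 8, 4
Z = np.kron(np.eye(a), np.ones((m, 1)))
X = np.ones((a * m, 1))
y = Z @ rng.normal(0, 1.5, a) + rng.normal(0, 1.0, a * m)
print(fisher_scoring(y, X, Z, theta=np.array([1.0, 1.0]))[0])
```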



Journal ArticleDOI
TL;DR: In this article, the authors present theory for optimum plans for accelerated life tests for estimating a simple linear relationship between a stress and product life, which has a normal or lognormal distribution, when the data are to be analyzed before all test units fail.
Abstract: This expository paper presents theory for optimum plans for accelerated life tests for estimating a simple linear relationship between a stress and product life, which has a normal or lognormal distribution, when the data are to be analyzed before all test units fail. Standard plans with equal numbers of test units at equally spaced test stresses are presented and are compared with the optimum plans. While the optimum plans may not always be robust enough in practice, they indicate that more test units should be run at low stress than at high stress. The plans are illustrated with a temperature-accelerated life test of an electrical insulation analyzed with the Arrhenius model.

149 citations
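For orientation, a sketch of the Arrhenius-lognormal relationship that such test plans target: log life regressed on inverse absolute temperature, then extrapolated to a use condition. The lifetimes below are fabricated for illustration, and the fit ignores censoring, which the paper's optimum plans are specifically designed to handle.

```python
import numpy as np

# Test temperatures (deg C) and fabricated illustrative lifetimes (hours).
temps_c = np.array([150., 150., 170., 170., 190., 190., 220., 220.])
hours = np.array([8000., 9500., 3200., 4100., 1400., 1700., 420., 510.])

x = 1000.0 / (temps_c + 273.15)    # Arrhenius stress transform
y = np.log(hours)                  # lognormal life -> linear in x
b1, b0 = np.polyfit(x, y, 1)       # uncensored least-squares fit

use_x = 1000.0 / (130.0 + 273.15)  # extrapolate to a 130 deg C use condition
print("predicted median life (h):", np.exp(b0 + b1 * use_x))
```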


Journal ArticleDOI
TL;DR: In this article, the contrasts of interest are limited to the pairwise comparisons among the means of K samples of equal or unequal sizes, and four normal univariate single-stage multiple comparison procedures are compared for significance levels not exceeding 0.05.
Abstract: For the situation in which the contrasts of interest are limited to the pairwise comparisons among the means of K samples of equal or unequal sizes, four normal univariate single-stage multiple comparison procedures are compared for significance levels not exceeding 0.05: Scheffe's S-method, Dunn's (1, 2] and sidak's [17] improved version of the Bonferroni method, Hochberg's GT2 procedure [8] utilizing the maximum modulus, and Spjovoll and Stoline's T′-method [19]. Rules are given for determining if any method is uniformly preferable (best for all contrasts). Nonuniform preference rules are also proposed and applied to some examples. Auxiliary tables are provided for selecting a method for significance levels 0.01 and 0.05 for several values of v, the number of degrees of freedom of an independent variance estimate, and K. It is shown that the T′-method is uniformly preferable when the sample sizes are “nearly” equal, while one of the other methods will be uniformly preferable when all sample sizes are “s...

146 citations
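A minimal sketch of one procedure of the kind compared, a Šidák-adjusted pairwise comparison with a pooled variance estimate (a simplified stand-in for the Dunn-Šidák method; the setup and names are ours):

```python
import itertools
import numpy as np
from scipy import stats

def pairwise_sidak(samples, alpha=0.05):
    """All pairwise mean comparisons at a Sidak per-comparison level."""
    k = len(samples)
    m = k * (k - 1) // 2                     # number of pairwise contrasts
    nu = sum(len(s) - 1 for s in samples)    # df of pooled variance
    s2 = sum((len(s) - 1) * np.var(s, ddof=1) for s in samples) / nu
    alpha_pc = 1 - (1 - alpha) ** (1 / m)    # Sidak adjustment
    tcrit = stats.t.ppf(1 - alpha_pc / 2, nu)
    for i, j in itertools.combinations(range(k), 2):
        se = np.sqrt(s2 * (1 / len(samples[i]) + 1 / len(samples[j])))
        diff = np.mean(samples[i]) - np.mean(samples[j])
        print(f"{i} vs {j}: {diff:+.3f} +/- {tcrit * se:.3f}")

rng = np.random.default_rng(5)
groups = [rng.normal(m, 1.0, size=n) for m, n in [(0, 8), (0.5, 10), (2.0, 6)]]
pairwise_sidak(groups)
```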


Journal ArticleDOI
TL;DR: In this paper, eight hybrid designs covering 3, 4, and 6 variables are presented, all of which are at or within one point of the minimum number of points.
Abstract: Hybrid designs were created to achieve the same degree of orthogonality as central composite or regular polyhedral designs, to be near-minimum-point, and to be near-rotatable. They resemble central composite designs which have been augmented with an extra variable column. Eight designs are presented covering 3, 4, and 6 variables. All of these are at or within one point of minimum. Characteristics relevant to choice of design are discussed. Efficiencies are compared to central composite or polyhedral designs on n-spheres. A 46 point 7 variable design is also presented which, although it is not near-minimum, is an economical alternative to a 79 point central composite design.

119 citations
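Hybrid designs are described as resembling augmented central composite designs, so for context here is a sketch of the basic central composite construction itself (not the hybrid designs of this paper):

```python
import itertools
import numpy as np

def central_composite(k, alpha=None):
    """Central composite design in k factors: 2^k cube points,
    2k axial points at +/- alpha, and one center point."""
    if alpha is None:
        alpha = (2 ** k) ** 0.25   # rotatable choice for a full factorial
    cube = np.array(list(itertools.product([-1.0, 1.0], repeat=k)))
    axial = np.zeros((2 * k, k))
    for i in range(k):
        axial[2 * i, i] = -alpha
        axial[2 * i + 1, i] = alpha
    return np.vstack([cube, axial, np.zeros((1, k))])

print(central_composite(3).shape)   # (15, 3): 8 cube + 6 axial + 1 center
```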



Journal ArticleDOI
TL;DR: The performance of a number of response surface designs for estimating a quadratic response surface model in symmetric experimental regions, k-spheres and hypercubes, is compared in this article.
Abstract: The performance of a number of response surface designs for estimating a quadratic response surface model in symmetric experimental regions, k-spheres and hypercubes, is compared. The designs compared are composite designs, Box-Behnken designs, Uniform Shell designs, Hoke designs, Pesotchinsky designs, and Box-Draper designs. The performance criteria for the designs are their D-efficiency and their G-efficiency. All of the compared designs have high efficiencies. For large numbers of factors, designs having higher efficiencies do exist; however, these designs have not yet been discovered.
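A sketch of the D-criterion computation that underlies such comparisons: expand the design into the full second-order model matrix and evaluate |X'X/n|^(1/p). The normalization and function names are our assumptions; G-efficiency would additionally examine the maximum prediction variance over the region.

```python
import itertools
import numpy as np

def quadratic_model_matrix(D):
    """Columns for the full second-order model: intercept, linear,
    two-factor interaction, and pure quadratic terms."""
    n, k = D.shape
    cols = [np.ones(n)]
    cols += [D[:, i] for i in range(k)]
    cols += [D[:, i] * D[:, j] for i in range(k) for j in range(i + 1, k)]
    cols += [D[:, i] ** 2 for i in range(k)]
    return np.column_stack(cols)

def d_criterion(D):
    """Normalized D-criterion |X'X / n|^(1/p); larger is better, and a
    design's D-efficiency is its ratio to the best attainable value."""
    X = quadratic_model_matrix(D)
    n, p = X.shape
    sign, logdet = np.linalg.slogdet(X.T @ X / n)
    return np.exp(logdet / p) if sign > 0 else 0.0

grid = np.array(list(itertools.product([-1, 0, 1], repeat=2)))  # 3^2 factorial
print(d_criterion(grid))
```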

Journal ArticleDOI
TL;DR: In this paper, the improvement of Latent Root Regression over ordinary least squares is shown to depend on the orientation of the parameter vector with respect to a vector defining the multicollinearity.
Abstract: Multicollinearity among the columns of regressor variables is known to cause severe distortion of the least squares estimates of the parameters in a multiple linear regression model. An alternate method of estimating the parameters, proposed by the authors in a previous paper, is Latent Root Regression Analysis. In this article several comparisons between the two methods of estimation are presented. The improvement of Latent Root Regression over ordinary least squares is shown to depend on the orientation of the parameter vector with respect to a vector defining the multicollinearity. Despite this dependence on orientation, the authors conclude that with multicollinear data Latent Root Regression Analysis is preferable to ordinary least squares for parameter estimation and variable selection.

Journal ArticleDOI
TL;DR: In this paper, the minimum variance unbiased estimator of P(Y < X) is given for the situation in which X and Y are independently exponentially distributed, and its variance is derived using the recent results of Blight and Rao.
Abstract: The minimum variance unbiased estimator of P(Y < X) has been given for the situation in which X and Y are independently exponentially distributed. Using the recent results of Blight and Rao [2], the variance of the UMVU estimator is derived. The mean-square error of the maximum likelihood estimator is obtained and used for comparison with the variance of the UMVUE.
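Under the exponential model the competing maximum likelihood estimator has a simple closed form, sketched below with illustrative simulated samples (the UMVU estimator and the variance formulas are in the paper):

```python
import numpy as np

def p_y_less_x_mle(x, y):
    """MLE of P(Y < X) for independent exponential samples: with the
    means estimated by sample means, P(Y < X) = mx / (mx + my)."""
    return np.mean(x) / (np.mean(x) + np.mean(y))

rng = np.random.default_rng(2)
x = rng.exponential(scale=3.0, size=50)   # X ~ Exp(mean 3)
y = rng.exponential(scale=1.0, size=50)   # Y ~ Exp(mean 1)
print(p_y_less_x_mle(x, y))               # true value is 3/(3+1) = 0.75
```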

Journal ArticleDOI
TL;DR: In this article, the maximum likelihood estimators of the parameters β and μ of the Weibull process are easily obtained in closed form, and percentage points are presented which allow the estimation of confidence intervals for μ.
Abstract: The Weibull process (non-homogeneous Poisson process with Weibull intensity) is being used as a model for production learning curves [1] and reliability growth of complex systems [2], [3], [4], and [5]. The maximum likelihood estimators of the parameters β and μ are easily obtained in closed form. 2nβ/β̂ is known to be χ²-distributed. In this note percentage points of a pivotal quantity for μ are presented. Since its distribution is independent of β and μ, these data allow the estimation of confidence intervals for μ.
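A sketch of the closed-form maximum likelihood estimates and a chi-square interval for the shape parameter, written for the time-truncated case in a (beta, theta) scale parameterization; the parameterization and truncation scheme are assumptions of this sketch, and the paper's tabled percentage points for μ are not reproduced here.

```python
import numpy as np
from scipy import stats

def weibull_process_mle(times, T):
    """Closed-form MLEs for a Weibull (power-law) process observed on
    (0, T]: expected number of failures by time t is (t / theta)^beta."""
    times = np.asarray(times)
    n = len(times)
    beta_hat = n / np.sum(np.log(T / times))
    theta_hat = T / n ** (1.0 / beta_hat)
    return beta_hat, theta_hat

def beta_interval(times, T, conf=0.90):
    """Interval for beta from 2 n beta / beta_hat ~ chi-square(2n)
    in the time-truncated case."""
    n = len(times)
    beta_hat, _ = weibull_process_mle(times, T)
    lo = beta_hat * stats.chi2.ppf((1 - conf) / 2, 2 * n) / (2 * n)
    hi = beta_hat * stats.chi2.ppf((1 + conf) / 2, 2 * n) / (2 * n)
    return lo, hi
```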

Journal ArticleDOI
TL;DR: In this paper, a strategy for determining the most important components in a mixture system is developed, which is useful in those situations where the number of candidate components, q, is large.
Abstract: A strategy for determining the most important components in a mixture system is developed. The proposed screening designs are useful in those situations where the number of candidate components, q, is large. Our simplex screening designs, which contain 2q + 1 or 3q + 1 points, are recommended when it is possible to experiment over the total composition range of all components (0–100%) or the experimental region can be expressed as a simplex in terms of pseudocomponents. We recommend extreme vertices screening designs, which contain approximately q + 10 points, when some or all of the components are subject to upper and lower constraints. Examples are included to illustrate the proposed designs and associated analyses.
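A sketch of a 2q + 1 point simplex screening arrangement of the kind recommended: the q vertices, q interior points midway between each vertex and the centroid, and the overall centroid. Taking the interior points as midpoints is an assumption of this sketch.

```python
import numpy as np

def simplex_screening(q):
    """2q + 1 points on the simplex for q mixture components."""
    vertices = np.eye(q)
    centroid = np.full((1, q), 1.0 / q)
    interior = (vertices + centroid) / 2.0   # rows still sum to 1
    return np.vstack([vertices, interior, centroid])

D = simplex_screening(4)
print(D.shape, D.sum(axis=1))   # (9, 4); every blend sums to 1
```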

Journal ArticleDOI
TL;DR: Gaussian populations and five algorithms are studied: linear discrimination with unknown means and known covariance, linear discrimination with unknown means and unknown covariances, quadratic discrimination with unknown covariances, and two nonparametric Bayes-type algorithms having density estimates using different kernels (Gaussian and Cauchy).
Abstract: Given fixed numbers of labeled objects on which training data can be obtained, how many variables should be used for a particular discriminant algorithm? This, of course, cannot be answered in general since it depends on the characteristics of the populations, the sample sizes, and the algorithm. Some insight is gained in this article by studying Gaussian populations and five algorithms: linear discrimination with unknown means and known covariance, linear discrimination with unknown means and unknown covariances, quadratic discrimination with unknown covariances, and two nonparametric Bayes-type algorithms having density estimates using different kernels (Gaussian and Cauchy).
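Two of the five algorithms, plug-in linear and quadratic discrimination with unknown parameters, in sketch form (equal priors assumed; the known-covariance and kernel-based variants are omitted):

```python
import numpy as np

def fit_gaussian_classifiers(X0, X1):
    """Plug-in discriminants for two Gaussian classes, equal priors."""
    mu = [X0.mean(axis=0), X1.mean(axis=0)]
    S = [np.cov(X0, rowvar=False), np.cov(X1, rowvar=False)]
    pooled = ((len(X0) - 1) * S[0] + (len(X1) - 1) * S[1]) \
             / (len(X0) + len(X1) - 2)

    def lda(x):   # common covariance -> linear boundary
        d = [(x - m) @ np.linalg.solve(pooled, x - m) for m in mu]
        return int(d[1] < d[0])

    def qda(x):   # separate covariances -> quadratic boundary
        d = [(x - m) @ np.linalg.solve(C, x - m) + np.linalg.slogdet(C)[1]
             for m, C in zip(mu, S)]
        return int(d[1] < d[0])

    return lda, qda

rng = np.random.default_rng(6)
X0 = rng.multivariate_normal([0, 0], [[1, 0.3], [0.3, 1]], size=60)
X1 = rng.multivariate_normal([1.5, 1.0], [[1.5, -0.2], [-0.2, 0.8]], size=60)
lda, qda = fit_gaussian_classifiers(X0, X1)
print(lda(np.array([1.0, 1.0])), qda(np.array([1.0, 1.0])))
```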

Journal ArticleDOI
TL;DR: In this article, the efects of optimality and suboptimality of control, the addition of a known dither signal, and the dead time in the system are analyzed. But the authors focus on the control equation.
Abstract: Data often mllst be collected under regulatory feedback control. After the form of the stochastic-dynamic model has been tentatively identified, estimates of system parameters are required from the data. It is important to separate the following distinct problems: (A) estimation of the dynamic and stochastic parameters of the system (B) estimation of only those functions of these parameters which occur in the control equation. This paper considers and illustrates for each problem the efects of (i) optimality and suboptimality of control, (ii) addition of a known dither signal, (iii) dead time in the system. Necessary and suflicient conditions for estimability using data collected under conditions of optimal control are given.

Journal ArticleDOI
TL;DR: A general method is proposed for taking time-order dependence and costly factor-level changes into consideration when planning an experiment; the method introduces Monte Carlo-like techniques into the design of experiments.
Abstract: Whenever the observations in an experiment must be made in some time order sequence, there is a substantial likelihood that there may be some time order dependency in the results. In addition, it is frequently expensive to change levels for one or more of the factors in the study. In this paper these two facts are documented with personal case histories and a general method is proposed for taking these difficulties into consideration when planning an experiment. The method introduces Monte Carlo-like techniques into the design of experiments.

Journal ArticleDOI
TL;DR: In this paper, the problem of screening on a random variable correlated with the performance variable to increase the proportion of acceptable product is considered under a bivariate normal model.
Abstract: The problem is considered of screening on a random variable correlated with the performance variable to increase acceptable product. Consider a stockpile with a given proportion meeting a set quality standard and the necessity to upgrade the stockpile by testing on a variable correlated with the variable of interest. For example, suppose lifetime is the variable of interest. Clearly it would be useless to try to screen on the lifetime variable itself. However, if a variable correlated with lifetimes is available and a bivariate normal model is assumed then much can be done. If all of the parameters are known, the solution is given by Owen, McIntire and Seymour [12], but if some of the parameters are unknown then some procedures for handling the problem are given here. Most of the previous work in the area of selection by a correlated variate is concerned with personnel selection and animal improvement. As such, it is widely scattered. For example, Owen [9], p. XLIV, contains a bibliography which contai...
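In the known-parameter case the computation reduces to bivariate normal probabilities. A sketch with standardized variables (the cutoff, specification limit, and correlation values are illustrative):

```python
import numpy as np
from scipy import stats

def accepted_fraction_meeting_spec(c, L, rho):
    """With (Y, X) standard bivariate normal (Y = performance, X =
    screening variable), fraction of accepted items (X >= c) whose
    performance meets the spec (Y >= L)."""
    mvn = stats.multivariate_normal(mean=[0, 0], cov=[[1, rho], [rho, 1]])
    # P(Y >= L, X >= c) by inclusion-exclusion on the joint CDF.
    p_joint = 1 - stats.norm.cdf(L) - stats.norm.cdf(c) + mvn.cdf([L, c])
    return p_joint / stats.norm.sf(c)

for c in (-1.0, 0.0, 1.0):   # raising the cutoff upgrades accepted product
    print(c, accepted_fraction_meeting_spec(c, L=-0.5, rho=0.8))
```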

Journal ArticleDOI
TL;DR: In this article, the problem of estimating coefficients and initial values in a system of linear differential equations from observations on linear combinations of the system's responses is addressed using the Gauss-Newton algorithm.
Abstract: The problem of estimating coefficients and initial values in a system of linear differential equations from observations on linear combinations of the system's responses is addressed. Using the Gauss-Newton algorithm, the required function values are obtained by expressing the system's solution in terms of the eigenvalues and eigenvectors of its coefficient matrix and its initial values. Differentiating this solution gives expressions for the required function derivatives in terms of these same eigenvalues and eigenvectors. The advantage of this approach is that it uses exact analytic expressions for the required function values and derivatives rather than resorting to numerical integration or secants. An application to compartment analysis is considered and results are compared with those obtained by using the SAAM program of Berman and Weiss.
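The core device, evaluating the system's solution exactly through the eigendecomposition of the coefficient matrix rather than by numerical integration, can be sketched directly; the two-compartment rate constants below are illustrative, and the Gauss-Newton outer loop is omitted.

```python
import numpy as np

def responses(A, x0, C, t_grid):
    """Solve x'(t) = A x(t), x(0) = x0, via A = V diag(lam) V^{-1},
    and return the observed combinations y = C x. Exact for
    diagonalizable A; no numerical integration is needed."""
    lam, V = np.linalg.eig(A)
    c = np.linalg.solve(V, x0)                    # x0 in the eigenbasis
    X = (V * c) @ np.exp(np.outer(lam, t_grid))   # columns are x(t)
    return np.real(C @ X)

# Two-compartment example with illustrative rate constants.
A = np.array([[-1.2, 0.3],
              [ 1.2, -0.3]])
y = responses(A, x0=np.array([1.0, 0.0]), C=np.array([[1.0, 0.0]]),
              t_grid=np.linspace(0, 5, 6))
print(y)
```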

Journal ArticleDOI
TL;DR: In this article, the analysis of two-way layout data with interaction and one observation per cell is discussed, and many new tables of critical values are presented in order to test for treatment effects.
Abstract: The analysis of two-way layout data with interaction and one observation per cell is discussed. Approximate one-sided confidence intervals for the error variance are given and many new tables of critical values are presented in order to test for treatment effects. Also presented are critical points for testing θ2 = 0 in the model.

Journal ArticleDOI
TL;DR: In this paper, it was shown that multicomponent exponential decays can be analyzed using a technique which produces a spectrum whose peaks correspond in amplitude and position to the various exponential components present in the data.
Abstract: It is shown that multicomponent exponential decays can be analysed using a technique which produces a spectrum whose peaks correspond in amplitude and position to the various exponential components present in the data. This technique is analogous to the Fourier Transform, which provides a spectrum of the frequency components (complex exponentials) of a signal. Three techniques—the Orthonormal Exponential Transform, the Inverse Laplace Transform and the Gardner Transform—are examined and their relative effectiveness in producing the desired spectra from theoretically generated and experimental data is discussed. It is shown that the updated Gardner Transform can be used to analyse experimental multicomponent exponential decays.

Journal ArticleDOI
TL;DR: In this article, the authors discuss testing hypotheses and interval estimation for the mean of the first passage time distribution in Brownian motion with positive drift (inverse Gaussian distribution) and derive the optimum test procedures and confidence intervals for both one-sided and two-sided cases.
Abstract: In this paper we discuss testing hypotheses and interval estimation for the mean of the first passage time distribution in Brownian motion with positive drift (inverse Gaussian distribution). Optimum test procedures and confidence intervals for both one-sided and two-sided cases are derived in their exact forms, which utilize the percentage points of standard normal and Student's t distributions. In the case of a hypothesis with a two-sided alternative, it is shown that a uniformly most powerful (UMP) unbiased test is simply a two-tailed normal test if the nuisance parameter is known, and a two-tailed Student's t test if it is unknown.
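A sketch of a t-type statistic for the unknown-nuisance-parameter case, reconstructed from standard inverse Gaussian sampling theory: lambda * sum(1/x_i - 1/xbar) is chi-square with n - 1 degrees of freedom and independent of xbar, so the unknown lambda cancels from the ratio. Consult the paper for the exact UMP unbiased formulation.

```python
import numpy as np
from scipy import stats

def ig_mean_t_test(x, mu0):
    """Two-sided test of H0: mu = mu0 for inverse Gaussian data,
    shape parameter unknown (a reconstruction, see lead-in)."""
    x = np.asarray(x)
    n = len(x)
    xbar = x.mean()
    v = np.sum(1.0 / x - 1.0 / xbar)
    t = np.sqrt(n * (n - 1)) * (xbar - mu0) / (mu0 * np.sqrt(xbar * v))
    return t, 2 * stats.t.sf(abs(t), n - 1)

rng = np.random.default_rng(8)
x = rng.wald(mean=2.0, scale=5.0, size=30)   # inverse Gaussian sample
print(ig_mean_t_test(x, mu0=2.0))
```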

Journal ArticleDOI
Byron Jones1
TL;DR: In this article, an algorithm for deriving optimal connected block designs is proposed, where the method employed is to improve a given design by interchanging treatments between blocks, the treatment replications being kept fixed.
Abstract: An algorithm is proposed for deriving optimal connected block designs. The method employed is to improve a given design by interchanging treatments between blocks, the treatment replications being kept fixed. Examples illustrating the use of the algorithm are given and its performance is discussed.
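A naive sketch of the interchange idea: score a design through an A-type criterion on its information matrix C = R - N K^{-1} N' and accept treatment swaps between blocks that improve it. The particular criterion, the greedy sweep, and the duplicate-treatment guard are our simplifications of the algorithm.

```python
import numpy as np

def a_criterion(blocks, v):
    """Sum of reciprocals of the nonzero eigenvalues of C (smaller is
    better); returns inf if the design is disconnected."""
    N = np.zeros((v, len(blocks)))
    for j, blk in enumerate(blocks):
        for t in blk:
            N[t, j] += 1
    C = np.diag(N.sum(axis=1)) - N @ np.diag(1 / N.sum(axis=0)) @ N.T
    eig = np.linalg.eigvalsh(C)[1:]   # drop the structural zero eigenvalue
    return np.sum(1.0 / eig) if np.all(eig > 1e-9) else np.inf

def improve_by_interchange(blocks, v):
    """Greedy passes of pairwise treatment interchanges between blocks;
    treatment replications stay fixed throughout."""
    best, improved = a_criterion(blocks, v), True
    while improved:
        improved = False
        for i in range(len(blocks)):
            for j in range(i + 1, len(blocks)):
                for a in range(len(blocks[i])):
                    for c in range(len(blocks[j])):
                        ti, tj = blocks[i][a], blocks[j][c]
                        if ti == tj or tj in blocks[i] or ti in blocks[j]:
                            continue   # avoid duplicates within a block
                        blocks[i][a], blocks[j][c] = tj, ti
                        score = a_criterion(blocks, v)
                        if score < best - 1e-12:
                            best, improved = score, True
                        else:
                            blocks[i][a], blocks[j][c] = ti, tj
    return blocks, best

blocks = [[0, 1], [2, 3], [4, 5], [0, 2], [1, 4], [3, 5]]   # v = 6 treatments
print(improve_by_interchange(blocks, v=6))
```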

Journal ArticleDOI
TL;DR: In this article, a class of relative growth rate models is defined which includes the linear, modified exponential and logistic growth curves as special eases, and the ternd curve for each model is obtained by integration and approximate-confidence limits can be obtained for the forecasts of future series values.
Abstract: Many annual time series in socioeconomic systems are steadily increasing functions of time. This paper deals with an empirical approach to analyzing and projecting such trending time series from models of relative growth rates or percent changes. A class of relative growth rate models is defined which includes the linear. exponential, modified exponential and logistic growth curves as special eases. Parameters are estimated for the most part by linear regression techniques since the, relative growth rates for this class of models are linear in the parameters. The ternd curve for each model is obtained by integration and approximate-confidence limits can be obtained for the forecasts of future series values.
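The exponential member of the class is the simplest illustration: a constant relative growth rate makes log y linear in t, so the parameters come from ordinary regression and the integrated trend curve is just an exponential. A sketch with an illustrative 5 percent-per-period series:

```python
import numpy as np

def fit_exponential_trend(t, y):
    """Constant relative growth rate: regress log y on t, then project."""
    b1, b0 = np.polyfit(t, np.log(y), 1)   # slope ~ growth rate
    return lambda t_new: np.exp(b0 + b1 * np.asarray(t_new))

t = np.arange(10)
y = 100 * 1.05 ** t            # illustrative 5%-per-period series
trend = fit_exponential_trend(t, y)
print(trend([10, 11, 12]))     # projected future values
```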

Journal ArticleDOI
TL;DR: In this paper, the authors consider the choice of the sampling interval for use in discrete regulatory control of processes subject to stochastic disturbances where the purpose is to maintain the process output as close as possible to some fixed target value.
Abstract: This paper is concerned with the choice of the sampling interval for use in discrete regulatory control of processes subject to stochastic disturbances where the purpose is to maintain the process output as close as possible to some fixed target value. The analysis is restricted to single input-single output systems sampled at discrete equispaced intervals of time. Assuming that a discrete linear dynamic-stochastic model of the system has been identified from data collected at one sampling interval, the question which often arises and to which this paper is addressed is: “How much worse off would one be (in the sense of one's ability to control the process output) if the process were sampled less frequently?” By showing how the form and parameters of the dynamic-stochastic models for the system will change as the sampling interval is increased to integer multiples of the basic interval, one is able to predict the performance of the optimal stochastic controller at these larger intervals and thereby make a r...

Journal ArticleDOI
TL;DR: In this paper, a method of constructing consistent estimators of optimum age replacement intervals from a random sample of lifetimes is given, which depends on the availability of a uniformly strongly consistent estimator of the underlying distribution function.
Abstract: A method of constructing consistent estimators of optimum age replacement intervals from a random sample of lifetimes is given. The method depends on the availability of a uniformly strongly consistent estimator of the underlying distribution function. Such estimators considered are the MLE and MVUE of the Weibull and Gamma distribution functions with unknown scale parameters, the empirical distribution function, and the MLE under the restriction of increasing failure rate. Monte Carlo studies suggest that, in the parametric cases, the MLE is nearly as good as the MVUE unless the sample size is quite small and the population variance large. The estimates of the optimum age replacement interval in the nonparametric cases rarely differ, indicating that one may use the empirical distribution function with no serious loss of information.
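The nonparametric version is easy to sketch: plug the empirical distribution function into the long-run cost rate for age replacement, g(t) = [c_fail F(t) + c_plan (1 - F(t))] / E[min(X, t)], and minimize over candidate ages. The cost values and the Weibull test sample are illustrative.

```python
import numpy as np

def optimal_age_replacement(lifetimes, c_fail, c_plan):
    """Minimize the empirical cost rate over the observed lifetimes."""
    x = np.sort(np.asarray(lifetimes))
    n = len(x)
    best_t, best_g = None, np.inf
    for i, t in enumerate(x):
        F = (i + 1) / n                     # empirical CDF at t
        e_min = np.mean(np.minimum(x, t))   # E[min(X, t)] under F_n
        g = (c_fail * F + c_plan * (1 - F)) / e_min
        if g < best_g:
            best_t, best_g = t, g
    return best_t, best_g

rng = np.random.default_rng(3)
lifetimes = 100 * rng.weibull(2.0, size=200)   # illustrative IFR sample
print(optimal_age_replacement(lifetimes, c_fail=10.0, c_plan=1.0))
```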

Journal ArticleDOI
TL;DR: In this paper, it is proposed that estimation be approached from this view rather than from that of population inference based on assumed model families, since much of the applied statistician's activity involves model fitting.
Abstract: Since much of the applied statistician's activity involves model fitting, it is proposed that estimation be approached from this view rather than from that of population inference based on assumed model families. To this end, estimation by the inversion of goodness of fit tests for completely specified models is considered. Several examples and some initial simulation results are given.
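A toy instance of the proposal: choose the parameter that makes the fully specified model pass a goodness-of-fit test as well as possible, here by minimizing the Kolmogorov-Smirnov statistic over an exponential scale. The choice of test and model family is ours for illustration.

```python
import numpy as np
from scipy import optimize, stats

def fit_by_ks_inversion(x):
    """Scale estimate minimizing the KS distance to Exp(scale)."""
    def ks(scale):
        return stats.kstest(x, stats.expon(scale=scale).cdf).statistic
    res = optimize.minimize_scalar(ks, bounds=(1e-6, 10 * np.mean(x)),
                                   method="bounded")
    return res.x

rng = np.random.default_rng(4)
x = rng.exponential(scale=2.0, size=200)
print(fit_by_ks_inversion(x))   # should land near the true scale, 2
```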

Journal ArticleDOI
TL;DR: One-sided β-content tolerance intervals for the two-parameter exponential distribution are considered; the tolerance limits depend upon factors few of which were previously available.
Abstract: One-sided β-content tolerance intervals for the two-parameter exponential distribution are considered. The tolerance limits depend upon factors few of which were previously available. Equations whose solutions are the tolerance factors are derived and a table of factors is presented. It is shown that the factors can be obtained with a desk calculator and standard tables. The relationship to confidence limits for the reliability is discussed.
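The factors themselves come from the paper's equations and tables, but their defining property is easy to state computationally. A brute-force Monte Carlo sketch for a lower limit of the assumed form L = x(1) + k(xbar - x(1)):

```python
import numpy as np

def lower_tolerance_factor(n, beta=0.90, gamma=0.95, n_sim=20000, seed=0):
    """Approximate k so that L = x(1) + k*(xbar - x(1)) covers at least
    a beta fraction of the population with confidence gamma."""
    rng = np.random.default_rng(seed)
    # Standardized samples (threshold 0, scale 1); the limit is
    # location-scale equivariant, so the factor transfers.
    x = rng.exponential(size=(n_sim, n))
    x1 = x.min(axis=1)
    s = x.mean(axis=1) - x1
    q = -np.log(beta)   # the (1 - beta) quantile of Exp(1)
    # Need P(x1 + k*s <= q) >= gamma, i.e. k at or below the
    # (1 - gamma) quantile of (q - x1) / s.
    return np.quantile((q - x1) / s, 1 - gamma)

print(lower_tolerance_factor(n=10))
```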

Journal ArticleDOI
TL;DR: In this article, a fixed point solution of the iterative process underlying Farebrother's MSE estimator is discussed, and a simulation study favors Stein's and their fixed point solutions over ordinary least, squares and Farebrothers estimator.
Abstract: We discuss a fixed point solution of the iterative process underlying Farebrother's “minimum mean squared error (MSE) estimator.” The similarity between our fixed point solution and Stein's shrunken least squares estimator is striking. A simulation study favors Stein's and our fixed point solution over ordinary least, squares and Farebrother's estimator.
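For comparison purposes, a generic sketch of a Stein-type shrunken least squares estimator of the kind referenced (a positive-part variant; not the paper's fixed point solution and not Farebrother's iteration):

```python
import numpy as np

def stein_shrunken_ls(X, y):
    """Scale the OLS vector toward zero by a data-driven factor;
    sensible only for p > 2 regressors."""
    n, p = X.shape
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    s2 = np.sum((y - X @ beta) ** 2) / (n - p)   # residual variance
    q = beta @ (X.T @ X) @ beta                  # regression sum of squares
    c = max(0.0, 1.0 - (p - 2) * s2 / q)         # positive-part shrinkage
    return c * beta
```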