
Showing papers in "Biometrics in 1972"





Journal ArticleDOI
TL;DR: This classic work has a dual aim: to describe relevant mathematical techniques and to analyse the single server queue and its most important variants.

944 citations


Journal ArticleDOI
TL;DR: An overview of concepts and techniques pertaining to (i) the robust estimation of multivariate location and dispersion; (ii) the analysis of two types of multidimensional residuals; and (iii) the detection of multiresponse outliers.
Abstract: SUMMARY The paper gives an overview of concepts and techniques pertaining to (i) the robust estimation of multivariate location and dispersion; (ii) the analysis of two types of multidimensional residuals-namely those that occur in the context of principal components analysis as well as the more familiar residuals associated with least squares fitting; and (iii) the detection of multiresponse outliers. The emphasis is on methods for informal exploratory analysis and the coverage is both a survey of existing techniques and an attempt to propose, tentatively, some new methodology which needs further investigation and development. Some examples of use of the methods are included.

793 citations


Journal ArticleDOI
TL;DR: In this article, a method of plotting data of more than two dimensions is proposed: each data point x = (x1, ..., xk) is mapped into the function f_x(t) = x1/√2 + x2 sin t + x3 cos t + x4 sin 2t + x5 cos 2t + ..., and the function is plotted on the range -π < t < π.
Abstract: SUMMARY A method of plotting data of more than two dimensions is proposed. Each data point, x = (x1, ..., xk), is mapped into a function of the form f_x(t) = x1/√2 + x2 sin t + x3 cos t + x4 sin 2t + x5 cos 2t + ..., and the function is plotted on the range -π < t < π. Some statistical properties of the method are explored. The application of the method is illustrated with an example from anthropology.
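As a quick sketch of the construction (the data point and grid size below are illustrative, not from the paper), the mapping of a point into its finite Fourier series can be written as:

```python
import math

def andrews_curve(x, t):
    """Evaluate f_x(t) = x1/sqrt(2) + x2 sin t + x3 cos t
    + x4 sin 2t + x5 cos 2t + ... for one data point x."""
    total = x[0] / math.sqrt(2.0)
    for i, xi in enumerate(x[1:], start=1):
        k = (i + 1) // 2                 # harmonic order: 1, 1, 2, 2, 3, ...
        if i % 2 == 1:
            total += xi * math.sin(k * t)
        else:
            total += xi * math.cos(k * t)
    return total

# Evaluate one point's curve on a grid over -pi < t < pi.
grid = [-math.pi + j * (2.0 * math.pi / 100) for j in range(101)]
curve = [andrews_curve((1.0, 2.0, 3.0), t) for t in grid]
```

Plotting one such curve per observation (all on the same axes) gives the display the paper proposes; nearby points in k-space produce nearby curves.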

708 citations


Journal ArticleDOI
TL;DR: In this article, the ranked set sampling method is reviewed with particular attention to errors in judgment ordering, and its precision is compared with that of random sampling.
Abstract: Ranked set sampling employs judgment ordering to obtain an estimate of a population mean. The method is most useful when the measurement or quantification of an element is difficult but the elements of a set of given size are easily drawn and ranked with reasonable success by judgment. In each set all elements are ranked but only one is quantified. Sufficient sets are processed to yield a specified number of quantified elements and a mean for each rank. The average of these means is an unbiased estimate of the population mean regardless of errors in ranking. Precision relative to random sampling, with the same number of units quantified, depends upon properties of the population and success in ranking. In this paper the ranked set concept is reviewed with particular consideration of errors in judgment ordering.
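A minimal simulation sketch of the ranked set estimator described above, assuming perfect judgment ranking (the sets are ranked here by their true values; all names and parameter values are illustrative):

```python
import random

def ranked_set_sample_mean(population, set_size, cycles, rng):
    """Ranked set sampling estimate of the population mean.
    For each rank r in each cycle, draw a set of `set_size` elements,
    rank them (here by true value, i.e. perfect judgment ranking),
    and quantify only the element of rank r."""
    quantified = []
    for _ in range(cycles):
        for r in range(set_size):
            s = sorted(rng.sample(population, set_size))
            quantified.append(s[r])      # one measured element per set
    return sum(quantified) / len(quantified)

rng = random.Random(1)
pop = [rng.gauss(10.0, 2.0) for _ in range(10_000)]
est = ranked_set_sample_mean(pop, set_size=3, cycles=200, rng=rng)
```

The estimator remains unbiased even under imperfect ranking; only the gain in precision over random sampling is eroded.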

643 citations


Journal ArticleDOI
TL;DR: In this paper, a test procedure was proposed for the case when all treatments are equally replicated and the primary purpose of this paper is to give results which enable the test to be used when the treatment replications are unequal.
Abstract: The comparison of dose treatment means with a control mean to determine the lowest dose level at which there is evidence for a difference from control was discussed recently (Williams [1971]). A test procedure was proposed for the case when all treatments are equally replicated. The primary purpose of this paper is to give results which enable the test to be used when the treatment replications are unequal. A two-sided version of the test is also described and the Type I errors of the test are discussed.

636 citations




Journal ArticleDOI
TL;DR: Cohort mortality data is represented by a probabilistic combination of competing risks (diseases), each described by an age-at-death distribution and a net probability of occurrence, illustrated by a set of pathology data from a well-controlled laboratory animal experiment.
Abstract: SUMMARY Cohort mortality data is represented by a probabilistic combination of competing risks (diseases). Each risk is described by an age-at-death distribution and a net probability of occurrence. This representation is illustrated by a set of pathology data from a well-controlled laboratory animal experiment. Concern over the relationship between human health and various environmental pollutants has focused mounting attention on the analysis of mortality data. Generally, the usual actuarial methods have not been concerned as much with individual diseases as with the construction of life tables and general mortality rates (Seal [1954], Grenander [1956], Kimball [1960]). However, in the case of the well-controlled laboratory animal experiment or epidemiological study, the investigator is often concerned primarily with the effects which a certain treatment (exposure to pollutants) has upon the occurrence of a few specific terminal diseases (causes of death). He may, for example, use smog as his pollutant, being primarily interested in the occurrence of lung tumors. When comparing his data with that from a control group, it could happen that the observed incidence of lung tumors is lower in the treated group. This situation could occur if the treatment caused a generally lower age at death throughout the population; sufficient opportunity is then not allowed for the development of lung tumors, which tend to occur in older animals. In any case, it is easy to imagine some of the possible difficulties in the interpretation of this type of data; for further discussion the reader is referred to the paper by Kimball [1958]. We shall assume that we have a complete set of cohort autopsy data giving both the age and cause of death for a population. The particular data used for illustration in this paper were obtained from a well-controlled laboratory experiment. Using these data we wish to describe both the incidence and age at death for each particular cause of death.

209 citations



Journal ArticleDOI
TL;DR: Although the variance-covariance structure of the responses depends on the heritability, it is found that for most relevant combinations of parameters these linear estimators are almost as efficient as a maximum likelihood (ML) estimator, and can be recommended for practical use.
Abstract: Methods of estimating realised heritability from selection experiments are compared. For designs in which divergent selection is practiced, formulae are given for the sampling variance of some simple linear estimators of realised heritability, such as the regression of cumulative response on cumulative selection differential. Although the variance-covariance structure of the responses depends on the heritability, it is found that for most relevant combinations of parameters these linear estimators are almost as efficient as a maximum likelihood (ML) estimator, and can be recommended for practical use. Standard methods of calculating the variance of these estimators are shown to be very biased, downwards for the regression of cumulative response on cumulative selection differential. Methods of estimating the variance from experimental data, which are almost unbiased, are described.
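One of the simple linear estimators mentioned above, the regression through the origin of cumulative response on cumulative selection differential, can be sketched as follows (the data are hypothetical):

```python
def realised_heritability(cum_response, cum_sel_diff):
    """Regression through the origin of cumulative response on
    cumulative selection differential: h2 = sum(R*S) / sum(S*S)."""
    num = sum(r * s for r, s in zip(cum_response, cum_sel_diff))
    den = sum(s * s for s in cum_sel_diff)
    return num / den

# Hypothetical generations of cumulative selection differential (S)
# and cumulative response (R).
S = [2.0, 4.1, 6.3, 8.2]
R = [0.9, 1.7, 2.8, 3.5]
h2 = realised_heritability(R, S)
```

As the paper notes, the ordinary regression formula for the variance of this slope is badly biased here because successive cumulative responses are correlated.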


Journal ArticleDOI
TL;DR: This paper reviews some of the more important work on allometry and related problems and suggests that at least one of these is likely to be more useful biologically in a certain type of study than the conventional approach through allometry.
Abstract: SUMMARY Allometry is the study of differences in shape associated with size. The bivariate case has been studied intensively for nearly fifty years, with attention devoted almost exclusively to the so-called simple allometry equation which implies a linear relationship between logarithms of the two size measurements. Multivariate generalizations of the bivariate concepts present difficulties that have not been fully resolved. These concern questions about appropriate generalizations from 2 to p dimensions. This paper reviews some of the more important work on allometry and related problems. Comments on the various approaches are made and some suggestions are given for future research. Alternative approaches to studies of size and shape are briefly mentioned, and it is suggested that at least one of these is likely to be more useful biologically in a certain type of study than the conventional approach through allometry.
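The simple allometry equation mentioned above, y = b·x^a, is linear in the logarithms, so the exponent can be fitted by ordinary least squares on log-log data; a minimal sketch with synthetic data:

```python
import math

def allometry_fit(x, y):
    """Least-squares fit of log y = log b + a log x; returns (a, b)."""
    lx = [math.log(v) for v in x]
    ly = [math.log(v) for v in y]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    a = (sum((u - mx) * (v - my) for u, v in zip(lx, ly))
         / sum((u - mx) ** 2 for u in lx))
    b = math.exp(my - a * mx)
    return a, b

# Exact power-law data y = 3 * x**0.75 recovers a and b.
xs = [1.0, 2.0, 4.0, 8.0]
ys = [3.0 * v ** 0.75 for v in xs]
a, b = allometry_fit(xs, ys)
```

The multivariate generalizations discussed in the paper are precisely about what replaces this single slope when p size measurements are analysed jointly.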



Journal ArticleDOI
TL;DR: In this paper, chi-square tests are developed for homogeneity of specificity, sensitivity, and predictive value, utilizing the matched-sample chi-square results of the author (Bennett [1967; 1968]).
Abstract: Specificity, sensitivity, and predictive value have been proposed as measures of the effectiveness of each of, say, t diagnostic procedures in the detection of a disease D (e.g., Thorner and Remein [1961]; Vecchio [1966]). It is assumed that a definitive result can be obtained on the presence or absence of D in each of a series of n patients, together with their diagnoses by the t procedures. In this paper chi-square tests are developed for homogeneity of the three measures. These utilize matched-sample chi-square results of the author (Bennett [1967; 1968]).
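Bennett's exact statistics are not reproduced here, but Cochran's Q is a closely related matched-sample chi-square for homogeneity of t matched binary diagnostic results, and illustrates the idea (a sketch; the data are hypothetical):

```python
def cochran_q(results):
    """Cochran's Q statistic for t matched binary outcomes
    (rows: patients, columns: diagnostic procedures; 1 = positive).
    Approximately chi-square with t - 1 df under homogeneity."""
    t = len(results[0])
    col = [sum(row[j] for row in results) for j in range(t)]
    row_sums = [sum(r) for r in results]
    grand = sum(row_sums)
    num = t * (t - 1) * sum((cj - grand / t) ** 2 for cj in col)
    den = t * grand - sum(ri * ri for ri in row_sums)
    return num / den

# Four patients, two procedures (hypothetical data).
Q = cochran_q([[1, 0], [1, 1], [0, 1], [1, 0]])
```

Applied to the diseased patients only, such a statistic tests homogeneity of sensitivity; applied to the disease-free patients, homogeneity of specificity.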

Journal ArticleDOI
TL;DR: This article reviews the set of nine "condensed identity coefficients" and their use to establish equations which give the "genotype structure" of an individual, given the genotype of one of his relatives and the gene structure of the population.
Abstract: SUMMARY What genetic information concerning an individual can be provided by the knowledge of the genetic make-up of one of his relatives? In general, the well-known "coefficient of kinship" is not sufficient to solve the problem. It is necessary to use a more complete measure of kinship, the set of nine "condensed identity coefficients." This article reviews these coefficients and their use to establish equations which give the "genotype structure" (i.e. the set of probabilities of the various genotypes) of an individual, given the genotype of one of his relatives and the gene structure of the population. When the ancestry network is simple enough, these equations may be put in a form in which only the coefficient of kinship appears. But the conditions for such a simplification are restrictive: not only must both relatives not be inbred, but also the kinship between them must be unilineal.

Journal ArticleDOI
TL;DR: In this paper, a two-compartment model for the passage of particles through the gastro-intestinal tract of ruminants is proposed, where a gamma distribution of lifetimes is introduced in the first compartment; thereby, passage from that compartment becomes time-dependent.
Abstract: A new two-compartment model for the passage of particles through the gastro-intestinal tract of ruminants is proposed. In this model, a gamma distribution of lifetimes is introduced in the first compartment; thereby, passage from that compartment becomes time-dependent. This modification is strongly suggested by the physical alteration which certain substances, e.g. hay particles, undergo in the digestive process. The proposed model is applied to experimental data.
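The structure of such a model can be sketched by simulation: total passage time is a gamma-distributed lifetime in the first compartment plus an exponential lifetime in the second (parameter values below are illustrative, not the paper's):

```python
import random

def transit_time(shape, scale, rate2, rng):
    """One particle's total passage time: gamma lifetime in the first
    compartment, exponential lifetime in the second."""
    t1 = rng.gammavariate(shape, scale)   # time-dependent first compartment
    t2 = rng.expovariate(rate2)           # classical exponential compartment
    return t1 + t2

rng = random.Random(42)
times = [transit_time(2.0, 5.0, 0.1, rng) for _ in range(5000)]
mean_time = sum(times) / len(times)       # theoretical mean: 2*5 + 1/0.1 = 20
```

With shape 1 the gamma reduces to an exponential and the classical two-compartment model is recovered; shape greater than 1 gives the time-dependent (ageing) first compartment the abstract describes.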

Journal ArticleDOI
TL;DR: In this article, the logical structure of the estimating equations is emphasised, and is found to throw light on two classes of problem: what biases will occur if the Seber-Jolly estimates are used in situations in which the standard assumptions do not hold, and what new estimates are reasonable in these cases?
Abstract: The estimators derived by Jolly and by Seber for the parameters of an open mobile animal population are maximum likelihood estimators if every live animal has the same probability of surviving until the next sample and the same probability of being observed at that time. The logical structure of the estimating equations is emphasised here, and is found to throw light on two classes of problem: what biases will occur if the Seber-Jolly estimates are used in situations in which the standard assumptions do not hold, and what new estimates are reasonable in these cases? The logical argument is applied to various situations including those in which survival is heterogeneous, or correlated with age, and others in which catchability is heterogeneous or affected, permanently or temporarily, by marking.
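One common textbook presentation of the Jolly-Seber population size estimate at a single occasion can be sketched as follows (notation and numbers are illustrative; the paper's own development is more general):

```python
def jolly_seber_size(n, m, s, z, r):
    """Jolly-Seber population size at one sampling occasion:
      n -- animals caught, m -- of those, already marked,
      s -- animals released after the sample,
      z -- marked animals not caught now but caught later,
      r -- of the s released, the number later recaptured."""
    M_hat = m + s * z / r    # estimated marked animals alive at the sample
    return M_hat * n / m     # estimated total population size

N = jolly_seber_size(n=60, m=20, s=50, z=30, r=25)
```

The biases discussed in the paper arise when the survival or catchability assumed homogeneous in these equations in fact varies between animals.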

Journal ArticleDOI
TL;DR: The general linear model approach to the analysis of categorical data described by Grizzle et al. [1969] is extended to situations involving randomly missing data and supplemental samples.
Abstract: The general linear model approach to the analysis of categorical data described by Grizzle et al. [1969] is extended to situations where: (1) missing data for certain individuals arise at random as a result of non-response or deleted incorrect response; (2) supplemental samples pertaining to various subsets of variables have been obtained due to cost considerations and/or special interest in these variables. The problems discussed are distinct from those involving 'incomplete contingency tables' containing a priori empty cells. The extension is presented through a series of examples which show how the approach can be used to handle a wide variety of non-standard data configurations. Applications to categorical data mixed models and split plot designs are emphasized.

Journal ArticleDOI
TL;DR: In this paper, the analysis of incomplete multi-way cross-classifications is considered and several examples are given, and the methods developed here are examined in the light of these examples.
Abstract: Several authors have recently considered the analysis of contingency tables containing cells which are missing, a priori zero, or otherwise specified. Such tables are usually referred to as being incomplete. This paper reexamines this recent literature and shows how the methodology can be extended to the analysis of incomplete multi-way cross-classifications. Several examples are given, and the methods developed here are examined in the light of these examples. The emphasis is on the use of techniques for the actual analysis of data and on the ties with the analysis of complete multi-way tables.
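One standard technique for incomplete two-way tables, fitting a quasi-independence model by iterative proportional fitting over the non-structural-zero cells, gives the flavour of these methods (a sketch; the paper's multi-way methodology is more general):

```python
def ipf_quasi_independence(row_totals, col_totals, zeros, iters=200):
    """Fit a quasi-independence model to a two-way table with
    structural (a priori) zero cells by iterative proportional
    fitting of the observed margins over the remaining cells."""
    r, c = len(row_totals), len(col_totals)
    fit = [[0.0 if (i, j) in zeros else 1.0 for j in range(c)]
           for i in range(r)]
    for _ in range(iters):
        for i in range(r):               # scale each row to its margin
            s = sum(fit[i])
            if s > 0:
                fit[i] = [v * row_totals[i] / s for v in fit[i]]
        for j in range(c):               # scale each column to its margin
            s = sum(fit[i][j] for i in range(r))
            if s > 0:
                for i in range(r):
                    fit[i][j] *= col_totals[j] / s
    return fit

# A 2x2 table with one structural zero and observed margins.
fit = ipf_quasi_independence([3.0, 7.0], [2.0, 8.0], zeros={(0, 0)})
```

Structural zeros are held at zero throughout, so the fitted values reproduce the observed margins only over the cells that can actually contain counts.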



Journal ArticleDOI
TL;DR: In this paper, statistical methods for testing independence in multiway contingency tables are developed, based on the correspondence between the analysis of factorial experiments and tests of marginal independence; this simplifies the interpretation of analyses of contingency tables and leads to simplified tests for tables in which some of the probabilities are constrained to be zero.
Abstract: Statistical methods for testing independence in multiway contingency tables which are based on the correspondence between the analysis of factorial experiments and tests of marginal independence are developed. This relationship simplifies the interpretation of analyses of contingency tables and leads to simplified tests for tables in which some of the probabilities are constrained to be zero. The linear models approach is used to calculate smoothed estimates of probabilities.


Journal ArticleDOI
TL;DR: In this paper, a multifactorial model for the inheritance of disease liability is discussed, where the probability that an individual has the disease depends on the value of some underlying continuous quantity x. The quantity x is assumed to have a genetic component leading to correlations between relatives.
Abstract: The multifactorial model for the inheritance of disease liability (Falconer [1965]) is discussed. In this model, the probability that an individual has the disease depends on the value of some underlying continuous quantity x. The quantity x is assumed to have a genetic component leading to correlations between relatives. For certain family groups, the probabilities of all possible patterns of disease occurrence are shown to be calculable from single integrals involving only univariate Normal density and cumulative distribution functions. Using these probabilities, the recurrence risk for an individual can be calculated from a knowledge of the occurrence of the disease in the family. Relative recurrence risks are tabulated for individuals belonging to families in which there is information on one or both parents, or on two or three full-sibs (or, equivalently, one parent and one or two fullsibs). Recurrence risks in families containing a pair of monozygous twins are also given.
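The kind of single-integral calculation described above can be sketched for the simplest case, the recurrence risk for one relative whose liability has correlation rho with the proband's, using only the univariate Normal density and CDF (trapezoidal-rule integration; the threshold and correlations below are illustrative):

```python
import math

def phi(x):
    """Standard Normal density."""
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def Phi(x):
    """Standard Normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def recurrence_risk(threshold, rho, n=4000):
    """P(relative affected | individual affected) when the two
    liabilities are standard bivariate Normal with correlation rho:
    a single integral of univariate Normal functions, evaluated by
    the trapezoidal rule on [threshold, threshold + 8]."""
    lo, hi = threshold, threshold + 8.0
    h = (hi - lo) / n
    sd = math.sqrt(1.0 - rho * rho)
    total = 0.0
    for i in range(n + 1):
        x = lo + i * h
        w = 0.5 if i in (0, n) else 1.0
        # relative's liability given x is Normal(rho * x, 1 - rho^2)
        total += w * phi(x) * (1.0 - Phi((threshold - rho * x) / sd))
    return total * h / (1.0 - Phi(threshold))    # divide by prevalence

risk = recurrence_risk(threshold=2.0, rho=0.5)
```

The recurrence risk always exceeds the population prevalence when rho is positive, and grows with the liability correlation between the relatives.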


Journal ArticleDOI
TL;DR: This paper examines a method of improving the precision of treatment comparisons by making less expensive measurements on additional experimental animals.
Abstract: The expense of complete dissection limits replication in studies of carcass composition of large meat producing animals. This paper examines a method of improving the precision of treatment comparisons by making less expensive measurements on additional experimental animals. Problems of estimation and hypothesis testing are considered and the distribution of a test statistic is examined by simulation. The allocation of resources between direct and concomitant measurements is discussed.