
Showing papers in "Biometrics in 1979"


Journal Article•DOI•
TL;DR: The overall size of the procedure is shown to be controlled with virtually the same accuracy as the single sample chi-square test based on N(m1 + m2) observations, and the power is found to be virtually the same.
Abstract: A multiple testing procedure is proposed for comparing two treatments when response to treatment is both dichotomous (i.e., success or failure) and immediate. The proposed test statistic for each test is the usual (Pearson) chi-square statistic based on all data collected to that point. The maximum number (N) of tests and the number (m1 + m2) of observations collected between successive tests are fixed in advance. The overall size of the procedure is shown to be controlled with virtually the same accuracy as the single sample chi-square test based on N(m1 + m2) observations. The power is also found to be virtually the same. However, by affording the opportunity to terminate early when one treatment performs markedly better than the other, the multiple testing procedure may eliminate the ethical dilemmas that often accompany clinical trials.

2,962 citations
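The interim testing scheme described above, recomputing the Pearson chi-square on all accumulated data after each batch of observations, can be sketched as follows. The critical value here is illustrative only, not the paper's size-adjusted boundary, and the function names are ours:

```python
def pearson_chi2(s1, f1, s2, f2):
    """Pearson chi-square for a 2x2 table of successes/failures in two arms."""
    n1, n2 = s1 + f1, s2 + f2
    s, f, n = s1 + s2, f1 + f2, s1 + f1 + s2 + f2
    if 0 in (s, f, n1, n2):
        return 0.0
    # Shortcut formula: chi2 = n (s1*f2 - s2*f1)^2 / (n1 * n2 * s * f)
    return n * (s1 * f2 - s2 * f1) ** 2 / (n1 * n2 * s * f)

def group_sequential_trial(batches, crit=6.0):
    """Test after each batch on the cumulative 2x2 table; stop early if the
    statistic exceeds crit. batches: list of (s1, f1, s2, f2) stage counts."""
    cum = [0, 0, 0, 0]
    stat = 0.0
    for k, batch in enumerate(batches, 1):
        cum = [c + x for c, x in zip(cum, batch)]
        stat = pearson_chi2(*cum)
        if stat >= crit:
            return k, stat  # early termination at look k
    return None, stat       # ran all N looks without stopping
```

With one batch of 8/2 successes versus 2/8, the statistic is 7.2 and the sketch stops at the first look.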


Journal Article•DOI•
TL;DR: In this pathbreaking and far-reaching work, George Oster and Edward Wilson provide the first fully developed theory of caste evolution among the social insects and construct a series of mathematical models to characterize the agents of natural selection that promote particular caste systems.
Abstract: In this pathbreaking and far-reaching work, George Oster and Edward Wilson provide the first fully developed theory of caste evolution among the social insects. Furthermore, in studying the effects of natural selection in generally increasing the insects' ergonomic efficiency, they go beyond the concentration of previous researchers on the physiological mechanisms of the insects and turn our attention instead to the scale and efficiency of the insects' division of labor. Recognizing that the efficiency of the insect colony is based on a complex fitting of the division of labor to many simultaneous needs, including those imposed by the distribution of resources and enemies around the nest, Professors Oster and Wilson are able to construct a series of mathematical models to characterize the agents of natural selection that promote particular caste systems. The social insects play a key role in the subject of sociobiology because their social organization is so rigid and can be related to genetic evolution. Because of this important consideration, the authors' work has consequences not only for entomology but also for general evolutionary theory.

1,631 citations


Journal Article•DOI•

701 citations


Journal Article•DOI•

549 citations


Journal Article•DOI•

508 citations


Journal Article•DOI•
TL;DR: General use of Burrows' composite measure of linkage disequilibrium, whether or not random union of gametes is an appropriate assumption, is recommended and attention is given to small samples, where the non-normality of gene frequencies will have greatest effect on methods of inference based on normal theory.
Abstract: Existing theory for inferences about linkage disequilibrium is restricted to a measure defined on gametic frequencies. Unless gametic frequencies are directly observable, they are inferred from genotypic frequencies under the assumption of random union of gametes. Primary emphasis in this paper is given to genotypic data, and disequilibrium coefficients are defined for all subsets of two or more of the four genes, two at each of two loci, carried by an individual. Linkage disequilibrium coefficients are defined for genes within and between gametes, and methods of estimating and testing these coefficients are given for gametic data. For genotypic data, when coupling and repulsion double heterozygotes cannot be distinguished, Burrows' composite measure of linkage disequilibrium is discussed. In particular, the estimate for this measure and hypothesis tests based on it are compared to the usual maximum likelihood estimate of gametic linkage disequilibrium, and corresponding likelihood ratio or contingency chi-square tests. General use of the composite measure, whether or not random union of gametes is an appropriate assumption, is recommended. Attention is given to small samples, where the non-normality of gene frequencies will have greatest effect on methods of inference based on normal theory. Even tools such as Fisher's z-transformation for the correlation of gene frequencies are found to perform quite satisfactorily.

487 citations
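For directly observed gametic data, the maximum likelihood estimate of the gametic disequilibrium coefficient mentioned above is simply the observed frequency of one gamete type minus the product of the corresponding allele frequencies. A minimal sketch for a two-allele, two-locus system (function name is ours):

```python
def gametic_D(n_AB, n_Ab, n_aB, n_ab):
    """MLE of the gametic linkage disequilibrium coefficient
    D = p_AB - p_A * p_B, from counts of the four gamete types."""
    n = n_AB + n_Ab + n_aB + n_ab
    p_AB = n_AB / n               # frequency of the AB gamete
    p_A = (n_AB + n_Ab) / n       # allele frequency of A
    p_B = (n_AB + n_aB) / n       # allele frequency of B
    return p_AB - p_A * p_B
```

Complete association of A with B gives the maximum D of 0.25; equal counts of all four gamete types give D = 0. Burrows' composite measure, by contrast, is built from genotypic counts and does not require distinguishing coupling from repulsion double heterozygotes.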


Journal Article•DOI•

363 citations


Journal Article•DOI•

279 citations



Journal Article•DOI•
TL;DR: A number of generalized models that have been proposed to take the litter effect into account are briefly reviewed and compared to the simpler binomial and Poisson models in this paper.
Abstract: In certain toxicological experiments with laboratory animals, littermate data are frequently encountered. It is generally recognized that one characteristic of this type of data is the "litter effect", i.e., the tendency for animals from the same litter to respond more alike than animals from different litters. In this paper attention is restricted to dichotomous response variables that frequently arise in toxicological studies, such as the occurrence of fetal death or a particular malformation. Various techniques for estimating the underlying probability of response are discussed. A number of generalized models that have recently been proposed to take the litter effect into account are briefly reviewed and compared to the simpler binomial and Poisson models. Various procedures for assessing the significance of treatment-control differences are presented and their relative merits discussed. Finally, future research needs in this area are outlined.

236 citations
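One generalized model commonly used for the litter effect is the beta-binomial, in which each litter draws its own response probability before littermates respond independently. The simulation below is our own illustration (not from the paper) of the resulting extra-binomial variance:

```python
import random

def beta_binomial_litter(m, a, b, rng):
    """One litter of size m: a litter-specific response probability is drawn
    from Beta(a, b), then each littermate responds with that probability."""
    p = rng.betavariate(a, b)
    return sum(rng.random() < p for _ in range(m))

rng = random.Random(1)
m, a, b = 10, 2.0, 6.0  # mean response rate a/(a+b) = 0.25
counts = [beta_binomial_litter(m, a, b, rng) for _ in range(2000)]
mean = sum(counts) / len(counts)
var = sum((c - mean) ** 2 for c in counts) / len(counts)
# A plain binomial at the same mean rate has variance m*p*(1-p) = 1.875;
# the litter effect inflates this by 1 + (m-1)/(a+b+1) = 2, to about 3.75.
```

The inflation factor 1 + (m-1)/(a+b+1) is the standard beta-binomial overdispersion result; ignoring it (i.e., fitting a plain binomial) understates the variance of litter totals by about half in this example.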


Journal Article•DOI•
TL;DR: The nature of multivariate data analysis is discussed, with vector and matrix tools developed from a geometric viewpoint and then applied to multivariate data.
Abstract: The Nature of Multivariate Data Analysis. Vector and Matrix Operations for Multivariate Analysis. Vector and Matrix Concepts from a Geometric Viewpoint. Linear Transformations from a Geometric Viewpoint. Decomposition of Matrix Transformations: Eigenstructures and Quadratic Forms. Applying the Tools to Multivariate Data. Appendix A: Symbolic Differentiation and Optimization of Multivariable Functions. Appendix B: Linear Equations and Generalized Inverses. Answers to Numerical Problems. References. Index.

Journal Article•DOI•
Peter J. Diggle1•
TL;DR: The objectives of spatial point pattern analysis are discussed with particular reference to the distinction between mapped and sampled data, and a simulation study of several tests of spatial randomness is intended to provide some insight into the suitability for model-fitting of various summary descriptions of a mapped pattern.
Abstract: The paper discusses the objectives of spatial point pattern analysis with particular reference to the distinction between mapped and sampled data. For the former case available models are reviewed briefly, the role of preliminary testing is discussed and a procedure for fitting a parametric model is outlined. A simulation study of several tests of spatial randomness is intended to provide some insight into the suitability for model-fitting of various summary descriptions of a mapped pattern. Two examples illustrate the use of the statistical techniques. Some problem areas which merit further investigation are identified.

Journal Article•DOI•
TL;DR: In this article, a general right censoring and some of the difficulties it creates in the analysis of survival data are discussed, including problems of nonidentifiability that can be encountered when attempting to assess a set of data for the type of censoring in effect, the consequences of falsely assuming that censoring is noninformative, and classes of informative censoring models.
Abstract: This paper concerns general right censoring and some of the difficulties it creates in the analysis of survival data. A general formulation of censored-survival processes leads to the partition of all models into those based on noninformative and informative censoring. Nearly all statistical methods for censored data assume that censoring is noninformative. Topics considered within this class include: the relationships between three models for noninformative censoring, the use of likelihood methods for inferences about the distribution of survival time, the effects of censoring on the K-sample problem, and the effects of censoring on model testing. Also considered are several topics which relate to informative censoring models. These include: problems of nonidentifiability that can be encountered when attempting to assess a set of data for the type of censoring in effect, the consequences of falsely assuming that censoring is noninformative, and classes of informative censoring models.


Journal Article•DOI•
TL;DR: In this paper, the authors provide a practical guide to the various patient assignment methods in clinical trials, including random permuted blocks and the biased coin technique, the extent to which stratification is necessary and the methods available, and the possible benefits of randomization with a greater proportion of patients on a new treatment.
Abstract: This article is intended as a practical guide to the various methods of patient assignment in clinical trials. Topics discussed include a critical appraisal of non-randomized studies, methods of restricted randomization such as random permuted blocks and the biased coin technique, the extent to which stratification is necessary and the methods available, the possible benefits of randomization with a greater proportion of patients on a new treatment, factorial designs, crossover designs, randomized consent designs and adaptive assignment procedures. With all this diversity of approach it needs to be remembered that the effective implementation and reliability of a relatively straightforward randomization scheme may be more important than attempting theoretical optimality with more complex designs.
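Efron's biased coin technique mentioned above can be sketched in a few lines: when the two arms are unbalanced, the under-represented arm is assigned with probability p > 1/2 (p = 2/3 is Efron's original suggestion), which keeps the allocation close to balance while remaining unpredictable:

```python
import random

def biased_coin_assignments(n_patients, p=2/3, rng=None):
    """Efron-style biased coin: assign the under-represented arm with
    probability p; use a fair coin whenever the arms are balanced."""
    rng = rng or random.Random()
    counts = {"A": 0, "B": 0}
    assignments = []
    for _ in range(n_patients):
        if counts["A"] == counts["B"]:
            arm = rng.choice("AB")
        else:
            under = "A" if counts["A"] < counts["B"] else "B"
            arm = under if rng.random() < p else ("B" if under == "A" else "A")
        counts[arm] += 1
        assignments.append(arm)
    return assignments
```

The imbalance between arms behaves like a random walk pulled toward zero, so even for long sequences it stays small with high probability; random permuted blocks instead force exact balance at the end of each block.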

Journal Article•DOI•
TL;DR: New methods are presented for analyzing repeated binary health measurements of individuals exposed to varying levels of air pollution, involving a separate logistic regression of response against environmental covariates for each individual.
Abstract: New methods are presented for analyzing repeated binary health measurements of individuals exposed to varying levels of air pollution. The methods involve a separate logistic regression of response against environmental covariates for each individual. Parameters reflecting individual susceptibility to pollutants and weather are estimated using Cox's regression techniques (1970, 1972a). The individual parameters are combined to yield summary estimates of environmental effects. The approach does not require independence of successive health measurements. It is illustrated with data on asthma and air pollution in the Los Angeles area.

Journal Article•DOI•
TL;DR: Two popular censored data rank tests were used to compare cancer incidence rates between dogs receiving bone marrow transplantation and control dogs and the Gehan statistic is shown to be subject to a serious criticism that does not apply to other Wilcoxon generalizations.
Abstract: Two popular censored data rank tests were used to compare cancer incidence rates between dogs receiving bone marrow transplantation and control dogs. The logrank test gave a significance level of 0.01. In contrast, Gehan's generalized Wilcoxon test gave a significance level of 0.76. The statistics are displayed in a manner that shows how such contrasting significance levels can arise. The Gehan statistic is shown to be subject to a serious criticism that does not apply to other Wilcoxon generalizations. Further insight into the data is obtained using a time-dependent logrank test and the proportional hazards regression model.
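The contrast between the two tests comes down to the weight given to each death time: the logrank test weights all deaths equally, while Gehan's statistic weights each death by the number still at risk, so the pattern of early censoring can dominate it. A minimal sketch of the shared observed-minus-expected form (our own illustration; no tie handling):

```python
def weighted_rank_stat(times, events, group, weight):
    """Numerator of a weighted rank statistic comparing group 1 vs group 0.
    weight(n_risk) = 1 gives the logrank test; weight(n_risk) = n_risk
    gives Gehan's generalized Wilcoxon test."""
    data = sorted(zip(times, events, group))
    stat = 0.0
    for t, d, g in data:
        if not d:          # censored observations contribute no term
            continue
        at_risk = [gg for tt, dd, gg in data if tt >= t]
        n = len(at_risk)
        n1 = sum(at_risk)  # number at risk in group 1
        # observed minus expected group-1 deaths at this death time
        stat += weight(n) * (g - n1 / n)
    return stat
```

On the toy data in the test below, the logrank numerator is 7/6 while the Gehan numerator is 4, since Gehan multiplies the early (large-risk-set) terms by the risk-set size.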

Journal Article•DOI•
TL;DR: A synthesis of the literature on estimation from these models under prospective sampling indicates that, although important advances have occurred during the past decade, further effort is warranted on such topics as distribution theory, tests of fit, robustness, and the full utilization of a methodology that permits non-standard features.
Abstract: Many problems, particularly in medical research, concern the relationship between certain covariates and the time to occurrence of an event. The hazard or failure rate function provides a conceptually simple representation of time to occurrence data that readily adapts to include such generalizations as competing risks and covariates that vary with time. Two partially parametric models for the hazard function are considered. These are the proportional hazards model of Cox (1972) and the class of log-linear or accelerated failure time models. A synthesis of the literature on estimation from these models under prospective sampling indicates that, although important advances have occurred during the past decade, further effort is warranted on such topics as distribution theory, tests of fit, robustness, and the full utilization of a methodology that permits non-standard features. It is further argued that a good deal of fruitful research could be done on applying the same models under a variety of other sampling schemes. A discussion of estimation from case-control studies illustrates this point.

Journal Article•DOI•
TL;DR: The major restricted randomization designs and arguments concerning the proper role of stratification are reviewed and the effect of randomization restrictions on the validity of significance tests is discussed.
Abstract: Though therapeutic clinical trials are often categorized as using either "randomization" or "historical controls" as a basis for treatment evaluation, pure random assignment of treatments is rarely employed. Instead various restricted randomization designs are used. The restrictions include the balancing of treatment assignments over time and the stratification of the assignment with regard to covariates that may affect response. Restricted randomization designs for clinical trials differ from those of other experimental areas because patients arrive sequentially and a balanced design cannot be ensured. The major restricted randomization designs and arguments concerning the proper role of stratification are reviewed here. The effect of randomization restrictions on the validity of significance tests is discussed.


Journal Article•DOI•
TL;DR: In this paper, the authors present a design for estimating the magnitude of visibility bias, i.e., the probability of not observing an animal, based on the propensity for animals to occur in groups and, thus, the design considers each possible group size separately.
Abstract: In aerial census data, "visibility bias" is present because the failure to observe all animals can result in severely biased population density estimates. Assuming that the aircraft can accommodate two observers situated on the same side, we present a design for estimating the magnitude of visibility bias, i.e., the probability of not observing an animal. The magnitude of visibility bias depends strongly on the propensity for animals to occur in groups and, thus, the design considers each possible group size separately. Two related methods of inference for the visibility bias parameters and the population total are described. We also discuss the application of the basic design and the associated survey methodology to an aerial survey of white-tailed deer in west-central Alberta.
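A Petersen-type double-observer estimator, the classical version of this idea rather than necessarily the paper's exact formulation, recovers each observer's detection probability and the chance that both miss a group from the overlap of their sightings (applied within each group size separately):

```python
def double_observer_estimates(n11, n10, n01):
    """Petersen-type estimates from a two-observer aerial survey:
    n11 = groups seen by both observers,
    n10 = groups seen by observer 1 only,
    n01 = groups seen by observer 2 only (independent detections assumed)."""
    p1 = n11 / (n11 + n01)            # detection probability, observer 1
    p2 = n11 / (n11 + n10)            # detection probability, observer 2
    miss_both = (1 - p1) * (1 - p2)   # visibility bias: both observers miss
    n_total = (n11 + n10) * (n11 + n01) / n11  # estimated number of groups
    return p1, p2, miss_both, n_total
```

With 40 groups seen by both and 10 by each observer alone, each detection probability is 0.8, the estimated probability of missing a group entirely is 0.04, and the estimated total is 62.5 groups.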

Journal Article•DOI•
TL;DR: Simple Linear Regression, Multiple Linear Regression, Regression Diagnostics: Detection of Model Violations, Qualitative Variables as Predictors, Transformation of Variables, Weighted Least Squares and more.
Abstract: Simple Linear Regression Multiple Linear Regression Regression Diagnostics: Detection of Model Violations Qualitative Variables as Predictors Transformation of Variables Weighted Least Squares The Problem of Correlated Errors Analysis of Collinear Data Biased Estimation of Regression Coefficients Variable Selection Procedures Logistic Regression Appendix References Index.

Journal Article•DOI•
TL;DR: Predictive probabilities based on the binomial distribution and beta and uniform prior distributions for the binomial parameter are found to be useful as the basis of group sequential designs, and size, power and average sample size for these designs are discussed.
Abstract: A phase II clinical trial is designed to gather data to help decide whether an experimental treatment has sufficient effectiveness to justify further study. In a one-arm trial with dichotomous outcome, we wish to test a simple null hypothesis on the Bernoulli parameter against a one-sided alternative in a sample of N patients. It is advisable to have a rule to terminate the trial early when evidence accumulates that the treatment is ineffective. Predictive probabilities based on the binomial distribution and beta and uniform prior distributions for the binomial parameter are found to be useful as the basis of group sequential designs. Size, power and average sample size for these designs are discussed. A process for the specification of an early termination plan, advice on the quantification of prior beliefs, and illustrative examples are included.
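A sketch of the predictive-probability calculation: with a Beta prior on the Bernoulli parameter, the number of successes among the remaining patients follows a beta-binomial distribution, and a design of this kind terminates early when the predictive probability of ultimately reaching a target success count falls too low. Function names and the stopping threshold are ours:

```python
from math import comb, exp, lgamma

def log_beta(a, b):
    """log of the Beta function B(a, b)."""
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def beta_binomial_pmf(k, m, a, b):
    """P(k successes in m future patients | Beta(a, b) posterior)."""
    return comb(m, k) * exp(log_beta(a + k, b + m - k) - log_beta(a, b))

def predictive_prob_success(s, n, N, s_min, a0=1.0, b0=1.0):
    """Predictive probability that the completed trial of N patients reaches
    at least s_min successes, given s successes in the first n patients and
    a Beta(a0, b0) prior (uniform by default)."""
    a, b = a0 + s, b0 + n - s   # posterior after n patients
    m = N - n                   # patients still to be enrolled
    need = max(0, s_min - s)    # further successes required
    return sum(beta_binomial_pmf(k, m, a, b) for k in range(need, m + 1))
```

At each interim look the trial would stop for futility if, say, predictive_prob_success(...) dropped below some small threshold such as 0.05; the choice of threshold drives the size, power and average sample size trade-offs discussed in the paper.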

Journal Article•DOI•
TL;DR: It is proposed that only those characteristics of the data that possess a meaningful relation to the therapeutic use of the drug should be analysed, and that estimation procedures rather than hypothesis testing techniques should be employed.
Abstract: The role of comparative bioavailability trials in testing for the bioequivalence of different formulations of a drug is discussed and the statistical aspects of the design and analysis of such trials are reviewed. It is suggested that the design of such trials presents no special problem but that the customary method of analysis, which tests the null hypothesis of no difference between the formulations, is irrelevant to the central purpose of such trials, which is to determine whether the formulations have essentially equivalent therapeutic effects. It is proposed that only those characteristics of the data that possess a meaningful relation to the therapeutic use of the drug should be analysed and also that estimation procedures rather than hypothesis testing techniques should be employed. Several aspects of the statistics of bioavailability trials which require further investigation are listed.

Journal Article•DOI•
TL;DR: In this paper, a goodness-of-fit procedure is developed for testing whether the underlying distribution is a specified function G. The test statistic C is the one-sample limit of Efron's (1967) two-sample statistic W. The comparisons are on the basis of applicability, the extent to which the censoring distribution can affect the inference, and power.
Abstract: For right-censored data, a goodness-of-fit procedure is developed for testing whether the underlying distribution is a specified function G. The test statistic C is the one-sample limit of Efron's (1967) two-sample statistic W. The test based on C is compared with recently proposed competitors due to Koziol and Green (1976) and Hyde (1977). The comparisons are on the basis of applicability, the extent to which the censoring distribution can affect the inference, and power. It is shown that in certain situations the C test compares favourably with the tests of Koziol-Green and Hyde.




Journal Article•DOI•
TL;DR: Models are developed in which the survival rate for first-year birds is weather-dependent (and therefore time-specific) and the rate for second-year birds is a constant, different from the constant annual survival rate assumed for all older birds, to describe the numbers of recoveries of rings from dead Grey Herons ringed as nestlings in Britain between 1955 and 1975.
Abstract: There is a definite need for methodology which will produce age- and time-specific survival probabilities from bird ringing studies in which only nestlings are ringed, and one possible approach to this problem is presented in this paper. Models are developed in which the survival rates for first-year birds are weather-dependent (and therefore time-specific) and the rate for second-year birds is a constant, different from the constant annual survival rate assumed for all older birds, to describe the numbers of recoveries of rings from dead Grey Herons, Ardea cinerea, ringed as nestlings in Britain between 1955 and 1975. The models provide a good fit to the data and a useful and concise description of Heron survival. Comparison is made with the predictions of a model for heronry census data.

Journal Article•DOI•
TL;DR: A definition and classification of domains is presented to clarify the direction of this review and the existing small domain estimation techniques are split into several distinct approaches and reviewed separately.
Abstract: Timely and complete health, social and economic data can be obtained from samples, but usually only for major geographic areas and large subgroups of the population. Small domain estimates are available from censuses, but only infrequently and then only for a few variables. Effective planning of health services and other governmental activities cannot depend on traditional data sources; the data must be more current and more complete than these sources provide. Since estimates are needed for a great diversity of domains, a definition and classification of domains is presented to clarify the direction of this review. The existing small domain estimation techniques are split into several distinct approaches and reviewed separately. The basic methodologies of these techniques are presented together with their data requirements and limitations. The existing techniques are briefly assessed in regard to their performance and their potential for further application. Current research approaches are also reviewed and possible lines for future advances are indicated.