
Showing papers in "Biometrics in 1993"


Journal ArticleDOI
TL;DR: This monograph develops the mathematical bioeconomics of renewable-resource harvesting, proceeding from elementary population dynamics and the Schaefer model through optimal control theory, dynamical systems, and discrete-time, multispecies, and stochastic models to the theory of resource regulation and its policy applications.
Abstract: Introduction. 1. Elementary Dynamics of Exploited Populations. 1.1 The Logistic Growth Model. 1.2 Generalized Logistic Models: Depensation. 1.3 Summary and Critique. 2. Economic Models of Renewable-Resource Harvesting. 2.1 The Open-Access Fishery. 2.2 Economic Overfishing. 2.3 Biological Overfishing. 2.4 Optimal Fishery Management. 2.5 The Optimal Harvest Policy. 2.6 Examples Based on the Schaefer Model. 2.7 Linear Variational Problems. 2.8 The Possibility of Extinction. 2.9 Summary and Critique. 3. Capital-Theoretic Aspects of Resource Management. 3.1 Interest and Discount Rates. 3.2 Capital Theory and Renewable Resources. 3.3 Nonautonomous Models. 3.4 Applications to Policy Problems: Labor Mobility in the Fishery. 4. Optimal Control Theory. 4.1 One-Dimensional Control Problems. 4.2 A Nonlinear Fishery Model. 4.3 Economic Interpretation of the Maximum Principle. 4.4 Multidimensional Optimal Control Problem. 4.5 Optimal Investment in Renewable-Resource Harvesting. 5. Supply and Demand: Nonlinear Models. 5.1 The Elementary Theory of Supply and Demand. 5.2 Supply and Demand in Fisheries. 5.3 Nonlinear Cost Effects: Pulse Fishing. 5.4 Game-Theoretic Models. 5.5 Transboundary Fishery Resources: A Further Application of the Theory. 5.6 Summary and Critique. 6. Dynamical Systems. 6.1 Basic Theory. 6.2 Dynamical Systems in the Plane: Linear Theory. 6.3 Isoclines. 6.4 Nonlinear Plane-Autonomous Systems. 6.5 Limit Cycles. 6.6 Gause's Model of Interspecific Competition. 7. Discrete-Time and Metered Models. 7.1 A General Metered Stock-Recruitment Model. 7.2 The Beverton-Holt Stock-Recruitment Model. 7.3 Depensation Models. 7.4 Overcompensation. 7.5 A Simple Cohort Model. 7.6 The Production Function of a Fishery. 7.7 Optimal Harvest Policies. 7.8 The Discrete Maximum Principle. 7.9 Dynamic Programming. 8. The Theory of Resource Regulation. 8.1 A Behavioral Model. 8.2 Optimization Analysis. 8.3 Limited Entry. 8.4 Taxes and Allocated Transferable Quotas. 8.5 Total Catch Quotas. 8.6 Summary and Critique. 9. Growth and Aging. 9.1 Forestry Management: The Faustmann Model. 9.2 The Beverton-Holt Fisheries Model. 9.3 Dynamic Optimization in the Beverton-Holt Model. 9.4 The Case of Bounded F. 9.5 Multiple Cohorts: Nonselective Gear. 9.6 Pulse Fishing. 9.7 Multiple Cohorts: Selective Gear. 9.8 Regulation. 9.9 Summary and Critique. 10. Multispecies Models. 10.1 Differential Productivity. 10.2 Harvesting Competing Populations. 10.3 Selective Harvesting. 10.4 A Diffusion Model: The Inshore-Offshore Fishery. 10.5 Summary and Critique. 11. Stochastic Resource Models. 11.1 Stochastic Dynamic Programming. 11.2 A Stochastic Forest Rotation Model. 11.3 Uncertainty and Learning. 11.4 Searching for Fish. 11.5 Summary and Critique. Supplementary Reading. References. Index.

2,744 citations


Journal ArticleDOI
TL;DR: The classification maximum likelihood approach is sufficiently general to encompass many current clustering algorithms, including those based on the sum of squares criterion and on the criterion of Friedman and Rubin (1967), but as currently implemented it is restricted to Gaussian distributions, does not allow for noise, and cannot specify which cluster features are shared; this paper proposes ways of overcoming these limitations.
Abstract: The classification maximum likelihood approach is sufficiently general to encompass many current clustering algorithms, including those based on the sum of squares criterion and on the criterion of Friedman and Rubin (1967). However, as currently implemented, it does not allow the specification of which features (orientation, size and shape) are to be common to all clusters and which may differ between clusters. Also, it is restricted to Gaussian distributions and it does not allow for noise. We propose ways of overcoming these limitations. A reparameterization of the covariance matrix allows us to specify that some features, but not all, be the same for all clusters. A practical framework for non-Gaussian clustering is outlined, and a means of incorporating noise in the form of a Poisson process is described. An approximate Bayesian method for choosing the number of clusters is given. The performance of the proposed methods is studied by simulation, with encouraging results. The methods are applied to the analysis of a data set arising in the study of diabetes, and the results seem better than those of previous analyses.
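As a hedged illustration of the modelling ideas above (not the paper's classification-likelihood algorithm), the Python sketch below uses scikit-learn's EM-based GaussianMixture: its covariance_type option plays the role of forcing some cluster features to be shared, and BIC serves as an approximate Bayesian criterion for choosing the number of clusters. All data and settings are invented for the example.

```python
# Illustrative sketch only: scikit-learn's GaussianMixture fits mixtures by EM
# rather than by the classification maximum likelihood approach of the paper,
# but its covariance_type option mirrors the idea of forcing some cluster
# features (size, shape, orientation) to be shared, and BIC gives an
# approximate Bayesian criterion for choosing the number of clusters.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
X = np.vstack([
    rng.multivariate_normal([0, 0], [[1.0, 0.3], [0.3, 1.0]], 150),
    rng.multivariate_normal([4, 4], [[1.0, -0.2], [-0.2, 0.5]], 100),
])

best = None
for k in range(1, 6):                                   # candidate numbers of clusters
    for cov in ("full", "tied", "diag", "spherical"):   # shared vs. cluster-specific covariance features
        gm = GaussianMixture(n_components=k, covariance_type=cov,
                             random_state=0).fit(X)
        bic = gm.bic(X)                                 # lower BIC = better trade-off
        if best is None or bic < best[0]:
            best = (bic, k, cov)

print("chosen model: k=%d, covariance_type=%s, BIC=%.1f" % (best[1], best[2], best[0]))
```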

2,336 citations



Journal ArticleDOI
Rick Durrett1
TL;DR: This book presents a comprehensive introduction to probability theory covering laws of large numbers, central limit theorems, random walks, martingales, Markov chains, ergodic theorems, and Brownian motion.
Abstract: This book is an introduction to probability theory covering laws of large numbers, central limit theorems, random walks, martingales, Markov chains, ergodic theorems, and Brownian motion. It is a comprehensive treatment concentrating on the results that are the most useful for applications. Its philosophy is that the best way to learn probability is to see it in action, so there are 200 examples and 450 problems.

1,008 citations




Journal ArticleDOI
TL;DR: In this article, the variance of the sample covariance is computed for a finite number of locations, under the multinormality assumption, and the mathematical derivation of the definition of effective sample size is given.
Abstract: Clifford, Richardson, and Hémon (1989) proposed a modified t test for assessing the correlation between two spatially autocorrelated processes; it requires the estimation of an effective sample size that takes into account the spatial structure of both processes. Clifford et al. developed their method on the basis of an approximation of the variance of the sample correlation coefficient and assessed it by Monte Carlo simulations for lattice and non-lattice networks of moderate to large size. In the present paper, the variance of the sample covariance is computed for a finite number of locations, under the multinormality assumption, and the mathematical derivation of the definition of effective sample size is given. The theoretically expected number of degrees of freedom for the modified t test, with the new modifications, is compared with that computed on the basis of equation (2.9) of Clifford et al. (1989). The largest differences are observed for small numbers of locations and high autocorrelation, in particular when the latter is present with opposite sign in the two processes. Basic references that were missing in Clifford et al. (1989) are given and inherent ambiguities are discussed.
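The following Monte Carlo sketch is not the paper's analytical derivation; it only illustrates, under an assumed exponential spatial covariance on a small lattice, how autocorrelation inflates the variance of the sample correlation between two independent processes and how an effective sample size can be read off from that variance.

```python
# Minimal Monte Carlo sketch (not the paper's derivation): two mutually
# independent but spatially autocorrelated Gaussian processes are simulated on
# a 10 x 10 lattice with an assumed exponential covariance; the empirical
# variance of their sample correlation is compared with the 1/(n-1) value for
# independent data, and an effective sample size is read off as roughly 1/Var(r).
import numpy as np

rng = np.random.default_rng(1)
side = 10
coords = np.array([(i, j) for i in range(side) for j in range(side)], float)
dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
Sigma = np.exp(-dist / 3.0)            # assumed exponential covariance, range 3
L = np.linalg.cholesky(Sigma)
n = side * side

r = []
for _ in range(2000):
    x = L @ rng.standard_normal(n)     # process X
    y = L @ rng.standard_normal(n)     # independent process Y, same spatial structure
    r.append(np.corrcoef(x, y)[0, 1])
r = np.array(r)

print("Var(r) with autocorrelation:   %.4f" % r.var())
print("Var(r) for independent data:   %.4f" % (1.0 / (n - 1)))
print("implied effective sample size: %.1f (vs n = %d)" % (1.0 / r.var(), n))
```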

847 citations


Journal ArticleDOI
TL;DR: In this paper, the authors consider capture-recapture studies where release and recapture data are available from each of a number of strata on every capture occasion, and suggest an analysis based on a conditional likelihood approach.
Abstract: We consider capture-recapture studies where release and recapture data are available from each of a number of strata on every capture occasion. Strata may, for example, be geographic locations or physiological states. Movement of animals among strata occurs with unknown probabilities, and estimation of these unknown transition probabilities is the objective. We describe a computer routine for carrying out the analysis under a model that assumes Markovian transitions and under reduced-parameter versions of this model. We also introduce models that relax the Markovian assumption and allow "memory" to operate (i.e., allow dependence of the transition probabilities on the previous state). For these models, we suggest an analysis based on a conditional likelihood approach. Methods are illustrated with data from a large study on Canada geese (Branta canadensis) banded in three geographic regions. The assumption of Markovian transitions is rejected convincingly for these data, emphasizing the importance of the more general models that allow memory.

708 citations


Journal ArticleDOI
TL;DR: This work provides maximum likelihood estimators for the attributable fraction in cohort and case-control studies, and their asymptotic variances, and presents a limited simulation study which confirms earlier work that better small-sample performance is obtained when the confidence interval is centered on the log-transformed point estimator rather than the original point estimate.
Abstract: Bruzzi et al. (1985, American Journal of Epidemiology 122, 904-914) provided a general logistic-model-based estimator of the attributable fraction for case-control data, and Benichou and Gail (1990, Biometrics 46, 991-1003) gave an implicit-delta-method variance formula for this estimator. The Bruzzi et al. estimator is not, however, the maximum likelihood estimator (MLE) based on the model, as it uses the model only to construct the relative risk estimates, and not the covariate-distribution estimate. We here provide maximum likelihood estimators for the attributable fraction in cohort and case-control studies, and their asymptotic variances. The case-control estimator generalizes the estimator of Drescher and Schill (1991, Biometrics 47, 1247-1256). We also present a limited simulation study which confirms earlier work that better small-sample performance is obtained when the confidence interval is centered on the log-transformed point estimator rather than the original point estimator.
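A hedged sketch of the quantities involved (not the paper's maximum likelihood estimator): a Bruzzi-style model-based attributable fraction for a single binary exposure, with a bootstrap interval constructed on the log(1 - AF) scale, the transformation the simulation results favour. The simulated cohort, the variable names, and the use of the fitted odds ratio in place of the relative risk are all illustrative assumptions.

```python
# Hedged sketch, not the paper's MLE: a Bruzzi-style model-based attributable
# fraction for one binary exposure, AF = 1 - [(1 - rho1) + rho1/OR], where rho1
# is the exposed share of cases and the fitted odds ratio stands in for the
# relative risk.  The bootstrap interval is built on the log(1 - AF) scale.
# The simulated cohort below is purely illustrative.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 2000
exposure = rng.binomial(1, 0.3, n)
case = rng.binomial(1, 1 / (1 + np.exp(-(-2.0 + 1.0 * exposure))))   # true log OR = 1.0

def attributable_fraction(case, exposure):
    X = sm.add_constant(exposure.astype(float))
    fit = sm.Logit(case, X).fit(disp=0)
    odds_ratio = np.exp(fit.params[1])
    rho1 = exposure[case == 1].mean()          # share of cases that are exposed
    return 1.0 - ((1 - rho1) + rho1 / odds_ratio)

af = attributable_fraction(case, exposure)

boot = []                                      # bootstrap on the log(1 - AF) scale
idx = np.arange(n)
for _ in range(500):
    b = rng.choice(idx, size=n, replace=True)
    boot.append(np.log(1.0 - attributable_fraction(case[b], exposure[b])))
q_lo, q_hi = np.percentile(boot, [2.5, 97.5])
print("AF = %.3f, 95%% CI (%.3f, %.3f)" % (af, 1 - np.exp(q_hi), 1 - np.exp(q_lo)))
```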

532 citations


Journal ArticleDOI
TL;DR: In this paper, a simple matrix extension of the formulation of a tag-recovery experiment discussed by Brownie et al. is used to estimate the migration of Pacific herring among spawning grounds off the west coast of Canada.
Abstract: Tag-recovery data are used to estimate migration rates among a set of strata. The model formulation is a simple matrix extension of the formulation of a tag-recovery experiment discussed by Brownie et al. (1985, Statistical Inference from Band-Recovery Data-A Handbook, 2nd edition, Washington, D.C.: U.S. Department of the Interior). Estimation is more difficult because of the convolution of parameters between release and recovery and this convolution may cause estimates of the survival/migration parameters to have low precision. Derived parameters of emigration, immigration, harvest derivation, and overall net survival are also estimated. The models are applied to estimate the migration of Pacific herring among spawning grounds off the west coast of Canada. If animals can be re-released after being recaptured, the model corresponds, in its migration/survival components, to that of Arnason (1972, Researches on Population Ecology 13, 97-113). This correspondence is developed, leading to more efficient estimators of these parameters.

386 citations


Journal ArticleDOI
TL;DR: In this article, a comparison between the propensity score and prognostic models in estimating treatment effects from observational studies was carried out and the effect of estimating the propensity scores on estimators of treatment effect was investigated.
Abstract: A comparison was carried out between the propensity score and prognostic models in estimating treatment effects from observational studies. One issue investigated was the effect of estimating the propensity score on estimators of treatment effect. A second question addressed comparisons of the propensity score and prognostic approach when a confounder is omitted. Third, misspecifications of the propensity score were compared to misspecified response models. In all cases there were two types of models, one involving a continuous and one a binary response.
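A minimal sketch of the two approaches being contrasted, on simulated observational data with a continuous response: an estimated propensity score used for inverse-probability weighting versus a prognostic (outcome regression) model. It does not reproduce the paper's simulation design; every data-generating choice is an assumption.

```python
# Minimal sketch of the two approaches compared above, on simulated
# observational data with a continuous response: (a) an estimated propensity
# score used for inverse-probability weighting, (b) a prognostic (outcome
# regression) model.  All data-generating choices are assumptions.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 5000
x = rng.normal(size=n)                               # confounder
z = rng.binomial(1, 1 / (1 + np.exp(-0.8 * x)))      # treatment depends on x
y = 1.0 * z + 2.0 * x + rng.normal(size=n)           # true treatment effect = 1.0

# (a) propensity-score approach: estimate e(x), then weight each arm by 1/e or 1/(1-e)
e = sm.Logit(z, sm.add_constant(x)).fit(disp=0).predict(sm.add_constant(x))
w = z / e + (1 - z) / (1 - e)
ipw = (np.average(y[z == 1], weights=w[z == 1])
       - np.average(y[z == 0], weights=w[z == 0]))

# (b) prognostic-model approach: regress y on treatment and the confounder
ols = sm.OLS(y, sm.add_constant(np.column_stack([z, x]))).fit()

print("IPW (propensity score) estimate:    %.3f" % ipw)
print("outcome-model estimate (coef on z): %.3f" % ols.params[1])
```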

353 citations


Journal ArticleDOI
TL;DR: An edited volume in two parts, covering basic statistical methods (analysis of variance, regression, ordination, classification, and time series analysis) and modelling (dynamic models of homogeneous systems and compartment models), with contributed chapters by J.C. Fry, T. Iles, P.F. Randerson, P.D. Bridge, F.D.J. Dunstan, D.W. Bowker, and R.G. Wiegert.
Abstract: Part 1 Statistics: one way analysis of variance (J.C. Fry); crossed and hierarchical analysis of variance (T. Iles); bivariate regression (J.C. Fry); multiple regression (T. Iles); ordination (P.F. Randerson); classification (P.D. Bridge); time series analysis (F.D.J. Dunstan). Part 2 Modelling: dynamic models of homogeneous systems (D.W. Bowker); compartment models (R.G. Wiegert). Appendices: software packages; statistical tables.

Journal ArticleDOI
TL;DR: An edited volume on time-frequency signal analysis, covering fundamentals of instantaneous frequency and time-frequency distributions, methods for non-stationary random processes including spectrum analysis with the DFT, signal detection and classification, and applications such as time-varying filtering, signal synthesis, and machine fault detection.
Abstract: Partial table of contents: FUNDAMENTALS Instantaneous Frequency and Time-Frequency Distributions (B. Boashash & G. Jones) Reduced Interference Time-Frequency Distributions (W. Williams & J. Jeong) Instantaneous Bandwidth (L. Cohen & C. Lee) Wideband Time-Frequency Distributions (J. Speiser, et al.) METHODS FOR NON-STATIONARY RANDOM PROCESSES Techniques and Limitations of Spectrum Analysis with the DFT (F. Harris) Estimation of Instantaneous Frequency of a Noisy Signal (L. White) METHODS FOR SIGNAL DETECTION AND CLASSIFICATION Signal Detection Using Time-Frequency Analysis (B. Boashash & P. O'Shea) APPLICATIONS OF TIME-FREQUENCY DISTRIBUTIONS Time-Varying Filtering and Signal Synthesis (J. Jeong & W. Williams) Time-Frequency Analysis in Machine Fault Detection (B. Forrester) Wigner Distribution Formulation of the Crystallography Problem (C. Frishberg) Bibliography Index.

Journal ArticleDOI
TL;DR: In this paper, a method of estimation for generalised mixed models is applied to the estimation of regression parameters in proportional hazards models for failure times when there are repeated observations of failure on each subject.
Abstract: A method of estimation for generalised mixed models is applied to the estimation of regression parameters in proportional hazards models for failure times when there are repeated observations of failure on each subject. The subject effect is incorporated into the model as a random frailty term. Best linear unbiased predictors are used as an initial step in the computation of maximum likelihood and residual maximum likelihood estimates.

BookDOI
TL;DR: A textbook on the design of experiments, covering one-factor, factorial, nested, two-level and other fractional factorial, and response surface designs, along with restrictions on randomization and replication.
Abstract: "A Scientific Approach to the Design of Experiments One Factor Designs Factorial Designs Nested Designs Restrictions on Randomization Play it Again, Sam Two Level Fractional Designs Other Fractional Designs Response Surface Designs Appendices Key Word Index "


Journal ArticleDOI
TL;DR: This paper extends the estimating equations initially developed for clustered discrete data, and subsequently extended by Prentice, to polytomous response variables, and provides a formal framework for obtaining iterated weighted least squares model parameter estimates.
Abstract: In recent years, methods have been developed for modelling repeated observations of a categorical response obtained over time on the same individual. Although situations in which the repeated response is binary or Poisson have been studied extensively, relatively little attention has been given to polytomous categorical response variable. In this paper, we extend the estimating equations initially developed for clustered discrete data by Liang and Zeger (1986, Biometrika 73, 13-22), and subsequently extended by Prentice (1988, Biometrics 44, 1033-1048), to polytomous response variables. Under certain assumptions, we illustrate that these estimating equations simplify to the weighted least squares (WLS) equations formalized by Koch et al. (1977, Biometrics 33, 133-158). This connection provides a formal framework for obtaining iterated weighted least squares model parameter estimates. Cumulative logit models are developed and applied to a representative longitudinal data set. Simulation results comparing WLS, an iterative form of WLS, and independence estimating equations using a robust estimate of the variance are presented.
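As a hedged illustration of the simplest comparator mentioned above (independence estimating equations with a robust variance), the sketch below expands a simulated ordinal response into cumulative indicators I(Y <= k) and fits a pooled binary GEE with cut-point dummies in statsmodels, which approximates a common-slope cumulative logit model; it is not the paper's full polytomous estimating-equation machinery.

```python
# Hedged sketch of the simplest comparator above: independence estimating
# equations with a robust (sandwich) variance.  An ordinal response with K
# categories, observed repeatedly per subject, is expanded into K-1 binary
# indicators I(Y <= k); a pooled binary GEE with cut-point dummies then
# approximates a common-slope cumulative logit model.  Everything is simulated.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n_subj, n_time, K = 200, 3, 3
rows = []
for i in range(n_subj):
    u = rng.normal(scale=0.8)                       # subject effect induces clustering
    for t in range(n_time):
        x = rng.normal()
        cum = [1 / (1 + np.exp(-(c - (0.7 * x + u)))) for c in (-0.5, 0.8)]
        probs = np.diff([0.0, *cum, 1.0])           # category probabilities
        rows.append((i, t, x, rng.choice([1, 2, 3], p=probs)))
df = pd.DataFrame(rows, columns=["id", "time", "x", "y"])

# expand into cumulative indicators I(y <= k), k = 1, ..., K-1
long = pd.concat(
    [df.assign(cut=k, z=(df["y"] <= k).astype(int)) for k in range(1, K)],
    ignore_index=True,
)

fit = smf.gee("z ~ C(cut) + x", groups="id", data=long,
              family=sm.families.Binomial(),
              cov_struct=sm.cov_struct.Independence()).fit()
print(fit.summary())                                # robust SEs account for clustering
```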

Journal ArticleDOI
TL;DR: Three properties of interest in bioavailability studies using compartmental models are the area under the concentration curve, the maximum concentration, and the time to maximum concentration.
Abstract: Three properties of interest in bioavailability studies using compartmental models are the area under the concentration curve, the maximum concentration, and the time to maximum concentration. Methods are described for finding designs that minimize the variance of the estimates of these quantities in such a model. These methods use prior information; both prior estimates and prior distributions are used. The designs for an open one-compartment model are compared with the corresponding D-optimum design for all parameters and also with designs that minimize the sum of the scaled variances of the individual properties.
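For reference, the three properties have standard closed forms in the open one-compartment model with first-order absorption; the sketch below evaluates them for assumed parameter values and does not reproduce the paper's optimal-design calculations.

```python
# Standard closed forms for the three properties in an open one-compartment
# model with first-order absorption,
#   C(t) = F*D*ka / (V*(ka - ke)) * (exp(-ke*t) - exp(-ka*t)).
# Dose, volume, and rate constants below are illustrative assumptions.
import numpy as np

def one_compartment_properties(dose, V, ka, ke, F=1.0):
    """Return (AUC, Cmax, tmax); assumes ka != ke."""
    tmax = np.log(ka / ke) / (ka - ke)                     # time to maximum concentration
    cmax = (F * dose * ka / (V * (ka - ke))
            * (np.exp(-ke * tmax) - np.exp(-ka * tmax)))   # maximum concentration
    auc = F * dose / (V * ke)                              # area under the curve, 0 to infinity
    return auc, cmax, tmax

auc, cmax, tmax = one_compartment_properties(dose=100.0, V=20.0, ka=1.5, ke=0.2)
print("AUC = %.2f, Cmax = %.2f at tmax = %.2f" % (auc, cmax, tmax))
```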

Journal ArticleDOI
TL;DR: The analysis of data from an observational study of zidovudine in patients with the acquired immunodeficiency syndrome (AIDS) is presented, showing the impact of the cross-sectional sampling criterion on the analysis of prevalent cohort data.
Abstract: In prospective cohort studies individuals are sometimes recruited according to a certain cross-sectional sampling criterion. A prevalent cohort is defined as a group of individuals who have a certain disease at enrollment into the study. Statistical models for the analysis of prevalent cohort data are considered when the onset or diagnosis time of the disease is known. The incident proportional hazards model, where the time scale is duration with disease, is compared to the prevalent proportional hazards model, where the fundamental time scale is follow-up time. In certain cases the time of enrollment may coincide with another event (such as the initiation of treatment). This situation is also considered and its limitations highlighted. To illustrate the methodological ideas discussed in the paper, the analysis of data from an observational study of zidovudine (ZVD) in patients with the acquired immunodeficiency syndrome (AIDS) is presented.

Journal ArticleDOI
TL;DR: A formulation of the bivariate testing problem is presented, group sequential tests that satisfy Type I error conditions are introduced, and how to find the sample size guaranteeing a specified power is described.
Abstract: We describe group sequential tests for a bivariate response. The tests are defined in terms of the two response components jointly, rather than through a single summary statistic. Such methods are appropriate when the two responses concern different aspects of a treatment; for example, one might wish to show that a new treatment is both as effective and as safe as the current standard. We present a formulation of the bivariate testing problem, introduce group sequential tests that satisfy Type I error conditions, and show how to find the sample size guaranteeing a specified power. We describe how properties of group sequential tests for bivariate normal observations can be computed by numerical integration.

BookDOI
TL;DR: Papers presented at a workshop held January 1990 (location unspecified) cover just about all aspects of solving Markov models numerically.
Abstract: Papers presented at a workshop held January 1990 (location unspecified) cover just about all aspects of solving Markov models numerically. There are papers on matrix generation techniques and generalized stochastic Petri nets, and on the computation of stationary distributions, including aggregation/disaggregation.

Journal ArticleDOI
TL;DR: This textbook surveys the uses and abuses of medical statistics, covering study design, probability and decision making, data description, statistical inference, correlation and linear regression, the randomized controlled trial, and designed observational studies.
Abstract: Uses and Abuses of Medical Statistics. Design. Probability and Decision Making. Data Description. From Sample to Population. Statistical Inference. Correlation and Linear Regression. The Randomized Controlled Trial. Designed Observational Studies. Common Pitfalls in Medical Statistics. Appendices. References. Statistical Tables. Index.

Journal ArticleDOI
TL;DR: The problem of regression dilution arising from covariate measurement error is investigated for survival data using the proportional hazards model; a relationship between the estimated parameter in large samples and the true parameter is obtained, showing that the bias does not depend on the form of the baseline hazard function when the errors are normally distributed.
Abstract: The problem of regression dilution arising from covariate measurement error is investigated for survival data using the proportional hazards model. The naive approach to parameter estimation is considered whereby observed covariate values are used, inappropriately, in the usual analysis instead of the underlying covariate values. A relationship between the estimated parameter in large samples and the true parameter is obtained showing that the bias does not depend on the form of the baseline hazard function when the errors are normally distributed. With high censorship, adjustment of the naive estimate by the factor 1 + lambda, where lambda is the ratio of within-person variability about an underlying mean level to the variability of these levels in the population sampled, removes the bias. As censorship increases, the adjustment required increases and when there is no censorship is markedly higher than 1 + lambda and depends also on the true risk relationship.
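A hedged simulation sketch of the attenuation described above: a proportional hazards model with true coefficient 1.0 is fitted to a noisily measured covariate under heavy censoring, and the naive estimate is rescaled by 1 + lambda. It assumes the lifelines package (a recent version exposing CoxPHFitter.params_); all numbers are illustrative.

```python
# Hedged simulation of regression dilution under the proportional hazards
# model: the covariate is measured with error, censoring is heavy, and the
# naive estimate is rescaled by 1 + lambda.  Assumes a recent lifelines
# version with the CoxPHFitter.params_ attribute.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(5)
n, beta, sigma_u2 = 5000, 1.0, 0.5
x = rng.normal(size=n)                               # true covariate, unit variance
w = x + rng.normal(scale=np.sqrt(sigma_u2), size=n)  # observed covariate with error
lam = sigma_u2 / 1.0                                 # within- to between-person variability

t_event = rng.exponential(scale=np.exp(-beta * x))   # hazard proportional to exp(beta * x)
c = np.quantile(t_event, 0.10)                       # administrative censoring, ~90% censored
df = pd.DataFrame({"T": np.minimum(t_event, c),
                   "E": (t_event <= c).astype(int),
                   "w": w})

naive = CoxPHFitter().fit(df, duration_col="T", event_col="E").params_["w"]
print("naive estimate:        %.3f" % naive)
print("(1 + lambda) adjusted: %.3f (true beta = %.1f)" % ((1 + lam) * naive, beta))
```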

Journal ArticleDOI
TL;DR: The development of the SAS procedure PROC MULTTEST, which calculates adjusted P-values for a user-supplied family of tests, is described.
Abstract: 1. The Development of PROC MULTTEST. While controversial, the use of multiplicity adjustments has gained acceptance recently in varied fields of scientific endeavor and their associated publications. Multiplicity adjustments have been considered important in pharmaceutical safety determinations involving multiple endpoints, such as in adverse events analysis of clinical trials, and in animal carcinogenicity studies. In situations where the compound tested is completely safe, false positive indications of one or more untoward outcomes are likely to be observed when unadjusted testing methods are used. Multiplicity adjustment is also important in epidemiology and other complicated areas of data analysis, as it offers protection against conclusions that are driven by excessive data mining. Multiplicity concerns in toxicology and clinical trials prompted the development of the SAS procedure PROC MULTTEST, which calculates adjusted P-values for a user-supplied family of tests in a wide variety of applications. Biometrics readers will be interested to know that this software is readily available to calculate many (but not all) of the multiplicity-adjusted P-values described by Wright (1992). In addition, the software incorporates many improvements and enhancements, such as the ability to incorporate correlations and nonnormal distributions. The development of PROC MULTTEST started in May 1987. One of us (Westfall) presented a resampling approach to calculating adjusted P-values for multiple tests in multivariate binomial models, with special application to the animal carcinogenicity problem. The talk was entitled "Multivariate Binomial Testing," and was given as the keynote address for the Midwestern Biopharmaceutical Statistics Workshop (MBSW) held in Muncie, Indiana. Similar approaches to the same problem were developed concurrently and independently, all of which have been published (Farrar and Crump, 1988; Heyse and Rom, 1988; Westfall and Young, 1989). [An interesting Biometrics connection should be mentioned here. Dr Young thought that Westfall's (1985) publication concerning simultaneous inference with multivariate binary data might be applied to the multiplicity problem in animal carcinogenicity studies. Dr Young therefore invited Dr Westfall to speak at the MBSW conference on this application.] The initial precursor to PROC MULTTEST, called PROC MBIN, was developed in 1988. We wrote the specifications for the software, and the coding was performed by Youling Lin at Texas Tech University. Under the auspices of the Pharmaceutical Manufacturers Association, a consortium of drug companies offered financial and intellectual support for the project; specific individuals and companies who contributed are listed below in the Acknowledgements section. The development proceeded through a series of meetings at national and regional statistics conferences; attendees included representatives from the funding organizations and from the United States Food and Drug Administration. From the outset, it was decided that the primary form of output from the software would be the adjusted P-value, for the reasons Wright describes in his opening paragraph. The procedure PROC MBIN was described by Westfall, Lin, and Young (1989) and was donated to SAS Institute Inc. The software performed multiplicity adjustments for multiple tests (z-score or exact permutation tests) in multivariate (possibly stratified), multiple group binary outcome situations.
The main feature of the output of this software was the use of adjusted P-values to summarize many statistical tests. Single-step resampling (bootstrap or permutation), Bonferroni, and Sidak methods were used to calculate the adjustments. The tabular form of the output was much like Tables 2 and 3 of Wright, showing unadjusted P-values (which we called "raw" P-values) side-by-side with various types of adjusted P-values.
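The sketch below is not PROC MULTTEST, only a small Python illustration of the same kinds of adjusted P-values it reports: Bonferroni, Sidak, and a single-step resampling adjustment based on the bootstrap null distribution of the maximum |t| over endpoints. The two-group, five-endpoint data are simulated.

```python
# Not PROC MULTTEST, just an illustration of the same kinds of adjusted
# P-values: Bonferroni, Sidak, and a single-step resampling adjustment based
# on the bootstrap null distribution of max |t| over endpoints.
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
n, m = 40, 5
cov = 0.5 * np.ones((m, m)) + 0.5 * np.eye(m)                 # correlated endpoints
grp0 = rng.multivariate_normal(np.zeros(m), cov, n)
grp1 = rng.multivariate_normal(np.r_[0.8, np.zeros(m - 1)], cov, n)   # effect on endpoint 1 only

t_obs, raw_p = stats.ttest_ind(grp1, grp0)                    # one t test per endpoint
bonferroni = np.minimum(raw_p * m, 1.0)
sidak = 1.0 - (1.0 - raw_p) ** m

# single-step resampling: bootstrap the group-mean-centred data, record max |t| under the null
centred = np.vstack([grp0 - grp0.mean(0), grp1 - grp1.mean(0)])
max_t = np.empty(2000)
for i in range(2000):
    b = centred[rng.integers(0, 2 * n, size=2 * n)]
    tb, _ = stats.ttest_ind(b[:n], b[n:])
    max_t[i] = np.max(np.abs(tb))
boot_adj = np.array([(max_t >= abs(t)).mean() for t in t_obs])

for k in range(m):
    print("endpoint %d: raw %.4f  Bonferroni %.4f  Sidak %.4f  bootstrap %.4f"
          % (k + 1, raw_p[k], bonferroni[k], sidak[k], boot_adj[k]))
```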

Journal ArticleDOI
TL;DR: This method generalizes the one-sample estimation results of De Gruttola and Lagakos by allowing the distribution of time between the two events to be a function of covariates under a proportional hazards model.
Abstract: This paper proposes a method for incorporating covariate information in the analysis of survival data when both the time of the originating event and the failure event can be right- or interval-censored. This method generalizes the one-sample estimation results of De Gruttola and Lagakos (1989, Biometrics 45, 1-11) by allowing the distribution of time between the two events to be a function of covariates under a proportional hazards model. Estimates for the model coefficients, as well as the underlying distributions, are obtained by an iterative fitting procedure based on Turnbull's (1976, Journal of the Royal Statistical Society, Series B 38, 290-295) self-consistency algorithm in combination with the Newton-Raphson algorithm. The method is illustrated with data from a study of hemophiliacs infected with the human immunodeficiency virus.

Journal ArticleDOI
TL;DR: Simulation is used to generate critical values and sequences of nominal significance levels for the approximate likelihood ratio test, which is not normally distributed.
Abstract: This paper considers some methods for reducing the number of significance tests undertaken when analyzing and reporting results of clinical trials. Emphasis is placed on designing and analyzing clinical trials to examine a composite hypothesis concerning multiple endpoints and combining this multiple endpoint methodology with group sequential methodology. Four methods for composite hypotheses are considered: an ordinary least squares and a generalized least squares approach both due to O'Brien (1984, Biometrics 40, 1079-1087), a new modification of these, and an approximate likelihood ratio test, due to Tang, Gnecco, and Geller (1989, Biometrika 76, 577-583). These are extended for group sequential use. In particular, simulation is used to generate critical values and sequences of nominal significance levels for the approximate likelihood ratio test, which is not normally distributed. An example is given and the relative merits of the suggested approaches are discussed.
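For orientation, the fixed-sample building blocks of the first two methods are the O'Brien OLS and GLS combinations of standardized per-endpoint statistics, sketched below as commonly written; the group-sequential extension and the approximate likelihood ratio test are not reproduced.

```python
# Sketch of the two O'Brien-type composite statistics named above, for a
# vector z of standardized per-endpoint statistics with working correlation
# matrix R; only the fixed-sample building blocks are shown.
import numpy as np

def obrien_ols(z, R):
    """OLS combination: 1'z / sqrt(1'R1), approximately N(0,1) under the global null."""
    j = np.ones_like(z)
    return j @ z / np.sqrt(j @ R @ j)

def obrien_gls(z, R):
    """GLS combination: 1'R^{-1}z / sqrt(1'R^{-1}1)."""
    j = np.ones_like(z)
    Ri = np.linalg.inv(R)
    return j @ Ri @ z / np.sqrt(j @ Ri @ j)

z = np.array([2.1, 1.4, 0.9])            # standardized statistics for three endpoints
R = np.array([[1.0, 0.5, 0.3],
              [0.5, 1.0, 0.4],
              [0.3, 0.4, 1.0]])
print("OLS: %.3f  GLS: %.3f" % (obrien_ols(z, R), obrien_gls(z, R)))
```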

Journal ArticleDOI
TL;DR: In this paper, the authors consider designs in linear blocks with border plots, in which a treatment may affect the response on the two adjacent plots; three series of designs are given: (i) neighbour-balanced designs in complete blocks; (ii) neighbour-balanced designs in blocks each of which lacks one treatment; (iii) partially neighbour-balanced designs in few complete blocks.
Abstract: We consider designs in linear blocks with border plots, in which a treatment may affect the response on the two adjacent plots. Three series of designs are given: (i) neighbour-balanced designs in complete blocks; (ii) neighbour-balanced designs in blocks each of which lacks one treatment; (iii) partially neighbour-balanced designs in few complete blocks. Complete methods of construction and randomization are given. Optimality properties are discussed.
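The checking utility below is not one of the paper's construction series; it merely tabulates how often each ordered pair of treatments occurs as neighbours in linear blocks with border plots. In the hypothetical three-treatment layout shown, every ordered pair of distinct treatments occurs exactly twice and no treatment neighbours itself; which counts must be equal in practice depends on the assumed neighbour-effect model.

```python
# Checking utility only (not one of the paper's construction series): count
# how often each ordered pair of treatments occurs as neighbours in linear
# blocks with border plots.
from collections import Counter

def neighbour_counts(blocks):
    """blocks: sequences of treatment labels, border plots included."""
    counts = Counter()
    for block in blocks:
        for left, right in zip(block, block[1:]):
            counts[(left, right)] += 1
    return counts

design = [                                   # [left border, 3 interior plots, right border]
    ["B", "A", "B", "C", "B"],
    ["C", "B", "C", "A", "C"],
    ["A", "C", "A", "B", "A"],
]
for pair, count in sorted(neighbour_counts(design).items()):
    print(pair, count)
```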

Journal ArticleDOI
TL;DR: A class of quasi-likelihood score tests for multiple binary outcomes is derived; special cases of this class correspond to other tests that have been proposed, and a simple test based on collapsing the outcomes to a single indicator can maintain surprisingly high efficiency when the outcomes of interest are rare.
Abstract: The applied statistician often encounters the need to compare two or more groups with respect to more than one outcome or response. Several options are generally available, including reducing the dimension of the problem by averaging or summarizing the outcomes, using Bonferroni or other adjustments for multiple comparisons, or applying a global test based on a suitable multivariate model. For normally distributed data, it is well established that global tests tend to be significantly more sensitive than other procedures. While global tests have also been proposed for multiple binary outcomes, their properties have not been well studied nor have they been widely discussed in the context of clustered data. In this paper, we derive a class of quasi-likelihood score tests for multiple binary outcomes, and show that special cases of this class correspond to other tests that have been proposed. We discuss extensions to allow for clustered data, and compare the results to the simple approach of collapsing the data to a single binary outcome, indicating the presence or absence of at least one response. The asymptotic relative efficiencies of the tests are shown to depend not only on the correlation between the outcomes, but also on the response probabilities. Although global tests based on a multivariate model are generally recommended, our findings suggest that a test based on the collapsed data can maintain surprisingly high efficiency, especially when the outcomes of interest are rare. Data from several developmental toxicity studies illustrate our results.
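As a hedged illustration of the simplest option discussed above, the sketch compares naive per-outcome tests with the collapsed "at least one response" test on simulated rare, correlated binary outcomes; the paper's quasi-likelihood score tests and clustered-data extensions are not reproduced.

```python
# Hedged sketch: collapse several rare, correlated binary outcomes per subject
# to a single "at least one response" indicator and compare two groups,
# alongside naive per-outcome tests.  All data are simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n, m = 400, 4                                        # subjects per group, outcomes per subject

def simulate(p):
    u = rng.normal(size=(n, 1))                      # shared latent term -> correlated outcomes
    latent = rng.normal(size=(n, m)) + u             # variance 2, so rescale the threshold
    return (latent < stats.norm.ppf(p) * np.sqrt(2)).astype(int)

grp0, grp1 = simulate(0.03), simulate(0.07)          # rare in controls, elevated in treated

for k in range(m):                                   # naive per-outcome 2x2 tests
    a, b = grp0[:, k].sum(), grp1[:, k].sum()
    chi2, p, dof, _ = stats.chi2_contingency([[a, n - a], [b, n - b]])
    print("outcome %d: p = %.4f" % (k + 1, p))

any0, any1 = grp0.max(axis=1).sum(), grp1.max(axis=1).sum()
chi2, p, dof, _ = stats.chi2_contingency([[any0, n - any0], [any1, n - any1]])
print("collapsed 'any outcome' test: p = %.4f" % p)
```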

Journal ArticleDOI
TL;DR: Although the delta method approach is slightly more computationally intensive, small-sample simulations indicate that it has superior operating characteristics over the Poly-3 trend test of Bailer and Portier when background tumor incidence rates are low and survival patterns differ markedly across treatments.
Abstract: This paper demonstrates the use of the delta method for estimating the variance of ratio statistics derived from animal carcinogenicity experiments. The Cochran-Armitage test (Cochran, 1954, Biometrics 10, 417-451; and Armitage, 1955, Biometrics 11, 375-386) is routinely applied to carcinogenicity data as a test for linear trend in lifetime tumor incidence rates. The computing formula for this test derives from the assumption that the denominators of the quantal response rates are fixed. However, when time-at-risk weights are introduced to correct for treatment-related differences in survival, the denominators of the quantal response rates are subject to random variation. The delta method and weighted least squares techniques are applied here to approximate the variance of such ratio statistics and test for a linear dose-response relationship among treatments. This technique is compared to that of Bailer and Portier (1988, Biometrics 44, 417-431), who introduced a survival-adjusted quantal response test for trend in lifetime tumor incidence rates. Their test modifies the usual Cochran-Armitage computing formula by weighting the denominators of the response rates to reflect less-than-whole-animal contributions to risk. Within the framework of a weighted least squares linear regression model that underlies the Cochran-Armitage test, the time-at-risk weights of Bailer and Portier are incorporated using the delta method. Although the delta method approach is slightly more computationally intensive, small-sample simulations indicate that it has superior operating characteristics over the Poly-3 trend test of Bailer and Portier when background tumor incidence rates are low (under 3%) and survival patterns differ markedly across treatments.
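The sketch below states the generic delta-method variance for a ratio X/Y in its comments and checks it by Monte Carlo with invented moments; it does not implement the paper's full survival-adjusted trend test.

```python
# Worked sketch of the delta-method variance for a ratio R = X / Y of the kind
# used above (tumour response over a random, survival-weighted denominator):
#   Var(R) ~ var_X/mu_Y^2 + mu_X^2 * var_Y/mu_Y^4 - 2 * mu_X * cov_XY/mu_Y^3.
# Moments are invented; the Monte Carlo run only checks the approximation.
import numpy as np

def ratio_var_delta(mu_x, mu_y, var_x, var_y, cov_xy):
    return (var_x / mu_y**2
            + mu_x**2 * var_y / mu_y**4
            - 2 * mu_x * cov_xy / mu_y**3)

rng = np.random.default_rng(8)
mu = np.array([5.0, 50.0])                       # e.g., weighted tumour count and time at risk
cov = np.array([[4.0, 3.0], [3.0, 16.0]])
xy = rng.multivariate_normal(mu, cov, size=200_000)
ratios = xy[:, 0] / xy[:, 1]

print("delta-method variance: %.6f" % ratio_var_delta(mu[0], mu[1], 4.0, 16.0, 3.0))
print("Monte Carlo variance:  %.6f" % ratios.var())
```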

Journal ArticleDOI
TL;DR: A generalisation of Laird and Ware's linear random-effects model to accommodate multiple random effects is proposed, and it is shown how Gibbs sampling can be used to estimate it.
Abstract: Analysis of longitudinal studies is often complicated through differences amongst individuals in the number and spacing of observations. Laird and Ware (1982, Biometrics 38, 963-974) proposed a linear random-effects model to deal with this problem. We propose a generalisation of this model to accommodate multiple random effects, and show how Gibbs sampling can be used to estimate it. We illustrate the methodology with an analysis of long-term response to hepatitis B vaccination, and demonstrate that the methodology can be easily and effectively extended to deal with censoring in the dependent variable.
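A minimal Gibbs-sampling sketch for the simplest special case of this model class, a Gaussian random-intercept model with a flat prior on the fixed effects and inverse-gamma priors on the variances, is given below; it is not the paper's multiple-random-effects model or its hepatitis B analysis, and all data are simulated.

```python
# Minimal Gibbs sampler for a Gaussian random-intercept model
#   y_ij = beta0 + beta1*x_ij + b_i + e_ij,  b_i ~ N(0, tau2),  e_ij ~ N(0, sigma2),
# with a flat prior on beta and inverse-gamma priors on the variances.
import numpy as np

rng = np.random.default_rng(9)

# simulate unbalanced longitudinal data
n_subj = 100
n_obs = rng.integers(2, 8, size=n_subj)              # unequal numbers of observations
subj = np.repeat(np.arange(n_subj), n_obs)
x = rng.normal(size=subj.size)
b_true = rng.normal(scale=1.0, size=n_subj)
y = 1.0 + 0.5 * x + b_true[subj] + rng.normal(scale=0.7, size=subj.size)

X = np.column_stack([np.ones_like(x), x])
XtX_inv = np.linalg.inv(X.T @ X)
a0 = b0 = 0.01                                       # vague inverse-gamma hyperparameters

beta = np.zeros(2)
b = np.zeros(n_subj)
sigma2 = tau2 = 1.0
keep = []
for it in range(3000):
    # random intercepts b_i | rest: normal with precision n_i/sigma2 + 1/tau2
    resid = y - X @ beta
    prec = n_obs / sigma2 + 1.0 / tau2
    mean = (np.bincount(subj, weights=resid) / sigma2) / prec
    b = rng.normal(mean, np.sqrt(1.0 / prec))
    # fixed effects beta | rest (flat prior): normal around the OLS fit to y - b
    beta_hat = XtX_inv @ X.T @ (y - b[subj])
    beta = rng.multivariate_normal(beta_hat, sigma2 * XtX_inv)
    # variance components | rest: conjugate inverse-gamma updates
    sse = np.sum((y - X @ beta - b[subj]) ** 2)
    sigma2 = 1.0 / rng.gamma(a0 + y.size / 2.0, 1.0 / (b0 + sse / 2.0))
    tau2 = 1.0 / rng.gamma(a0 + n_subj / 2.0, 1.0 / (b0 + np.sum(b**2) / 2.0))
    if it >= 1000:                                   # discard burn-in
        keep.append([beta[0], beta[1], sigma2, tau2])

post = np.array(keep).mean(axis=0)
print("posterior means: beta0 %.2f, beta1 %.2f, sigma2 %.2f, tau2 %.2f" % tuple(post))
```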

Journal ArticleDOI
TL;DR: A reanalysis of the Stanford Heart Transplant Data reveals significant evidence that censoring of pretransplant survival times by transplantation was nonignorable, suggesting a greater benefit from cardiac transplantation than previous analyses had found.
Abstract: Heitjan and Rubin (1991, Annals of Statistics 19, 2244-2253) define data to be "coarse" when one observes not the exact value of the data but only some set (a subset of the sample space) that contains the exact value. This definition covers a number of incomplete-data problems arising in biomedicine, including rounded, heaped, censored, and missing data. In analyzing coarse data, it is common to proceed as though the degree of coarseness is fixed in advance--in a word, to ignore the randomness in the coarsening mechanism. When coarsening is actually stochastic, however, inferences that ignore this randomness may be seriously misleading. Heitjan and Rubin (1991) have proposed a general model of data coarsening and established conditions under which it is appropriate to ignore the stochastic nature of the coarsening. The conditions are that the data be coarsened at random [a generalization of missing at random (Rubin, 1976, Biometrika 63, 581-592)] and that the parameters of the data and the coarsening process be distinct. This article presents detailed applications of the general model and the ignorability conditions to a variety of coarse-data problems arising in biomedical statistics. A reanalysis of the Stanford Heart Transplant Data (Crowley and Hu, 1977, Journal of the American Statistical Association 72, 27-36) reveals significant evidence that censoring of pretransplant survival times by transplantation was nonignorable, suggesting a greater benefit from cardiac transplantation than previous analyses had found.