Showing papers in "Statistical Methods in Medical Research in 1993"


Journal ArticleDOI
TL;DR: In this article, two models for study-to-study variation in a meta-analysis are presented, critiqued and illustrated, and applied to three summary measures of the effect of an experimental intervention versus a control.
Abstract: Two models for study-to-study variation in a meta-analysis are presented, critiqued and illustrated. One, the fixed effects model, takes the studies being analysed as the universe of interest; the other, the random effects model, takes these studies as representing a sample from a larger population of possible studies. With emphasis on clinical trials, this paper illustrates in some detail the application of both models to three summary measures of the effect of an experimental intervention versus a control: the standardized difference for comparing two means, and the relative risk and odds ratio for comparing two proportions.
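A hedged illustration (not code from the paper; all trial counts below are invented): the sketch pools log odds ratios across studies with inverse-variance weights for the fixed effects model, and adds a DerSimonian-Laird estimate of between-study variance for the random effects model. The same weighting scheme applies to the standardized difference or the log relative risk once each study supplies an effect estimate and its variance.

```python
# Sketch only: fixed-effect and random-effects pooling of log odds ratios.
import numpy as np

# Hypothetical 2x2 counts per trial: events and totals in treatment and control arms.
events_t = np.array([12, 30, 9])
total_t = np.array([100, 250, 80])
events_c = np.array([20, 45, 15])
total_c = np.array([100, 250, 80])

a, b = events_t, total_t - events_t
c, d = events_c, total_c - events_c
log_or = np.log((a * d) / (b * c))   # per-trial log odds ratio
var = 1/a + 1/b + 1/c + 1/d          # its approximate variance

# Fixed effects: inverse-variance weighted average.
w = 1 / var
theta_fixed = np.sum(w * log_or) / np.sum(w)

# Random effects: DerSimonian-Laird between-study variance tau^2, then re-weight.
k = len(log_or)
Q = np.sum(w * (log_or - theta_fixed) ** 2)
tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
w_re = 1 / (var + tau2)
theta_random = np.sum(w_re * log_or) / np.sum(w_re)

print(np.exp(theta_fixed), np.exp(theta_random))  # pooled odds ratios under each model
```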

1,105 citations


Journal ArticleDOI
TL;DR: The present paper gives a survey of the various estimation methods available for the basic reproduction number R0, which may vary considerably for different infectious diseases but also for the same disease in different populations.
Abstract: The basic reproduction number R0 is the number of secondary cases which one case would produce in a completely susceptible population. It depends on the duration of the infectious period, the probability of infecting a susceptible individual during one contact, and the number of new susceptible individuals contacted per unit of time. Therefore R0 may vary considerably for different infectious diseases but also for the same disease in different populations. The key threshold result of epidemic theory associates the outbreaks of epidemics and the persistence of endemic levels with basic reproduction numbers greater than one. Because the magnitude of R0 allows one to determine the amount of effort which is necessary either to prevent an epidemic or to eliminate an infection from a population, it is crucial to estimate R0 for a given disease in a particular population. The present paper gives a survey of the various estimation methods available.
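As a back-of-the-envelope sketch (not one of the estimation methods surveyed; all parameter values are invented), R0 can be written as the product of the three factors named above, and its magnitude translates directly into the control effort 1 - 1/R0, the fraction of the population that must be protected to prevent an epidemic.

```python
# Sketch with invented parameters: R0 as (contacts per unit time) x
# (transmission probability per contact) x (duration of infectiousness),
# and the corresponding critical fraction to immunize, 1 - 1/R0.
contacts_per_day = 8       # hypothetical new susceptibles contacted per day
p_transmission = 0.05      # hypothetical probability of infection per contact
infectious_days = 5.0      # hypothetical duration of the infectious period

R0 = contacts_per_day * p_transmission * infectious_days
print(f"R0 = {R0:.1f}")
if R0 > 1:
    print(f"critical immunization coverage = {1 - 1/R0:.0%}")
```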

737 citations


Journal ArticleDOI
TL;DR: The advent of meta-analysis, especially when performed cumulatively, raises many questions about how best to approach the conduct of clinical trials in the evaluation of new treatments; it must be ensured that bias is minimized by proper experimental procedures and that clinical data are presented so that they can be effectively combined in meta-analysis.
Abstract: The advent of meta-analysis, especially when performed cumulatively, raises many questions about how best to approach the conduct of clinical trials in the evaluation of new treatments. We need to ...

115 citations


Journal ArticleDOI
TL;DR: It is concluded that the long-term benefits of serum cholesterol reduction on the risk of heart disease have been seriously underestimated in some previous meta-analyses, while the evidence for adverse effects on other causes of death has been misleadingly exaggerated.
Abstract: There has recently been disagreement in the literature on the results and interpretation of meta-analyses of the trials of serum cholesterol reduction, both in terms of the quantification of the effect on ischaemic heart disease and as regards the evidence of any adverse effect on other causes of death. This paper describes statistical aspects of a recent meta-analysis of these trials, and draws some more general conclusions about the methods used in meta-analysis. Tests of an overall null hypothesis are shown to have a basis clearly distinct from the more extensive assumptions needed to provide an overall estimate of effect. The fixed effect approach to estimation relies on the implausible assumption of homogeneity of treatment effects across the trials, and is therefore likely to yield confidence intervals which are too narrow and conclusions which are too dogmatic. However the conventional random effects method relies on its own set of unrealistic assumptions, and cannot be regarded as a robust solutio...
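A minimal sketch of the point about homogeneity (invented effect estimates, not the cholesterol trials): Cochran's Q compares the trial-specific estimates with the fixed effect pooled value, and a Q that is large relative to its chi-squared reference distribution is evidence against the homogeneity assumption on which the fixed effect confidence interval rests.

```python
# Sketch with invented data: Cochran's Q test of homogeneity of treatment
# effects across trials, the assumption underlying the fixed effect approach.
import numpy as np
from scipy.stats import chi2

effects = np.array([-0.25, -0.05, -0.40, 0.10])      # hypothetical per-trial log relative risks
variances = np.array([0.010, 0.020, 0.015, 0.030])   # hypothetical within-trial variances

w = 1 / variances
pooled = np.sum(w * effects) / np.sum(w)
Q = np.sum(w * (effects - pooled) ** 2)
df = len(effects) - 1
print(f"Q = {Q:.2f} on {df} df, p = {chi2.sf(Q, df):.3f}")
```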

113 citations


Journal ArticleDOI
TL;DR: Robust methods, scale transformation, ascertainment, path diagrams and correlational path models, 'heritability', and the contribution and limitations of statistical modelling to the 'nature-nurture' debate are discussed.
Abstract: RA Fisher introduced variance components in 1918. He synthesized Mendelian inheritance with Darwin's theory of evolution by showing that the genetic variance of a continuous trait could be decomposed into additive and non-additive components. The model can be extended to include environmental factors, interactions, covariation, and non-random mating. Identifiability depends critically on design. Methods of analysis include modelling the mean squares from a fixed effects analysis of variance, and covariance structure modelling, which can be extended to multivariate traits and has been used to study ordinal traits by reference to postulated, unmeasured, latent 'liabilities'. These methods operate on dependent observations within independent groups of the same size and structure, and therefore require balanced designs ('regular' pedigrees). A multivariate normal model handles data in its generic form, utilizes data efficiently from all members of pedigrees of unequal size or varying structure, accommodates ...
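As a toy illustration of variance decomposition (not taken from the review; the twin correlations are invented), the classical comparison of monozygotic and dizygotic twins gives quick estimates of the additive genetic, shared environmental and unique environmental proportions of variance for a standardized trait.

```python
# Sketch with hypothetical twin correlations: Falconer-style decomposition of
# trait variance into additive genetic (A), shared environmental (C) and
# unique environmental (E) proportions.
r_mz = 0.70   # hypothetical monozygotic twin correlation
r_dz = 0.45   # hypothetical dizygotic twin correlation

A = 2 * (r_mz - r_dz)   # 'heritability' under the additive model
C = 2 * r_dz - r_mz     # shared environment
E = 1 - r_mz            # unique environment plus measurement error
print(round(A, 2), round(C, 2), round(E, 2))   # 0.5, 0.2, 0.3 with these numbers
```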

86 citations


Journal ArticleDOI
TL;DR: This paper reviews the application of statistical models to outbreaks of two common respiratory viral diseases, measles and influenza, with applications drawn from the authors' own work, largely using Icelandic data.
Abstract: This paper reviews the application of statistical models to outbreaks of two common respiratory viral diseases, measles and influenza. For each disease, we look first at its epidemiological characteristics and assess the extent to which these either aid or hinder modelling. We then turn to the models that have been developed to simulate geographical spread. For measles, a distinction is drawn between process-based and time series models; for influenza, it is the scale of the communities (from small groups to global populations) which primarily determines modelling style. Applications are provided from work by the authors, largely using Icelandic data. Finally we consider the forecasting potential of the models described.
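As a hedged illustration of the process-based style referred to above (not one of the authors' models; all parameters invented), the simplest such process is a deterministic SIR model iterated in discrete time for a single closed community.

```python
# Sketch with invented parameters: a discrete-time deterministic SIR model,
# the simplest example of a process-based model of epidemic spread.
N = 10_000                 # hypothetical community size
beta, gamma = 0.4, 0.2     # hypothetical daily transmission and recovery rates

S, I, R = N - 10.0, 10.0, 0.0
for day in range(150):
    new_infections = beta * S * I / N
    new_recoveries = gamma * I
    S, I, R = S - new_infections, I + new_infections - new_recoveries, R + new_recoveries

print(f"final size: {R / N:.1%} of the community infected")
```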

38 citations


Journal ArticleDOI
TL;DR: This paper reviews and develops the applications of Markov chain Monte Carlo methods in pedigree analysis, with particular stress on the Metropolis algorithm.
Abstract: This paper reviews and develops the applications of Markov chain Monte Carlo methods in pedigree analysis, with particular stress on the Metropolis algorithm. In likelihood based genetic analyses, standard deterministic algorithms often fail because of the computational complexity of the observed pedigree data under a proposed genetic model. The new Monte Carlo methods permit approximate maximum likelihood estimation in the presence of such complexity. Monte Carlo implementation of the EM algorithm is the key to successful maximum likelihood analysis. Gibbs sampling and the Metropolis algorithm are alternative ways of defining Markov chains for performing the E step of the EM algorithm. Two applications illustrate the power and simplicity of the Metropolis algorithm. One of these applications involves a discrete model for variance component analysis of quantitative traits; the other application involves a Monte Carlo version of location scores for multipoint linkage analysis.
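As a generic illustration of the accept/reject step at the heart of the algorithm (a one-dimensional toy target, not the pedigree-specific implementation described in the paper), the sketch below runs a random-walk Metropolis sampler; in the pedigree setting the same step is applied to genotype configurations rather than to a real number.

```python
# Sketch: random-walk Metropolis sampling from an unnormalized target density.
import math
import random

def log_target(x):
    return -0.5 * x * x          # standard normal up to a constant, as a toy target

def metropolis(n_samples, step=1.0, x0=0.0):
    x, samples = x0, []
    for _ in range(n_samples):
        proposal = x + random.gauss(0.0, step)
        log_alpha = log_target(proposal) - log_target(x)
        # Accept with probability min(1, target(proposal) / target(x)).
        if log_alpha >= 0 or random.random() < math.exp(log_alpha):
            x = proposal
        samples.append(x)
    return samples

draws = metropolis(5000)
print(sum(draws) / len(draws))   # should be close to 0
```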

29 citations


Journal ArticleDOI
B. Devlin1
TL;DR: The review focuses on the likelihood ratio as a means of summarizing the genetic data for either criminal or civil cases, with particular attention to the VNTR loci frequently used in today's court cases.
Abstract: This review provides an overview of forensic inference from genetic markers. Because the judge and jurors are charged with decision-making, the forensic expert's job is to provide a useful summary of the evidence to the court. Hence, this review focuses on the likelihood ratio as a means of summarizing the genetic data for either criminal or civil cases. The properties of the genetic markers frequently used in today's court cases, those being VNTR loci, are discussed in detail. Unlike traditional markers, the data from VNTR loci are complicated because current molecular methods generate data that follow a finite mixture distribution. Critical ancillary issues are also covered, though not in detail.
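As a schematic single-locus example of the likelihood ratio summary (invented allele frequencies; the paper's treatment of VNTR data, with measurement error and mixture distributions, is considerably more involved), the sketch below compares the probability of the matching profile if the suspect is the source with its probability if an unrelated person is the source.

```python
# Single-locus sketch with invented allele frequencies: likelihood ratio for a
# matching heterozygous genotype, assuming Hardy-Weinberg proportions in the
# reference population.
p, q = 0.05, 0.08                  # hypothetical frequencies of the two matching alleles

match_prob = 2 * p * q             # genotype frequency among unrelated individuals
likelihood_ratio = 1 / match_prob  # P(evidence | suspect) / P(evidence | unrelated person)
print(f"LR = {likelihood_ratio:.0f}")   # 125 with these numbers
```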

28 citations


Journal ArticleDOI
TL;DR: An overview of some models discussed in the literature is presented; these models can be used in assessing the detectability of infection, and they indicate that observations may lead to considerable misinterpretation of 'true' prevalences and incidences.
Abstract: The estimation of prevalence and incidence of parasitic infections is considered. As the detectability of such infections is not 100% and may furthermore depend on their intensity, statistical methods are often required to arrive at meaningful results. It appears to be essential to distinguish between parasites that multiply within the (human) host and those that do not. An overview of some models discussed in the literature is presented. These models can indeed be used in assessing detectability of infection, and they indicate that observations may lead to considerable misinterpretation of 'true' prevalences and incidences.
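A minimal sketch of the detectability problem (invented numbers, not one of the surveyed models, and assuming a constant detection probability rather than one depending on infection intensity): dividing the observed prevalence by the detection probability gives a simple corrected estimate.

```python
# Sketch with invented numbers: correcting observed prevalence for imperfect
# detectability, assumed constant across infected individuals.
n_examined = 500
n_test_positive = 60
detection_prob = 0.70             # hypothetical probability an infection is detected

observed = n_test_positive / n_examined
corrected = observed / detection_prob
print(f"observed prevalence {observed:.1%}, corrected prevalence {corrected:.1%}")
```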

24 citations


Journal ArticleDOI
TL;DR: The term meta-analysis refers to the quantitative combination of data from independent trials; where the result of such combination is a descriptive summary of the weight of the available evidence, the exercise is of undoubted value.
Abstract: The term meta-analysis refers to the quantitative combination of data from independent trials. Where the result of such combination is a descriptive summary of the weight of the available evidence, the exercise is of undoubted value. Attempts to apply inferential methods, however, are subject to considerable methodological and logical difficulties. The selection and quality of the trials included, population bias, and the specification of the population to which inference may properly be made are problems to which no satisfactory solutions have been proposed. Insightful quantitative description ought not to differ materially from inferential conclusions; where discrepancies exist the inferential techniques should be regarded with extreme caution.

22 citations


Journal ArticleDOI
TL;DR: This work explains why martingale methods play an important role in statistical inference for parameters of epidemic models, and gives a tutorial introduction to these methods in the more familiar context of data on independent and identically distributed survival times.
Abstract: After explaining why martingale methods play an important role in statistical inference for parameters of epidemic models, we give a tutorial introduction to these methods in the more familiar cont...
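As a hedged illustration in the iid survival setting mentioned above (invented data, not code from the paper), the Nelson-Aalen estimate of the cumulative hazard is the usual first example of a counting-process estimator: at each distinct event time it adds the number of events divided by the number still at risk.

```python
# Sketch with invented data: Nelson-Aalen estimate of the cumulative hazard
# for right-censored survival times.
times    = [2, 3, 3, 5, 7, 8, 8, 10]   # hypothetical follow-up times
observed = [1, 1, 0, 1, 1, 0, 1, 0]    # 1 = event observed, 0 = censored

cum_hazard = 0.0
for t in sorted({t for t, d in zip(times, observed) if d == 1}):
    at_risk = sum(1 for s in times if s >= t)
    events = sum(1 for s, d in zip(times, observed) if s == t and d == 1)
    cum_hazard += events / at_risk
    print(f"t = {t}: {events}/{at_risk} -> cumulative hazard {cum_hazard:.3f}")
```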

Journal ArticleDOI
TL;DR: In this article, the authors discuss ways of acknowledging uncertainty and suggest a Bayesian formulation of the backcalculation idea as a means of combining into a single model both random and systematic variation as well as prior information.
Abstract: The backcalculation method has been extensively used in AIDS modelling and forecasting. Knowledge of reported AIDS cases, information on the time between HIV infection and onset of AIDS, and assumptions on the rate at which infections occur, can be used to reconstruct the past history of the HIV epidemic, as well as to provide short term predictions of AIDS incidence. Uncertainty in the three components of the backcalculation method and the increasingly available information on HIV prevalence must be taken into account in order to provide realistic projections. In this paper we discuss ways of acknowledging uncertainty and suggest a Bayesian formulation of the backcalculation idea as a means of combining into a single model both random and systematic variation as well as prior information.
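To make the deterministic core of backcalculation concrete (invented numbers only, and leaving aside the Bayesian treatment of uncertainty that the paper proposes): expected AIDS diagnoses in a given period are the convolution of past HIV infections with the incubation time distribution.

```python
# Sketch with invented numbers: expected AIDS diagnoses in year t are the sum
# over earlier years s of (new HIV infections in year s) x P(incubation = t - s).
infections = [100, 300, 600, 800, 700, 500]        # hypothetical new HIV infections per year
incubation = [0.00, 0.02, 0.05, 0.08, 0.10, 0.12]  # hypothetical P(onset k years after infection)

expected_aids = [
    sum(infections[s] * incubation[t - s] for s in range(t + 1))
    for t in range(len(infections))
]
print(expected_aids)
```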

Journal ArticleDOI
TL;DR: There is now empirical evidence suggesting error rates in the range 0.1% to 1%, and such rates will affect evolutionary studies since these are about the rates at which DNA sequences from different individuals are expected to differ.
Abstract: Recent developments in the statistical analysis of DNA sequences are reviewed. The pace with which sequence data are being generated and analysed has increased with the growth of the human genome project. Two areas of activity are emphasized: attention to error rates in recorded sequences, and heterogeneity in structure of sequences. There is now empirical evidence suggesting error rates in the range 0.1% to 1%, and such rates will affect evolutionary studies since these are about the rates at which DNA sequences from different individuals are expected to differ. Heterogeneity for such quantities as base composition, or lengths between successive subsequences of specified types, may be sufficient to account for observed long-range correlations between bases. The need for statistical models and analyses of DNA sequence data will continue, and will offer interesting challenges.
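A back-of-the-envelope calculation (invented sequence length; the divergence figure is only meant to match the abstract's point that errors and between-individual differences are of similar magnitude): when two recorded sequences are compared, errors in either copy can contribute as many apparent differences as true variation, or more.

```python
# Sketch: apparent differences due to recording errors versus true differences
# between two individuals' sequences, using the rates quoted above.
seq_length = 100_000        # hypothetical number of aligned bases
true_divergence = 0.001     # hypothetical: ~0.1% of sites truly differ

for error_rate in (0.001, 0.01):                # 0.1% and 1% per-base error rates
    true_diffs = true_divergence * seq_length
    error_diffs = 2 * error_rate * seq_length   # both recorded copies can carry errors
    print(f"error rate {error_rate:.1%}: ~{true_diffs:.0f} true vs ~{error_diffs:.0f} error-induced differences")
```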

Journal ArticleDOI
Pat Lovie1
TL;DR: Fleming and Harrington's book is based on the counting process, or martingale, approach to survival analysis, in which censoring is associated with the idea of stopping time; this approach has proved a very useful way of formalizing the underlying mathematics.
Abstract: like the books of Kalbfleisch and Prentice,1 and Cox and Oakes,2 there is still a lack both of elementary books and of more advanced ones. Fleming and Harrington’s contribution goes a considerable way to fill the need of an advanced book, although it also contains some practical statistical material. What distinguishes this book from earlier ones, and makes it unique so far, is that it is based on the counting process, or martingale, approach to survival analysis. The last 15 or so years have shown that this approach represents a very useful way of formalizing the underlying mathematics. One achieves a unification of seemingly disparate models and proofs can be carried out both for exact and asymptotic results. One major advantage is that the fundamental concept of censoring is associated with the idea of stopping time in martingale theory. For those not acquainted with martingales it should be noted that they represent a weakening of the idea of independence which so pervades classical statistics. Fleming and Harrington have the ambition to

Journal ArticleDOI
TL;DR: In this book, Yamaguchi describes hazard, survivor, and likelihood functions and gives an appropriately prominent discussion of censoring, with the chapters following a largely consistent structure, divided into ‘Methods and models, Application, Concluding remarks and Problems’.
Abstract: I am frequently asked to recommend books that cover some of the more fashionable methods of data analysis but that have been written for nonstatisticians. In responding I often find it necessary to turn to books intended for social rather than medical scientists because, though their quality may be mixed, there seem to be so many more of them and they are generally pitched at an appropriate level. A quick scan of the contents pages of Event History Analysis by Kazuo Yamaguchi immediately suggests it as a book that promises to fulfil such a role. There is an introductory chapter that describes and defines hazard, survivor and likelihood functions and also gives an appropriately prominent discussion of censoring. There are a couple of chapters on discrete time models using logistic regression and another using log-linear models. There are two chapters on Cox models, the second involving time dependent covariates. Finally, a chapter with odds and ends thought useful to consider before ‘getting one’s own research started’. The chapters have a largely consistent structure, being divided into ‘Methods and models, Application, Concluding remarks and Problems’. Perhaps the single greatest strength of this book is that the applications are generally complete, with a full description of how to prepare the input data and how to get SAS or SPSS, programs that are not renowned for their abilities with event history data, to do the analysis. It also gives some details of how

Journal ArticleDOI
TL;DR: The application of linear structural equations models to the analysis of twin and family data, with the aim of unravelling the genetic and environmental sources of human variation, is discussed in this book.
Abstract: This book is about the application of linear structural equations models to the analysis of twin and family data, with the aim of unravelling the genetic and environmental sources of human variation. It is based on the very successful one-week workshops, first held in 1987, designed to teach these analytical methods to twin researchers from diverse backgrounds. The many contributors to this volume are international experts responsible for developing much of this emerging methodology. The approach is an extension of biometrical genetics, pioneered by Fisher, and path analysis, developed by Wright. Fisher’s seminal 1918 paper derived the theoretical correlations between relatives for traits determined by a large number of Mendelian genes (polygenes). This enabled him to partition the trait variance into genetic and environmental components. Meanwhile, Wright developed a systematic method for working out the correlations between variables in a linear model of latent and

Journal ArticleDOI
Graham Dunn1
TL;DR: The disadvantage of both groups of books, however, is that they inevitably will not describe the analysis and interpretation of the right data, which biochemists (and many others!) need.
Abstract: include. On the one hand there are many elementary textbooks covering much of the essential background and, on the other, there are also several applied statistics books covering the more advanced and specialist topics. The disadvantage of both groups of books, however, is that they inevitably will not describe the analysis and interpretation of the right data. Biochemists (and many others!) need

Journal ArticleDOI
Odd O. Aalen1
TL;DR: This is a new, improved edition of a book which has achieved classic, some would say cult, status; it reflects an approach to medical education which aims to enable students to find out for themselves rather than to create the floppy disk doctor who is crammed with facts.
Abstract: This is a new, improved edition of a book which has achieved classic, some would say cult, status. Sackett joined the new Medical School at McMaster University in 1967 where he later teamed up with the other authors. This book is just the most visible aspect of an attitude towards medical education which aims to enable students to find out for themselves rather than to create the floppy disk doctor who is crammed with facts. Instead of