Author

David Cox

Bio: David Cox is an academic researcher from Nuffield College. The author has contributed to research in topics including Population and Regression analysis. The author has an h-index of 118, has co-authored 588 publications, and has received 134,615 citations. Previous affiliations of David Cox include Newcastle University and the United Kingdom Department for Environment, Food and Rural Affairs.


Papers
Book ChapterDOI
TL;DR: The analysis of censored failure times is considered in this paper, where the hazard function is taken to be a function of the explanatory variables and unknown regression coefficients multiplied by an arbitrary and unknown function of time.
Abstract: The analysis of censored failure times is considered. It is assumed that on each individual are available values of one or more explanatory variables. The hazard function (age-specific failure rate) is taken to be a function of the explanatory variables and unknown regression coefficients multiplied by an arbitrary and unknown function of time. A conditional likelihood is obtained, leading to inferences about the unknown regression coefficients. Some generalizations are outlined.

28,264 citations
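A minimal sketch of the model described above: the hazard for an individual with covariate vector x is taken to be h(t | x) = h0(t) exp(b'x), where the baseline hazard h0(t) is left arbitrary and the coefficients b are estimated from the conditional (partial) likelihood. The example below uses the third-party Python package lifelines; the data and column names are made up for illustration.

```python
# Illustrative sketch: fitting a proportional hazards model of the kind
# described above. Assumes the third-party `lifelines` package; the data
# frame and its column names are hypothetical.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "duration": [5, 8, 12, 3, 9, 15],      # failure or censoring times
    "event":    [1, 0, 1, 1, 0, 1],        # 1 = failure observed, 0 = censored
    "age":      [50, 60, 45, 70, 55, 40],  # explanatory variables
    "treated":  [1, 0, 1, 0, 1, 0],
})

# h(t | x) = h0(t) * exp(b'x): the baseline hazard h0(t) stays arbitrary,
# and b is estimated from the conditional (partial) likelihood.
cph = CoxPHFitter()
cph.fit(df, duration_col="duration", event_col="event")
cph.print_summary()  # estimated coefficients and standard errors
```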

Journal ArticleDOI
TL;DR: In this article, Box and Cox make the less restrictive assumption that such a normal, homoscedastic, linear model is appropriate after some suitable transformation has been applied to the y's.
Abstract: [Read at a RESEARCH METHODS MEETING of the SOCIETY, April 8th, 1964, Professor D. V. LINDLEY in the Chair] SUMMARY In the analysis of data it is often assumed that observations y1, y2, ..., yn are independently normally distributed with constant variance and with expectations specified by a model linear in a set of parameters θ. In this paper we make the less restrictive assumption that such a normal, homoscedastic, linear model is appropriate after some suitable transformation has been applied to the y's. Inferences about the transformation and about the parameters of the linear model are made by computing the likelihood function and the relevant posterior distribution. The contributions of normality, homoscedasticity and additivity to the transformation are separated. The relation of the present methods to earlier procedures for finding transformations is discussed. The methods are illustrated with examples.

12,158 citations
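The transformation family studied in this paper is commonly written y(λ) = (y^λ - 1)/λ for λ ≠ 0 and y(λ) = log y for λ = 0, with λ estimated by maximum likelihood. As a sketch, scipy ships such an estimator; the data below are synthetic.

```python
# Illustrative sketch: choosing a normalizing power transformation by
# maximum likelihood, in the spirit of the paper above. Synthetic data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
y = rng.lognormal(mean=1.0, sigma=0.6, size=200)  # positive, right-skewed

# Box-Cox family: y(lam) = (y**lam - 1)/lam for lam != 0, log(y) at lam == 0.
y_transformed, lam_hat = stats.boxcox(y)
print(f"ML estimate of lambda: {lam_hat:.3f}")  # near 0: a log transform fits
```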

01 Jan 1972
TL;DR: The drum mallets disclosed in this patent are adjustable by the percussion player as to balance, overall weight, head characteristics and tone production, and the adjustment can be readily obtained.
Abstract: The drum mallets disclosed are adjustable, by the percussion player, as to weight and/or balance and/or head characteristics, so as to vary the "feel" of the mallet, and thus also the tonal effect obtainable when playing upon kettle-drums, snare-drums, and other percussion instruments; and, typically, the mallet has frictionally slidable, removable and replaceable, external balancing mass means, positionable to serve as the striking head of the mallet, whereby the adjustment as to balance, overall weight, head characteristics and tone production may be readily obtained. In some forms, the said mass means regularly serves as a removable and replaceable striking head; while in other forms, the mass means comprises one or more thin elongated tubes having a frictionally-gripping fit on an elongated mallet body, so as to be manually slidable thereon but tight enough to avoid dislodgment under normal playing action; and such a tubular member may be slidable to the head-end of the mallet to serve as a striking head or it may be slidable to a position to serve as a hand grip; and one or more such tubular members may be placed in various positions along the length of the mallet. The mallet body may also have a tapered element at the head-end to assure retention of mass members especially of enlarged-head types; and the disclosure further includes such heads embodying a relatively hard inner portion and a relatively soft outer covering.

10,148 citations

Journal ArticleDOI
TL;DR: Efficient methods of analysis of randomized clinical trials in which the authors wish to compare the duration of survival among different groups of patients are described.
Abstract: Part I of this report appeared in the previous issue (Br. J. Cancer (1976) 34, 585), and discussed the design of randomized clinical trials. Part II now describes efficient methods of analysis of randomized clinical trials in which we wish to compare the duration of survival (or the time until some other untoward event first occurs) among different groups of patients. It is intended to enable physicians without statistical training either to analyse such data themselves using life tables, the logrank test and retrospective stratification, or, when such analyses are presented, to appreciate them more critically, but the discussion may also be of interest to statisticians who have not yet specialized in clinical trial analyses.

8,334 citations
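As a sketch of the logrank comparison described above, assuming the third-party Python package lifelines and made-up survival times for two trial arms:

```python
# Illustrative sketch: the logrank test for comparing survival between two
# groups, as discussed above. Assumes `lifelines`; the data are made up.
from lifelines.statistics import logrank_test

durations_a = [6, 7, 10, 15, 19, 25]  # months to event or censoring, arm A
events_a    = [1, 0, 1, 1, 0, 1]      # 1 = event observed, 0 = censored
durations_b = [1, 2, 3, 4, 5, 8]      # arm B
events_b    = [1, 1, 1, 1, 0, 1]

result = logrank_test(durations_a, durations_b,
                      event_observed_A=events_a, event_observed_B=events_b)
print(result.test_statistic, result.p_value)
```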

Book
01 Jan 1984
TL;DR: In this article, the authors give a concise account of the analysis of survival data, focusing on new theory on the relationship between survival factors and identified explanatory variables and conclude with bibliographic notes and further results that can be used for student exercises.
Abstract: The objective of this book is to give a concise account of the analysis of survival data. The book is intended both for the applied statistician and for a wider statistical audience wanting an introduction to this field. Particular attention is paid to new theory on the relationship between survival factors and identified explanatory variables. Each chapter concludes with bibliographic notes and outline statements of further results that can be used for student exercises.

6,299 citations


Cited by
Journal ArticleDOI
TL;DR: In this article, a new estimate, the minimum information theoretical criterion (AIC) estimate (MAICE), is introduced for the purpose of statistical identification; it is free from the ambiguities inherent in the application of the conventional hypothesis testing procedure.
Abstract: The history of the development of statistical hypothesis testing in time series analysis is reviewed briefly and it is pointed out that the hypothesis testing procedure is not adequately defined as the procedure for statistical model identification. The classical maximum likelihood estimation procedure is reviewed and a new estimate, the minimum information theoretical criterion (AIC) estimate (MAICE), designed for the purpose of statistical identification, is introduced. When there are several competing models, the MAICE is defined by the model and the maximum likelihood estimates of the parameters which give the minimum of AIC, defined by AIC = (-2) log(maximum likelihood) + 2 (number of independently adjusted parameters within the model). MAICE provides a versatile procedure for statistical model identification which is free from the ambiguities inherent in the application of the conventional hypothesis testing procedure. The practical utility of MAICE in time series analysis is demonstrated with some numerical examples.

47,133 citations
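The AIC formula quoted in the abstract can be applied directly: fit each candidate model by maximum likelihood, evaluate AIC, and keep the minimizer (the MAICE). A hedged sketch, with synthetic data and Gaussian polynomial models chosen purely for illustration:

```python
# Illustrative sketch: model selection by minimum AIC, per the formula above.
# The data and the family of candidate models are made up.
import numpy as np

def aic(log_likelihood: float, n_params: int) -> float:
    # AIC = (-2) log(maximum likelihood) + 2 (independently adjusted parameters)
    return -2.0 * log_likelihood + 2.0 * n_params

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 50)
y = 1.0 + 2.0 * x + rng.normal(scale=0.1, size=x.size)  # truly linear

for degree in (1, 2, 3):
    coeffs = np.polyfit(x, y, degree)
    resid = y - np.polyval(coeffs, x)
    sigma2 = np.mean(resid**2)  # ML estimate of the error variance
    loglik = -0.5 * x.size * (np.log(2.0 * np.pi * sigma2) + 1.0)
    # degree+1 coefficients plus the variance are the adjusted parameters
    print(degree, aic(loglik, n_params=degree + 2))  # MAICE: the minimum
```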

Journal ArticleDOI
TL;DR: This work presents DESeq2, a method for differential analysis of count data, using shrinkage estimation for dispersions and fold changes to improve stability and interpretability of estimates, which enables a more quantitative analysis focused on the strength rather than the mere presence of differential expression.
Abstract: In comparative high-throughput sequencing assays, a fundamental task is the analysis of count data, such as read counts per gene in RNA-seq, for evidence of systematic changes across experimental conditions. Small replicate numbers, discreteness, large dynamic range and the presence of outliers require a suitable statistical approach. We present DESeq2, a method for differential analysis of count data, using shrinkage estimation for dispersions and fold changes to improve stability and interpretability of estimates. This enables a more quantitative analysis focused on the strength rather than the mere presence of differential expression. The DESeq2 package is available at http://www.bioconductor.org/packages/release/bioc/html/DESeq2.html.

47,038 citations
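As a toy illustration of the shrinkage idea mentioned in the abstract (not DESeq2's actual estimator): noisy per-gene log fold changes can be pulled toward zero in proportion to how imprecisely they are measured, which stabilizes effect-size estimates for low-information genes.

```python
# Toy sketch of shrinkage estimation for fold changes. This is NOT the
# DESeq2 algorithm; it only illustrates the principle with a normal prior.
import numpy as np

rng = np.random.default_rng(2)
true_lfc = rng.normal(scale=1.0, size=1000)      # per-gene true log2 effects
se = rng.uniform(0.2, 2.0, size=1000)            # per-gene standard errors
observed_lfc = true_lfc + rng.normal(scale=se)   # noisy raw estimates

tau2 = 1.0  # assumed prior variance of the true log fold changes
# Posterior mean under a N(0, tau2) prior: precisely measured genes keep
# their estimates, noisy genes are shrunk hard toward zero.
shrunk_lfc = observed_lfc * tau2 / (tau2 + se**2)
print(np.abs(shrunk_lfc).mean(), np.abs(observed_lfc).mean())
```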

Journal ArticleDOI
TL;DR: The method of classifying comorbidity provides a simple, readily applicable and valid method of estimating risk of death from comorbid disease for use in longitudinal studies, although further work in larger populations is still required to refine the approach.

39,961 citations
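The comorbidity classification described above amounts to a weighted sum: each qualifying condition carries an integer weight and the index is the total. A sketch with a subset of the published weights, shown here for illustration only; consult the paper for the full table.

```python
# Illustrative sketch: a weighted comorbidity index of the kind described
# above. Only a subset of the published condition weights is shown.
CHARLSON_WEIGHTS = {
    "myocardial_infarction": 1,
    "congestive_heart_failure": 1,
    "diabetes": 1,
    "hemiplegia": 2,
    "moderate_severe_renal_disease": 2,
    "moderate_severe_liver_disease": 3,
    "metastatic_solid_tumor": 6,
    "aids": 6,
}

def comorbidity_index(conditions: set[str]) -> int:
    """Sum the weights of a patient's recorded comorbid conditions."""
    return sum(CHARLSON_WEIGHTS.get(c, 0) for c in conditions)

print(comorbidity_index({"diabetes", "metastatic_solid_tumor"}))  # -> 7
```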

Journal ArticleDOI
TL;DR: In this review the usual methods applied in systematic reviews and meta-analyses are outlined, and the most common procedures for combining studies with binary outcomes are described, with illustrations of how they can be carried out using Stata commands.

31,656 citations
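One common procedure for combining studies with binary outcomes is fixed-effect (inverse-variance) pooling of log odds ratios. The review illustrates such procedures with Stata commands; the sketch below does the same calculation in Python with made-up 2x2 counts.

```python
# Illustrative sketch: fixed-effect inverse-variance pooling of odds ratios
# across studies with binary outcomes. The study counts are made up.
import numpy as np

# Each row: events_treated, n_treated, events_control, n_control
studies = np.array([
    [15, 100, 25, 100],
    [ 8,  60, 14,  62],
    [30, 200, 45, 210],
], dtype=float)

a, n1, c, n0 = studies.T
b, d = n1 - a, n0 - c
log_or = np.log((a * d) / (b * c))  # per-study log odds ratio
var = 1/a + 1/b + 1/c + 1/d         # its approximate variance
w = 1.0 / var                       # inverse-variance weights

pooled = np.sum(w * log_or) / np.sum(w)
se_pooled = np.sqrt(1.0 / np.sum(w))
low, high = pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled
print(f"pooled OR {np.exp(pooled):.2f}, "
      f"95% CI {np.exp(low):.2f} to {np.exp(high):.2f}")
```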

Book
01 Jan 2001
TL;DR: This is the essential companion to Jeffrey Wooldridge's widely-used graduate text Econometric Analysis of Cross Section and Panel Data (MIT Press, 2001).
Abstract: The second edition of this acclaimed graduate text provides a unified treatment of two methods used in contemporary econometric research, cross section and panel data methods. By focusing on assumptions that can be given behavioral content, the book maintains an appropriate level of rigor while emphasizing intuitive thinking. The analysis covers both linear and nonlinear models, including models with dynamics and/or individual heterogeneity. In addition to general estimation frameworks (in particular, method of moments and maximum likelihood), specific linear and nonlinear methods are covered in detail, including probit and logit models and their multivariate extensions, Tobit models, models for count data, censored and missing data schemes, causal (or treatment) effects, and duration analysis. Econometric Analysis of Cross Section and Panel Data was the first graduate econometrics text to focus on microeconomic data structures, allowing assumptions to be separated into population and sampling assumptions. This second edition has been substantially updated and revised. Improvements include a broader class of models for missing data problems; more detailed treatment of cluster problems, an important topic for empirical researchers; expanded discussion of "generalized instrumental variables" (GIV) estimation; new coverage (based on the author's own recent research) of inverse probability weighting; a more complete framework for estimating treatment effects with panel data; and a firmly established link between econometric approaches to nonlinear panel data and the "generalized estimating equation" literature popular in statistics and other fields. New attention is given to explaining when particular econometric methods can be applied; the goal is not only to tell readers what does work, but why certain "obvious" procedures do not. The numerous included exercises, both theoretical and computer-based, allow the reader to extend methods covered in the text and discover new insights.

28,298 citations