Author

John D. Kalbfleisch

Bio: John D. Kalbfleisch is an academic researcher at the University of Michigan. His research centers on topics such as estimators and covariates. He has an h-index of 54 and has co-authored 172 publications receiving 29,190 citations. Previous affiliations include Dalhousie University and the University of Washington.


Papers
Journal ArticleDOI
TL;DR: It is shown that a simple index using readily available laboratory results can identify CHC patients with significant fibrosis and cirrhosis with a high degree of accuracy, and its use may decrease the need for staging liver biopsy among patients with CHC.

3,637 citations
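The summary does not name the index. As an illustration of the kind of laboratory-based index described, the sketch below computes an AST-to-platelet ratio; the formula and variable names here are assumptions for illustration, not taken from the paper itself.

```python
def ast_platelet_index(ast_iu_l, ast_upper_limit_iu_l, platelets_1e9_l):
    """Illustrative AST-to-platelet ratio index (formula assumed, not
    quoted from the source): AST expressed as a multiple of its upper
    limit of normal, scaled by 100 and divided by the platelet count
    in 10^9/L. Higher values suggest more advanced fibrosis."""
    return (ast_iu_l / ast_upper_limit_iu_l) * 100 / platelets_1e9_l

# Example: AST of 80 IU/L with an upper limit of normal of 40 IU/L
# and a platelet count of 100 x 10^9/L gives (2 x 100) / 100 = 2.0.
score = ast_platelet_index(80, 40, 100)
```

In practice such an index is compared against study-specific cutoffs to classify patients as low or high risk for significant fibrosis.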

Journal ArticleDOI
TL;DR: A comprehensive text on the statistical analysis of failure time data, developing failure time models, parametric inference, relative risk (Cox) regression, counting-process asymptotic theory, competing risks, recurrent events, and correlated failure times.
Abstract (table of contents): Preface.
1. Introduction: 1.1 Failure Time Data. 1.2 Failure Time Distributions. 1.3 Time Origins, Censoring, and Truncation. 1.4 Estimation of the Survivor Function. 1.5 Comparison of Survival Curves. 1.6 Generalizations to Accommodate Delayed Entry. 1.7 Counting Process Notation.
2. Failure Time Models: 2.1 Introduction. 2.2 Some Continuous Parametric Failure Time Models. 2.3 Regression Models. 2.4 Discrete Failure Time Models.
3. Inference in Parametric Models and Related Topics: 3.1 Introduction. 3.2 Censoring Mechanisms. 3.3 Censored Samples from an Exponential Distribution. 3.4 Large-Sample Likelihood Theory. 3.5 Exponential Regression. 3.6 Estimation in Log-Linear Regression Models. 3.7 Illustrations in More Complex Data Sets. 3.8 Discrimination Among Parametric Models. 3.9 Inference with Interval Censoring. 3.10 Discussion.
4. Relative Risk (Cox) Regression Models: 4.1 Introduction. 4.2 Estimation of beta. 4.3 Estimation of the Baseline Hazard or Survivor Function. 4.4 Inclusion of Strata. 4.5 Illustrations. 4.6 Counting Process Formulas. 4.7 Related Topics on the Cox Model. 4.8 Sampling from Discrete Models.
5. Counting Processes and Asymptotic Theory: 5.1 Introduction. 5.2 Counting Processes and Intensity Functions. 5.3 Martingales. 5.4 Vector-Valued Martingales. 5.5 Martingale Central Limit Theorem. 5.6 Asymptotics Associated with Chapter 1. 5.7 Asymptotic Results for the Cox Model. 5.8 Asymptotic Results for Parametric Models. 5.9 Efficiency of the Cox Model Estimator. 5.10 Partial Likelihood Filtration.
6. Likelihood Construction and Further Results: 6.1 Introduction. 6.2 Likelihood Construction in Parametric Models. 6.3 Time-Dependent Covariates and Further Remarks on Likelihood Construction. 6.4 Time Dependence in the Relative Risk Model. 6.5 Nonnested Conditioning Events. 6.6 Residuals and Model Checking for the Cox Model.
7. Rank Regression and the Accelerated Failure Time Model: 7.1 Introduction. 7.2 Linear Rank Tests. 7.3 Development and Properties of Linear Rank Tests. 7.4 Estimation in the Accelerated Failure Time Model. 7.5 Some Related Regression Models.
8. Competing Risks and Multistate Models: 8.1 Introduction. 8.2 Competing Risks. 8.3 Life-History Processes.
9. Modeling and Analysis of Recurrent Event Data: 9.1 Introduction. 9.2 Intensity Processes for Recurrent Events. 9.3 Overall Intensity Process Modeling and Estimation. 9.4 Mean Process Modeling and Estimation. 9.5 Conditioning on Aspects of the Counting Process History.
10. Analysis of Correlated Failure Time Data: 10.1 Introduction. 10.2 Regression Models for Correlated Failure Time Data. 10.3 Representation and Estimation of the Bivariate Survivor Function. 10.4 Pairwise Dependency Estimation. 10.5 Illustration: Australian Twin Data. 10.6 Approaches to Nonparametric Estimation of the Bivariate Survivor Function. 10.7 Survivor Function Estimation in Higher Dimensions.
11. Additional Failure Time Data Topics: 11.1 Introduction. 11.2 Stratified Bivariate Failure Time Analysis. 11.3 Fixed Study Period Survival Studies. 11.4 Cohort Sampling and Case-Control Studies. 11.5 Missing Covariate Data. 11.6 Mismeasured Covariate Data. 11.7 Sequential Testing with Failure Time Endpoints. 11.8 Bayesian Analysis of the Proportional Hazards Model. 11.9 Some Analyses of a Particular Data Set.
Each chapter closes with Bibliographic Notes and Exercises and Complements.
Back matter: Glossary of Notation. Appendix A: Some Sets of Data. Appendix B: Supporting Technical Material. Bibliography. Author Index. Subject Index.

3,596 citations

Journal ArticleDOI
TL;DR: A review of the monograph Statistical Inference Under Order Restrictions, which treats estimation and hypothesis testing when parameters are subject to order constraints.
Abstract: (1975). Statistical Inference Under Order Restrictions. Technometrics: Vol. 17, No. 1, pp. 139-140.

1,622 citations

Journal ArticleDOI
TL;DR: It is argued that the problem of estimation of failure rates under the removal of certain causes is not well posed until a mechanism for cause removal is specified, and a method involving the estimation of parameters that relate time-dependent risk indicators for some causes to cause-specific hazard functions for other causes is proposed for the study of interrelations among failure types.
Abstract: Distinct problems in the analysis of failure times with competing causes of failure include the estimation of treatment or exposure effects on specific failure types, the study of interrelations among failure types, and the estimation of failure rates for some causes given the removal of certain other failure types. The usual formulation of these problems is in terms of conceptual or latent failure times for each failure type. This approach is criticized on the basis of unwarranted assumptions, lack of physical interpretation and identifiability problems. An alternative approach utilizing cause-specific hazard functions for observable quantities, including time-dependent covariates, is proposed. Cause-specific hazard functions are shown to be the basic estimable quantities in the competing risks framework. A method, involving the estimation of parameters that relate time-dependent risk indicators for some causes to cause-specific hazard functions for other causes, is proposed for the study of interrelations among failure types. Further, it is argued that the problem of estimation of failure rates under the removal of certain causes is not well posed until a mechanism for cause removal is specified. Following such a specification, one will sometimes be in a position to make sensible extrapolations from available data to situations involving cause removal. A clinical program in bone marrow transplantation for leukemia provides a setting for discussion and illustration of each of these ideas. Failure due to censoring in a survivorship study leads to further discussion.

1,429 citations
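The abstract above argues that cause-specific hazard functions are the basic estimable quantities under competing risks. A minimal nonparametric sketch of how such hazards can be estimated from right-censored data (the data layout and function name here are illustrative assumptions, not from the paper):

```python
def cause_specific_hazards(data, cause):
    """Estimate the cause-specific hazard increments d_jk / n_j for a
    given cause from (time, cause) pairs, where cause 0 denotes a
    censored observation. Returns {event_time: hazard increment}:
    failures of type `cause` at t, divided by the number at risk
    just before t."""
    times = sorted({t for t, c in data if c == cause})
    hazards = {}
    for t in times:
        at_risk = sum(1 for ti, _ in data if ti >= t)
        events = sum(1 for ti, ci in data if ti == t and ci == cause)
        hazards[t] = events / at_risk
    return hazards

# Example: six subjects; causes 1 and 2 compete, cause 0 is censoring.
# At t = 2 all six are at risk and one type-1 failure occurs: 1/6.
sample = [(2, 1), (3, 2), (3, 0), (5, 1), (6, 0), (7, 2)]
h1 = cause_specific_hazards(sample, cause=1)
```

Because the increments condition only on being at risk, they are estimable directly from the observed data, without reference to latent failure times.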


Cited by
Journal ArticleDOI
TL;DR: The method of classifying comorbidity provides a simple, readily applicable and valid method of estimating risk of death fromComorbid disease for use in longitudinal studies and further work in larger populations is still required to refine the approach.

39,961 citations

Book
01 Jan 2001
TL;DR: This is the essential companion to Jeffrey Wooldridge's widely-used graduate text Econometric Analysis of Cross Section and Panel Data (MIT Press, 2001).
Abstract: The second edition of this acclaimed graduate text provides a unified treatment of two methods used in contemporary econometric research, cross section and panel data methods. By focusing on assumptions that can be given behavioral content, the book maintains an appropriate level of rigor while emphasizing intuitive thinking. The analysis covers both linear and nonlinear models, including models with dynamics and/or individual heterogeneity. In addition to general estimation frameworks (in particular, method of moments and maximum likelihood), specific linear and nonlinear methods are covered in detail, including probit and logit models and their multivariate extensions, Tobit models, models for count data, censored and missing data schemes, causal (or treatment) effects, and duration analysis. Econometric Analysis of Cross Section and Panel Data was the first graduate econometrics text to focus on microeconomic data structures, allowing assumptions to be separated into population and sampling assumptions. This second edition has been substantially updated and revised. Improvements include a broader class of models for missing data problems; more detailed treatment of cluster problems, an important topic for empirical researchers; expanded discussion of "generalized instrumental variables" (GIV) estimation; new coverage (based on the author's own recent research) of inverse probability weighting; a more complete framework for estimating treatment effects with panel data; and a firmly established link between econometric approaches to nonlinear panel data and the "generalized estimating equation" literature popular in statistics and other fields. New attention is given to explaining when particular econometric methods can be applied; the goal is not only to tell readers what does work, but why certain "obvious" procedures do not.
The numerous included exercises, both theoretical and computer-based, allow the reader to extend methods covered in the text and discover new insights.

28,298 citations

Book ChapterDOI
TL;DR: The analysis of censored failure times is considered in this paper, where the hazard function is taken to be a function of the explanatory variables and unknown regression coefficients multiplied by an arbitrary and unknown function of time.
Abstract: The analysis of censored failure times is considered. It is assumed that on each individual are available values of one or more explanatory variables. The hazard function (age-specific failure rate) is taken to be a function of the explanatory variables and unknown regression coefficients multiplied by an arbitrary and unknown function of time. A conditional likelihood is obtained, leading to inferences about the unknown regression coefficients. Some generalizations are outlined.

28,264 citations
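The conditional likelihood described in the abstract above can be sketched numerically. A minimal version for a single covariate, assuming no tied failure times (function and variable names are illustrative, not from the paper):

```python
import math

def neg_log_partial_likelihood(beta, times, events, x):
    """Negative log of the Cox-style conditional (partial) likelihood
    for hazard lambda_0(t) * exp(beta * x), with a single covariate x.
    `events[i]` is 1 for an observed failure, 0 for censoring.
    Assumes no tied failure times."""
    ll = 0.0
    for i in range(len(times)):
        if events[i] == 1:
            # Risk set: subjects still under observation at times[i].
            risk = [j for j in range(len(times)) if times[j] >= times[i]]
            denom = sum(math.exp(beta * x[j]) for j in risk)
            ll += beta * x[i] - math.log(denom)
    return -ll

# Example: three subjects, all failing, covariate zero for everyone.
# At beta = 0 each failure contributes -log(risk-set size), so the
# negative log partial likelihood is log 3 + log 2 + log 1 = log 6.
val = neg_log_partial_likelihood(0.0, [1.0, 2.0, 3.0], [1, 1, 1], [0.0, 0.0, 0.0])
```

The key point of the construction is that the unknown baseline function of time cancels from each ratio, so inference about beta needs no parametric assumption about it; in practice one would minimize this function over beta with a standard optimizer.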

Journal Article
TL;DR: This book by a teacher of statistics (as well as a consultant for "experimenters") is a comprehensive study of the philosophical background for the statistical design of experiments.
Abstract: THE DESIGN AND ANALYSIS OF EXPERIMENTS. By Oscar Kempthorne. New York, John Wiley and Sons, Inc., 1952. 631 pp. $8.50. This book by a teacher of statistics (as well as a consultant for "experimenters") is a comprehensive study of the philosophical background for the statistical design of experiment. It is necessary to have some facility with algebraic notation and manipulation to be able to use the volume intelligently. The problems are presented from the theoretical point of view, without such practical examples as would be helpful for those not acquainted with mathematics. The mathematical justification for the techniques is given. As a somewhat advanced treatment of the design and analysis of experiments, this volume will be interesting and helpful for many who approach statistics theoretically as well as practically. With emphasis on the "why," and with description given broadly, the author relates the subject matter to the general theory of statistics and to the general problem of experimental inference. MARGARET J. ROBERTSON

13,333 citations

Journal ArticleDOI
TL;DR: Previous methods combine estimates of the cause-specific hazard functions under the proportional hazards formulation but do not allow the analyst to directly assess the effect of a covariate on the marginal probability function; this article proposes a model that permits such direct assessment.
Abstract: With explanatory covariates, the standard analysis for competing risks data involves modeling the cause-specific hazard functions via a proportional hazards assumption. Unfortunately, the cause-specific hazard function does not have a direct interpretation in terms of survival probabilities for the particular failure type. In recent years many clinicians have begun using the cumulative incidence function, the marginal failure probabilities for a particular cause, which is intuitively appealing and more easily explained to the nonstatistician. The cumulative incidence is especially relevant in cost-effectiveness analyses in which the survival probabilities are needed to determine treatment utility. Previously, authors have considered methods for combining estimates of the cause-specific hazard functions under the proportional hazards formulation. However, these methods do not allow the analyst to directly assess the effect of a covariate on the marginal probability function. In this article we propose…

11,109 citations
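The cumulative incidence function discussed in the abstract above can be estimated nonparametrically by accumulating cause-specific hazard increments weighted by overall survival. A minimal Aalen-Johansen-style sketch, illustrative only and not the regression approach the article itself develops:

```python
def cumulative_incidence(data, cause):
    """Estimate the cumulative incidence function F_k(t) for `cause`
    from (time, cause) pairs, cause 0 denoting censoring. At each
    distinct event time t_j the increment is S(t_j-) * d_jk / n_j,
    where S is the overall (all-cause) survival estimate."""
    event_times = sorted({t for t, c in data if c != 0})
    surv = 1.0       # overall survival just before the current time
    total = 0.0      # accumulated incidence for the chosen cause
    cif = {}
    for t in event_times:
        n = sum(1 for ti, _ in data if ti >= t)            # at risk
        d_all = sum(1 for ti, ci in data if ti == t and ci != 0)
        d_k = sum(1 for ti, ci in data if ti == t and ci == cause)
        total += surv * d_k / n
        surv *= 1 - d_all / n
        cif[t] = total
    return cif

# Example with two competing causes and no censoring: the incidences
# of all causes at the last event time sum to one.
sample = [(1, 1), (2, 2), (3, 1)]
cif1 = cumulative_incidence(sample, 1)
```

Unlike one minus a cause-specific Kaplan-Meier curve, this estimate properly treats failures from the competing cause as removing subjects from risk, so the cause-specific incidences are sub-distribution functions that sum to the all-cause failure probability.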