scispace - formally typeset
Author

Elizabeth Kemper

Other affiliations: Veterans Health Administration
Bio: Elizabeth Kemper is an academic researcher from Yale University. The author has contributed to research in the topics of sample size determination and logistic regression, has an h-index of 1, and has co-authored 1 publication receiving 5,523 citations. Previous affiliations of Elizabeth Kemper include the Veterans Health Administration.

Papers
Journal ArticleDOI
TL;DR: Findings indicate that low EPV can lead to major problems: regression coefficients were biased in both positive and negative directions, and paradoxical associations (significance in the wrong direction) increased.

6,490 citations


Cited by
Journal ArticleDOI
TL;DR: The propensity score is a balancing score: conditional on the propensity score, the distribution of observed baseline covariates is similar between treated and untreated subjects. Different causal average treatment effects and their relationship with propensity score analyses are also described.
Abstract: The propensity score is the probability of treatment assignment conditional on observed baseline characteristics. The propensity score allows one to design and analyze an observational (nonrandomized) study so that it mimics some of the particular characteristics of a randomized controlled trial. In particular, the propensity score is a balancing score: conditional on the propensity score, the distribution of observed baseline covariates will be similar between treated and untreated subjects. I describe 4 different propensity score methods: matching on the propensity score, stratification on the propensity score, inverse probability of treatment weighting using the propensity score, and covariate adjustment using the propensity score. I describe balance diagnostics for examining whether the propensity score model has been adequately specified. Furthermore, I discuss differences between regression-based methods and propensity score-based methods for the analysis of observational data. I describe different causal average treatment effects and their relationship with propensity score analyses.

7,895 citations
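The abstract above describes inverse probability of treatment weighting (IPTW) as one of four propensity score methods. A minimal sketch of the idea follows; the simulated data, the confounding structure, and the hand-rolled Newton-Raphson logistic fit are illustrative assumptions for this example, not material from the paper itself.

```python
import numpy as np

# Illustrative data: treatment assignment and outcome both depend on a
# baseline covariate x, so the naive treated-vs-untreated comparison is
# confounded. The true treatment effect is set to 2.0.
rng = np.random.default_rng(0)
n = 2000
x = rng.normal(size=n)                              # baseline covariate
treat = rng.binomial(1, 1 / (1 + np.exp(-x)))       # P(treatment) rises with x
y = 2.0 * treat + x + rng.normal(size=n)            # outcome, true effect 2.0

# Propensity model P(treat | x): logistic regression fit by Newton-Raphson.
X = np.column_stack([np.ones(n), x])
beta = np.zeros(2)
for _ in range(25):
    p = 1 / (1 + np.exp(-X @ beta))
    grad = X.T @ (treat - p)
    hess = X.T @ (X * (p * (1 - p))[:, None])
    beta += np.linalg.solve(hess, grad)
ps = 1 / (1 + np.exp(-X @ beta))                    # propensity score in (0, 1)

# IPTW: weight treated subjects by 1/ps and untreated by 1/(1 - ps), then
# compare weighted means; this removes the confounding by x.
w = np.where(treat == 1, 1 / ps, 1 / (1 - ps))
ate = (np.average(y[treat == 1], weights=w[treat == 1])
       - np.average(y[treat == 0], weights=w[treat == 0]))
naive = y[treat == 1].mean() - y[treat == 0].mean()  # biased upward by x
```

Here `ate` lands near the true effect of 2.0, while the unweighted `naive` contrast overstates it, which is the balancing property the abstract describes.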

BookDOI
01 Jan 2006
TL;DR: Regression models are frequently used to develop diagnostic, prognostic, and health resource utilization models in clinical, health services, outcomes, pharmacoeconomic, and epidemiologic research, and in a multitude of non-health-related areas.
Abstract: Regression models are frequently used to develop diagnostic, prognostic, and health resource utilization models in clinical, health services, outcomes, pharmacoeconomic, and epidemiologic research, and in a multitude of non-health-related areas. Regression models are also used to adjust for patient heterogeneity in randomized clinical trials, to obtain tests that are more powerful and valid than unadjusted treatment comparisons.

4,211 citations

Journal ArticleDOI
TL;DR: In virtually all medical domains, diagnostic and prognostic multivariable prediction models are being developed, validated, updated, and implemented with the aim of assisting doctors and individuals in estimating probabilities and potentially influencing their decision making.
Abstract: The TRIPOD (Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis) Statement includes a 22-item checklist, which aims to improve the reporting of studies developing, validating, or updating a prediction model, whether for diagnostic or prognostic purposes. The TRIPOD Statement aims to improve the transparency of the reporting of a prediction model study regardless of the study methods used. This explanation and elaboration document describes the rationale; clarifies the meaning of each item; and discusses why transparent reporting is important, with a view to assessing risk of bias and clinical usefulness of the prediction model. Each checklist item of the TRIPOD Statement is explained in detail and accompanied by published examples of good reporting. The document also provides a valuable reference of issues to consider when designing, conducting, and analyzing prediction model studies. To aid the editorial process and help peer reviewers and, ultimately, readers and systematic reviewers of prediction model studies, it is recommended that authors include a completed checklist in their submission. The TRIPOD checklist can also be downloaded from www.tripod-statement.org.

2,982 citations

Journal ArticleDOI
TL;DR: A large simulation study of other influences on confidence interval coverage, type I error, relative bias, and other model performance measures found a range of circumstances in which coverage and bias were within acceptable levels despite less than 10 EPV.
Abstract: The rule of thumb that logistic and Cox models should be used with a minimum of 10 outcome events per predictor variable (EPV), based on two simulation studies, may be too conservative. The authors conducted a large simulation study of other influences on confidence interval coverage, type I error, relative bias, and other model performance measures. They found a range of circumstances in which coverage and bias were within acceptable levels despite less than 10 EPV, as well as other factors that were as influential as or more influential than EPV. They conclude that this rule can be relaxed, in particular for sensitivity analyses undertaken to demonstrate adequate control of confounding.

2,943 citations
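The paper above debates the "10 events per variable" rule of thumb for logistic and Cox models. The calculation behind the rule is a one-liner; the helper function name below is illustrative, not from either paper.

```python
def events_per_variable(n_events: int, n_predictors: int) -> float:
    """EPV = number of outcome events / number of candidate predictor variables."""
    return n_events / n_predictors

# The classic rule of thumb requires EPV >= 10; Vittinghoff and McCulloch
# argue it can be relaxed in some circumstances, e.g. sensitivity analyses.
epv = events_per_variable(n_events=120, n_predictors=8)
print(epv)  # 15.0, satisfying the rule of 10
```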

Journal ArticleDOI
TL;DR: The examples considered in this paper show the tension between the scientific rationale for using meta-regression and the difficult interpretative problems to which such analyses are prone.
Abstract: Appropriate methods for meta-regression applied to a set of clinical trials, and the limitations and pitfalls in interpretation, are insufficiently recognized. Here we summarize recent research focusing on these issues, and consider three published examples of meta-regression in the light of this work. One principal methodological issue is that meta-regression should be weighted to take account of both within-trial variances of treatment effects and the residual between-trial heterogeneity (that is, heterogeneity not explained by the covariates in the regression). This corresponds to random effects meta-regression. The associations derived from meta-regressions are observational, and have a weaker interpretation than the causal relationships derived from randomized comparisons. This applies particularly when averages of patient characteristics in each trial are used as covariates in the regression. Data dredging is the main pitfall in reaching reliable conclusions from meta-regression. It can only be avoided by prespecification of covariates that will be investigated as potential sources of heterogeneity. However, in practice this is not always easy to achieve. The examples considered in this paper show the tension between the scientific rationale for using meta-regression and the difficult interpretative problems to which such analyses are prone. Copyright © 2002 John Wiley & Sons, Ltd.

2,486 citations
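The abstract above says random effects meta-regression should weight each trial by both its within-trial variance and the residual between-trial heterogeneity. A minimal sketch of that weighting, as weighted least squares: the per-trial effects, variances, and the fixed tau-squared value are illustrative assumptions (in practice tau-squared is estimated, e.g. by REML or the method of moments).

```python
import numpy as np

# Illustrative trial-level data: effect estimates, within-trial variances,
# and one trial-level covariate (e.g. average baseline risk).
effect = np.array([0.10, 0.25, 0.40, 0.55, 0.70])      # per-trial treatment effects
var_within = np.array([0.02, 0.04, 0.01, 0.05, 0.03])  # within-trial variances
covariate = np.array([0.0, 1.0, 2.0, 3.0, 4.0])        # trial-level covariate

# Random effects weight: 1 / (within-trial variance + between-trial tau^2).
tau2 = 0.02                                            # assumed, normally estimated
w = 1.0 / (var_within + tau2)

# Weighted least squares fit of effect_i = a + b * covariate_i.
X = np.column_stack([np.ones_like(covariate), covariate])
W = np.diag(w)
a, b = np.linalg.solve(X.T @ W @ X, X.T @ W @ effect)
# For this exactly linear toy data, a is ~0.10 and b is ~0.15.
```

Setting `tau2 = 0` would collapse this to fixed-effect weighting; the abstract's point is that ignoring the between-trial component understates uncertainty when heterogeneity remains.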