Author

Thomas A. Gerds

Bio: Thomas A. Gerds is an academic researcher from the University of Copenhagen. The author has contributed to research in the topics of Medicine and Population, has an h-index of 53, and has co-authored 291 publications receiving 11,637 citations. Previous affiliations of Thomas A. Gerds include the National Heart Foundation of Australia and Copenhagen University Hospital.


Papers
Journal ArticleDOI
TL;DR: Reporting discrimination and calibration will always be important for a prediction model, and decision-analytic measures should be reported if the model is to be used for clinical decisions.
Abstract: The performance of prediction models can be assessed using a variety of methods and metrics. Traditional measures for binary and survival outcomes include the Brier score to indicate overall model performance, the concordance (or c) statistic for discriminative ability (or area under the receiver operating characteristic [ROC] curve), and goodness-of-fit statistics for calibration. Several new measures have recently been proposed that can be seen as refinements of discrimination measures, including variants of the c statistic for survival, reclassification tables, net reclassification improvement (NRI), and integrated discrimination improvement (IDI). Moreover, decision-analytic measures have been proposed, including decision curves to plot the net benefit achieved by making decisions based on model predictions. We aimed to define the role of these relatively novel approaches in the evaluation of the performance of prediction models. For illustration, we present a case study of predicting the presence of residual tumor versus benign tissue in patients with testicular cancer (n = 544 for model development, n = 273 for external validation). We suggest that reporting discrimination and calibration will always be important for a prediction model. Decision-analytic measures should be reported if the predictive model is to be used for clinical decisions. Other measures of performance may be warranted in specific applications, such as reclassification metrics to gain insight into the value of adding a novel predictor to an established model.
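To make the measures named above concrete, here is a minimal, hedged Python sketch (not taken from the paper; the data are simulated and the 0.2 decision threshold is arbitrary) that computes the Brier score, the c statistic (AUC), a simple calibration slope, and the net benefit at a single threshold, i.e. one point on a decision curve.

```python
# Hedged sketch of common performance measures for a binary prediction model.
# All data are simulated for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import brier_score_loss, roc_auc_score

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)
p_true = 1.0 / (1.0 + np.exp(-(0.8 * x - 0.5)))          # true event probabilities
y = rng.binomial(1, p_true)                                # observed binary outcome
p_hat = np.clip(p_true + rng.normal(0, 0.05, n), 1e-4, 1 - 1e-4)  # "model" output

# Overall performance: Brier score (mean squared error of the probabilities).
brier = brier_score_loss(y, p_hat)

# Discrimination: concordance (c) statistic = area under the ROC curve.
c_stat = roc_auc_score(y, p_hat)

# Calibration: slope of a logistic regression of the outcome on the
# log-odds of the predictions (ideal value 1); large C disables shrinkage.
logit = np.log(p_hat / (1 - p_hat)).reshape(-1, 1)
cal_slope = LogisticRegression(C=1e6).fit(logit, y).coef_[0, 0]

# Decision-analytic measure: net benefit at one threshold pt,
# i.e. a single point on a decision curve.
pt = 0.2
treat = p_hat >= pt
net_benefit = np.mean(treat & (y == 1)) - np.mean(treat & (y == 0)) * pt / (1 - pt)

print(f"Brier={brier:.3f}  c={c_stat:.3f}  cal.slope={cal_slope:.2f}  "
      f"net benefit@{pt}={net_benefit:.3f}")
```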

3,473 citations

Journal ArticleDOI
TL;DR: A modified version of this estimator, derived on the basis of regression models for the censoring distribution, is shown to be consistent even when censoring and event times are only conditionally independent given the covariates.
Abstract: In survival analysis with censored data the mean squared error of prediction can be estimated by weighted averages of time-dependent residuals. Graf et al. (1999) suggested a robust weighting scheme based on the assumption that the censoring mechanism is independent of the covariates. We show consistency of the estimator. Furthermore, we show that a modified version of this estimator is consistent even when censoring and event times are only conditionally independent given the covariates. The modified estimators are derived on the basis of regression models for the censoring distribution. A simulation study and a real data example illustrate the results.
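As a concrete illustration of the weighting scheme (a sketch under the marginal-independence assumption of Graf et al., not the authors' covariate-adjusted estimator and not their code), the following Python snippet computes the inverse-probability-of-censoring-weighted Brier score at a fixed horizon t, estimating the censoring distribution G with a hand-rolled Kaplan-Meier estimator. The modification studied in the paper would replace this G with a regression model for the censoring times given the covariates.

```python
# Hedged sketch of the IPCW Brier score at horizon t (marginal weights).
import numpy as np

def km_survival(times, events, eval_times):
    """Kaplan-Meier survival estimate evaluated at eval_times.
    events == 1 marks the endpoint whose survival curve is estimated."""
    times = np.asarray(times, dtype=float)
    events = np.asarray(events, dtype=int)
    steps_t, steps_s, s = [], [], 1.0
    for u in np.unique(times[events == 1]):
        d = np.sum((times == u) & (events == 1))   # events at time u
        r = np.sum(times >= u)                     # at risk just before u
        s *= 1.0 - d / r
        steps_t.append(u)
        steps_s.append(s)
    steps_t, steps_s = np.array(steps_t), np.array(steps_s)
    out = np.ones(len(eval_times))
    for i, et in enumerate(np.asarray(eval_times, dtype=float)):
        idx = np.searchsorted(steps_t, et, side="right") - 1
        if idx >= 0:
            out[i] = steps_s[idx]
    return out

def ipcw_brier(time, event, pred_surv_t, t):
    """IPCW Brier score at horizon t; pred_surv_t[i] is the model's
    predicted P(T_i > t | x_i)."""
    time, event = np.asarray(time, float), np.asarray(event, int)
    pred_surv_t = np.asarray(pred_surv_t, float)
    # Censoring survival G: censorings (event == 0) are the "events" here.
    G_at_Ti = km_survival(time, 1 - event, time - 1e-8)   # G(T_i-)
    G_at_t = km_survival(time, 1 - event, [t])[0]         # G(t)
    had_event = (time <= t) & (event == 1)
    at_risk = time > t
    w_event = np.where(had_event, 1.0 / np.clip(G_at_Ti, 1e-12, None), 0.0)
    w_risk = np.where(at_risk, 1.0 / max(G_at_t, 1e-12), 0.0)
    return np.mean(w_event * pred_surv_t**2 + w_risk * (1.0 - pred_surv_t)**2)

# Tiny illustrative run on simulated data (a constant "model" for S(5)).
rng = np.random.default_rng(1)
T, C = rng.exponential(10, 200), rng.exponential(15, 200)
time, event = np.minimum(T, C), (T <= C).astype(int)
print(ipcw_brier(time, event, np.full(200, np.exp(-5 / 10)), t=5.0))
```

Evaluating this score over a grid of horizons gives the prediction error curves computed by the pec package described in the next entry.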

368 citations

Journal ArticleDOI
TL;DR: The R package pec is surveyed, it is shown how the functionality of pec can be extended to yet unsupported prediction models, and support is implemented for random forest prediction models based on the R packages randomSurvivalForest and party.
Abstract: Prediction error curves are increasingly used to assess and compare predictions in survival analysis. This article surveys the R package pec, which provides a set of functions for efficient computation of prediction error curves. The software implements inverse probability of censoring weights to deal with right-censored data and several variants of cross-validation to deal with the apparent error problem. In principle, all kinds of prediction models can be assessed, and the package readily supports most traditional regression modeling strategies, like Cox regression or additive hazard regression, as well as state-of-the-art machine learning methods such as random forests, a nonparametric method that provides promising alternatives to traditional strategies in low- and high-dimensional settings. We show how the functionality of pec can be extended to yet unsupported prediction models. As an example, we implement support for random forest prediction models based on the R packages randomSurvivalForest and party. Using data from the Copenhagen Stroke Study, we use pec to compare random forests to a Cox regression model derived from stepwise variable selection. Reproducible results on the user level are given for publicly available data from the German breast cancer study group.
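The extensibility described above can be pictured, in Python terms, as a plug-in interface: any object that can return predicted survival probabilities on a time grid can be fed into a generic scoring loop. The sketch below is a hypothetical analogue of that idea (the class and function names are mine, not pec's R API); brier_fn is meant to be an IPCW Brier score such as the ipcw_brier sketch shown earlier.

```python
# Hypothetical Python analogue of pec's plug-in idea (names invented here):
# any model exposing predict_surv_prob(X, times) can be scored the same way.
from typing import Protocol
import numpy as np

class SurvivalModel(Protocol):
    def predict_surv_prob(self, X, times):
        """Return an array of shape (n_subjects, n_times) with P(T > t | x)."""
        ...

class ExponentialBaseline:
    """Toy 'unsupported' model: a constant-hazard fit that ignores covariates."""
    def fit(self, time, event):
        self.rate_ = np.sum(event) / np.sum(time)   # MLE of a constant hazard
        return self

    def predict_surv_prob(self, X, times):
        s = np.exp(-self.rate_ * np.asarray(times, dtype=float))
        return np.tile(s, (len(X), 1))

def prediction_error_curve(model, X, time, event, times, brier_fn):
    """Evaluate a Brier-type score on a grid of horizons for any model
    that follows the SurvivalModel protocol; brier_fn supplies the
    time-dependent score (e.g. an IPCW Brier score)."""
    S = model.predict_surv_prob(X, times)
    return np.array([brier_fn(time, event, S[:, j], t)
                     for j, t in enumerate(times)])
```

Wrapping a new model so that it answers one such prediction query is all that is needed to score it, which mirrors how the paper adds support for the randomSurvivalForest and party models.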

365 citations

Journal ArticleDOI
19 Jun 2020 - JAMA
TL;DR: Prior use of ACEI/ARBs was not significantly associated with COVID-19 diagnosis among patients with hypertension, or with mortality or severe disease among patients diagnosed as having COVID-19; these findings do not support discontinuation of ACEI/ARB medications that are clinically indicated in the context of the COVID-19 pandemic.
Abstract: Importance: It has been hypothesized that angiotensin-converting enzyme inhibitors (ACEIs)/angiotensin receptor blockers (ARBs) may make patients more susceptible to coronavirus disease 2019 (COVID-19) and to worse outcomes through upregulation of the functional receptor of the virus, angiotensin-converting enzyme 2. Objective: To examine whether use of ACEI/ARBs was associated with COVID-19 diagnosis and worse outcomes in patients with COVID-19. Design, setting, and participants: To examine outcomes among patients with COVID-19, a retrospective cohort study using data from Danish national administrative registries was conducted. Patients with COVID-19 from February 22 to May 4, 2020, were identified using ICD-10 codes and followed up from day of diagnosis to outcome or end of study period (May 4, 2020). To examine susceptibility to COVID-19, a Cox regression model with a nested case-control framework was used to examine the association between use of ACEI/ARBs vs other antihypertensive drugs and the incidence rate of a COVID-19 diagnosis in a cohort of patients with hypertension from February 1 to May 4, 2020. Exposures: ACEI/ARB use was defined as prescription fillings 6 months prior to the index date. Main outcomes and measures: In the retrospective cohort study, the primary outcome was death, and a secondary outcome was a composite outcome of death or severe COVID-19. In the nested case-control susceptibility analysis, the outcome was COVID-19 diagnosis. Results: In the retrospective cohort study, 4480 patients with COVID-19 were included (median age, 54.7 years [interquartile range, 40.9-72.0]; 47.9% men). There were 895 users (20.0%) of ACEI/ARBs and 3585 nonusers (80.0%). In the ACEI/ARB group, 18.1% died within 30 days vs 7.3% in the nonuser group, but this association was not significant after adjustment for age, sex, and medical history (adjusted hazard ratio [HR], 0.83 [95% CI, 0.67-1.03]). Death or severe COVID-19 occurred in 31.9% of ACEI/ARB users vs 14.2% of nonusers by 30 days (adjusted HR, 1.04 [95% CI, 0.89-1.23]). In the nested case-control analysis of COVID-19 susceptibility, 571 patients with COVID-19 and prior hypertension (median age, 73.9 years; 54.3% men) were compared with 5710 age- and sex-matched controls with prior hypertension but not COVID-19. Among those with COVID-19, 86.5% used ACEI/ARBs vs 85.4% of controls; ACEI/ARB use compared with other antihypertensive drugs was not significantly associated with higher incidence of COVID-19 (adjusted HR, 1.05 [95% CI, 0.80-1.36]). Conclusions and relevance: Prior use of ACEI/ARBs was not significantly associated with COVID-19 diagnosis among patients with hypertension or with mortality or severe disease among patients diagnosed as having COVID-19. These findings do not support discontinuation of ACEI/ARB medications that are clinically indicated in the context of the COVID-19 pandemic.

325 citations

Journal ArticleDOI
TL;DR: It is found that bystander CPR and defibrillation were associated with risks of brain damage or nursing home admission and of death from any cause that were significantly lower than those associated with no bystander resuscitation.
Abstract: Background: The effect of bystander interventions on long-term functional outcomes among survivors of out-of-hospital cardiac arrest has not been extensively studied. Methods: We linked nationwide data on out-of-hospital cardiac arrests in Denmark to functional outcome data and reported the 1-year risks of anoxic brain damage or nursing home admission and of death from any cause among patients who survived to day 30 after an out-of-hospital cardiac arrest. We analyzed risks according to whether bystander cardiopulmonary resuscitation (CPR) or defibrillation was performed and evaluated temporal changes in bystander interventions and outcomes. Results: Among the 2855 patients who were 30-day survivors of an out-of-hospital cardiac arrest during the period from 2001 through 2012, a total of 10.5% had brain damage or were admitted to a nursing home and 9.7% died during the 1-year follow-up period. During the study period, among the 2084 patients who had cardiac arrests that were not witnessed by emergency medical s...

251 citations


Cited by
Journal ArticleDOI
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine Learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories. First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules. Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, hand-writing recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs. Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules. Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up-to-date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically. Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).

13,246 citations

Christopher M. Bishop
01 Jan 2006
TL;DR: Probability distributions and linear models for regression and classification are covered in this book, along with a discussion of combining models in the context of machine learning and classification.
Abstract: Probability Distributions; Linear Models for Regression; Linear Models for Classification; Neural Networks; Kernel Methods; Sparse Kernel Machines; Graphical Models; Mixture Models and EM; Approximate Inference; Sampling Methods; Continuous Latent Variables; Sequential Data; Combining Models.

10,141 citations

Journal ArticleDOI
TL;DR: Writing group roster for the American Heart Association's annual heart disease and stroke statistics update (March 5, 2019), prepared on behalf of the Council on Epidemiology and Prevention Statistics Committee and Stroke Statistics Subcommittee, with Emelia J. Benjamin as Chair and Salim S. Virani as Chair Elect.
Abstract: March 5, 2019 e1 WRITING GROUP MEMBERS Emelia J. Benjamin, MD, ScM, FAHA, Chair Paul Muntner, PhD, MHS, FAHA, Vice Chair Alvaro Alonso, MD, PhD, FAHA Marcio S. Bittencourt, MD, PhD, MPH Clifton W. Callaway, MD, FAHA April P. Carson, PhD, MSPH, FAHA Alanna M. Chamberlain, PhD Alexander R. Chang, MD, MS Susan Cheng, MD, MMSc, MPH, FAHA Sandeep R. Das, MD, MPH, MBA, FAHA Francesca N. Delling, MD, MPH Luc Djousse, MD, ScD, MPH Mitchell S.V. Elkind, MD, MS, FAHA Jane F. Ferguson, PhD, FAHA Myriam Fornage, PhD, FAHA Lori Chaffin Jordan, MD, PhD, FAHA Sadiya S. Khan, MD, MSc Brett M. Kissela, MD, MS Kristen L. Knutson, PhD Tak W. Kwan, MD, FAHA Daniel T. Lackland, DrPH, FAHA Tené T. Lewis, PhD Judith H. Lichtman, PhD, MPH, FAHA Chris T. Longenecker, MD Matthew Shane Loop, PhD Pamela L. Lutsey, PhD, MPH, FAHA Seth S. Martin, MD, MHS, FAHA Kunihiro Matsushita, MD, PhD, FAHA Andrew E. Moran, MD, MPH, FAHA Michael E. Mussolino, PhD, FAHA Martin O’Flaherty, MD, MSc, PhD Ambarish Pandey, MD, MSCS Amanda M. Perak, MD, MS Wayne D. Rosamond, PhD, MS, FAHA Gregory A. Roth, MD, MPH, FAHA Uchechukwu K.A. Sampson, MD, MBA, MPH, FAHA Gary M. Satou, MD, FAHA Emily B. Schroeder, MD, PhD, FAHA Svati H. Shah, MD, MHS, FAHA Nicole L. Spartano, PhD Andrew Stokes, PhD David L. Tirschwell, MD, MS, MSc, FAHA Connie W. Tsao, MD, MPH, Vice Chair Elect Mintu P. Turakhia, MD, MAS, FAHA Lisa B. VanWagner, MD, MSc, FAST John T. Wilkins, MD, MS, FAHA Sally S. Wong, PhD, RD, CDN, FAHA Salim S. Virani, MD, PhD, FAHA, Chair Elect On behalf of the American Heart Association Council on Epidemiology and Prevention Statistics Committee and Stroke Statistics Subcommittee

5,739 citations

Journal ArticleDOI
TL;DR: This year's edition of the Statistical Update includes data on the monitoring and benefits of cardiovascular health in the population, metrics to assess and monitor healthy diets, an enhanced focus on social determinants of health, a focus on the global burden of cardiovascular disease, and further evidence-based approaches to changing behaviors, implementation strategies, and implications of the American Heart Association’s 2020 Impact Goals.
Abstract: Background: The American Heart Association, in conjunction with the National Institutes of Health, annually reports on the most up-to-date statistics related to heart disease, stroke, and cardiovas...

5,078 citations

Journal ArticleDOI
TL;DR: The theory of proper scoring rules on general probability spaces is reviewed and developed, and the intuitively appealing interval score is proposed as a utility function in interval estimation that addresses width as well as coverage.
Abstract: Scoring rules assess the quality of probabilistic forecasts, by assigning a numerical score based on the predictive distribution and on the event or value that materializes. A scoring rule is proper if the forecaster maximizes the expected score for an observation drawn from the distribution F if he or she issues the probabilistic forecast F, rather than G ≠ F. It is strictly proper if the maximum is unique. In prediction problems, proper scoring rules encourage the forecaster to make careful assessments and to be honest. In estimation problems, strictly proper scoring rules provide attractive loss and utility functions that can be tailored to the problem at hand. This article reviews and develops the theory of proper scoring rules on general probability spaces, and proposes and discusses examples thereof. Proper scoring rules derive from convex functions and relate to information measures, entropy functions, and Bregman divergences. In the case of categorical variables, we prove a rigorous version of the ...
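Two of the rules discussed above are easy to state in code. The sketch below is mine, with made-up numbers: it implements the Brier and logarithmic scores for a categorical forecast and the interval score for a central (1 - alpha) prediction interval, which rewards narrow intervals but penalizes observations that fall outside them.

```python
# Hedged sketch: proper scoring rules, written here in negative orientation
# (smaller is better); the paper's convention maximizes the expected score.
import numpy as np

def brier_score(prob, outcome):
    """Brier score for a categorical forecast: prob is a probability vector
    over k categories, outcome is the index of the category that occurred."""
    prob = np.asarray(prob, dtype=float)
    e = np.zeros_like(prob)
    e[outcome] = 1.0
    return float(np.sum((prob - e) ** 2))

def log_score(prob, outcome):
    """Logarithmic score: negative log predictive probability of the outcome."""
    return float(-np.log(np.asarray(prob, dtype=float)[outcome]))

def interval_score(lower, upper, x, alpha):
    """Interval score for a central (1 - alpha) prediction interval [lower, upper]:
    interval width plus penalties proportional to how far x falls outside."""
    penalty_low = (2.0 / alpha) * max(lower - x, 0.0)
    penalty_high = (2.0 / alpha) * max(x - upper, 0.0)
    return (upper - lower) + penalty_low + penalty_high

# Made-up example: a sharp 90% interval vs. an overly wide one for x = 3.2.
print(brier_score([0.7, 0.2, 0.1], outcome=0),
      log_score([0.7, 0.2, 0.1], outcome=0),
      interval_score(2.5, 4.0, 3.2, alpha=0.10),
      interval_score(0.0, 10.0, 3.2, alpha=0.10))
```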

4,644 citations