Author

Georg Heinze

Bio: Georg Heinze is an academic researcher at the Medical University of Vienna. He has contributed to research on topics including Population and Transplantation, has an h-index of 63, and has co-authored 354 publications receiving 16,391 citations. Previous affiliations of Georg Heinze include Baylor College of Medicine and Technische Universität Darmstadt.


Papers
Journal ArticleDOI
07 Apr 2020-BMJ
TL;DR: Proposed models for covid-19 are poorly reported, at high risk of bias, and their reported performance is probably optimistic, according to a review of published and preprint reports.
Abstract:
Objective: To review and appraise the validity and usefulness of published and preprint reports of prediction models for diagnosing coronavirus disease 2019 (covid-19) in patients with suspected infection, for prognosis of patients with covid-19, and for detecting people in the general population at increased risk of covid-19 infection or of being admitted to hospital with the disease.
Design: Living systematic review and critical appraisal by the COVID-PRECISE (Precise Risk Estimation to optimise covid-19 Care for Infected or Suspected patients in diverse sEttings) group.
Data sources: PubMed and Embase through Ovid, up to 1 July 2020, supplemented with arXiv, medRxiv, and bioRxiv up to 5 May 2020.
Study selection: Studies that developed or validated a multivariable covid-19 related prediction model.
Data extraction: At least two authors independently extracted data using the CHARMS (critical appraisal and data extraction for systematic reviews of prediction modelling studies) checklist; risk of bias was assessed with PROBAST (prediction model risk of bias assessment tool).
Results: 37 421 titles were screened, and 169 studies describing 232 prediction models were included. The review identified seven models for identifying people at risk in the general population; 118 diagnostic models for detecting covid-19 (75 based on medical imaging, 10 for diagnosing disease severity); and 107 prognostic models for predicting mortality risk, progression to severe disease, intensive care unit admission, ventilation, intubation, or length of hospital stay. The most frequent predictors in the covid-19 prediction models were vital signs, age, comorbidities, and image features. Flu-like symptoms were frequently predictive in diagnostic models, while sex, C reactive protein, and lymphocyte counts were frequent prognostic factors. Reported C index estimates from the strongest form of validation available per model ranged from 0.71 to 0.99 in prediction models for the general population, from 0.65 to more than 0.99 in diagnostic models, and from 0.54 to 0.99 in prognostic models. All models were rated at high or unclear risk of bias, mostly because of non-representative selection of control patients, exclusion of patients who had not experienced the event of interest by the end of the study, high risk of model overfitting, and unclear reporting. Many models did not describe the target population (n=27, 12%) or care setting (n=75, 32%), and only 11 (5%) were externally validated with a calibration plot. The Jehi diagnostic model and the 4C mortality score were identified as promising models.
Conclusion: Prediction models for covid-19 are quickly entering the academic literature to support medical decision making at a time when they are urgently needed. This review indicates that almost all published prediction models are poorly reported and at high risk of bias, such that their reported predictive performance is probably optimistic. However, two promising models (one diagnostic and one prognostic) were identified that should soon be validated in multiple cohorts, preferably through collaborative efforts and data sharing, to also allow investigation of the stability and heterogeneity of their performance across populations and settings. Details on all reviewed models are publicly available at https://www.covprecise.org/. Methodological guidance as provided in this paper should be followed, because unreliable predictions could cause more harm than benefit in guiding clinical decisions. Finally, prediction model authors should adhere to the TRIPOD (transparent reporting of a multivariable prediction model for individual prognosis or diagnosis) reporting guideline.
Systematic review registration: Protocol https://osf.io/ehc47/, registration https://osf.io/wy245.
Readers' note: This article is a living systematic review that will be updated to reflect emerging evidence. Updates may occur for up to two years from the date of original publication. This version is update 3 of the original article published on 7 April 2020 (BMJ 2020;369:m1328). Previous updates can be found as data supplements (https://www.bmj.com/content/369/bmj.m1328/related#datasupp). When citing this paper please consider adding the update number and date of access for clarity.
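The review's headline validation metrics, discrimination (C index) and calibration, are straightforward to compute when externally validating a model. A minimal sketch for a binary outcome, assuming placeholder arrays y_true (observed outcomes) and y_prob (predicted risks) from a model under validation; for binary outcomes the C statistic equals the area under the ROC curve:

```python
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.calibration import calibration_curve

y_true = np.array([0, 1, 0, 1, 1, 0, 1, 0])                   # observed outcomes (placeholder data)
y_prob = np.array([0.2, 0.8, 0.4, 0.6, 0.9, 0.1, 0.3, 0.5])   # predicted risks from the model

c_index = roc_auc_score(y_true, y_prob)                  # discrimination: C statistic / AUC
obs, pred = calibration_curve(y_true, y_prob, n_bins=4)  # observed vs mean predicted risk per bin
print(c_index, list(zip(pred, obs)))  # plotting obs against pred gives the calibration plot
```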

2,183 citations

Journal ArticleDOI
TL;DR: A procedure by Firth originally developed to reduce the bias of maximum likelihood estimates is shown to provide an ideal solution to separation and produces finite parameter estimates by means of penalized maximum likelihood estimation.
Abstract: The phenomenon of separation or monotone likelihood is observed in the fitting process of a logistic model if the likelihood converges while at least one parameter estimate diverges to plus or minus infinity. Separation primarily occurs in small samples with several unbalanced and highly predictive risk factors. A procedure by Firth, originally developed to reduce the bias of maximum likelihood estimates, is shown to provide an ideal solution to separation. It produces finite parameter estimates by means of penalized maximum likelihood estimation. Corresponding Wald tests and confidence intervals are available, but it is shown that penalized likelihood ratio tests and profile penalized likelihood confidence intervals are often preferable. The clear advantage of the procedure over previous options of analysis is impressively demonstrated by the statistical analysis of two cancer studies.
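The penalized fit is easy to sketch. A minimal illustration of Firth-type penalized maximum likelihood for logistic regression, using the known equivalence between Firth's bias correction and a score equation modified by the hat-matrix diagonals; the function name and convergence settings are illustrative, not taken from the paper:

```python
import numpy as np

def firth_logistic(X, y, max_iter=50, tol=1e-8):
    """Firth-type penalized-likelihood logistic regression (illustrative sketch).

    X: (n, k) design matrix including an intercept column; y: (n,) 0/1 outcomes.
    """
    n, k = X.shape
    beta = np.zeros(k)
    for _ in range(max_iter):
        p = 1.0 / (1.0 + np.exp(-(X @ beta)))        # fitted probabilities
        w = p * (1.0 - p)                            # IRLS weights
        fisher = X.T @ (X * w[:, None])              # information matrix X'WX
        fisher_inv = np.linalg.inv(fisher)
        h = w * np.einsum("ij,jk,ik->i", X, fisher_inv, X)  # hat-matrix diagonals
        score = X.T @ (y - p + h * (0.5 - p))        # Firth-modified score
        step = fisher_inv @ score
        beta += step
        if np.max(np.abs(step)) < tol:
            break
    return beta
```

On a separated dataset, where ordinary maximum likelihood drives a coefficient toward plus or minus infinity, this iteration converges to finite estimates.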

1,628 citations

Journal ArticleDOI
TL;DR: In this article, the authors provide an overview of variable selection methods that are based on significance or information criteria, penalized likelihood, change-in-estimate criterion, background knowledge, or combinations thereof.
Abstract: Statistical models support medical research by facilitating individualized outcome prognostication conditional on independent variables or by estimating effects of risk factors adjusted for covariates. The theory of statistical models is well established if the set of independent variables to consider is fixed and small, so that we can assume effect estimates are unbiased and the usual methods for confidence interval estimation are valid. In routine work, however, it is not known a priori which covariates should be included in a model, and we are often confronted with 10 to 30 candidate variables, a number often too large for all of them to be considered in a statistical model. We provide an overview of the available variable selection methods, which are based on significance or information criteria, penalized likelihood, the change-in-estimate criterion, background knowledge, or combinations thereof. These methods were usually developed in the context of the linear regression model and later transferred to generalized linear models and models for censored survival data. Variable selection, in particular if used in explanatory modeling where effect estimates are of central interest, can compromise the stability of a final model, the unbiasedness of regression coefficients, and the validity of p-values or confidence intervals. We therefore give pragmatic recommendations for the practicing statistician on applying variable selection methods in general (low-dimensional) modeling problems and on performing stability investigations and inference. We also propose some quantities, based on resampling the entire variable selection process, that should be routinely reported by software packages offering automated variable selection algorithms.
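One such resampling quantity is the bootstrap inclusion frequency: how often each candidate variable is selected when the entire selection procedure is repeated on resampled data. A minimal sketch, with a LASSO-type selector standing in for whichever selection method is actually used (the paper itself discusses several):

```python
import numpy as np
from sklearn.linear_model import LassoCV

def bootstrap_inclusion_frequencies(X, y, n_boot=200, seed=0):
    """Proportion of bootstrap resamples in which each variable is selected."""
    rng = np.random.default_rng(seed)
    n, k = X.shape
    selected = np.zeros(k)
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)           # resample rows with replacement
        fit = LassoCV(cv=5).fit(X[idx], y[idx])    # 'selected' = nonzero coefficient
        selected += fit.coef_ != 0
    return selected / n_boot
```

Variables selected in only a small fraction of resamples flag instability of the final model.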

783 citations

Journal ArticleDOI
TL;DR: By use of a simple scoring system, the assessment of the recurrence risk in patients with a first unprovoked VTE and without strong thrombophilic defects can be improved.
Abstract:
Background: Predicting the risk of recurrent venous thromboembolism (VTE) in an individual patient is often not feasible. We aimed to develop a simple risk assessment model that improves prediction of the recurrence risk.
Methods and Results: In a prospective cohort study, 929 patients with a first unprovoked VTE were followed up for a median of 43.3 months after discontinuation of anticoagulation. We excluded patients with a strong thrombophilic defect such as a natural inhibitor deficiency, the lupus anticoagulant, and homozygous or combined defects. A total of 176 patients (18.9%) had recurrent VTE. Preselected clinical and laboratory variables (age, sex, location of VTE, body mass index, factor V Leiden, prothrombin G20210A mutation, D-dimer, and in vitro thrombin generation) were analyzed in a Cox proportional hazards model, and those variables that were significantly associated with recurrence were used to compute risk scores. Male sex (hazard ratio versus female sex 1.90, 95% confidence interval 1....
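The modeling step described here, fitting preselected variables in a Cox proportional hazards model and turning retained coefficients into an additive risk score, can be sketched as follows. The lifelines library, the synthetic data, and all column names are assumptions for illustration, not the study's actual analysis code:

```python
import pandas as pd
from lifelines import CoxPHFitter

# illustrative data: one row per patient (all values synthetic)
df = pd.DataFrame({
    "months":   [12.0, 43.3, 7.5, 60.2, 25.1, 48.0, 33.3, 9.8],  # follow-up time
    "recurred": [1, 0, 1, 0, 1, 0, 0, 1],                        # recurrent VTE indicator
    "male_sex": [1, 0, 1, 0, 1, 1, 0, 0],
    "d_dimer":  [750, 320, 910, 280, 660, 400, 350, 820],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="months", event_col="recurred")
print(cph.summary)                      # hazard ratios with confidence intervals

retained = ["male_sex", "d_dimer"]      # predictors significantly associated with recurrence
df["risk_score"] = (df[retained] * cph.params_[retained]).sum(axis=1)
```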

454 citations

Journal ArticleDOI
TL;DR: By combining a self-learning loop for optimized optical preparation with improved dynamical decoupling, this work extends EIT storage times in a doped solid beyond 40 s and demonstrates storage of images by EIT for 1 min, a new benchmark for EIT-based memories.
Abstract: The maximal storage duration is an important benchmark for memories. In quantized media, storage times are typically limited by stochastic interactions with the environment, and optical memories based on electromagnetically induced transparency (EIT) also suffer strongly from such decoherent effects. External magnetic control fields may reduce decoherence and increase EIT storage times considerably, but they also lead to complicated multilevel structures. These are hard to prepare perfectly enough to push storage times toward the theoretical limit, i.e., the population lifetime T1. We present a self-learning evolutionary strategy to efficiently drive an EIT-based memory. By combining the self-learning loop for optimized optical preparation with improved dynamical decoupling, we extend EIT storage times in a doped solid beyond 40 s. Moreover, we demonstrate storage of images by EIT for 1 min. These ultralong storage times set a new benchmark for EIT-based memories. The concepts are also applicable to other storage protocols.
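The self-learning loop is an evolutionary strategy that treats the retrieved signal after storage as a black-box fitness of the preparation parameters. A generic (1+λ)-style sketch under that reading; the fitness function and parameter vector are placeholders, since the experimental objective and control parameters are not spelled out here:

```python
import numpy as np

def evolve(fitness, x0, sigma=0.1, n_offspring=8, n_gen=50, seed=0):
    """(1+lambda) evolutionary strategy: mutate the best-known parameter
    vector and keep any offspring with a higher measured fitness."""
    rng = np.random.default_rng(seed)
    best_x = np.asarray(x0, dtype=float)
    best_f = fitness(best_x)
    for _ in range(n_gen):
        for _ in range(n_offspring):
            cand = best_x + sigma * rng.standard_normal(best_x.shape)
            f = fitness(cand)           # e.g. retrieved pulse energy after storage
            if f > best_f:
                best_x, best_f = cand, f
    return best_x, best_f

# toy stand-in for the experiment: a smooth objective with a known optimum
best, score = evolve(lambda x: -np.sum((x - 0.3) ** 2), x0=np.zeros(4))
```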

351 citations


Cited by
Journal ArticleDOI
01 Apr 1988-Nature
TL;DR: In this paper, a sedimentological core and petrographic characterisation of samples from eleven boreholes in the Lower Carboniferous of the Bowland Basin (Northwest England) is presented.
Abstract: Deposits of clastic carbonate-dominated (calciclastic) sedimentary slope systems in the rock record have mostly been identified as linearly consistent carbonate apron deposits, even though most ancient clastic carbonate slope deposits fit submarine fan systems better. Calciclastic submarine fans are consequently rarely described and poorly understood, and very little is known about mud-dominated calciclastic submarine fan systems in particular. This study presents a sedimentological core and petrographic characterisation of samples from eleven boreholes in the Lower Carboniferous of the Bowland Basin (Northwest England) that reveals a >250 m thick calciturbidite complex deposited in a calciclastic submarine fan setting. Seven facies are recognised from core and thin-section characterisation and are grouped into three carbonate turbidite sequences: 1) calciturbidites, comprising mostly high- to low-density, wavy-laminated, bioclast-rich facies; 2) low-density densite mudstones, characterised by planar-laminated and unlaminated mud-dominated facies; and 3) calcidebrites, which are muddy or hyper-concentrated debris-flow deposits occurring as poorly sorted, chaotic, mud-supported floatstones. These

9,929 citations

Journal ArticleDOI
TL;DR: Estimates of expected health outcomes for larger societies are included, where data exist, and the level of evidence and the strength of recommendation of particular treatment options are weighed and graded according to pre-defined scales.
Abstract: Guidelines summarize and evaluate all currently available evidence on a particular issue with the aim of assisting physicians in selecting the best management strategy for an individual patient suffering from a given condition, taking into account the impact on outcome, as well as the risk–benefit ratio of particular diagnostic or therapeutic means. Guidelines are no substitutes for textbooks. The legal implications of medical guidelines have been discussed previously. A large number of guidelines have been issued in recent years by the European Society of Cardiology (ESC) as well as by other societies and organizations. Because of their impact on clinical practice, quality criteria for the development of guidelines have been established in order to make all decisions transparent to the user. The recommendations for formulating and issuing ESC Guidelines can be found on the ESC Web Site (http://www.escardio.org/guidelines-surveys/esc-guidelines/about/Pages/rules-writing.aspx). In brief, experts in the field are selected and undertake a comprehensive review of the published evidence for management and/or prevention of a given condition. A critical evaluation of diagnostic and therapeutic procedures is performed, including assessment of the risk–benefit ratio. Estimates of expected health outcomes for larger societies are included, where data exist. The level of evidence and the strength of recommendation of particular treatment options are weighed and graded according to predefined scales, as outlined in Tables 1 (classes of recommendations) and 2 (levels of evidence). The experts of the writing panels have provided disclosure statements of all relationships they may have that might be perceived as real or potential sources of conflicts of interest. These disclosure forms are kept on file at the European Heart House, headquarters of the ESC. Any changes in conflict of interest that arise during the writing period must be notified to the ESC. The Task Force report received its entire financial support from …

5,329 citations

Journal ArticleDOI
01 Nov 2016-Europace
TL;DR: The 2016 guidelines of the Task Force for the management of atrial fibrillation of the European Society of Cardiology (ESC), developed with the special contribution of the European Heart Rhythm Association (EHRA) and endorsed by the European Stroke Organisation (ESO).
Abstract: The Task Force for the management of atrial fibrillation of the European Society of Cardiology (ESC). Developed with the special contribution of the European Heart Rhythm Association (EHRA) of the ESC. Endorsed by the European Stroke Organisation (ESO).

5,255 citations