Author

Kym I E Snell

Bio: Kym I E Snell is an academic researcher from Keele University. The author has contributed to research in the topics of Medicine and Population, has an h-index of 18, and has co-authored 57 publications receiving 3108 citations. Previous affiliations of Kym I E Snell include Arthritis Research UK and the University of Leicester.

Papers published on a yearly basis

Papers
Journal ArticleDOI
07 Apr 2020-BMJ
TL;DR: Proposed models for covid-19 are poorly reported, at high risk of bias, and their reported performance is probably optimistic, according to a review of published and preprint reports.
Abstract: Objective To review and appraise the validity and usefulness of published and preprint reports of prediction models for diagnosing coronavirus disease 2019 (covid-19) in patients with suspected infection, for prognosis of patients with covid-19, and for detecting people in the general population at increased risk of covid-19 infection or being admitted to hospital with the disease. Design Living systematic review and critical appraisal by the COVID-PRECISE (Precise Risk Estimation to optimise covid-19 Care for Infected or Suspected patients in diverse sEttings) group. Data sources PubMed and Embase through Ovid, up to 1 July 2020, supplemented with arXiv, medRxiv, and bioRxiv up to 5 May 2020. Study selection Studies that developed or validated a multivariable covid-19 related prediction model. Data extraction At least two authors independently extracted data using the CHARMS (critical appraisal and data extraction for systematic reviews of prediction modelling studies) checklist; risk of bias was assessed using PROBAST (prediction model risk of bias assessment tool). Results 37 421 titles were screened, and 169 studies describing 232 prediction models were included. The review identified seven models for identifying people at risk in the general population; 118 diagnostic models for detecting covid-19 (75 based on medical imaging and 10 to diagnose disease severity); and 107 prognostic models for predicting mortality risk, progression to severe disease, intensive care unit admission, ventilation, intubation, or length of hospital stay. The most frequent types of predictors included in the covid-19 prediction models are vital signs, age, comorbidities, and image features. Flu-like symptoms are frequently predictive in diagnostic models, while sex, C reactive protein, and lymphocyte counts are frequent prognostic factors. Reported C index estimates from the strongest form of validation available per model ranged from 0.71 to 0.99 in prediction models for the general population, from 0.65 to more than 0.99 in diagnostic models, and from 0.54 to 0.99 in prognostic models. All models were rated at high or unclear risk of bias, mostly because of non-representative selection of control patients, exclusion of patients who had not experienced the event of interest by the end of the study, high risk of model overfitting, and unclear reporting. Many models did not include a description of the target population (n=27, 12%) or care setting (n=75, 32%), and only 11 (5%) were externally validated by a calibration plot. The Jehi diagnostic model and the 4C mortality score were identified as promising models. Conclusion Prediction models for covid-19 are quickly entering the academic literature to support medical decision making at a time when they are urgently needed. This review indicates that almost all published prediction models are poorly reported, and at high risk of bias such that their reported predictive performance is probably optimistic. However, we have identified two (one diagnostic and one prognostic) promising models that should soon be validated in multiple cohorts, preferably through collaborative efforts and data sharing to also allow an investigation of the stability and heterogeneity in their performance across populations and settings. Details on all reviewed models are publicly available at https://www.covprecise.org/.
Methodological guidance as provided in this paper should be followed because unreliable predictions could cause more harm than benefit in guiding clinical decisions. Finally, prediction model authors should adhere to the TRIPOD (transparent reporting of a multivariable prediction model for individual prognosis or diagnosis) reporting guideline. Systematic review registration Protocol https://osf.io/ehc47/, registration https://osf.io/wy245. Readers’ note This article is a living systematic review that will be updated to reflect emerging evidence. Updates may occur for up to two years from the date of original publication. This version is update 3 of the original article published on 7 April 2020 (BMJ 2020;369:m1328). Previous updates can be found as data supplements (https://www.bmj.com/content/369/bmj.m1328/related#datasupp). When citing this paper please consider adding the update number and date of access for clarity.
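The review summarises each model's performance with the C index and calibration plots. As a hedged illustration only (simulated data, not code or results from the review), the sketch below shows how discrimination and calibration-in-the-large might be computed for a binary-outcome prediction model.

```python
# Hypothetical sketch: evaluating discrimination (C index) and calibration
# for a binary-outcome prediction model on a validation set.
# Uses simulated data; not code from the reviewed studies.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Simulated validation set: predicted risks and observed binary outcomes
pred_risk = rng.uniform(0.01, 0.99, size=1000)
outcome = rng.binomial(1, pred_risk)  # outcomes drawn from the predicted risks

# Discrimination: for binary outcomes the C index equals the area under the ROC curve
c_index = roc_auc_score(outcome, pred_risk)

# Calibration-in-the-large: observed event rate minus mean predicted risk
calibration_in_the_large = outcome.mean() - pred_risk.mean()

print(f"C index: {c_index:.3f}")
print(f"Observed minus expected event rate: {calibration_in_the_large:.3f}")
```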

2,183 citations

Journal ArticleDOI
18 Mar 2020-BMJ
TL;DR: In this article, the authors provide guidance on how to calculate the sample size required to develop a clinical prediction model.
Abstract: Clinical prediction models aim to predict outcomes in individuals, to inform diagnosis or prognosis in healthcare. Hundreds of prediction models are published in the medical literature each year, yet many are developed using a dataset that is too small for the total number of participants or outcome events. This leads to inaccurate predictions and consequently incorrect healthcare decisions for some individuals. In this article, the authors provide guidance on how to calculate the sample size required to develop a clinical prediction model.
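As a hedged illustration of one component of such a sample size calculation (assumed inputs, not the paper's full guidance), the sketch below computes the minimum number of participants needed to estimate the overall outcome proportion to within a chosen absolute margin of error, using the standard normal approximation to the binomial.

```python
# Hypothetical sketch: minimum participants to estimate the overall outcome
# proportion (prevalence) within an absolute margin of error, using the
# normal approximation to the binomial. Inputs are assumed, not from the paper.
import math

def n_for_overall_risk(prevalence: float, margin: float = 0.05, z: float = 1.96) -> int:
    """Smallest n such that the half-width of the 95% CI for the
    outcome proportion is no more than `margin`."""
    return math.ceil((z / margin) ** 2 * prevalence * (1 - prevalence))

# Example: anticipated outcome prevalence of 0.2, margin of 0.05
print(n_for_overall_risk(0.2, 0.05))  # 246
```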

646 citations

Journal ArticleDOI
TL;DR: The minimum values of n and E (and hence the minimum number of events per predictor parameter, EPP) should be calculated to meet three criteria: small optimism in predictor effect estimates (a global shrinkage factor of ≥0.9), a small absolute difference (≤0.05) between the model's apparent and adjusted Nagelkerke's R2, and precise estimation of the overall risk in the population; the first two criteria aim to reduce overfitting conditional on a chosen p and require prespecification of the model's anticipated Cox-Snell R2.
Abstract: When designing a study to develop a new prediction model with binary or time-to-event outcomes, researchers should ensure their sample size is adequate in terms of the number of participants (n) and outcome events (E) relative to the number of predictor parameters (p) considered for inclusion. We propose that the minimum values of n and E (and subsequently the minimum number of events per predictor parameter, EPP) should be calculated to meet the following three criteria: (i) small optimism in predictor effect estimates as defined by a global shrinkage factor of ≥0.9, (ii) small absolute difference of ≤ 0.05 in the model's apparent and adjusted Nagelkerke's R2 , and (iii) precise estimation of the overall risk in the population. Criteria (i) and (ii) aim to reduce overfitting conditional on a chosen p, and require prespecification of the model's anticipated Cox-Snell R2 , which we show can be obtained from previous studies. The values of n and E that meet all three criteria provides the minimum sample size required for model development. Upon application of our approach, a new diagnostic model for Chagas disease requires an EPP of at least 4.8 and a new prognostic model for recurrent venous thromboembolism requires an EPP of at least 23. This reinforces why rules of thumb (eg, 10 EPP) should be avoided. Researchers might additionally ensure the sample size gives precise estimates of key predictor effects; this is especially important when key categorical predictors have few events in some categories, as this may substantially increase the numbers required.
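As an illustration, here is a minimal sketch assuming criterion (i) can be written as the closed-form shrinkage relation n = p / ((S - 1) ln(1 - R2_CS / S)) for a target shrinkage factor S; the number of parameters, anticipated Cox-Snell R2, and outcome prevalence below are invented values, not those of the Chagas disease or venous thromboembolism examples.

```python
# Hypothetical sketch of a shrinkage-based minimum sample size calculation
# for a binary-outcome prediction model. Assumes the relation
#   n = p / ((S - 1) * ln(1 - R2_cs / S))
# for a target global shrinkage factor S (e.g. 0.9); inputs are illustrative.
import math

def min_n_for_shrinkage(p: int, r2_cs: float, shrinkage: float = 0.9) -> int:
    """Minimum participants so the expected global shrinkage is >= `shrinkage`."""
    return math.ceil(p / ((shrinkage - 1) * math.log(1 - r2_cs / shrinkage)))

p = 10            # candidate predictor parameters
r2_cs = 0.2       # anticipated Cox-Snell R-squared (e.g. from a previous study)
prevalence = 0.1  # anticipated outcome proportion

n = min_n_for_shrinkage(p, r2_cs)
events = prevalence * n
epp = events / p  # events per predictor parameter

print(f"minimum n = {n}, expected events = {events:.0f}, EPP = {epp:.1f}")
```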

425 citations

Journal ArticleDOI
30 Jan 2019-BMJ
TL;DR: Systematic reviews and meta-analyses are needed that summarise the evidence about the prognostic value of particular factors, and this article describes the key steps involved in that review process.
Abstract: Prognostic factors are associated with the risk of future health outcomes in individuals with a particular health condition or some clinical start point (eg, a particular diagnosis). Research to identify genuine prognostic factors is important because these factors can help improve risk stratification, treatment, and lifestyle decisions, and the design of randomised trials. Although thousands of prognostic factor studies are published each year, often they are of variable quality and the findings are inconsistent. Systematic reviews and meta-analyses are therefore needed that summarise the evidence about the prognostic value of particular factors. In this article, the key steps involved in this review process are described.
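To illustrate the meta-analysis step only, the sketch below pools invented log hazard ratios for a prognostic factor with a DerSimonian-Laird random-effects model; the study estimates and standard errors are hypothetical.

```python
# Hypothetical sketch: DerSimonian-Laird random-effects meta-analysis of
# log hazard ratios for a prognostic factor. Study estimates are invented.
import numpy as np

log_hr = np.array([0.26, 0.41, 0.18, 0.55, 0.33])    # ln(hazard ratio) per study
se = np.array([0.12, 0.15, 0.10, 0.20, 0.14])         # standard errors

w_fixed = 1 / se**2                                    # inverse-variance weights
theta_fixed = np.sum(w_fixed * log_hr) / np.sum(w_fixed)

# Between-study heterogeneity (DerSimonian-Laird estimate of tau^2)
q = np.sum(w_fixed * (log_hr - theta_fixed) ** 2)
df = len(log_hr) - 1
c = np.sum(w_fixed) - np.sum(w_fixed**2) / np.sum(w_fixed)
tau2 = max(0.0, (q - df) / c)

# Random-effects pooled estimate and 95% confidence interval
w_re = 1 / (se**2 + tau2)
theta_re = np.sum(w_re * log_hr) / np.sum(w_re)
se_re = np.sqrt(1 / np.sum(w_re))
ci = (theta_re - 1.96 * se_re, theta_re + 1.96 * se_re)

print(f"pooled HR = {np.exp(theta_re):.2f}, "
      f"95% CI {np.exp(ci[0]):.2f} to {np.exp(ci[1]):.2f}, tau^2 = {tau2:.3f}")
```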

346 citations

Journal ArticleDOI
22 Jun 2016-BMJ
TL;DR: Novel opportunities for external validation in big, combined datasets from e-health records and individual participant data meta-analysis are illustrated, with attention drawn to methodological challenges and reporting issues.
Abstract: Access to big datasets from e-health records and individual participant data (IPD) meta-analysis is signalling a new advent of external validation studies for clinical prediction models. In this article, the authors illustrate novel opportunities for external validation in big, combined datasets, while drawing attention to methodological challenges and reporting issues.
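As a hedged sketch of what such an external validation involves (simulated coefficients and data, not the authors' code), the example below applies an existing logistic model's linear predictor to a new dataset and estimates the C statistic, the calibration slope, and calibration-in-the-large.

```python
# Hypothetical sketch: external validation of an existing logistic prediction
# model in a new dataset. Model coefficients and validation data are simulated.
import numpy as np
import statsmodels.api as sm
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)

# "Published" model: intercept and coefficients for two predictors (assumed values)
beta = np.array([-2.0, 0.8, 0.5])

# New (validation) dataset
n = 2000
X = np.column_stack([np.ones(n), rng.normal(size=n), rng.binomial(1, 0.4, size=n)])
lp = X @ beta                               # linear predictor from the existing model
y = rng.binomial(1, 1 / (1 + np.exp(-lp)))  # observed outcomes

# Discrimination: C statistic
c_stat = roc_auc_score(y, lp)

# Calibration slope: logistic regression of the outcome on the linear predictor
slope_fit = sm.GLM(y, sm.add_constant(lp), family=sm.families.Binomial()).fit()
calibration_slope = slope_fit.params[1]

# Calibration-in-the-large: intercept when the linear predictor is an offset
citl_fit = sm.GLM(y, np.ones((n, 1)), family=sm.families.Binomial(), offset=lp).fit()
calibration_in_the_large = citl_fit.params[0]

print(f"C statistic = {c_stat:.3f}, slope = {calibration_slope:.2f}, "
      f"calibration-in-the-large = {calibration_in_the_large:.2f}")
```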

345 citations


Cited by
01 Jan 2020
TL;DR: Prolonged viral shedding provides the rationale for a strategy of isolation of infected patients and optimal antiviral interventions in the future.
Abstract: Summary Background Since December, 2019, Wuhan, China, has experienced an outbreak of coronavirus disease 2019 (COVID-19), caused by the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). Epidemiological and clinical characteristics of patients with COVID-19 have been reported but risk factors for mortality and a detailed clinical course of illness, including viral shedding, have not been well described. Methods In this retrospective, multicentre cohort study, we included all adult inpatients (≥18 years old) with laboratory-confirmed COVID-19 from Jinyintan Hospital and Wuhan Pulmonary Hospital (Wuhan, China) who had been discharged or had died by Jan 31, 2020. Demographic, clinical, treatment, and laboratory data, including serial samples for viral RNA detection, were extracted from electronic medical records and compared between survivors and non-survivors. We used univariable and multivariable logistic regression methods to explore the risk factors associated with in-hospital death. Findings 191 patients (135 from Jinyintan Hospital and 56 from Wuhan Pulmonary Hospital) were included in this study, of whom 137 were discharged and 54 died in hospital. 91 (48%) patients had a comorbidity, with hypertension being the most common (58 [30%] patients), followed by diabetes (36 [19%] patients) and coronary heart disease (15 [8%] patients). Multivariable regression showed increasing odds of in-hospital death associated with older age (odds ratio 1·10, 95% CI 1·03–1·17, per year increase; p=0·0043), higher Sequential Organ Failure Assessment (SOFA) score (5·65, 2·61–12·23), and d-dimer greater than 1 μg/mL on admission. Interpretation The potential risk factors of older age, high SOFA score, and d-dimer greater than 1 μg/mL could help clinicians to identify patients with poor prognosis at an early stage. Prolonged viral shedding provides the rationale for a strategy of isolation of infected patients and optimal antiviral interventions in the future. Funding Chinese Academy of Medical Sciences Innovation Fund for Medical Sciences; National Science Grant for Distinguished Young Scholars; National Key Research and Development Program of China; The Beijing Science and Technology Project; and Major Projects of National Science and Technology on New Drug Creation and Development.
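The risk factor analysis described above uses univariable and multivariable logistic regression. As a purely illustrative sketch (simulated data and predictor names, not the study's data or results), the code below fits a multivariable logistic regression and reports odds ratios with 95% confidence intervals.

```python
# Hypothetical sketch: multivariable logistic regression for in-hospital death
# with odds ratios and 95% CIs. Data are simulated, not from the study.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 500

df = pd.DataFrame({
    "age": rng.normal(60, 12, n),
    "sofa": rng.poisson(4, n),
    "d_dimer_gt_1": rng.binomial(1, 0.3, n),
})
# Simulated outcome loosely tied to the covariates
logit = -8 + 0.08 * df["age"] + 0.3 * df["sofa"] + 1.2 * df["d_dimer_gt_1"]
df["death"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = sm.add_constant(df[["age", "sofa", "d_dimer_gt_1"]])
fit = sm.Logit(df["death"], X).fit(disp=0)

odds_ratios = np.exp(fit.params)
conf_int = np.exp(fit.conf_int())
print(pd.concat([odds_ratios.rename("OR"), conf_int], axis=1))
```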

4,408 citations

Journal ArticleDOI
TL;DR: Multiple imputation, as set out in Rubin's Multiple Imputation for Nonresponse in Surveys, handles missing data arising from survey nonresponse by creating several plausible imputed datasets, analysing each, and combining the results.
Abstract: 25. Multiple Imputation for Nonresponse in Surveys. By D. B. Rubin. ISBN 0 471 08705 X. Wiley, Chichester, 1987. 258 pp. £30.25.
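As an illustration of the multiple imputation workflow (invented numbers, not from the book), the sketch below combines an estimate across m imputed datasets using Rubin's rules: the pooled estimate is the mean of the per-imputation estimates, and its variance is the within-imputation variance plus (1 + 1/m) times the between-imputation variance.

```python
# Hypothetical sketch: pooling an estimate across m multiply imputed datasets
# using Rubin's rules. The per-imputation estimates and variances are invented.
import numpy as np

estimates = np.array([0.52, 0.47, 0.55, 0.49, 0.51])      # estimate from each imputed dataset
variances = np.array([0.010, 0.012, 0.011, 0.009, 0.010])  # squared standard errors

m = len(estimates)
q_bar = estimates.mean()             # pooled point estimate
w_bar = variances.mean()             # within-imputation variance
b = estimates.var(ddof=1)            # between-imputation variance
t = w_bar + (1 + 1 / m) * b          # total variance (Rubin's rules)

print(f"pooled estimate = {q_bar:.3f}, pooled SE = {np.sqrt(t):.3f}")
```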

3,216 citations

Journal ArticleDOI
TL;DR: Guidelines summarize and evaluate available evidence with the aim of assisting health professionals in proposing the best management strategies for an individual patient with a given condition.
Abstract: Guidelines summarize and evaluate available evidence with the aim of assisting health professionals in proposing the best management strategies for an individual patient with a given condition. Guidelines and their recommendations should facilitate decision making of health professionals in their daily practice. However, the final decisions concerning an individual patient must be made by the responsible health professional(s) in consultation with the patient and caregiver as appropriate.

2,079 citations

Journal ArticleDOI
Nicolas Vabret1, Graham J. Britton1, Conor Gruber1, Samarth Hegde1, Joel Kim1, Maria Kuksin1, Rachel Levantovsky1, Louise Malle1, Alvaro Moreira1, Matthew D. Park1, Luisanna Pia1, Emma Risson1, Miriam Saffern1, Bérengère Salomé1, Myvizhi Esai Selvan1, Matthew P. Spindler1, Jessica Tan1, Verena van der Heide1, Jill Gregory1, Konstantina Alexandropoulos1, Nina Bhardwaj1, Brian D. Brown1, Benjamin Greenbaum1, Zeynep H. Gümüş1, Dirk Homann1, Amir Horowitz1, Alice O. Kamphorst1, Maria A. Curotto de Lafaille1, Saurabh Mehandru1, Miriam Merad1, Robert M. Samstein1, Manasi Agrawal, Mark Aleynick, Meriem Belabed, Matthew Brown1, Maria Casanova-Acebes, Jovani Catalan, Monica Centa, Andrew Charap, Andrew K Chan, Steven T. Chen, Jonathan Chung, Cansu Cimen Bozkus, Evan Cody, Francesca Cossarini, Erica Dalla, Nicolas F. Fernandez, John A. Grout, Dan Fu Ruan, Pauline Hamon, Etienne Humblin, Divya Jha, Julia Kodysh, Andrew Leader, Matthew Lin, Katherine E. Lindblad, Daniel Lozano-Ojalvo, Gabrielle Lubitz, Assaf Magen, Zafar Mahmood2, Gustavo Martinez-Delgado, Jaime Mateus-Tique, Elliot Meritt, Chang Moon1, Justine Noel, Timothy O'Donnell, Miyo Ota, Tamar Plitt, Venu Pothula, Jamie Redes, Ivan Reyes Torres, Mark P. Roberto, Alfonso R. Sanchez-Paulete, Joan Shang, Alessandra Soares Schanoski, Maria Suprun, Michelle Tran, Natalie Vaninov, C. Matthias Wilk, Julio A. Aguirre-Ghiso, Dusan Bogunovic1, Judy H. Cho, Jeremiah J. Faith, Emilie K. Grasset, Peter S. Heeger, Ephraim Kenigsberg, Florian Krammer1, Uri Laserson1 
16 Jun 2020-Immunity
TL;DR: The current state of knowledge of innate and adaptive immune responses elicited by SARS-CoV-2 infection and the immunological pathways that likely contribute to disease severity and death are summarized.

1,350 citations