Author

Lotty Hooft

Bio: Lotty Hooft is an academic researcher from University Medical Center Utrecht. The author has contributed to research in topics: Systematic review & Critical appraisal. The author has an h-index of 9 and has co-authored 13 publications receiving 1,861 citations. Previous affiliations of Lotty Hooft include Oklahoma State University Center for Health Sciences.
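
The h-index cited above is simple to compute: it is the largest h such that the author has at least h papers with at least h citations each. A minimal sketch in Python; the first five citation counts match the papers listed below, while the remaining counts are invented purely for illustration:

```python
# Minimal sketch of how an h-index is computed: the largest h such that
# at least h papers have at least h citations each.

def h_index(citations: list[int]) -> int:
    """Return the largest h with at least h papers cited >= h times."""
    h = 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# First five counts from the papers below; the rest are illustrative.
example = [2183, 334, 241, 188, 60, 25, 14, 12, 10, 9, 4, 2, 1]
print(h_index(example))  # -> 9
```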

Papers
Journal ArticleDOI
07 Apr 2020-BMJ
TL;DR: Proposed models for covid-19 are poorly reported, at high risk of bias, and their reported performance is probably optimistic, according to a review of published and preprint reports.
Abstract: Objective To review and appraise the validity and usefulness of published and preprint reports of prediction models for diagnosing coronavirus disease 2019 (covid-19) in patients with suspected infection, for prognosis of patients with covid-19, and for detecting people in the general population at increased risk of covid-19 infection or being admitted to hospital with the disease. Design Living systematic review and critical appraisal by the COVID-PRECISE (Precise Risk Estimation to optimise covid-19 Care for Infected or Suspected patients in diverse sEttings) group. Data sources PubMed and Embase through Ovid, up to 1 July 2020, supplemented with arXiv, medRxiv, and bioRxiv up to 5 May 2020. Study selection Studies that developed or validated a multivariable covid-19 related prediction model. Data extraction At least two authors independently extracted data using the CHARMS (critical appraisal and data extraction for systematic reviews of prediction modelling studies) checklist; risk of bias was assessed using PROBAST (prediction model risk of bias assessment tool). Results 37 421 titles were screened, and 169 studies describing 232 prediction models were included. The review identified seven models for identifying people at risk in the general population; 118 diagnostic models for detecting covid-19 (75 based on medical imaging, 10 for diagnosing disease severity); and 107 prognostic models for predicting mortality risk, progression to severe disease, intensive care unit admission, ventilation, intubation, or length of hospital stay. The most frequent types of predictors included in the covid-19 prediction models are vital signs, age, comorbidities, and image features. Flu-like symptoms are frequently predictive in diagnostic models, while sex, C reactive protein, and lymphocyte counts are frequent prognostic factors. Reported C index estimates from the strongest form of validation available per model ranged from 0.71 to 0.99 in prediction models for the general population, from 0.65 to more than 0.99 in diagnostic models, and from 0.54 to 0.99 in prognostic models. All models were rated at high or unclear risk of bias, mostly because of non-representative selection of control patients, exclusion of patients who had not experienced the event of interest by the end of the study, high risk of model overfitting, and unclear reporting. Many models did not include a description of the target population (n=27, 12%) or care setting (n=75, 32%), and only 11 (5%) were externally validated by a calibration plot. The Jehi diagnostic model and the 4C mortality score were identified as promising models. Conclusion Prediction models for covid-19 are quickly entering the academic literature to support medical decision making at a time when they are urgently needed. This review indicates that almost all published prediction models are poorly reported and at high risk of bias, such that their reported predictive performance is probably optimistic. However, we have identified two (one diagnostic and one prognostic) promising models that should soon be validated in multiple cohorts, preferably through collaborative efforts and data sharing to also allow an investigation of the stability and heterogeneity in their performance across populations and settings. Details on all reviewed models are publicly available at https://www.covprecise.org/.
Methodological guidance as provided in this paper should be followed because unreliable predictions could cause more harm than benefit in guiding clinical decisions. Finally, prediction model authors should adhere to the TRIPOD (transparent reporting of a multivariable prediction model for individual prognosis or diagnosis) reporting guideline. Systematic review registration Protocol https://osf.io/ehc47/, registration https://osf.io/wy245. Readers’ note This article is a living systematic review that will be updated to reflect emerging evidence. Updates may occur for up to two years from the date of original publication. This version is update 3 of the original article published on 7 April 2020 (BMJ 2020;369:m1328). Previous updates can be found as data supplements (https://www.bmj.com/content/369/bmj.m1328/related#datasupp). When citing this paper please consider adding the update number and date of access for clarity.
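
The C index reported per model above measures discrimination: the probability that a randomly chosen patient who experienced the outcome was assigned a higher predicted risk than one who did not (0.5 is chance-level, 1.0 is perfect). A minimal sketch with hypothetical risks and outcomes; for a binary outcome this coincides with the area under the ROC curve:

```python
# Sketch of the C (concordance) index with made-up numbers.

from itertools import product

def c_index(risks: list[float], outcomes: list[int]) -> float:
    """Concordant event/non-event pairs over all such pairs (ties = 0.5)."""
    pairs = [
        (r_event, r_none)
        for (r_event, y1), (r_none, y0) in product(zip(risks, outcomes), repeat=2)
        if y1 == 1 and y0 == 0
    ]
    score = sum(1.0 if a > b else 0.5 if a == b else 0.0 for a, b in pairs)
    return score / len(pairs)

risks    = [0.9, 0.8, 0.7, 0.4, 0.35, 0.2]  # hypothetical predicted risks
outcomes = [1,   1,   0,   1,   0,    0]    # 1 = outcome occurred
print(round(c_index(risks, outcomes), 2))   # -> 0.89 (8 of 9 pairs concordant)
```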

2,183 citations

Journal ArticleDOI
TL;DR: It is shown that magnetic resonance imaging-guided biopsy detects more clinically significant prostate cancer (PCa) and less insignificant PCa compared with systematic biopsy in men at risk for PCa.

334 citations

Journal ArticleDOI
14 Aug 2020-BMJ
TL;DR: This article provides an explanation for the 19 new and modified items of the PRISMA-DTA statement, along with their meaning and rationale.
Abstract: Systematic reviews of diagnostic test accuracy (DTA) studies are fundamental to the decision making process in evidence based medicine. Although such studies are regarded as high level evidence, these reviews are not always reported completely and transparently. Suboptimal reporting of DTA systematic reviews compromises their validity and generalisability, and subsequently their value to key stakeholders. An extension of the PRISMA (preferred reporting items for systematic review and meta-analysis) statement was recently developed to improve the reporting quality of DTA systematic reviews. The PRISMA-DTA statement has 27 items, of which eight are unmodified from the original PRISMA statement. This article provides an explanation for the 19 new and modified items, along with their meaning and rationale. Examples of complete reporting are used for each item to illustrate best practices.

241 citations

Journal ArticleDOI
09 Jul 2021-BMJ Open
TL;DR: TRIPOD-AI and PROBAST-AI, as proposed in this paper, are extensions to the TRIPOD statement and the PROBAST tool intended to improve the reporting and critical appraisal of machine-learning-based prediction model studies for diagnosis and prognosis.
Abstract: Introduction The Transparent Reporting of a multivariable prediction model of Individual Prognosis Or Diagnosis (TRIPOD) statement and the Prediction model Risk Of Bias ASsessment Tool (PROBAST) were both published to improve the reporting and critical appraisal of prediction model studies for diagnosis and prognosis. This paper describes the processes and methods that will be used to develop an extension to the TRIPOD statement (TRIPOD-artificial intelligence, AI) and the PROBAST (PROBAST-AI) tool for prediction model studies that applied machine learning techniques. Methods and analysis TRIPOD-AI and PROBAST-AI will be developed following published guidance from the EQUATOR Network, and will comprise five stages. Stage 1 will comprise two systematic reviews (across all medical fields and specifically in oncology) to examine the quality of reporting in published machine-learning-based prediction model studies. In stage 2, we will consult a diverse group of key stakeholders using a Delphi process to identify items to be considered for inclusion in TRIPOD-AI and PROBAST-AI. Stage 3 will comprise virtual consensus meetings to consolidate and prioritise key items to be included in TRIPOD-AI and PROBAST-AI. Stage 4 will involve developing the TRIPOD-AI checklist and the PROBAST-AI tool, and writing the accompanying explanation and elaboration papers. In the final stage, stage 5, we will disseminate TRIPOD-AI and PROBAST-AI via journals, conferences, blogs, websites (including TRIPOD, PROBAST and EQUATOR Network) and social media. TRIPOD-AI will provide researchers working on prediction model studies based on machine learning with a reporting guideline that can help them report key details that readers need to evaluate the study quality and interpret its findings, potentially reducing research waste. We anticipate PROBAST-AI will help researchers, clinicians, systematic reviewers and policymakers critically appraise the design, conduct and analysis of machine-learning-based prediction model studies, with a robust standardised tool for bias evaluation. Ethics and dissemination Ethical approval has been granted by the Central University Research Ethics Committee, University of Oxford, on 10 December 2020 (R73034/RE001). Findings from this study will be disseminated through peer-reviewed publications. PROSPERO registration numbers CRD42019140361 and CRD42019161764.

188 citations

Journal ArticleDOI
TL;DR: Low overall transient and permanent SCI rates are achieved during endovascular thoracic and thoraco-abdominal aortic repair, and the use of selective spinal fluid drainage in high-risk patients seems justified.

60 citations


Cited by
01 Jan 2020
TL;DR: Prolonged viral shedding provides the rationale for a strategy of isolation of infected patients and optimal antiviral interventions in the future.
Abstract: Summary Background Since December, 2019, Wuhan, China, has experienced an outbreak of coronavirus disease 2019 (COVID-19), caused by the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). Epidemiological and clinical characteristics of patients with COVID-19 have been reported but risk factors for mortality and a detailed clinical course of illness, including viral shedding, have not been well described. Methods In this retrospective, multicentre cohort study, we included all adult inpatients (≥18 years old) with laboratory-confirmed COVID-19 from Jinyintan Hospital and Wuhan Pulmonary Hospital (Wuhan, China) who had been discharged or had died by Jan 31, 2020. Demographic, clinical, treatment, and laboratory data, including serial samples for viral RNA detection, were extracted from electronic medical records and compared between survivors and non-survivors. We used univariable and multivariable logistic regression methods to explore the risk factors associated with in-hospital death. Findings 191 patients (135 from Jinyintan Hospital and 56 from Wuhan Pulmonary Hospital) were included in this study, of whom 137 were discharged and 54 died in hospital. 91 (48%) patients had a comorbidity, with hypertension being the most common (58 [30%] patients), followed by diabetes (36 [19%] patients) and coronary heart disease (15 [8%] patients). Multivariable regression showed increasing odds of in-hospital death associated with older age (odds ratio 1·10, 95% CI 1·03–1·17, per year increase; p=0·0043), higher Sequential Organ Failure Assessment (SOFA) score (5·65, 2·61–12·23; p<0·0001), and d-dimer greater than 1 μg/mL on admission. Interpretation The potential risk factors of older age, high SOFA score, and d-dimer greater than 1 μg/mL could help clinicians to identify patients with poor prognosis at an early stage. Prolonged viral shedding provides the rationale for a strategy of isolation of infected patients and optimal antiviral interventions in the future. Funding Chinese Academy of Medical Sciences Innovation Fund for Medical Sciences; National Science Grant for Distinguished Young Scholars; National Key Research and Development Program of China; The Beijing Science and Technology Project; and Major Projects of National Science and Technology on New Drug Creation and Development.
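
The study's key analysis is a multivariable logistic regression whose exponentiated coefficients are reported as odds ratios (for example, 1·10 per year of age). A hedged sketch of that kind of analysis on simulated data; the effect sizes and data below are invented for illustration and are not the study's:

```python
# Hedged sketch: multivariable logistic regression with odds ratios (ORs).
# All data and coefficients are simulated, not taken from the study.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 191  # cohort size borrowed from the abstract, purely for flavour

age    = rng.uniform(18, 90, n)
sofa   = rng.integers(0, 15, n).astype(float)
ddimer = (rng.random(n) < 0.3).astype(float)  # 1 if d-dimer > 1 ug/mL (assumed)

# Simulate in-hospital death from invented log-odds coefficients.
logit = -8.0 + 0.08 * age + 0.4 * sofa + 1.5 * ddimer
death = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(float)

X = sm.add_constant(np.column_stack([age, sofa, ddimer]))
fit = sm.Logit(death, X).fit(disp=0)

# exp(beta) gives the OR per one-unit increase in each predictor.
for name, beta in zip(["intercept", "age (per year)", "SOFA", "d-dimer>1"],
                      fit.params):
    print(f"{name:15s} OR = {np.exp(beta):7.2f}")
```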

4,408 citations

Journal ArticleDOI
TL;DR: The use of risk assessment with MRI before biopsy and MRI‐targeted biopsy was superior to standard transrectal ultrasonography–guided biopsy in men at clinical risk for prostate cancer who had not undergone biopsy previously.
Abstract: BACKGROUND: Multiparametric magnetic resonance imaging (MRI), with or without targeted biopsy, is an alternative to standard transrectal ultrasonography-guided biopsy for prostate-cancer detection in men with a raised prostate-specific antigen level who have not undergone biopsy. However, comparative evidence is limited. METHODS: In a multicenter, randomized, noninferiority trial, we assigned men with a clinical suspicion of prostate cancer who had not undergone biopsy previously to undergo MRI, with or without targeted biopsy, or standard transrectal ultrasonography-guided biopsy. Men in the MRI-targeted biopsy group underwent a targeted biopsy (without standard biopsy cores) if the MRI was suggestive of prostate cancer; men whose MRI results were not suggestive of prostate cancer were not offered biopsy. Standard biopsy was a 10-to-12-core, transrectal ultrasonography-guided biopsy. The primary outcome was the proportion of men who received a diagnosis of clinically significant cancer. Secondary outcomes included the proportion of men who received a diagnosis of clinically insignificant cancer. RESULTS: A total of 500 men underwent randomization. In the MRI-targeted biopsy group, 71 of 252 men (28%) had MRI results that were not suggestive of prostate cancer, so they did not undergo biopsy. Clinically significant cancer was detected in 95 men (38%) in the MRI-targeted biopsy group, as compared with 64 of 248 (26%) in the standard-biopsy group (adjusted difference, 12 percentage points; 95% confidence interval [CI], 4 to 20; P = 0.005). MRI, with or without targeted biopsy, was noninferior to standard biopsy, and the 95% confidence interval indicated the superiority of this strategy over standard biopsy. Fewer men in the MRI-targeted biopsy group than in the standard-biopsy group received a diagnosis of clinically insignificant cancer (adjusted difference, -13 percentage points; 95% CI, -19 to -7; P<0.001). CONCLUSIONS: The use of risk assessment with MRI before biopsy and MRI-targeted biopsy was superior to standard transrectal ultrasonography-guided biopsy in men at clinical risk for prostate cancer who had not undergone biopsy previously. (Funded by the National Institute for Health Research and the European Association of Urology Research Foundation; PRECISION ClinicalTrials.gov number, NCT02380027.)
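
The headline comparison is a difference in detection proportions between the two arms. A quick, hedged check of the unadjusted difference and its Wald 95% confidence interval using the counts in the abstract (the paper itself reports an adjusted difference of 12 percentage points, 95% CI 4 to 20):

```python
# Unadjusted difference in clinically significant cancer detection with a
# Wald 95% CI. Counts come from the abstract; 252 is the MRI group size
# stated there. The published figure is an *adjusted* difference.

from math import sqrt

x1, n1 = 95, 252  # MRI-targeted biopsy group
x2, n2 = 64, 248  # standard-biopsy group

p1, p2 = x1 / n1, x2 / n2
diff = p1 - p2
se = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
lo, hi = diff - 1.96 * se, diff + 1.96 * se

print(f"{100 * diff:.1f} percentage points "
      f"(95% CI {100 * lo:.1f} to {100 * hi:.1f})")
# -> 11.9 percentage points (95% CI 3.8 to 20.0)
```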

1,832 citations

Journal ArticleDOI
23 Jan 2018-JAMA
TL;DR: A group of 24 multidisciplinary experts used a systematic review of articles on existing reporting guidelines and methods, a 3-round Delphi process, a consensus meeting, pilot testing, and iterative refinement to develop the PRISMA diagnostic test accuracy guideline.
Abstract: Importance Systematic reviews of diagnostic test accuracy synthesize data from primary diagnostic studies that have evaluated the accuracy of 1 or more index tests against a reference standard, provide estimates of test performance, allow comparisons of the accuracy of different tests, and facilitate the identification of sources of variability in test accuracy. Objective To develop the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) diagnostic test accuracy guideline as a stand-alone extension of the PRISMA statement. Modifications to the PRISMA statement reflect the specific requirements for reporting of systematic reviews and meta-analyses of diagnostic test accuracy studies and the abstracts for these reviews. Design Established standards from the Enhancing the Quality and Transparency of Health Research (EQUATOR) Network were followed for the development of the guideline. The original PRISMA statement was used as a framework on which to modify and add items. A group of 24 multidisciplinary experts used a systematic review of articles on existing reporting guidelines and methods, a 3-round Delphi process, a consensus meeting, pilot testing, and iterative refinement to develop the PRISMA diagnostic test accuracy guideline. The final version of the PRISMA diagnostic test accuracy guideline checklist was approved by the group. Findings The systematic review (which produced 64 items) and the Delphi process (which provided feedback on 7 proposed items; 1 item was later split into 2) identified 71 potentially relevant items for consideration. The Delphi process reduced these to 60 items that were discussed at the consensus meeting. Following the meeting, pilot testing and iterative feedback were used to generate the 27-item PRISMA diagnostic test accuracy checklist. To reflect specific or optimal contemporary systematic review methods for diagnostic test accuracy, 8 of the 27 original PRISMA items were left unchanged, 17 were modified, 2 were added, and 2 were omitted. Conclusions and Relevance The 27-item PRISMA diagnostic test accuracy checklist provides specific guidance for reporting of systematic reviews. The PRISMA diagnostic test accuracy guideline can facilitate the transparent reporting of reviews, and may assist in the evaluation of validity and applicability, enhance replicability of reviews, and make the results from systematic reviews of diagnostic test accuracy studies more useful.

1,616 citations

Journal ArticleDOI
Nicolas Vabret, Graham J. Britton, Conor Gruber, Samarth Hegde, Joel Kim, Maria Kuksin, Rachel Levantovsky, Louise Malle, Alvaro Moreira, Matthew D. Park, Luisanna Pia, Emma Risson, Miriam Saffern, Bérengère Salomé, Myvizhi Esai Selvan, Matthew P. Spindler, Jessica Tan, Verena van der Heide, Jill Gregory, Konstantina Alexandropoulos, Nina Bhardwaj, Brian D. Brown, Benjamin Greenbaum, Zeynep H. Gümüş, Dirk Homann, Amir Horowitz, Alice O. Kamphorst, Maria A. Curotto de Lafaille, Saurabh Mehandru, Miriam Merad, Robert M. Samstein, Manasi Agrawal, Mark Aleynick, Meriem Belabed, Matthew Brown, Maria Casanova-Acebes, Jovani Catalan, Monica Centa, Andrew Charap, Andrew K Chan, Steven T. Chen, Jonathan Chung, Cansu Cimen Bozkus, Evan Cody, Francesca Cossarini, Erica Dalla, Nicolas F. Fernandez, John A. Grout, Dan Fu Ruan, Pauline Hamon, Etienne Humblin, Divya Jha, Julia Kodysh, Andrew Leader, Matthew Lin, Katherine E. Lindblad, Daniel Lozano-Ojalvo, Gabrielle Lubitz, Assaf Magen, Zafar Mahmood, Gustavo Martinez-Delgado, Jaime Mateus-Tique, Elliot Meritt, Chang Moon, Justine Noel, Timothy O'Donnell, Miyo Ota, Tamar Plitt, Venu Pothula, Jamie Redes, Ivan Reyes Torres, Mark P. Roberto, Alfonso R. Sanchez-Paulete, Joan Shang, Alessandra Soares Schanoski, Maria Suprun, Michelle Tran, Natalie Vaninov, C. Matthias Wilk, Julio A. Aguirre-Ghiso, Dusan Bogunovic, Judy H. Cho, Jeremiah J. Faith, Emilie K. Grasset, Peter S. Heeger, Ephraim Kenigsberg, Florian Krammer, Uri Laserson et al.
16 Jun 2020-Immunity
TL;DR: The current state of knowledge of innate and adaptive immune responses elicited by SARS-CoV-2 infection and the immunological pathways that likely contribute to disease severity and death are summarized.

1,350 citations

Journal ArticleDOI
01 Nov 2016-BMJ Open
TL;DR: The rationale for each of the 30 items on the STARD 2015 checklist is clarified, and what is expected from authors in developing sufficiently informative study reports is described.
Abstract: Diagnostic accuracy studies are, like other clinical studies, at risk of bias due to shortcomings in design and conduct, and the results of a diagnostic accuracy study may not apply to other patient groups and settings. Readers of study reports need to be informed about study design and conduct, in sufficient detail to judge the trustworthiness and applicability of the study findings. The STARD statement (Standards for Reporting of Diagnostic Accuracy Studies) was developed to improve the completeness and transparency of reports of diagnostic accuracy studies. STARD contains a list of essential items that can be used as a checklist, by authors, reviewers and other readers, to ensure that a report of a diagnostic accuracy study contains the necessary information. STARD was recently updated. All updated STARD materials, including the checklist, are available at http://www.equator-network.org/reporting-guidelines/stard. Here, we present the STARD 2015 explanation and elaboration document. Through commented examples of appropriate reporting, we clarify the rationale for each of the 30 items on the STARD 2015 checklist, and describe what is expected from authors in developing sufficiently informative study reports.

1,217 citations