Protocol for a systematic review on the methodological and reporting quality of prediction model studies using machine learning techniques.
Constanza L Andaur Navarro,Johanna A A G Damen,Toshihiko Takada,Steven W J Nijman,Paula Dhiman,Jie Ma,Gary S. Collins,Ram Bajpai,Richard D Riley,Karel G.M. Moons,Lotty Hooft +14 more
TLDR
This comprehensive systematic review aims to evaluate the reporting quality, the methodological conduct, and the risk of bias of prediction model studies that applied AI or ML techniques for model development and/or validation.

Abstract
Introduction
Studies addressing the development and/or validation of diagnostic and prognostic prediction models are abundant in most clinical domains. Systematic reviews have shown that the methodological and reporting quality of prediction model studies is suboptimal. Due to the increasing availability of larger, routinely collected and complex medical data, and the rising application of Artificial Intelligence (AI) or machine learning (ML) techniques, the number of prediction model studies is expected to increase even further. Prediction models developed using AI or ML techniques are often labelled a ‘black box’, and little is known about their methodological and reporting quality. Therefore, this comprehensive systematic review aims to evaluate the reporting quality, the methodological conduct, and the risk of bias of prediction model studies that applied ML techniques for model development and/or validation.

Methods and analysis
A search will be performed in PubMed to identify studies developing and/or validating prediction models using any ML methodology and across all medical fields. Studies will be included if they were published between January 2018 and December 2019, predict patient-related outcomes, use any study design or data source, and are available in English. Screening of search results and data extraction from included articles will be performed by two independent reviewers. The primary outcomes of this systematic review are: (1) the adherence of ML-based prediction model studies to the Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD) statement, and (2) the risk of bias in such studies as assessed using the Prediction model Risk Of Bias ASsessment Tool (PROBAST). A narrative synthesis will be conducted for all included studies.
Findings will be stratified by study type, medical field and prevalent ML methods, and will inform necessary extensions or updates of TRIPOD and PROBAST to better address prediction model studies that used AI or ML techniques.

Ethics and dissemination
Ethical approval is not required for this study because only available published data will be analysed. Findings will be disseminated through peer-reviewed publications and scientific conferences.

Systematic review registration
PROSPERO, CRD42019161764.
Citations
Machine learning
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Risk of bias in studies on prediction models developed using supervised machine learning techniques: systematic review
Constanza L Andaur Navarro,Johanna A A G Damen,Toshihiko Takada,Steven W J Nijman,Paula Dhiman,Jie Ma,Gary S. Collins,Ram Bajpai,Richard D Riley,Karel G.M. Moons,Lotty Hooft +10 more
TL;DR: In this article, the authors assess the methodological quality of studies on prediction models developed using machine learning techniques across all medical specialties, and assess the risk of bias using the Prediction model Risk Of Bias ASsessment Tool (PROBAST).
Missing data is poorly handled and reported in prediction model studies using machine learning: a literature review.
Swj Nijman,AM Leeuwenberg,I Beekers,I Verkouter,Jjl Jacobs,M. L. Bots,Folkert W. Asselbergs,K. G. M. Moons,Tpa Debray +10 more
TL;DR: In this article, the authors systematically searched the literature on published papers between 2018 and 2019 about primary studies developing and/or validating clinical prediction models using any supervised ML methodology across medical fields.
References
Research electronic data capture (REDCap): a metadata-driven methodology and workflow process for providing translational research informatics support
Paul A. Harris,Robert W. Taylor,Robert J. Thielke,Jonathon Payne,Nathaniel Gonzalez,José G. Conde +5 more
TL;DR: Research electronic data capture (REDCap) is a novel workflow methodology and software solution designed for rapid development and deployment of electronic data capture tools to support clinical and translational research.
The PRISMA Statement for Reporting Systematic Reviews and Meta-Analyses of Studies That Evaluate Health Care Interventions: Explanation and Elaboration
Alessandro Liberati,Douglas G. Altman,Jennifer Tetzlaff,Cynthia D. Mulrow,Peter C Gøtzsche,John P. A. Ioannidis,Mike Clarke,Philip J. Devereaux,Jos Kleijnen,David Moher +10 more
TL;DR: An Explanation and Elaboration of the PRISMA Statement is presented and updated guidelines for the reporting of systematic reviews and meta-analyses are presented.
Preferred reporting items for systematic review and meta-analysis protocols (PRISMA-P) 2015 statement
David Moher,Larissa Shamseer,Mike Clarke,Davina Ghersi,Alessandro Liberati,Mark Petticrew,Paul G. Shekelle,Lesley A. Stewart +7 more
TL;DR: A reporting guideline is described, the Preferred Reporting Items for Systematic reviews and Meta-Analyses for Protocols 2015 (PRISMA-P 2015), which consists of a 17-item checklist intended to facilitate the preparation and reporting of a robust protocol for a systematic review.
Machine learning
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.