Author

Lan Wei

Bio: Lan Wei is an academic researcher from University College Dublin. The author has contributed to research in the topics of Electroencephalography & Clinical decision support systems. The author has an h-index of 1 and has co-authored 6 publications receiving 7 citations. Previous affiliations of Lan Wei include the University of Medicine and Health Sciences & the Royal College of Surgeons in Ireland.

Papers
Journal ArticleDOI
TL;DR: An overall distinct lack of application of XAI is found in the context of CDSS and, in particular, a lack of user studies exploring the needs of clinicians.
Abstract: Machine Learning and Artificial Intelligence (AI) more broadly have great immediate and future potential for transforming almost all aspects of medicine. However, in many applications, even outside medicine, a lack of transparency in AI applications has become increasingly problematic. This is particularly pronounced where users need to interpret the output of AI systems. Explainable AI (XAI) provides a rationale that allows users to understand why a system has produced a given output. The output can then be interpreted within a given context. One area that is in great need of XAI is that of Clinical Decision Support Systems (CDSSs). These systems support medical practitioners in their clinical decision-making and, in the absence of explainability, may lead to issues of under- or over-reliance. Providing explanations for how recommendations are arrived at will allow practitioners to make more nuanced, and in some cases, life-saving decisions. The need for XAI in CDSS, and the medical field in general, is amplified by the need for ethical and fair decision-making and by the fact that AI trained with historical data can reinforce historical actions and biases that should be uncovered. We performed a systematic literature review of work to date on the application of XAI in CDSS. Tabular data processing XAI-enabled systems are the most common, while XAI-enabled CDSS for text analysis are the least common in the literature. Developers show more interest in providing local explanations, while there is an almost even balance between post-hoc and ante-hoc explanations, as well as between model-specific and model-agnostic techniques. Studies reported benefits of the use of XAI, such as enhancing decision confidence for clinicians or generating hypotheses about causality, which ultimately lead to increased trustworthiness and acceptability of the system and potential for its incorporation in the clinical workflow. However, we found an overall distinct lack of application of XAI in the context of CDSS and, in particular, a lack of user studies exploring the needs of clinicians. We propose some guidelines for the implementation of XAI in CDSS and explore some opportunities, challenges, and future research needs.
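
To make the taxonomy above concrete, the following is a minimal sketch of a post-hoc, model-agnostic, local explanation for a tabular classifier, using scikit-learn and the shap library; the dataset, model, and parameters are illustrative assumptions, not anything drawn from the reviewed CDSS studies.

```python
# Minimal sketch: post-hoc, model-agnostic, local explanation of a tabular classifier.
# The data and model are placeholders, not from the reviewed CDSS studies.
import numpy as np
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Illustrative tabular data standing in for clinical features.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Post-hoc and model-agnostic: KernelExplainer only queries the fitted model's predictions.
background = X_train.sample(50, random_state=0)
explainer = shap.KernelExplainer(lambda d: model.predict_proba(d)[:, 1], background)

# Local: explain a single prediction (one "patient") as per-feature contributions.
shap_values = explainer.shap_values(X_test.iloc[:1])
print(dict(zip(X.columns, np.ravel(shap_values))))
```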

110 citations

Journal ArticleDOI
TL;DR: In this paper, an explainable machine learning-based clinical decision support system (CDSS) was developed to identify at-risk women in need of targeted pregnancy intervention; interventions initiated early in pregnancy can reduce the rate of Gestational Diabetes Mellitus (GDM) in these women, but untargeted interventions can be costly and time-consuming.
Abstract: Gestational Diabetes Mellitus (GDM), a common pregnancy complication associated with many maternal and neonatal consequences, is increased in mothers with overweight and obesity. Interventions initiated early in pregnancy can reduce the rate of GDM in these women; however, untargeted interventions can be costly and time-consuming. We have developed an explainable machine learning-based clinical decision support system (CDSS) to identify at-risk women in need of targeted pregnancy intervention. Maternal characteristics and blood biomarkers at baseline from the PEARS study were used. After appropriate data preparation, synthetic minority oversampling technique and feature selection, five machine learning algorithms were applied with five-fold cross-validated grid search optimising the balanced accuracy. Our models were explained with Shapley additive explanations to increase the trustworthiness and acceptability of the system. We developed multiple models for different use cases: theoretical (AUC-PR 0.485, AUC-ROC 0.792), GDM screening during a normal antenatal visit (AUC-PR 0.208, AUC-ROC 0.659), and remote GDM risk assessment (AUC-PR 0.199, AUC-ROC 0.656). Our models have been implemented as a web server that is publicly available for academic use. Our explainable CDSS demonstrates the potential to assist clinicians in screening at-risk patients who may benefit from early pregnancy GDM prevention strategies.
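
As a rough illustration of the kind of pipeline described above (SMOTE oversampling, feature selection, and five-fold grid search optimising balanced accuracy), here is a minimal sketch using scikit-learn and imbalanced-learn; the synthetic data, parameter grid, and classifier choice are assumptions for illustration, not the actual PEARS study configuration.

```python
# Minimal sketch: SMOTE + feature selection + 5-fold grid search on balanced accuracy.
# Synthetic data stands in for the PEARS maternal characteristics and biomarkers.
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import GridSearchCV

# Imbalanced synthetic dataset (minority class ~15%) as a placeholder for GDM vs no-GDM.
X, y = make_classification(n_samples=600, n_features=30, weights=[0.85, 0.15], random_state=42)

pipe = Pipeline([
    ("smote", SMOTE(random_state=42)),              # oversample the minority (GDM) class
    ("select", SelectKBest(score_func=f_classif)),  # univariate feature selection
    ("clf", RandomForestClassifier(random_state=42)),
])

param_grid = {
    "select__k": [10, 20, "all"],
    "clf__n_estimators": [200, 500],
    "clf__max_depth": [3, 5, None],
}

search = GridSearchCV(pipe, param_grid, cv=5, scoring="balanced_accuracy", n_jobs=-1)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```

Once a best model is chosen this way, Shapley additive explanations can be computed on the fitted classifier in the same manner as the earlier sketch.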

21 citations

Proceedings ArticleDOI
20 Jul 2020
TL;DR: A novel supervised machine learning-based algorithm to detect sleep spindles in infant EEG recordings is presented, which has the potential to assist researchers and clinicians in the automated analysis of sleep spindles in infant EEG.
Abstract: Sleep spindles are associated with normal brain development, memory consolidation and infant sleep-dependent brain plasticity and can be used by clinicians in the assessment of brain development in infants. Sleep spindles can be detected in EEG; however, identifying sleep spindles in EEG recordings manually is very time-consuming and typically requires highly trained experts. Research on the automatic detection of sleep spindles in infant EEGs has been limited to date. In this study, we present a novel supervised machine learning-based algorithm to detect sleep spindles in infant EEG recordings. EEGs collected from 141 ex-term born infants and 6 ex-preterm born infants, recorded at 4 months of age (adjusted), were used to train and test the algorithm. Sleep spindles were annotated by experienced clinical physiologists as the gold standard. The dataset was split into training (81 ex-term), validation (30 ex-term), and testing (30 ex-term + 6 ex-preterm) sets. Fifteen features were selected for input into a random forest algorithm. Sleep spindles were detected in the ex-term infant EEG test set with 92.1% sensitivity and 95.2% specificity. For ex-preterm born infants, the sensitivity and specificity were 80.3% and 91.8%, respectively. The proposed algorithm has the potential to assist researchers and clinicians in the automated analysis of sleep spindles in infant EEG.
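
As a rough sketch of the supervised set-up described above (a random forest over per-segment EEG features, evaluated by sensitivity and specificity), the following uses synthetic placeholders for the 15 features and the annotated segments; it is illustrative only, not the paper's implementation.

```python
# Minimal sketch: random forest over per-segment EEG features, evaluated with
# sensitivity and specificity. Synthetic placeholders are used instead of the
# paper's 15 features and annotated infant recordings.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)
n_train, n_test, n_features = 2000, 500, 15
X_train = rng.normal(size=(n_train, n_features))   # stand-ins for spectral/amplitude features
y_train = rng.integers(0, 2, size=n_train)         # 1 = spindle segment, 0 = background
X_test = rng.normal(size=(n_test, n_features))
y_test = rng.integers(0, 2, size=n_test)

clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_train, y_train)
y_pred = clf.predict(X_test)

tn, fp, fn, tp = confusion_matrix(y_test, y_pred).ravel()
print(f"sensitivity={tp / (tp + fn):.3f}, specificity={tn / (tn + fp):.3f}")
```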

11 citations

Journal ArticleDOI
TL;DR: In this paper, a random forest-based sleep spindle detection method (Spindle-AI) was proposed to estimate the number and duration of sleep spindles in EEG collected from 141 ex-term born infants, recorded at 4 months of age.
Abstract: Objective: Sleep spindle features show developmental changes during infancy and have the potential to provide an early biomarker for abnormal brain maturation. Manual identification of sleep spindles in the electroencephalogram (EEG) is time-consuming and typically requires highly-trained experts. Automated detection of sleep spindles would greatly facilitate this analysis. Research on the automatic detection of sleep spindles in infant EEG has been limited to-date. Methods: We present a random forest-based sleep spindle detection method (Spindle-AI) to estimate the number and duration of sleep spindles in EEG collected from 141 ex-term born infants, recorded at 4 months of age. The signal on channel F4-C4 was split into a training set (81 ex-term) and a validation set (30 ex-term). An additional 30 ex-term infant EEGs (channel F4-C4 and channel F3-C3) were used as an independent test set. Fourteen features were selected for input into a random forest algorithm to estimate the number and duration of spindles and the results were compared against sleep spindles annotated by an experienced clinical physiologist. Results: The prediction of the number of sleep spindles in the independent test set demonstrated 93.3% to 93.9% sensitivity, 90.7% to 91.5% specificity, and 89.2% to 90.1% precision. The duration estimation of sleep spindle events in the independent test set showed a percent error of 5.7% to 7.4%. Conclusion and Significance: Spindle-AI has been implemented as a web server that has the potential to assist clinicians in the fast and accurate monitoring of sleep spindles in infant EEGs.
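
One plausible way to turn per-segment detector outputs into the event-level counts and durations reported above is to merge consecutive positive segments into spindle events; the segment length and merging rule below are assumptions for illustration, not necessarily Spindle-AI's exact post-processing.

```python
# Minimal sketch: merge consecutive positive per-segment predictions into spindle
# events to obtain a count and per-event durations. Segment length is an assumption.
import numpy as np

def predictions_to_events(y_pred, segment_sec=0.5):
    """Merge consecutive positive segments into events; return (count, durations in seconds)."""
    durations = []
    start = None
    for i, label in enumerate(np.asarray(y_pred)):
        if label == 1 and start is None:
            start = i                                  # event begins
        elif label == 0 and start is not None:
            durations.append((i - start) * segment_sec)  # event ends
            start = None
    if start is not None:                              # event runs to the end of the record
        durations.append((len(y_pred) - start) * segment_sec)
    return len(durations), durations

count, durations = predictions_to_events([0, 1, 1, 1, 0, 0, 1, 1, 0])
print(count, durations)   # 2 events with durations 1.5 s and 1.0 s
```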

6 citations

Proceedings ArticleDOI
05 Dec 2020
TL;DR: In this article, an XGBoost-based method was proposed to detect seizures in clinical EEG recordings from the TUH-EEG Corpus, achieving a sensitivity of 20.00% and a false alarm rate of 15.59 per 24 hours on the test set.
Abstract: Epilepsy is one of the most common serious disorders of the brain, affecting about 50 million people worldwide. Electroencephalography (EEG) is an electrophysiological monitoring method which is used to measure tiny electrical changes of the brain, and it is frequently used to diagnose epilepsy. However, the visual annotation of EEG traces is time-consuming and typically requires experienced experts. Therefore, automatic seizure detection can help to reduce the time required to annotate EEGs. Automatic detection of seizures in clinical EEGs has been limited to date. In this study, we present an XGBoost-based method to detect seizures in EEGs from the TUH-EEG Corpus. 4,597 EEG files were used to train the method, 1,013 EEGs were used as a validation set, and 1,026 EEG files were used to test the method. Sixty-four features were selected as input, and the Synthetic Minority Over-sampling Technique was used to balance the dataset. Our XGBoost-based method achieved a sensitivity of 20.00% and a false alarm rate of 15.59 per 24 hours on the test set. The proposed XGBoost-based method has the potential to help researchers automatically analyse seizures in clinical EEG recordings.
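
A minimal sketch of the kind of window-level classifier described above, assuming XGBoost over 64 per-window features with SMOTE balancing; the synthetic data and hyperparameters are illustrative, and the event-level false alarm rate per 24 hours would require additional post-processing not shown here.

```python
# Minimal sketch: XGBoost seizure/background window classifier with SMOTE balancing.
# Synthetic features stand in for the 64 per-window EEG features; parameters are illustrative.
import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

# Heavily imbalanced stand-in for windowed EEG features (1 = seizure window).
X, y = make_classification(n_samples=5000, n_features=64, weights=[0.97, 0.03], random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=1)

# Balance only the training data before fitting.
X_bal, y_bal = SMOTE(random_state=1).fit_resample(X_train, y_train)

clf = XGBClassifier(n_estimators=300, max_depth=6, learning_rate=0.1, eval_metric="logloss")
clf.fit(X_bal, y_bal)

y_pred = clf.predict(X_test)
tp = np.sum((y_pred == 1) & (y_test == 1))
fn = np.sum((y_pred == 0) & (y_test == 1))
print(f"window-level sensitivity = {tp / (tp + fn):.3f}")
```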

6 citations


Cited by
Journal ArticleDOI
TL;DR: Wang et al. presented a living review aiming to critically appraise available data about secondary attack rates from people with asymptomatic, pre-symptomatic and symptomatic SARS-CoV-2 infection.

97 citations

Journal ArticleDOI
TL;DR: In this paper, a review of 99 Q1 articles covering explainable artificial intelligence (XAI) techniques is presented, including SHAP, LIME, GradCAM, LRP, Fuzzy classifier, EBM, CBR, and others.

80 citations

Journal ArticleDOI
TL;DR: In this article, the authors conducted a mixed-methods study of user interaction with samples of state-of-the-art AI explainability techniques for digital pathology, revealing challenging dilemmas faced by developers of xAI solutions for medicine and proposing empirically backed principles for their safer and more effective design.

37 citations

Journal ArticleDOI
TL;DR: In this article, the authors present the most indicative studies with respect to the ML algorithms and data used in cancer research and provide a thorough examination of the clinical scenarios with regard to disease diagnosis, patient classification and cancer prognosis and survival.
Abstract: Artificial Intelligence (AI) has recently altered the landscape of cancer research and medical oncology using traditional Machine Learning (ML) algorithms and cutting-edge Deep Learning (DL) architectures. In this review article we focus on the ML aspect of AI applications in cancer research and present the most indicative studies with respect to the ML algorithms and data used. The PubMed and dblp databases were considered to obtain the most relevant research works of the last five years. Based on a comparison of the proposed studies and their research clinical outcomes concerning the medical ML application in cancer research, three main clinical scenarios were identified. We give an overview of the well-known DL and Reinforcement Learning (RL) methodologies, as well as their application in clinical practice, and we briefly discuss Systems Biology in cancer research. We also provide a thorough examination of the clinical scenarios with respect to disease diagnosis, patient classification and cancer prognosis and survival. The most relevant studies identified in the preceding year are presented along with their primary findings. Furthermore, we examine the effective implementation and the main points that need to be addressed in the direction of robustness, explainability and transparency of predictive models. Finally, we summarize the most recent advances in the field of AI/ML applications in cancer research and medical oncology, as well as some of the challenges and open issues that need to be addressed before data-driven models can be implemented in healthcare systems to assist physicians in their daily practice.

36 citations

Proceedings ArticleDOI
29 Apr 2022
TL;DR: The results indicate a more significant impact of advice when an explanation for the DSS decision is provided, along with insights on how to improve the explanations in the diagnosis forecasts for healthcare assistants, nurses, and doctors.
Abstract: The field of eXplainable Artificial Intelligence (XAI) focuses on providing explanations for AI systems’ decisions. XAI applications to AI-based Clinical Decision Support Systems (DSS) should increase trust in the DSS by allowing clinicians to investigate the reasons behind its suggestions. In this paper, we present the results of a user study on the impact of advice from a clinical DSS on healthcare providers’ judgment in two different cases: the case where the clinical DSS explains its suggestion and the case where it does not. We examined the weight of advice, the behavioral intention to use the system, and the perceptions with quantitative and qualitative measures. Our results indicate a more significant impact of advice when an explanation for the DSS decision is provided. Additionally, through the open-ended questions, we provide some insights on how to improve the explanations in the diagnosis forecasts for healthcare assistants, nurses, and doctors.
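
For reference, the weight of advice mentioned above is commonly computed in judge-advisor studies as the fraction of the distance from the initial judgment to the advice that the final judgment covers; the sketch below assumes this standard formulation, which may differ in detail from the exact measure used in the study.

```python
# Weight of Advice (WOA) as commonly defined in judge-advisor studies:
# how far the final judgment moved from the initial judgment toward the advice.
# Standard formulation only; it may differ from the study's exact measure.
def weight_of_advice(initial, advice, final):
    if advice == initial:
        raise ValueError("WOA is undefined when the advice equals the initial judgment")
    return (final - initial) / (advice - initial)

print(weight_of_advice(initial=40.0, advice=60.0, final=55.0))  # 0.75: moved 75% toward the advice
```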

32 citations