
Showing papers in "Intelligent medicine in 2022"


Journal ArticleDOI
TL;DR: In this article, the authors provide an overview of the core concepts of the attention mechanism built into transformers and other basic components, review various transformer architectures tailored for medical image applications, and discuss their limitations.
Abstract: Transformers have dominated the field of natural language processing and have recently made an impact in the area of computer vision. In the field of medical image analysis, transformers have also been successfully applied to full-stack clinical applications, including image synthesis/reconstruction, registration, segmentation, detection, and diagnosis. This paper aims to promote awareness of the applications of transformers in medical image analysis. Specifically, we first provide an overview of the core concepts of the attention mechanism built into transformers and other basic components. Second, we review various transformer architectures tailored for medical image applications and discuss their limitations. Within this review, we investigate key challenges including the use of transformers in different learning paradigms, improving model efficiency, and coupling with other techniques. We hope this review will provide a comprehensive picture of transformers to readers with an interest in medical image analysis.
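The attention operation at the heart of these transformer architectures can be sketched in a few lines. The following is an illustrative NumPy implementation of scaled dot-product self-attention, not code from the paper under review; the toy embeddings are invented for demonstration.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Core transformer operation: softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                   # query-key similarities
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w = w / w.sum(axis=-1, keepdims=True)             # row-wise softmax
    return w @ V, w

# toy input: 3 tokens with 4-dimensional embeddings
x = np.random.default_rng(0).normal(size=(3, 4))
out, attn = scaled_dot_product_attention(x, x, x)     # self-attention: Q = K = V
```

Each output row is a weighted mix of the value vectors, with weights given by the softmax-normalized query-key similarities.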

46 citations


Journal ArticleDOI
Rui Qiu1
TL;DR: In this paper, the authors adopted a human-guided machine learning framework to capture public opinions on the vaccines for severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) from Twitter data.
Abstract: Background The current development of vaccines for severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) is unprecedented. Little is known, however, about the nuanced public opinions on the vaccines on social media. Methods We adopted a human-guided machine learning framework using more than six million tweets from almost two million unique Twitter users to capture public opinions on the vaccines for SARS-CoV-2, classifying them into three groups: pro-vaccine, vaccine-hesitant, and anti-vaccine. After feature inference and opinion mining, 10,945 unique Twitter users were included in the study population. Multinomial logistic regression and counterfactual analysis were conducted. Results Socioeconomically disadvantaged groups were more likely to hold polarized opinions on coronavirus disease 2019 (COVID-19) vaccines, either pro-vaccine (B=0.40, SE=0.08, P<0.001, OR=1.49; 95% CI=1.26–1.75) or anti-vaccine (B=0.52, SE=0.06, P<0.001, OR=1.69; 95% CI=1.49–1.91). People with the worst personal pandemic experiences were more likely to hold anti-vaccine opinions (B=-0.18, SE=0.04, P<0.001, OR=0.84; 95% CI=0.77–0.90). The United States public is most concerned about the safety, effectiveness, and political issues regarding vaccines for COVID-19, and improving personal pandemic experience increases the vaccine acceptance level. Conclusion Opinion on COVID-19 vaccine uptake varies across people of different characteristics.

28 citations


Journal ArticleDOI
Maoning Li1
TL;DR: In this article , the authors reviewed existing studies on applications of AI techniques in combating the COVID-19 pandemic and discussed potential challenges, directions, and open questions, which may provide new insights into addressing the new coronavirus disease 2019 pandemic.
Abstract: The new coronavirus disease 2019 (COVID-19) has become a global pandemic, leading to over 180 million confirmed cases and nearly 4 million deaths as of June 2021, according to the World Health Organization. Since the initial report in December 2019, COVID-19 has demonstrated a high transmission rate (with an R0 > 2), a diverse set of clinical characteristics (e.g., high hospital and intensive care unit admission rates, multi-organ dysfunction for critically ill patients due to hyperinflammation, thrombosis, etc.), and a tremendous burden on health care systems around the world. To understand this serious and complex disease and develop effective control, treatment, and prevention strategies, researchers from different disciplines have been making significant efforts from different aspects, including epidemiology and public health, biology and genomic medicine, as well as clinical care and patient management. In recent years, artificial intelligence (AI) has been introduced into the healthcare field to aid clinical decision-making for disease diagnosis and treatment, such as detecting cancer based on medical images, and has achieved superior performance in multiple data-rich application scenarios. In the COVID-19 pandemic, AI techniques have also been used as a powerful tool to combat this complex disease. In this context, the goal of this study is to review existing studies on applications of AI techniques in combating the COVID-19 pandemic. Specifically, these efforts are grouped and summarized across the fields of epidemiology, therapeutics, clinical research, and social and behavioral studies. Potential challenges, directions, and open questions are discussed accordingly, which may provide new insights into addressing the COVID-19 pandemic and would be helpful for researchers exploring related topics in the post-pandemic era.

14 citations


Journal ArticleDOI
TL;DR: In this article , the authors present several approaches to investigate the application of multiple algorithms based on machine learning (ML) approach and biosensors for early breast cancer detection, which is a widely occurring cancer in women worldwide and is related to high mortality.
Abstract: Breast cancer is a widely occurring cancer in women worldwide and is associated with high mortality. The objective of this review was to present several approaches to investigate the application of multiple algorithms based on a machine learning (ML) approach and biosensors for early breast cancer detection. Biosensors and ML enable the automated identification of cancers based on microscopic images. ML aims to facilitate self-learning in computers. Rather than relying on explicit pre-programmed rules and models, it is based on identifying patterns in observed data and building models to predict outcomes. We have compared and analysed various types of algorithms such as fuzzy extreme learning machine – radial basis function (ELM-RBF), support vector machine (SVM), support vector regression (SVR), relevance vector machine (RVM), naïve Bayes, k-nearest neighbours (K-NN), decision tree (DT), artificial neural network (ANN), back-propagation neural network (BPNN), and random forest across different databases, including images digitized from fine needle aspirations of breast masses, scanned film mammography, breast infrared images, MR images, data collected using blood analyses, and histopathology image samples. The results were compared on performance metrics such as accuracy, precision, and recall. Further, we used biosensors to determine the presence of a specific biological analyte by transforming the cellular constituents of proteins, DNA, or RNA into electrical signals that can be detected and analysed. Here, we have compared the detection of different types of analytes such as HER2, miRNA 21, miRNA 155, MCF-7 cells, DNA, BRCA1, BRCA2, human tears, and saliva by using different types of biosensors, including FET, electrochemical, and sandwich electrochemical, among others. Several biosensors use different types of specifications, which are also discussed.
The biosensor results were analysed on the basis of detection limit, linear range, and response time. Different studies and related articles published from 2010 to 2021 were reviewed and analysed systematically. Biosensors and ML both have the potential to detect breast cancer quickly and effectively.
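The headline metrics used to compare the ML algorithms above (accuracy, precision, recall) all derive from the same confusion-matrix counts. A minimal sketch, with made-up labels rather than data from any reviewed study:

```python
def binary_metrics(y_true, y_pred):
    """Accuracy, precision, and recall for a binary (1 = malignant) classifier."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return accuracy, precision, recall

# illustrative labels, not data from any of the reviewed studies
acc, prec, rec = binary_metrics([1, 0, 1, 1, 0, 0], [1, 0, 0, 1, 0, 1])
```

Precision penalizes false alarms, recall penalizes missed cancers; a screening tool typically privileges recall.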

13 citations


Journal ArticleDOI
TL;DR: In this article , a study was conducted to classify patients suspected of hepatitis C infection using different classification models, which included support vector machine (SVM), Gaussian Naïve Bayes (NB), decision tree (DT), random forest (RF), logistic regression (LR), and KNN algorithm.
Abstract: Hepatitis C virus (HCV) has a high prevalence worldwide, and progression of the disease can cause irreversible, severe liver damage or even death. Therefore, developing prediction models using machine learning techniques is beneficial. This study was conducted to classify patients suspected of HCV infection using different classification models. The study was conducted using a dataset derived from the University of California, Irvine (UCI) Machine Learning Repository. Since the HCV dataset was imbalanced, the synthetic minority oversampling technique (SMOTE) was applied to balance the dataset. After cleaning the dataset, it was divided into training and test data for developing six classification models. These six algorithms included the support vector machine (SVM), Gaussian naïve Bayes (NB), decision tree (DT), random forest (RF), logistic regression (LR), and K-nearest neighbors (KNN) algorithms. The Python programming language was used to develop the classifiers. Receiver operating characteristic curve analysis and other metrics were used to evaluate the performance of the proposed models. After evaluation of the models using different metrics, the RF classifier had the best performance among the methods, with an accuracy of 97.29%. Accordingly, the areas under the curve (AUC) for the LR, KNN, DT, SVM, Gaussian NB, and RF models were 0.921, 0.963, 0.953, 0.972, 0.896, and 0.998, respectively, with RF showing the best predictive performance. Various machine learning techniques for classifying patients as healthy or unhealthy were used in this study. The developed models can identify the stage of HCV based on trained data.
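SMOTE, used above to balance the HCV dataset, synthesizes new minority-class samples by interpolating between a minority sample and one of its nearest minority-class neighbours. A simplified NumPy sketch of that idea (the toy 2-D points are invented; real implementations such as imbalanced-learn's SMOTE add refinements):

```python
import numpy as np

def smote_sketch(X_min, n_new, k=3, seed=0):
    """Generate n_new synthetic minority samples by interpolating between
    each sample and one of its k nearest minority-class neighbours."""
    rng = np.random.default_rng(seed)
    X_min = np.asarray(X_min, dtype=float)
    # pairwise distances within the minority class
    d = np.linalg.norm(X_min[:, None, :] - X_min[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)                  # exclude self as a neighbour
    neighbours = np.argsort(d, axis=1)[:, :k]
    synthetic = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        j = rng.choice(neighbours[i])
        u = rng.random()                         # interpolation factor in [0, 1)
        synthetic.append(X_min[i] + u * (X_min[j] - X_min[i]))
    return np.array(synthetic)

X_minority = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
X_new = smote_sketch(X_minority, n_new=4)
```

Because each synthetic point is a convex combination of two real minority samples, it stays inside the minority class's local geometry rather than duplicating samples outright.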

10 citations


Journal ArticleDOI
TL;DR: In this article , the authors summarize systematic reviews addressing the effect of mobile health technology on the outcome of patients with chronic diseases, and describe the current applications of various mHealth approaches, evaluate their effectiveness as well as limitations, and discuss potential challenges in their future development.
Abstract: The successful control of chronic diseases mainly depends on how well patients manage their disease conditions with the aid of healthcare providers. Mobile health technology—also known as mHealth—supports healthcare practice by means of mobile devices such as smartphone applications, web-based technologies, telecommunications services, social media, and wearable technology, and is becoming increasingly popular. Many studies have evaluated the utility of mHealth as a tool to improve chronic disease management through monitoring and feedback, educational and lifestyle interventions, clinical decision support, medication adherence, risk screening, and rehabilitation support. The aim of this article is to summarize systematic reviews addressing the effect of mHealth on the outcome of patients with chronic diseases. We describe the current applications of various mHealth approaches, evaluate their effectiveness as well as limitations, and discuss potential challenges in their future development. The evidence to date indicates that none of the existing mHealth technologies are inferior to traditional care. Telehealth and web-based technologies are the most frequently reported interventions, with promising results including alleviation of disease-related symptoms, improved medication adherence, and decreased rates of rehospitalization and mortality. The new generation of mHealth devices based on various technologies is likely to provide more efficient and personalized healthcare programs for patients.

9 citations


Journal ArticleDOI
TL;DR: A deep ensemble model based on image-level labels for the binary classification of benign and malignant lesions of breast histopathological images has outperformed many existing approaches in accuracy, providing a method for the auxiliary medical diagnosis.
Abstract: Background: Breast cancer has the highest prevalence in women globally. The classification and diagnosis of breast cancer and its histopathological images have always been a hot spot of clinical concern. In computer-aided diagnosis (CAD), traditional classification models mostly use a single network to extract features, which has significant limitations. On the other hand, many networks are trained and optimized on patient-level datasets, ignoring the application of lower-level data labels. Method: This paper proposes a deep ensemble model based on image-level labels for the binary classification of benign and malignant lesions of breast histopathological images. First, the BreaKHis dataset is randomly divided into training, validation, and test sets. Then, data augmentation techniques are used to balance the number of benign and malignant samples. Thirdly, considering the performance of transfer learning and the complementarity between each network, VGG16, Xception, ResNet50, and DenseNet201 are selected as the base classifiers. Result: In the ensemble network model with accuracy as the weight, the image-level binary classification achieves an accuracy of 98.90%. To verify the capabilities of our method, the latest Transformer and multilayer perceptron (MLP) models have been experimentally compared on the same dataset. Our model outperforms them by 5%–20%, emphasizing the ensemble model's far-reaching significance in classification tasks. Conclusion: This research focuses on improving the model's classification performance with an ensemble algorithm. Transfer learning plays an essential role in small datasets, improving training speed and accuracy. Our model has outperformed many existing approaches in accuracy, providing a method for the field of auxiliary medical diagnosis.
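The "accuracy as the weight" ensemble described above can be illustrated as follows; the per-model probabilities and validation accuracies are hypothetical, not values from the paper:

```python
def accuracy_weighted_ensemble(probs_per_model, accuracies):
    """Fuse per-model malignancy probabilities, weighting each base
    classifier by its (normalized) validation accuracy."""
    total = sum(accuracies)
    weights = [a / total for a in accuracies]        # weights sum to 1
    n = len(probs_per_model[0])
    fused = [sum(w * p[i] for w, p in zip(weights, probs_per_model))
             for i in range(n)]
    labels = [1 if p >= 0.5 else 0 for p in fused]   # 1 = malignant
    return labels, fused

# hypothetical probabilities from four base classifiers on three images
probs = [[0.9, 0.2, 0.6], [0.8, 0.4, 0.7], [0.7, 0.1, 0.4], [0.95, 0.3, 0.55]]
accs = [0.96, 0.94, 0.93, 0.97]                      # illustrative validation accuracies
labels, fused = accuracy_weighted_ensemble(probs, accs)
```

Weighting by accuracy lets a stronger base classifier pull the fused probability toward its own prediction while weaker ones still contribute.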

8 citations


Journal ArticleDOI
TL;DR: In this article, a computerized automatic macular edema grading model is constructed using a SENet154 convolutional neural network, which embeds Squeeze-and-Excitation modules; the algorithm is optimized for the imbalanced public data set Messidor, and class activation maps are drawn to aid in diagnosis.
Abstract: Diabetic macular edema is one of the main causes of visual impairment in patients with diabetic retinopathy. As the number of patients with diabetes increases, so will the number of patients with diabetic macular edema. Early screening of patients for macular edema can provide timely and scientific clinical diagnosis and treatment. In this paper, we take fundus images of diabetic retinopathy patients as the processing object and use artificial intelligence technology to construct an automatic macular edema classification model, in order to achieve low-cost and rapid fundus image classification. This can be considered beneficial for the screening of macular edema patients on a large scale. In this paper, a computerized automatic macular edema grading model is constructed using a SENet154 convolutional neural network, which embeds Squeeze-and-Excitation modules; the algorithm is optimized for the imbalanced public data set Messidor, and class activation maps are drawn to aid in diagnosis. The AUCs of macular edema risk grades 0, 1, and 2 were 0.965, 0.881, and 0.963, respectively. Class activation mappings correctly mark focal areas for macular edema risk classification in fundus images. The constructed grading model showed a good recognition rate for fundus image variations caused by diabetic retinopathy. These results are of certain theoretical and practical significance for the auxiliary diagnosis of macular edema risk grades.
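The Squeeze-and-Excitation module that SENet154 builds on recalibrates channels by squeezing each feature map to a scalar, passing the result through a small two-layer bottleneck, and rescaling the channels. A minimal NumPy sketch (random weights stand in for learned parameters):

```python
import numpy as np

def squeeze_excitation(feature_map, W1, W2):
    """Squeeze-and-Excitation: global-average-pool each channel, pass it
    through a two-layer bottleneck, and gate the channels with a sigmoid."""
    # feature_map: (C, H, W)
    z = feature_map.mean(axis=(1, 2))            # squeeze: one scalar per channel
    s = np.maximum(W1 @ z, 0.0)                  # excitation, ReLU bottleneck
    s = 1.0 / (1.0 + np.exp(-(W2 @ s)))          # sigmoid gate per channel
    return feature_map * s[:, None, None]        # channel-wise rescaling

rng = np.random.default_rng(0)
C, r = 8, 2                                      # channels, reduction ratio
x = rng.normal(size=(C, 5, 5))
out = squeeze_excitation(x, rng.normal(size=(C // r, C)), rng.normal(size=(C, C // r)))
```

Because the gate lies in (0, 1), the block can only attenuate channels, letting the network emphasize informative feature maps relative to the rest.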

5 citations


Journal ArticleDOI
TL;DR: In this paper , a clinical cybernetic system is proposed to host the prediction machine, which allows for a human-machine collaborative interaction and an enhanced decision support platform to augment overall care strategies.
Abstract: Cervical cancer is a prominent disease in women, with a high mortality rate worldwide. This cancer continues to be a challenge to concisely diagnose, especially in its early stages. The aim of this study was to propose a unique cybernetic system which showcased the human-machine collaboration forming a superintelligence framework that ultimately allowed for greater clinical care strategies. In this work, we applied machine learning (ML) models on 650 patients' data collected from Hospital Universitario de Caracas in Caracas, Venezuela, where ethical approval and informed consent were granted. The data were hosted at the University of California at Irvine (UCI) database for cancer prediction. Using data purely from a patient questionnaire that includes key cervical cancer drivers, such as questions on sexually transmitted diseases and time since first intercourse, we designed a clinical prediction machine that can predict various stages of cervical cancer. Two contrasting methods are explored in the design of an ML-driven prediction machine in this study, namely, a probabilistic method using Gaussian mixture models (GMM), and fuzziness-based reasoning using fuzzy c-means (FCM) clustering on the data from 650 patients. The models were validated using a K-fold validation method, and the results show that both methods could be feasibly deployed in a clinical setting. The probabilistic method (accuracies above 80%, depending on the classifier) allows for more detail in the grading of a potential cervical cancer prediction, albeit at the cost of greater computation power; the FCM approach (accuracies around 90%, depending on the classifier) allows for more parsimonious modelling with a slightly reduced prediction depth in comparison.
As part of the novelty of this work, a clinical cybernetic system is also proposed to host the prediction machine, which allows for a human-machine collaborative interaction and an enhanced decision support platform to augment overall care strategies. The present study showcased how the use of prediction machines can contribute towards early detection and prioritised care of patients with cervical cancer, while also allowing for cost-saving benefits when compared with routine cervical cancer screening. Further work in this area would now involve additional validation of the proposed clinical cybernetic loop and further improvement to the prediction machine by exploring non-linear dimensional embedding and clustering methods.
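Of the two clustering approaches compared above, fuzzy c-means is compact enough to sketch directly. A minimal NumPy version, run here on invented toy points rather than the UCI cervical-cancer data:

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, n_iter=50, seed=0):
    """Minimal fuzzy c-means: alternate between updating cluster centres
    and soft membership degrees (the fuzzifier m controls fuzziness)."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)            # memberships sum to 1 per point
    for _ in range(n_iter):
        Um = U ** m
        centres = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=-1) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))            # standard FCM membership update
        U = inv / inv.sum(axis=1, keepdims=True)
    return centres, U

# two well-separated toy clusters (not the hospital questionnaire data)
X = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1],
              [5.0, 5.0], [5.1, 5.0], [5.0, 5.1]])
centres, U = fuzzy_c_means(X)
labels = U.argmax(axis=1)
```

Unlike a hard clustering, each patient record retains a graded membership in every cluster, which is the "fuzziness-based reasoning" the study exploits.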

5 citations


Journal ArticleDOI
TL;DR: Zhang et al. as mentioned in this paper applied a novel technique that provides the ability to combine both supervised and self-supervised loss terms, and in doing so eliminates the drawback of each technique, which enables the estimation of edge-preserving depth maps from a single untextured arthroscopic frame.
Abstract: Background: Lack of depth perception from medical imaging systems is one of the long-standing technological limitations of minimally invasive surgeries. The ability to visualize anatomical structures in 3D can improve conventional arthroscopic surgeries, as a full 3D semantic representation of the surgical site can directly improve surgeons' ability. It also brings the possibility of intraoperative image registration with pre-operative clinical records for the development of semi-autonomous and fully autonomous platforms. Methods: Depth estimation and segmentation of feature- and texture-less tissue structures is an extremely challenging task. The lack of accurate ground-truth depth data, uneven lighting conditions with intra-frame over- and under-exposed regions, and occlusions limit the application of stereo vision and monocular supervised learning techniques inside the joint space. On the other hand, unsupervised/self-supervised monocular depth estimation techniques suffer from depth discontinuity, poor depth estimation in texture-less regions, no association with depth scale, and poor depth gradients. The provision of fully segmented 3D maps solves the grand visualization challenge of knee arthroscopy, and our method is widely applicable to other forms of minimally invasive surgeries and 3D reconstruction of medical images in general. We apply a novel technique that combines both supervised and self-supervised loss terms, and in doing so eliminates the drawbacks of each technique. It enables the estimation of edge-preserving depth maps from a single untextured arthroscopic frame. The proposed image acquisition technique projects artificial textures on the surface to improve the quality of disparity maps from stereo images.
Moreover, by integrating an attention-aware multi-scale feature extraction technique with scene-global contextual constraints and multi-scale depth fusion, the model is able to predict reliable and accurate tissue depth of the surgical sites that complies with scene geometry. Results: A total of 4,128 stereo frames from a knee phantom were used to train a network, and during the pre-training stage, the network learns disparity maps from the stereo images. The fine-tuning phase uses 12,695 knee arthroscopic stereo frames from cadaver experiments along with their corresponding coarse disparity maps obtained from the stereo matching technique. In a supervised fashion, the network learns the left-image-to-disparity-map transformation, whereas the self-supervised loss term refines the coarse depth map by minimizing reprojection, gradient, and structural dissimilarity losses. Together, our method produces high-quality 3D maps with minimal re-projection loss: 0.0004132 (structural similarity index), 0.00036120156 (L1 error distance), and 6.591908e-05 (L1 gradient error distance). Conclusion: Machine learning techniques for monocular depth prediction were studied to infer accurate depth maps from a single-color arthroscopic video frame. Moreover, the study integrates a segmentation model; hence, 3D segmented maps were inferred, providing extended perception ability and tissue awareness.

4 citations


Journal ArticleDOI
TL;DR: In this paper , a dual track clinical validation study was designed to assess the clinical accuracy of deep learning-based radiological image analysis for COVID-19 diagnosis in resource-limited settings.
Abstract: Deep learning-based radiological image analysis could facilitate the use of chest X-rays (CXRs) as a triaging tool for COVID-19 diagnosis in resource-limited settings. This study sought to determine whether a modified commercially available deep learning algorithm (M-qXR) could risk stratify patients with suspected COVID-19 infections. A dual-track clinical validation study was designed to assess the clinical accuracy of M-qXR. The algorithm evaluated all CXRs performed during the study period for abnormal findings and assigned a COVID-19 risk score. Four independent radiologists served as radiological ground truth. The M-qXR algorithm output was compared against radiological ground truth, and summary statistics for prediction accuracy were calculated. In addition, patients who underwent both PCR testing and CXR for suspected COVID-19 infection were included in a co-occurrence matrix to assess the sensitivity and specificity of the M-qXR algorithm. A total of 625 CXRs were included in the clinical validation study. 98% of total interpretations made by M-qXR agreed with ground truth (p = 0.25). M-qXR correctly identified the presence or absence of pulmonary opacities in 94% of CXR interpretations. M-qXR's sensitivity, specificity, PPV, and NPV for detecting pulmonary opacities were 94%, 95%, 99%, and 88%, respectively. M-qXR correctly identified the presence or absence of pulmonary consolidation in 88% of CXR interpretations (p = 0.48). M-qXR's sensitivity, specificity, PPV, and NPV for detecting pulmonary consolidation were 91%, 84%, 89%, and 86%, respectively. Furthermore, 113 PCR-confirmed COVID-19 cases were used to create a co-occurrence matrix between M-qXR's COVID-19 risk score and COVID-19 PCR test results.
The PPV and NPV of a medium-to-high COVID-19 risk score assigned by M-qXR yielding a positive COVID-19 PCR test result were estimated to be 89.7% and 80.4%, respectively. M-qXR was found to have comparable accuracy to radiological ground truth in detecting radiographic abnormalities on CXR suggestive of COVID-19.
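The sensitivity, specificity, PPV, and NPV reported above all derive from a 2×2 co-occurrence (confusion) matrix. A small sketch with illustrative counts, not the study's actual matrix:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV, and NPV from a 2x2 co-occurrence
    (confusion) matrix of algorithm output vs. ground truth."""
    sensitivity = tp / (tp + fn)   # fraction of true positives detected
    specificity = tn / (tn + fp)   # fraction of true negatives cleared
    ppv = tp / (tp + fp)           # probability a positive call is correct
    npv = tn / (tn + fn)           # probability a negative call is correct
    return sensitivity, specificity, ppv, npv

# illustrative counts only, not the study's actual 2x2 matrix
sens, spec, ppv, npv = diagnostic_metrics(tp=85, fp=10, fn=8, tn=92)
```

Note that PPV and NPV, unlike sensitivity and specificity, shift with disease prevalence in the tested population.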

Journal ArticleDOI
TL;DR: In this article, a critical review of select state-of-the-art machine learning techniques used to detect skin cancer is presented, and an analysis of the performance of k-nearest neighbors, support vector machine, and convolutional neural network algorithms on benchmark datasets is conducted.
Abstract: Skin cancer is among the most common and lethal cancer types, with the number of cases increasing dramatically worldwide. If not diagnosed in the nascent stages, it can lead to metastases, resulting in high mortality rates. Skin cancer can be cured if detected early. Consequently, timely and accurate diagnosis of such cancers is currently a key research objective. Various machine learning technologies have been employed in computer-aided diagnosis of skin cancer detection and malignancy classification. Machine learning is a subfield of artificial intelligence (AI) involving models and algorithms which can learn from data and generate predictions on previously unseen data. The traditional biopsy method is applied to diagnose skin cancer, which is a tedious and expensive procedure. Alternatively, machine learning algorithms for cancer diagnosis can aid in its early detection, lowering the workload of specialists while simultaneously enhancing skin lesion diagnostics. This article presents a critical review of select state-of-the-art machine learning techniques used to detect skin cancer. Several studies have been collected, and an analysis of the performance of k-nearest neighbors, support vector machine, and convolutional neural networks algorithms on benchmark datasets was conducted. The shortcomings and disadvantages of each algorithm are briefly discussed. Challenges in detecting skin cancer are highlighted and the scope for future research is proposed.
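Of the algorithms compared in this review, k-nearest neighbors is simple enough to sketch in full. A minimal pure-Python version on invented toy lesion features (real studies use far richer image descriptors):

```python
def knn_predict(X_train, y_train, x, k=3):
    """Classify x by majority vote among its k nearest training points
    (squared Euclidean distance)."""
    dists = [sum((a - b) ** 2 for a, b in zip(row, x)) for row in X_train]
    nearest = sorted(range(len(dists)), key=dists.__getitem__)[:k]
    votes = [y_train[i] for i in nearest]
    return max(set(votes), key=votes.count)

# toy lesion features (e.g. asymmetry and border scores); 0 = benign, 1 = malignant
X_train = [[0.1, 0.2], [0.2, 0.1], [0.15, 0.15],
           [0.9, 0.8], [0.8, 0.9], [0.85, 0.85]]
y_train = [0, 0, 0, 1, 1, 1]
pred = knn_predict(X_train, y_train, [0.82, 0.88])
```

Its simplicity is also its weakness on images: without learned features, raw-pixel distances carry little diagnostic signal, which is why the CNNs in this review dominate the benchmarks.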

Journal ArticleDOI
TL;DR: In this article , the authors present a review of the development of ICD coding from manual to automated work, from the rules-based stage, through the traditional machine learning stage, to the neural-network based stage.
Abstract: The International Classification of Diseases (ICD) is an international standard and tool for epidemiological investigation, health management, and clinical diagnosis with a fundamental role in intelligent medical care. The assignment of ICD codes to health-related documents has become a focus of academic research, and numerous studies have developed the process of ICD coding from manual to automated work. In this survey, we review the developmental history of this task in recent decades in depth, from the rules-based stage, through the traditional machine learning stage, to the neural-network-based stage. Various methods have been introduced to solve this problem by using different techniques, and we report a performance comparison of different methods on the publicly available Medical Information Mart for Intensive Care dataset. Next, we summarize four major challenges of this task: (1) the large label space, (2) the unbalanced label distribution, (3) the long text of documents, and (4) the interpretability of coding. Various solutions that have been proposed to solve these problems are analyzed. Further, we discuss the applications of ICD coding, from mortality statistics to payments based on disease-related groups and hospital performance management. In addition, we discuss different ways of considering and evaluating this task, and how it has been transformed into a learnable problem. We also provide details of the commonly used datasets. Overall, this survey aims to provide a reference and possible prospective directions for follow-up research work.
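The rules-based stage that opens this developmental history can be caricatured as keyword matching: every code whose trigger phrase appears in the note is emitted. A toy sketch (the phrase-to-code rules here are illustrative, not a validated mapping):

```python
# Hypothetical keyword rules; real rule-based coders use far richer
# terminologies, negation handling, and curated mappings.
RULES = {
    "E11": ["type 2 diabetes", "t2dm"],
    "I10": ["hypertension", "elevated blood pressure"],
    "J18": ["pneumonia"],
}

def assign_icd_codes(note):
    """First-generation, rules-based ICD coding: emit every code whose
    trigger phrase appears in the (lower-cased) clinical note."""
    text = note.lower()
    return sorted(code for code, phrases in RULES.items()
                  if any(p in text for p in phrases))

codes = assign_icd_codes("Admitted with pneumonia; history of hypertension and T2DM.")
```

The survey's four challenges are visible even here: the real label space has tens of thousands of codes, their frequencies are highly skewed, notes are long, and a matched phrase at least makes the assignment interpretable.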

Journal ArticleDOI
TL;DR: In this paper , the authors used the MobileNet-V2 Convolutional Neural Network (CNN) for the detection of patent ductus arteriosus (PDA) in echocardiogram video clips.
Abstract: Patent ductus arteriosus (PDA) is a common form of congenital heart disease, especially in preterm infants. A PDA can be associated with prolonged ventilator dependence and increased risk of severe lung disease, necrotizing enterocolitis, impaired renal function, intraventricular hemorrhage, and death. The problem of caring for neonates with a PDA is difficult, and the use of artificial intelligence (AI) to aid in PDA detection can assist in its management. A clinical database was searched for echocardiograms performed in the Neonatal Intensive Care Unit (NICU) at the Children's Hospital of Orange County (CHOC) from 2017 to 2021. A total of 461 de-identified echocardiogram video clips across 300 patients from CHOC were analyzed. Our goal was to explore the efficacy of a convolutional neural network (CNN) for PDA detection in echocardiogram video clips for eventual clinical deployment on an edge-based device. To this end, we used the lightweight MobileNet-V2 CNN architecture for training and testing. Of the 461 echocardiogram video clips, 316 were used for training, 74 for validation, and 72 for testing. Video frames were extracted from each clip and processed by the CNN. The CNN treated the frames as independent images and performed binary (normal vs. PDA) classification on each video clip. Of the 461 echocardiogram video clips analyzed, 272 contained an identifiable PDA and 190 were considered normal. Our CNN algorithm achieved notable results for identifying the presence of a PDA, with an area under the curve (AUC) of 0.88, positive predictive value (PPV) of 0.84, negative predictive value (NPV) of 0.80, sensitivity of 0.76, and specificity of 0.87 on the test data. Results indicate that diagnosis of PDA within an edge-based AI framework is feasible. Future work will involve augmenting the echocardiogram dataset, expanding the analysis to include PDA classification based on size and hemodynamic significance, and exploring additional algorithmic approaches.
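Since the CNN above scores frames independently, a clip-level decision requires aggregating the per-frame outputs; averaging the frame probabilities is one simple rule (the paper does not specify its aggregation, and these frame probabilities are invented):

```python
def classify_clip(frame_probs, threshold=0.5):
    """Aggregate independent per-frame PDA probabilities into a single
    clip-level decision by averaging (one simple aggregation rule)."""
    mean_p = sum(frame_probs) / len(frame_probs)
    return ("PDA" if mean_p >= threshold else "Normal"), mean_p

# hypothetical per-frame outputs from a frame-level CNN on one clip
label, p = classify_clip([0.7, 0.8, 0.65, 0.9, 0.55])
```

Alternatives such as max-pooling over frames or majority voting trade sensitivity against robustness to a few spurious frames.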

Journal ArticleDOI
TL;DR: Deep learning approaches, especially convolutional neural networks, have become the method of choice in the field of medical image analysis over the last few years, as mentioned in this paper; this is attributed to their excellent abilities of learning features in a more effective and efficient manner, not only for 2D/3D images in the Euclidean space, but also for meshes and graphs in non-Euclidean spaces, such as the cortical surfaces in the neuroimaging analysis field.
Abstract: Deep learning approaches, especially convolutional neural networks, have become the method of choice in the field of medical image analysis over the last few years. This prevalence is attributed to their excellent abilities of learning features in a more effective and efficient manner, not only for 2D/3D images in the Euclidean space, but also for meshes and graphs in non-Euclidean space, such as the cortical surfaces in neuroimaging analysis field. The brain cerebral cortex is a highly convoluted and thin sheet of gray matter, which is thus typically represented by triangular surface meshes with an intrinsic spherical topology for each hemisphere. Accordingly, novel tailored deep learning methods have been developed for cortical surface-based analysis of neuroimaging data. This paper reviews the representative deep learning techniques relevant to cortical surface-based analysis and summarizes recent major contributions to the field. Specifically, we have surveyed the use of deep learning techniques for cortical surface reconstruction, registration, parcellation, prediction, and other applications. We conclude by discussing the open challenges, limitations, and potentials of these techniques, and suggesting directions for future research.

Journal ArticleDOI
TL;DR: In this paper , different hypotheses for ketamine's mechanism of action including direct inhibition and disinhibition of NMDA receptors, AMPAR activation, and heightened activation of monoaminergic systems are reviewed.
Abstract: Ketamine, a noncompetitive NMDA receptor antagonist, has been exclusively used as an anesthetic in medicine and has led to new insights into the pathophysiology of neuropsychiatric disorders. Clinical studies have shown that low subanesthetic doses of ketamine produce antidepressant effects in individuals with depression. However, its use as a treatment for psychiatric disorders has been limited due to its reinforcing effects and high potential for diversion and misuse. Preclinical studies have focused on understanding the molecular mechanisms underlying ketamine's antidepressant effects, but a precise mechanism has yet to be elucidated. Here we review different hypotheses for ketamine's mechanism of action, including the direct inhibition and disinhibition of NMDA receptors, AMPAR activation, and heightened activation of monoaminergic systems. The proposed mechanisms are not mutually exclusive, and their combined influence may underlie the observed structural and functional neural impairments. Long-term use of ketamine induces structural and functional brain impairments and neurodevelopmental effects in both rodents and humans. Its misuse has increased rapidly in the past 20 years, and it is one of the most commonly used addictive drugs in Asia. The proposed mechanisms of action and supporting neuroimaging data allow for the development of tools to identify 'biotypes' of ketamine use disorder (KUD) using machine learning approaches, which could inform intervention and treatment.

Journal ArticleDOI
TL;DR: In this article , a deep learning model was developed and validated for detecting left ventricular dysfunction based on a standard 12-lead ECG, which largely depends on the availability of digital ECG data: 10 seconds for all 12 leads sampled at 500 hertz.
Abstract: Recently, a deep learning model was developed and validated for detecting left ventricular dysfunction based on a standard 12-lead ECG. However, this model largely depends on the availability of digital ECG data: 10 seconds for all 12 leads sampled at 500 hertz stored as a numeric array. This limits the ability to validate or scale this technology to institutions that store ECGs as PDF or image files (“paper” ECGs). Methods do exist to create digital signals from the archived paper copies of the ECGs. The primary objective of this study was to evaluate how well the AI-ECG model output obtained using digitized paper ECGs agreed with the predictions from the native digital ECGs for the detection of low ejection fraction. To address this objective, deep learning models that utilize digitized data from a 12-lead ECG snapshot were needed. Two models were evaluated: Model A, using data from a single lead with a full 10-second recording (lead II) only, and Model B, using data from 3 leads with 10-second recordings (leads II, V1 and V5) in addition to 9 leads with partial (2.5-second) recordings. In a test sample of 10 patients with varying ECG features, Models A and B obtained intraclass correlation coefficients of 0.95 (95% CI: 0.82 to 0.99) and 0.58 (95% CI: 0.00 to 0.87), respectively. In an exploratory examination of model diagnostic performance to detect low ejection fraction, Model A achieved an AUC of 0.71 while Model B achieved an AUC of 0.91. Our study demonstrates agreement between deep learning model predictions obtained from digitized paper-based ECGs and native digital ECGs and provides some insight into the potential expandability of ECG-based deep learning models, including the importance of captured duration (10-second vs. 2.5-second recordings) and ECG vectors (precordial leads vs. limb leads).
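Agreement between the paired model outputs from the two acquisition routes is summarized with intraclass correlation coefficients. A minimal pure-Python sketch of ICC(2,1) (two-way random effects, absolute agreement, single measurement — a common choice for method-agreement studies; the abstract does not state which ICC form was used, so this specific form and the function name are assumptions):

```python
def icc2_1(data):
    """ICC(2,1): rows are subjects, columns are measurement methods."""
    n, k = len(data), len(data[0])
    grand = sum(sum(row) for row in data) / (n * k)
    row_means = [sum(row) / k for row in data]
    col_means = [sum(data[i][j] for i in range(n)) / n for j in range(k)]
    # Mean squares from the two-way ANOVA decomposition
    msr = k * sum((rm - grand) ** 2 for rm in row_means) / (n - 1)
    msc = n * sum((cm - grand) ** 2 for cm in col_means) / (k - 1)
    mse = sum(
        (data[i][j] - row_means[i] - col_means[j] + grand) ** 2
        for i in range(n) for j in range(k)
    ) / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
```

Perfect agreement between two methods yields an ICC of 1; systematic or random disagreement pulls it toward 0.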

Journal ArticleDOI
TL;DR: In this paper , the authors proposed a kernel-based statistical framework for identifying topological differences in brain networks, where graph kernels are embedded into maximum mean discrepancy for calculating kernel based test statistic and conditional Monte Carlo simulation is adopted to compute the statistical significance and statistical power.
Abstract: A brain network describing interconnections between brain regions contains abundant topological information. It is a challenge for existing statistical methods (e.g., the t test) to investigate the topological differences of brain networks. We proposed a kernel-based statistical framework for identifying topological differences in brain networks. In our framework, the topological similarities between paired brain networks were measured by graph kernels. Then, graph kernels were embedded into maximum mean discrepancy for calculating a kernel-based test statistic. Based on this test statistic, we adopted conditional Monte Carlo simulation to compute the statistical significance (i.e., P value) and statistical power. We recruited 33 patients with Alzheimer's disease (AD), 33 patients with early mild cognitive impairment (EMCI), 33 patients with late mild cognitive impairment (LMCI), and 33 normal controls (NC) in our experiment. There were no statistical differences in demographic information between patients and NC. The compared state-of-the-art statistical methods included the t test, t-squared test, two-sample permutation test, and non-normal test. We applied the proposed shortest-path matched kernel to our framework for investigating the statistical differences of shortest-path topological structures in the brain networks of AD and NC. We compared our method with the existing state-of-the-art statistical methods on brain network characteristics, including clustering coefficient and functional connection, among EMCI, LMCI, AD, and NC. The results indicate that our framework can capture statistically discriminative shortest-path topological structures, such as the shortest path from the right rolandic operculum to the right supplementary motor area (P = 0.00314, statistical power = 0.803). In clustering coefficient and functional connection, our framework outperforms the state-of-the-art statistical methods, for example P = 0.0013 and statistical power = 0.83 in the analysis of AD and NC. Our proposed kernel-based statistical framework can be used to investigate not only the topological differences of brain networks but also their static characteristics (e.g., clustering coefficient and functional connection).
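The core of such a framework — a maximum mean discrepancy (MMD) statistic computed from a precomputed graph-kernel matrix, with a Monte Carlo P value from label permutations — can be sketched as follows (a generic illustration under our own simplifications, not the authors' implementation; in particular, their conditional Monte Carlo scheme and shortest-path matched kernel are not reproduced here):

```python
import random

def mmd2(K, idx_x, idx_y):
    """Biased MMD^2 between two groups, given a precomputed kernel matrix K."""
    kxx = sum(K[i][j] for i in idx_x for j in idx_x) / (len(idx_x) ** 2)
    kyy = sum(K[i][j] for i in idx_y for j in idx_y) / (len(idx_y) ** 2)
    kxy = sum(K[i][j] for i in idx_x for j in idx_y) / (len(idx_x) * len(idx_y))
    return kxx + kyy - 2 * kxy

def permutation_p_value(K, idx_x, idx_y, n_perm=1000, seed=0):
    """Monte Carlo P value: how often does a random relabeling match the observed MMD?"""
    rng = random.Random(seed)
    observed = mmd2(K, idx_x, idx_y)
    pooled = list(idx_x) + list(idx_y)
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        if mmd2(K, pooled[:len(idx_x)], pooled[len(idx_x):]) >= observed:
            count += 1
    return (count + 1) / (n_perm + 1)
```

In practice `K[i][j]` would hold the graph-kernel similarity between brain networks `i` and `j`; here any symmetric positive-semidefinite matrix serves to demonstrate the statistic.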

Journal ArticleDOI
TL;DR: In this article , a deep learning algorithm comprising three-dimensional convolutional neural networks (3D CNNs) was proposed to provide an approach for monitoring handwashing compliance and quality in hospitals and communities.
Abstract: Hand hygiene can be a simple, inexpensive, and effective method for preventing the spread of infectious diseases. However, a reliable and consistent method for monitoring adherence to the guidelines within and outside healthcare settings is challenging. The aim of this study was to provide an approach for monitoring handwashing compliance and quality in hospitals and communities. We proposed a deep learning algorithm comprising three-dimensional convolutional neural networks (3D CNNs) and used 230 standard handwashing videos recorded by healthcare professionals in the hospital or at home for training and internal validation. An assessment scheme with a probability smoothing method was also proposed to optimize the neural network's output to identify the handwashing steps, measure the exact duration, and grade the standard level of recognized steps. Twenty-two videos by healthcare professionals in another hospital and 28 videos recorded by civilians in the community were used for external validation. Using a deep learning algorithm and an assessment scheme, combined with a probability smoothing method, each handwashing step was recognized (ACC ranged from 90.64% to 98.87% in the hospital and from 87.39% to 96.71% in the community). An assessment scheme measured each step's exact duration, and the intraclass correlation coefficients were 0.98 (95% CI: 0.97–0.98) and 0.91 (95% CI: 0.88–0.93) for the total video duration in the hospital and community, respectively. Furthermore, the system assessed the quality of handwashing, similar to the expert panel (kappa = 0.79 in the hospital; kappa = 0.65 in the community). This work developed an algorithm to directly assess handwashing compliance and quality from videos, which is promising for application in healthcare settings and communities to reduce pathogen transmission.
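The probability-smoothing idea — averaging per-frame class probabilities over a sliding window before assigning each frame to a handwashing step, then converting frame counts to durations — can be sketched as follows (a simplified illustration; the window size, the plain moving average, and the function names are our assumptions, not the paper's exact scheme):

```python
def smooth_probs(probs, window=5):
    """Centered moving average over per-frame class-probability vectors."""
    half, n_cls = window // 2, len(probs[0])
    smoothed = []
    for t in range(len(probs)):
        lo, hi = max(0, t - half), min(len(probs), t + half + 1)
        chunk = probs[lo:hi]
        smoothed.append([sum(p[c] for p in chunk) / len(chunk) for c in range(n_cls)])
    return smoothed

def step_durations(probs, fps):
    """Assign each frame to its argmax step and accumulate seconds per step."""
    durations = {}
    for p in probs:
        step = max(range(len(p)), key=lambda c: p[c])
        durations[step] = durations.get(step, 0.0) + 1.0 / fps
    return durations
```

Smoothing suppresses isolated misclassified frames, so a single noisy prediction in the middle of a step no longer fragments the measured duration.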

Journal ArticleDOI
TL;DR: In this article , the authors summarized the development status of telemedicine at home and abroad, the application of tele-medicines as well as the feasibility and limitations of its promotion and development, and put forward an outlook for the future development.
Abstract: With the continuous improvement and development of modern network information technology and the continuous growth of people's demand for healthcare, the traditional healthcare model has evolved, giving birth to a new telemedicine healthcare model. Telemedicine refers to the comprehensive application of information technology for medical information transmission and long-distance communication between different places. It integrates medicine, computer technology, and communication technology, and encompasses a series of medical activities including remote monitoring, remote diagnosis, remote consultation, remote case discussion, remote teaching, and remote surgery. With the continuous development of communication technology, telemedicine is also constantly changing. As a relatively novel technology, telemedicine has been sought after by major hospitals. With the advancement of Internet technology, digitization and informatization have gradually been applied in telemedicine, but due to various factors, telemedicine still has great limitations. This paper summarizes the development status of telemedicine, discusses in detail its development at home and abroad and its applications, examines the feasibility and limitations of its promotion and development, and puts forward an outlook for its future development.

Journal ArticleDOI
TL;DR: In this article , an artificial intelligence (AI)-based ROSE model using deep-learning convolutional neural network (CNN) technique to assist in classifying cytologic whole-slide images (WSIs) as malignant or benign.
Abstract: Cytological rapid on-site evaluation (ROSE) is becoming an integral technique for improving the performance of bronchoscopic examinations by confirming specimen adequacy and accuracy in real-time. However, the time- and personnel-consuming nature of ROSE limits its application. We constructed an artificial intelligence (AI)-based ROSE model using the deep-learning convolutional neural network (CNN) technique to assist in classifying cytologic whole-slide images (WSIs) as malignant or benign. A total of 627 patients with ROSE slides were enrolled, among whom 374 and 91 patients were randomly assigned to training and validation groups, respectively. Another 162 patients were selected as a testing group. The malignant-benign classification results of the test group were compared between cytopathologists' results and AI-based ROSE model results. Actual ROSE reports of the test group given on-site were considered as results of junior cytopathologists; the official cytological diagnostic reports of the test group, which were given without time pressure and with reference to more clinical and pathological information by the senior cytopathologist, were considered as results of the senior cytopathologist. The real-world comprehensive diagnosis was considered as the gold standard. The area under the ROC curve (AUC) achieved 0.9846 in the validation group at patch-level. The accuracies achieved by one senior cytopathologist, two junior cytopathologists, and the AI-based ROSE model were 96.90%, 83.30%, and 84.57%, respectively. This AI-based ROSE model may have the potential to support the diagnosis and therapeutic management of patients with respiratory lesions.

Journal ArticleDOI
TL;DR: Wang et al. as mentioned in this paper used diffusion tensor imaging (DTI) and functional magnetic resonance imaging (fMRI) to map the human brain and proposed that neuroimaging studies could be used in treatment of sleep disorders in Type 2 diabetes mellitus.
Abstract: Type 2 diabetes mellitus (T2DM) and sleep disorders (SD) have become important and costly health issues worldwide, particularly in China. Both are common diseases related to brain functional and structural abnormalities involving the hypothalamic-pituitary-adrenal (HPA) axis. The brains of individuals who suffer from both diseases simultaneously might differ from those of healthy individuals. This review assessed current neuroimaging findings to develop alternative targeted treatments for T2DM and SD. Relevant articles published between January 2002 and September 2021 were searched in the PubMed and Web of Science databases. Generalized treatments include dietary/weight-loss management and metformin or a combination of two non-insulin drugs for T2DM, and melatonin for SD, though alternative therapies including electroacupuncture (EA) have been utilized in treating both of these diseases separately because they are convenient, affordable, and safe. Standard and alternative treatments for T2DM were somewhat effective in treating SD. Neuroimaging studies of these disorders can achieve higher treatment efficacy by targeting brain areas, such as the hypothalamus (HYP), as visualized via diffusion tensor imaging (DTI) and functional magnetic resonance imaging (fMRI). DTI and fMRI can map the human brain and are utilized in many experiments. Thus, we propose that neuroimaging studies could be used in the treatment of SD in T2DM.

Journal ArticleDOI
TL;DR: In this paper , the authors described the information technology and artificial intelligence support in management experiences of the pediatric designated hospital in the wave of COVID-19 in Shanghai and illustrated the role of the information system through the number and prognosis of patients treated.
Abstract: To describe the information technology and artificial intelligence support in the management of the pediatric designated hospital during the COVID-19 wave in Shanghai, we retrospectively summarized the management experiences at the largest pediatric designated hospital in Shanghai from March 1st to May 11th, 2022. We summarized the application of the Internet hospital, face recognition technology in the outpatient department, the critical illness warning system and remote consultation system in the ward, and the structured electronic medical record in the inpatient system. We illustrated the role of the information systems through the number and prognosis of patients treated. The COVID-19 designated hospitals, built particularly for critical patients requiring high-level medical care, responded quickly and scientifically to prevent and control the epidemic. From March 1st to May 11th, 2022, we received and treated 768 children confirmed by positive RT-PCR at our center. In our management, we used the Internet hospital, face recognition technology in the outpatient department, the critical illness warning system and remote consultation system in the ward, and the structured electronic medical record in the inpatient system. No deaths or nosocomial infections occurred. The number of offline outpatient visits dropped (146,106, 48,379, and 57,686 in March, April, and May 2022, respectively), but the outpatient volume on the Internet hospital increased significantly (3,347 in March 2022 vs. 372 in March 2021; 4,465 in April 2022 vs. 409 in April 2021; 4,677 in May 2022 vs. 538 in May 2021). Information technology and artificial intelligence have provided significant support in this management. The system might optimize the admission screening process, increase communication inside and outside the ward, achieve early detection and diagnosis, isolate patients in a timely manner, and support timely treatment of various types of children.

Journal ArticleDOI
TL;DR: Wang et al. as mentioned in this paper built a dynamic mathematical model for TB transmission in China, and applied it to compare the epidemic trends 2021-2030 under a range of screening interventions focusing on children and adolescents.
Abstract: Tuberculosis (TB) remains prevalent in China, including among children and adolescents. We built a dynamic mathematical model for TB transmission in China, and applied it to compare the epidemic trends 2021–2030 under a range of screening interventions focusing on children and adolescents. We developed a dynamic mathematical model with a flexible structure. The model can be applied either stochastically or deterministically, and can encompass arbitrary age structure and resistance levels. In the present version, we used the deterministic version excluding resistance but including age structure with six groups: 0–5, 6–11, 12–14, 15–17, 18–64, and 65 years and above. We parameterized the model using literature data and by fitting it to case and death estimates provided by the World Health Organization. We compared the new TB cases and TB-related deaths in each age group over the period 2021–2030 in 10 scenarios that involved intensified screening of particular age groups of children, adolescents, or young adults, or decreased or increased diagnostic accuracy of the screening. Screening the entire age class of 18-year-old persons would prevent 517,000 TB cases and 14,600 TB-related deaths between the years 2021 and 2030, corresponding to 6.6% and 5.5% decreases from the standard-of-care projection, respectively. Annual screening of children aged 6–11 and, to a lesser extent, 0–5 years, also reduced TB incidence and mortality, particularly among children of the respective ages but also in other age groups. In contrast, intensified screening of adolescents did not have a major impact. Screening with a simpler and less accurate method resulted in worsened outcomes, which could not be offset by more intensive screening. More accurate screening and better sensitivity to detect latent TB could prevent 2.3 million TB cases and 68,500 TB deaths in the coming 10 years. Routine screening in schools can efficiently reduce the burden of TB in China. Screening should be intensified particularly among children of primary school age.

Journal ArticleDOI
TL;DR: In this article , transfer learning was used to adapt the original model to a new institution, making it more robust and generalizable, which significantly improved the prediction of LV volumetric parameters obtained at Site 2 by adapting the model to another source.
Abstract: Artificial intelligence models trained in one site may not have acceptable performance when used in another site. Transfer learning (TL) can be used to adapt the original model to a new institution, making it more robust and generalizable. Performance of a 4D cardiac computed tomography angiography (CCTA) segmentation model trained at Site 1 was assessed at Site 2, before and after TL. Two separate image-annotated 4D CCTA datasets were collected at each site. Segmentation output from the model was used to measure left ventricular (LV) ejection fraction (EF), LV end-diastolic volume (EDV), and LV mass and compared with the ground-truth (measurements derived from the segmentation performed by trained radiologists). Wilcoxon signed-rank test (with 95% CI) was used to compare the absolute errors between predicted and ground-truth values obtained at Site 2 before and after TL. Test set at Site 2 included 45 patients (27 women, mean age 47.9 ± 10.8 years). There was a significant difference in absolute errors of LVEF (mean ± std 10.0 ± 6.0% vs 3.7 ± 2.5%, p < 0.05), LVEDV (mean ± std 8.4 ± 6.7 mL vs 5.9 ± 5.9 mL, p < 0.05) and LV mass (mean ± std 12.0 ± 11.6g vs 7.7 ± 9.9g, p < 0.05) when comparing model performance before and after TL at Site 2. The TL process significantly improved the prediction of LV volumetric parameters obtained at Site 2 by adapting the model to another source. A small number of annotated cases can be used to significantly improve a deep learning model developed elsewhere, increasing model generalizability and encouraging institutions to engage in artificial intelligence initiatives.
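The before/after comparison of absolute errors relies on the Wilcoxon signed-rank test for paired values. The rank-sum statistic at its core can be sketched in a few lines (a pure-Python illustration with a hypothetical function name; a real analysis would use a library routine such as scipy.stats.wilcoxon, which also supplies the P value):

```python
def wilcoxon_w(before, after):
    """Wilcoxon signed-rank statistics (W+, W-) for paired samples.

    Zero differences are discarded; tied absolute differences get averaged ranks.
    """
    diffs = [b - a for b, a in zip(before, after) if b != a]
    ranked = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * len(diffs)
    i = 0
    while i < len(ranked):
        j = i
        while j + 1 < len(ranked) and abs(diffs[ranked[j + 1]]) == abs(diffs[ranked[i]]):
            j += 1
        avg = (i + j + 2) / 2  # average of the 1-based ranks i+1 .. j+1
        for t in range(i, j + 1):
            ranks[ranked[t]] = avg
        i = j + 1
    w_plus = sum(r for d, r in zip(diffs, ranks) if d > 0)
    w_minus = sum(r for d, r in zip(diffs, ranks) if d < 0)
    return w_plus, w_minus
```

A large W+ relative to W- indicates that errors consistently shrank after transfer learning, which is what the significance test then quantifies.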

Journal ArticleDOI
TL;DR: Based on the epidemic data from website announced by Beijing Center for Disease Control and Prevention for the recent outbreak in Beijing from April 22nd to June 8th in 2022, this article developed a modified SEPIR model to mathematically simulate the customized dynamic COVID-zero strategy and project transmissions of the Omicron epidemic.
Abstract: The Omicron variant of SARS-CoV-2 is replacing previously circulating variants around the world in 2022. Sporadic outbreaks of the Omicron variant in China have raised concerns about how to properly respond in the battle against evolving coronavirus disease 2019 (COVID-19). Based on the epidemic data announced on the website of the Beijing Center for Disease Control and Prevention for the recent outbreak in Beijing from April 22nd to June 8th, 2022, we developed a modified SEPIR model to mathematically simulate the customized dynamic COVID-zero strategy and project transmissions of the Omicron epidemic. To demonstrate the effectiveness of the dynamic-changing policy deployment during this outbreak control, we divided the transmission rate into four parts according to policy-changing dates (April 22nd to May 2nd, May 3rd to 11th, May 12th to 21st, and May 22nd to June 8th), and we adopted Markov chain Monte Carlo (MCMC) to estimate the different transmission rates. We then altered the timing and scaling of these measures to understand the effectiveness of these policies on the Omicron variant. The estimated effective reproduction numbers of the four parts were 1.75 (95% CI 1.66-1.85), 0.89 (95% CI 0.79-0.99), 1.15 (95% CI 1.05-1.26), and 0.53 (95% CI 0.48-0.60), respectively. In the experiment, we found that by June 8th the cumulative cases would have risen to 132,609 (95% CI 59,667-250,639), 73.39 times the observed cumulative case number of 1,807, if no policy had been implemented on May 3rd, and would have been 3,235 (95% CI 1,909-4,954), an increase of 79.03%, if no policy had been implemented on May 22nd. A 3-day delay in the implementation of policies would lead to an increase of cumulative cases by 58.28%, and a 7-day delay would lead to an increase of 187.00%. On the other hand, taking control measures 3 or 7 days in advance would result in a reduction of merely 38.63% or 68.62% of the real cumulative cases. If the lockdown had been implemented 3 days before May 3rd, the cumulative cases would have been 289 (95% CI 211-378), a reduction of 84%; if the lockdown had been implemented 3 days after May 3rd, the cumulative cases would have been 853 (95% CI 578-1,183), a reduction of 52.79%. The dynamic COVID-zero strategy might be able to effectively minimize the scale of transmission, shorten the epidemic period, and reduce the total number of infections.
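A deterministic SEPIR-type compartmental model with a piecewise-constant transmission rate — the general shape of the model described here, though with our own illustrative rates and the assumption that both the pre-symptomatic (P) and infectious (I) compartments transmit — can be sketched with simple Euler integration:

```python
def simulate_sepir(beta_schedule, sigma, delta, gamma, n, e0, days, dt=0.1):
    """Integrate S-E-P-I-R dynamics.

    beta_schedule: [(start_day, beta), ...] sorted by start_day (piecewise-constant).
    sigma: E->P rate, delta: P->I rate, gamma: I->R rate.
    """
    s, e, p, i, r = n - e0, float(e0), 0.0, 0.0, 0.0
    for step in range(int(days / dt)):
        day = step * dt
        beta = [b for start, b in beta_schedule if start <= day][-1]
        new_inf = beta * s * (p + i) / n   # both P and I assumed infectious
        ds = -new_inf
        de = new_inf - sigma * e
        dp = sigma * e - delta * p
        di = delta * p - gamma * i
        dr = gamma * i
        s, e, p, i, r = s + ds * dt, e + de * dt, p + dp * dt, i + di * dt, r + dr * dt
    return s, e, p, i, r
```

Changing beta mid-schedule (e.g., [(0, 0.9), (11, 0.2)]) mimics a policy change on a given day; the cumulative case count is n minus the final susceptible count, which is how delayed or advanced interventions are compared.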

Journal ArticleDOI
TL;DR: In this paper , the authors evaluated protocol influence on variables related to unhealthy behaviors improving dietary habits through a remote nutritional coaching approach and stimulating the population to increase physical activity through Exergames.
Abstract: Malnutrition (excess or deficit) and sedentariness act as accelerators in the frailty process of older people. A systemic solution has been developed to engage older people in a healthier lifestyle using serious games and food monitoring. The study aimed to evaluate the protocol's influence on variables related to unhealthy behaviors, improving dietary habits through a remote nutritional coaching approach and stimulating the population to increase physical activity through Exergames. Thirty-two subjects (25 treatment and 7 control, aged 65–80 years), of whom 15 (11 treatment and 4 control) lived in the UK (ACCORD and ExtraCare Villages placed in Shenley Wood (Milton Keynes), St. Crispin (Northampton), and Showell Court (Wolverhampton)) and 17 (14 treatment and 3 control) in Italy (Genoa, Liguria), were recruited and characterized in terms of nutritional status, physical, somatometric, hemodynamic and biochemical measurements, and body composition. Participants were stimulated to adopt the Mediterranean dietary pattern, via a food-diary diet app, and to perform regular physical activity, via the Exergame app, for three months. At the end of the trial, users underwent the same test battery. Data were tested for normality of distribution by the Shapiro-Wilk test. Comparisons between groups were performed at baseline by the unpaired Student's t-test for continuous variables and the chi-square test or Fisher's exact test for categorical variables. Analysis of Variance (ANOVA) for repeated measures was used to analyze the significance of changes over time between groups. At the end of the trial, significant reductions of systolic (15 mmHg, P = 0.001), diastolic (5 mmHg, P = 0.025), and mean (10 mmHg, P = 0.001) blood pressure, and rate-pressure product (RPP) (1,105 mmHg*bpm, P = 0.017) values were observed in DOREMI users. A trend of improvement of physical performance by the short physical performance battery (SPPB) was observed for the balance and walk subtests.
A significant decrease (0.91 kg, P = 0.043) in Body Mass Index (BMI) was observed in overweight subjects (BMI >25 kg/m2) after DOREMI intervention in the entire population. The Mini Nutritional Assessment (MNA) score (1, P = 0.004) significantly increased after intervention, while waist measure (3 cm, P <0.001) significantly decreased in the DOREMI users. A reduction in glycated hemoglobin (Hb) was registered (0.20%, P = 0.018) in the DOREMI UK users. Improvement of healthy behavior by technological tools, providing feedback between user and remote coach and increasing user's motivation, appears potentially effective. This information and communication technologies (ICT) approach offers an innovative solution to stimulate healthy eating and lifestyle behaviors, supporting clinicians in patient management.

Journal ArticleDOI
TL;DR: Wang et al. as discussed by the authors used the Web of Science Core Collection Database (WOSCC) to search the relevant articles on COVID-19 therapies published from January 1, 2020, to May 25, 2022, in the WOSCC.
Abstract: The coronavirus disease 2019 (COVID-19) pandemic is ravaging the world. Many therapies have been explored to treat COVID-19. This report aimed to assess the global research trends in the development of COVID-19 therapies. We searched the relevant articles on COVID-19 therapies published from January 1, 2020, to May 25, 2022, in the Web of Science Core Collection Database (WOSCC). VOSviewer 1.6.18 software was used to assess data on the countries, institutions, authors, collaborations, keywords, and journals that were most implicated in COVID-19 pharmacological research. The latest research and changing trends in COVID-19-relevant pharmacological research were analyzed. After manually eliminating articles that did not meet the requirements, a total of 5,289 studies authored by 32,932 researchers were eventually included in the analyses, which comprised 95 randomized controlled trials. Of these, 3,044 (57.6%) studies were published in 2021. The USA conducted the greatest number of studies, followed by China and India. The primary USA collaborators were China and England. The topics covered in the publications included: the general characteristics, the impact on pharmacists' work, the pharmacological research, broad-spectrum antiviral drug therapy and research, and promising targets or preventive measures, such as vaccines. The temporal diagram revealed that the current research hotspots focused on the vaccine, molecular docking, Mpro, and drug delivery keywords. Comprehensive bibliometric analysis could aid the rapid identification of the principal research topics, potential collaborators, and the direction of future research. Pharmacological research is critical for the development of therapeutic and preventive COVID-19-associated measures. This study may therefore provide valuable information for eradicating COVID-19.

Journal ArticleDOI
TL;DR: In this paper , the authors focus on the relationship between data scientists and clinicians, which often faces tensions commonly encountered by multidisciplinary teams and how to prevent these differences from creating divisions that can derail a project.
Abstract: Collaboration between data scientists and domain experts is necessary for the success of healthcare machine learning projects. Our present concern is the relationship between data scientists and clinicians, which often faces tensions commonly encountered by multidisciplinary teams. It is important to be able to prevent these differences from creating divisions that can derail a project. In this paper, we focus on understanding the interplay between these roles and where conflict can arise due to communication issues, varying incentives, and differing perspectives.