Author

Miaofei Han

Bio: Miaofei Han is an academic researcher. The author has contributed to research in topics: Image segmentation & Segmentation. The author has an h-index of 5 and has co-authored 8 publications receiving 487 citations.

Papers
Journal ArticleDOI
TL;DR: A deep learning (DL)-based segmentation system is developed to automatically quantify infection regions of interest (ROIs) and their volumetric ratios with respect to the lung; possible applications, including but not limited to analysis of follow-up CT scans and of infection distributions in the lobes and segments correlated with clinical findings, are also discussed.
Abstract: CT imaging is crucial for the diagnosis, assessment, and staging of COVID-19 infection. Follow-up scans every 3-5 days are often recommended to monitor disease progression. It has been reported that bilateral and peripheral ground glass opacification (GGO), with or without consolidation, are the predominant CT findings in COVID-19 patients. However, due to the lack of computerized quantification tools, only qualitative impressions and rough descriptions of infected areas are currently used in radiological reports. In this paper, a deep learning (DL)-based segmentation system is developed to automatically quantify infection regions of interest (ROIs) and their volumetric ratios with respect to the lung. The performance of the system was evaluated by comparing the automatically segmented infection regions with the manually delineated ones on 300 chest CT scans of 300 COVID-19 patients. For fast manual delineation of training samples and possible manual intervention on automatic results, a human-in-the-loop (HITL) strategy has been adopted to assist radiologists in infection region segmentation, which dramatically reduced the total segmentation time to 4 minutes after 3 iterations of model updating. The average Dice similarity coefficient showed 91.6% agreement between automatic and manual infection segmentations, and the mean estimation error of the percentage of infection (POI) was 0.3% for the whole lung. Finally, possible applications, including but not limited to analysis of follow-up CT scans and infection distributions in the lobes and segments correlated with clinical findings, were discussed.
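
The two evaluation quantities used above, the Dice similarity coefficient and the percentage of infection (POI), can be computed directly from binary masks. The sketch below is a minimal illustration (not the authors' code), assuming the infection and lung segmentations are given as 3D NumPy arrays on the same voxel grid; the function names are hypothetical.

```python
import numpy as np

def dice_coefficient(pred_mask: np.ndarray, gt_mask: np.ndarray, eps: float = 1e-8) -> float:
    """Dice similarity coefficient between two binary masks."""
    pred = pred_mask.astype(bool)
    gt = gt_mask.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    return 2.0 * intersection / (pred.sum() + gt.sum() + eps)

def percentage_of_infection(infection_mask: np.ndarray, lung_mask: np.ndarray) -> float:
    """POI: infected volume as a percentage of the whole-lung volume."""
    lung = lung_mask.astype(bool)
    infected = np.logical_and(infection_mask.astype(bool), lung)
    return 100.0 * infected.sum() / max(lung.sum(), 1)
```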

546 citations

Journal ArticleDOI
TL;DR: A DL-based segmentation system has been developed to automatically segment and quantify infection regions in CT scans of COVID-19 patients, and quantitative evaluation indicated high accuracy in automatic infection delineation and severity prediction.
Abstract: Objective: Computed tomography (CT) provides rich diagnosis and severity information on COVID-19 in clinical practice. However, there is no computerized tool to automatically delineate COVID-19 infection regions in chest CT scans for quantitative assessment in advanced applications such as severity prediction. The aim of this study was to develop a deep learning (DL)-based method for automatic segmentation and quantification of infection regions as well as the entire lungs from chest CT scans. Methods: The DL-based segmentation method employs the "VB-Net" neural network to segment COVID-19 infection regions in CT scans. The developed DL-based segmentation system is trained on CT scans from 249 COVID-19 patients, and further validated on CT scans from another 300 COVID-19 patients. To accelerate the manual delineation of CT scans for training, a human-involved-model-iterations (HIMI) strategy is also adopted to assist radiologists in refining the automatic annotation of each training case. To evaluate the performance of the DL-based segmentation system, three metrics, namely the Dice similarity coefficient, the volume difference, and the percentage of infection (POI), are calculated between automatic and manual segmentations on the validation set. Then, a clinical study on severity prediction is reported based on the quantitative infection assessment. Results: The proposed DL-based segmentation system yielded Dice similarity coefficients of 91.6% ± 10.0% between automatic and manual segmentations, and a mean POI estimation error of 0.3% for the whole lung on the validation dataset. Moreover, compared with fully manual delineation, which often takes hours per case, the proposed HIMI training strategy can dramatically reduce the delineation time to 4 minutes after three iterations of model updating. In addition, the best accuracy of severity prediction was 73.4% ± 1.3% when the mass of infection (MOI) of multiple lung lobes and bronchopulmonary segments was used as the feature set, indicating the potential clinical application of our quantification technique for severity prediction. Conclusions: A DL-based segmentation system has been developed to automatically segment and quantify infection regions in CT scans of COVID-19 patients. Quantitative evaluation indicated high accuracy in automatic infection delineation and severity prediction.
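
One of the severity features mentioned above is the mass of infection (MOI) per lung lobe. A common way to approximate tissue mass from CT is to weight each infected voxel's volume by a density estimate derived from its Hounsfield value (roughly (HU + 1000)/1000 g/mL); the sketch below follows that generic approach as an illustration of the idea only, not the authors' exact definition. All array names and the function itself are hypothetical.

```python
import numpy as np

def mass_of_infection(ct_hu: np.ndarray,
                      infection_mask: np.ndarray,
                      lobe_labels: np.ndarray,
                      voxel_volume_ml: float) -> dict:
    """Approximate per-lobe mass of infection (grams) from a CT volume.

    ct_hu           : CT intensities in Hounsfield units
    infection_mask  : binary infection segmentation
    lobe_labels     : integer lobe map (0 = background, 1..5 = lobes)
    voxel_volume_ml : volume of one voxel in millilitres
    """
    moi = {}
    for lobe in np.unique(lobe_labels):
        if lobe == 0:
            continue
        voxels = (lobe_labels == lobe) & infection_mask.astype(bool)
        # density (g/mL) approximated from HU: water ~ 0 HU, air ~ -1000 HU
        density = np.clip((ct_hu[voxels] + 1000.0) / 1000.0, 0.0, None)
        moi[int(lobe)] = float(density.sum() * voxel_volume_ml)
    return moi
```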

128 citations

Journal ArticleDOI
TL;DR: An Uncertainty Vertex-weighted Hypergraph Learning (UVHL) method is proposed to identify COVID-19 from community-acquired pneumonia (CAP) using CT images; experiments demonstrate the effectiveness and robustness of the proposed method for identifying COVID-19 in comparison to state-of-the-art methods.
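
For readers unfamiliar with hypergraph learning, the sketch below shows the standard transductive formulation (a normalized hypergraph Laplacian built from kNN hyperedges) that methods like UVHL build on. It is a generic, illustrative baseline under assumed inputs (one feature vector per patient scan), not the authors' uncertainty-weighted variant.

```python
import numpy as np

def hypergraph_label_propagation(X: np.ndarray, y: np.ndarray, k: int = 5, lam: float = 1.0):
    """Transductive classification on a kNN hypergraph.

    X   : (n, d) feature matrix, one row per patient scan
    y   : (n,) labels in {+1, -1}, 0 for unlabeled samples
    k   : each vertex spawns one hyperedge containing its k nearest neighbours
    lam : weight balancing hypergraph smoothness against the label fit
    """
    n = X.shape[0]
    # Incidence matrix H: hyperedge j = vertex j plus its k nearest neighbours
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    H = np.zeros((n, n))
    for j in range(n):
        nbrs = np.argsort(d2[j])[:k + 1]          # includes j itself
        H[nbrs, j] = 1.0
    w = np.ones(n)                                 # uniform hyperedge weights
    Dv = H @ w                                     # vertex degrees
    De = H.sum(axis=0)                             # hyperedge degrees
    Dv_inv_sqrt = np.diag(1.0 / np.sqrt(Dv))
    Theta = Dv_inv_sqrt @ H @ np.diag(w / De) @ H.T @ Dv_inv_sqrt
    L = np.eye(n) - Theta                          # normalized hypergraph Laplacian
    # Closed-form minimizer of f^T L f + lam * ||f - y||^2
    f = np.linalg.solve(L + lam * np.eye(n), lam * y)
    return np.sign(f)
```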

54 citations

Proceedings ArticleDOI
13 Mar 2019
TL;DR: A multi-resolution 3D convolutional neural network is proposed to automatically segment the kidneys of ADPKD patients from MR images, dramatically reducing the time to measure kidney volume from 40 minutes to about 1 second, which can greatly accelerate the disease staging of ADPKD patients for large clinical trials, promote the development of related drugs, and reduce the burden on physicians.
Abstract: Measurement of total kidney volume (TKV) plays an important role in the early therapeutic stage of autosomal dominant polycystic kidney disease (ADPKD). As a crucial biomarker, an accurate TKV can sensitively reflect disease progression and be used as an indicator to evaluate the curative effect of a drug. However, manual contouring of kidneys in magnetic resonance (MR) images is time-consuming (about 40 minutes), which greatly hinders the wide adoption of TKV in the clinic. In this paper, we propose a multi-resolution 3D convolutional neural network to automatically segment the kidneys of ADPKD patients from MR images. We adopt two resolutions and use a customized V-Net model for both. The V-Net model is able to integrate high-level context information with detailed local information for accurate organ segmentation. The V-Net model at the coarse resolution can robustly localize the kidneys, while the V-Net model at the fine resolution can accurately refine the kidney boundaries. Validated on 305 subjects with different loss functions and network architectures, our method achieves a Dice similarity coefficient of over 95% against the ground truth labeled by a senior physician. Moreover, the proposed method dramatically reduces the time to measure kidney volume from 40 minutes to about 1 second, which can greatly accelerate the disease staging of ADPKD patients for large clinical trials, promote the development of related drugs, and reduce the burden on physicians.
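
The coarse-to-fine, two-resolution inference described above can be summarized as a short pipeline: a coarse model localizes the kidneys on a downsampled volume, the resulting bounding box crops the full-resolution image for a fine model, and total kidney volume follows from the voxel spacing. The PyTorch sketch below assumes two already-trained networks (`coarse_net`, `fine_net`, both hypothetical here) and is an illustrative outline, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def segment_kidneys(ct, coarse_net, fine_net,
                    spacing_mm=(1.0, 1.0, 1.0), coarse_scale=0.25, margin=8):
    """Two-stage kidney segmentation and TKV estimation.

    ct      : (1, 1, D, H, W) intensity-normalized MR volume
    returns : (binary mask at full resolution, total kidney volume in mL)
    """
    # Stage 1: coarse localization on a downsampled volume
    low = F.interpolate(ct, scale_factor=coarse_scale, mode="trilinear", align_corners=False)
    with torch.no_grad():
        coarse_prob = torch.sigmoid(coarse_net(low))
    coarse_mask = F.interpolate(coarse_prob, size=ct.shape[2:], mode="trilinear",
                                align_corners=False) > 0.5

    # Bounding box of the coarse prediction, padded by a safety margin
    idx = coarse_mask[0, 0].nonzero(as_tuple=False)
    lo = (idx.min(dim=0).values - margin).clamp(min=0)
    hi = idx.max(dim=0).values + margin + 1

    # Stage 2: fine segmentation inside the cropped region of interest
    crop = ct[:, :, lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]
    with torch.no_grad():
        fine_mask = torch.sigmoid(fine_net(crop)) > 0.5

    full_mask = torch.zeros_like(coarse_mask)
    full_mask[:, :, lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]] = fine_mask

    voxel_ml = spacing_mm[0] * spacing_mm[1] * spacing_mm[2] / 1000.0  # mm^3 -> mL
    tkv_ml = full_mask.sum().item() * voxel_ml
    return full_mask, tkv_ml
```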

12 citations

Proceedings ArticleDOI
08 Mar 2019
TL;DR: The customized V-Net evaluated is very robust against various image artifacts, diseases, and slice thicknesses, and performs much better than traditional methods even on organs with large shape variations.
Abstract: Accurate segmentation of organs at risk (OARs) is a key step in image-guided radiation therapy. In recent years, deep learning based methods have been widely used in medical image segmentation; among them, U-Net and V-Net are the most popular. In this paper, we evaluate a customized V-Net on 16 OARs throughout the body using a large CT dataset. Specifically, two customizations are used to reduce the GPU memory cost of V-Net: 1) multi-resolution V-Nets, where the coarse-resolution V-Net aims to localize the OAR in the entire image space, while the fine-resolution V-Net focuses on refining the detailed boundaries of the OAR; 2) a modified V-Net architecture, which is specifically designed for segmenting large organs, e.g., the liver. Validated on 3483 CT scans with various imaging and disease conditions, we show that, compared with traditional methods, the customized V-Net wins in speed (0.7 seconds vs 20 seconds per organ), accuracy (average Dice score 96.6% vs 84.3%), and robustness (98.6% vs 83.3% success rate). Moreover, the customized V-Net is very robust against various image artifacts, diseases, and slice thicknesses, and performs much better than traditional methods even on organs with large shape variations (e.g., the bladder).
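
For a concrete picture of the kind of building block a V-Net-style 3D segmentation network uses, the snippet below sketches one residual encoder stage with a strided downsampling convolution in PyTorch. It is a generic illustration, not the paper's customized architecture or its specific memory-saving modifications.

```python
import torch
import torch.nn as nn

class VNetStage(nn.Module):
    """One V-Net-style encoder stage: residual 3D convolutions + strided downsampling."""

    def __init__(self, in_ch: int, out_ch: int, n_convs: int = 2):
        super().__init__()
        convs = []
        for i in range(n_convs):
            convs += [nn.Conv3d(in_ch if i == 0 else out_ch, out_ch, kernel_size=5, padding=2),
                      nn.InstanceNorm3d(out_ch),
                      nn.PReLU(out_ch)]
        self.convs = nn.Sequential(*convs)
        self.skip = nn.Conv3d(in_ch, out_ch, kernel_size=1)   # match channels for the residual add
        self.down = nn.Sequential(nn.Conv3d(out_ch, out_ch, kernel_size=2, stride=2),
                                  nn.PReLU(out_ch))

    def forward(self, x):
        features = self.convs(x) + self.skip(x)   # residual connection, as in V-Net
        return self.down(features), features       # downsampled output + skip for the decoder

# Example: a stage mapping 16 -> 32 channels on a small 3D patch
stage = VNetStage(16, 32)
down, skip = stage(torch.randn(1, 16, 32, 64, 64))
```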

8 citations


Cited by
Journal ArticleDOI
07 Apr 2020 - BMJ
TL;DR: Proposed models for covid-19 are poorly reported, at high risk of bias, and their reported performance is probably optimistic, according to a review of published and preprint reports.
Abstract: Objective: To review and appraise the validity and usefulness of published and preprint reports of prediction models for diagnosing coronavirus disease 2019 (covid-19) in patients with suspected infection, for prognosis of patients with covid-19, and for detecting people in the general population at increased risk of covid-19 infection or being admitted to hospital with the disease. Design: Living systematic review and critical appraisal by the COVID-PRECISE (Precise Risk Estimation to optimise covid-19 Care for Infected or Suspected patients in diverse sEttings) group. Data sources: PubMed and Embase through Ovid, up to 1 July 2020, supplemented with arXiv, medRxiv, and bioRxiv up to 5 May 2020. Study selection: Studies that developed or validated a multivariable covid-19 related prediction model. Data extraction: At least two authors independently extracted data using the CHARMS (critical appraisal and data extraction for systematic reviews of prediction modelling studies) checklist; risk of bias was assessed using PROBAST (prediction model risk of bias assessment tool). Results: 37 421 titles were screened, and 169 studies describing 232 prediction models were included. The review identified seven models for identifying people at risk in the general population; 118 diagnostic models for detecting covid-19 (75 were based on medical imaging, 10 to diagnose disease severity); and 107 prognostic models for predicting mortality risk, progression to severe disease, intensive care unit admission, ventilation, intubation, or length of hospital stay. The most frequent types of predictors included in the covid-19 prediction models are vital signs, age, comorbidities, and image features. Flu-like symptoms are frequently predictive in diagnostic models, while sex, C reactive protein, and lymphocyte counts are frequent prognostic factors. Reported C index estimates from the strongest form of validation available per model ranged from 0.71 to 0.99 in prediction models for the general population, from 0.65 to more than 0.99 in diagnostic models, and from 0.54 to 0.99 in prognostic models. All models were rated at high or unclear risk of bias, mostly because of non-representative selection of control patients, exclusion of patients who had not experienced the event of interest by the end of the study, high risk of model overfitting, and unclear reporting. Many models did not include a description of the target population (n=27, 12%) or care setting (n=75, 32%), and only 11 (5%) were externally validated by a calibration plot. The Jehi diagnostic model and the 4C mortality score were identified as promising models. Conclusion: Prediction models for covid-19 are quickly entering the academic literature to support medical decision making at a time when they are urgently needed. This review indicates that almost all published prediction models are poorly reported and at high risk of bias, such that their reported predictive performance is probably optimistic. However, we have identified two (one diagnostic and one prognostic) promising models that should soon be validated in multiple cohorts, preferably through collaborative efforts and data sharing to also allow an investigation of the stability and heterogeneity in their performance across populations and settings. Details on all reviewed models are publicly available at https://www.covprecise.org/.
Methodological guidance as provided in this paper should be followed because unreliable predictions could cause more harm than benefit in guiding clinical decisions. Finally, prediction model authors should adhere to the TRIPOD (transparent reporting of a multivariable prediction model for individual prognosis or diagnosis) reporting guideline. Systematic review registration: Protocol https://osf.io/ehc47/, registration https://osf.io/wy245. Readers' note: This article is a living systematic review that will be updated to reflect emerging evidence. Updates may occur for up to two years from the date of original publication. This version is update 3 of the original article published on 7 April 2020 (BMJ 2020;369:m1328). Previous updates can be found as data supplements (https://www.bmj.com/content/369/bmj.m1328/related#datasupp). When citing this paper please consider adding the update number and date of access for clarity.
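
The discriminative performance figures in this review are concordance (C) indexes. For a binary outcome the C index equals the area under the ROC curve and can be computed by counting concordant event/non-event prediction pairs; the short sketch below illustrates that calculation on made-up data and is not part of the review itself.

```python
import numpy as np

def concordance_index(y_true: np.ndarray, y_score: np.ndarray) -> float:
    """C index for a binary outcome: P(score_event > score_non_event), ties count as 0.5."""
    pos = y_score[y_true == 1]
    neg = y_score[y_true == 0]
    greater = (pos[:, None] > neg[None, :]).sum()   # concordant pairs
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# Tiny example with made-up predictions
y = np.array([0, 0, 1, 1, 1])
p = np.array([0.2, 0.4, 0.35, 0.8, 0.9])
print(concordance_index(y, p))   # ~0.833
```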

2,183 citations

Journal ArticleDOI
TL;DR: Five pre-trained convolutional neural network-based models have been proposed for the detection of coronavirus pneumonia-infected patients using chest X-ray radiographs, and it has been seen that the pre-trained ResNet50 model provides the highest classification performance.
Abstract: The 2019 novel coronavirus disease (COVID-19), with a starting point in China, has spread rapidly among people living in other countries, and is approaching approximately 34,986,502 cases worldwide according to the statistics of the European Centre for Disease Prevention and Control. There are a limited number of COVID-19 test kits available in hospitals due to the increasing number of cases daily. Therefore, it is necessary to implement an automatic detection system as a quick alternative diagnosis option to prevent COVID-19 from spreading among people. In this study, five pre-trained convolutional neural network-based models (ResNet50, ResNet101, ResNet152, InceptionV3 and Inception-ResNetV2) have been proposed for the detection of coronavirus pneumonia-infected patients using chest X-ray radiographs. We have implemented three different binary classifications with four classes (COVID-19, normal (healthy), viral pneumonia and bacterial pneumonia) by using 5-fold cross-validation. Considering the performance results obtained, it has been seen that the pre-trained ResNet50 model provides the highest classification performance (96.1% accuracy for Dataset-1, 99.5% accuracy for Dataset-2 and 99.7% accuracy for Dataset-3) among the other four models used.
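
Transfer learning of this kind is straightforward to set up: a network pre-trained on ImageNet is loaded and its final fully connected layer is replaced with a small classification head before fine-tuning on chest X-rays. The snippet below is a minimal, hypothetical setup using torchvision's ResNet-50, not the study's exact training configuration or data pipeline.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load an ImageNet-pretrained ResNet-50 and swap in a binary classification head
model = models.resnet50(weights="IMAGENET1K_V1")   # torchvision >= 0.13
model.fc = nn.Linear(model.fc.in_features, 2)      # e.g. COVID-19 vs. normal

# Fine-tune with a standard cross-entropy objective
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """images: (B, 3, 224, 224) normalized X-rays; labels: (B,) class indices."""
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```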

1,040 citations

Journal ArticleDOI
TL;DR: This review paper covers the entire pipeline of medical imaging and analysis techniques involved with COVID-19, including image acquisition, segmentation, diagnosis, and follow-up, and particularly focuses on the integration of AI with X-ray and CT, both of which are widely used in the frontline hospitals.
Abstract: The pandemic of coronavirus disease 2019 (COVID-19) is spreading all over the world. Medical imaging such as X-ray and computed tomography (CT) plays an essential role in the global fight against COVID-19, whereas the recently emerging artificial intelligence (AI) technologies further strengthen the power of the imaging tools and help medical specialists. We hereby review the rapid responses in the community of medical imaging (empowered by AI) toward COVID-19. For example, AI-empowered image acquisition can significantly help automate the scanning procedure and also reshape the workflow with minimal contact to patients, providing the best protection to the imaging technicians. Also, AI can improve work efficiency by accurate delineation of infections in X-ray and CT images, facilitating subsequent quantification. Moreover, the computer-aided platforms help radiologists make clinical decisions, i.e., for disease diagnosis, tracking, and prognosis. In this review paper, we thus cover the entire pipeline of medical imaging and analysis techniques involved with COVID-19, including image acquisition, segmentation, diagnosis, and follow-up. We particularly focus on the integration of AI with X-ray and CT, both of which are widely used in the frontline hospitals, in order to depict the latest progress of medical imaging and radiology fighting against COVID-19.

916 citations

Journal ArticleDOI
TL;DR: In this paper, five pre-trained convolutional neural network-based models were proposed for the detection of coronavirus pneumonia-infected patients using chest X-ray radiographs.
Abstract: The 2019 novel coronavirus disease (COVID-19), with a starting point in China, has spread rapidly among people living in other countries and is approaching approximately 101,917,147 cases worldwide according to the statistics of the World Health Organization. There are a limited number of COVID-19 test kits available in hospitals due to the increasing number of cases daily. Therefore, it is necessary to implement an automatic detection system as a quick alternative diagnosis option to prevent COVID-19 from spreading among people. In this study, five pre-trained convolutional neural network-based models (ResNet50, ResNet101, ResNet152, InceptionV3 and Inception-ResNetV2) have been proposed for the detection of coronavirus pneumonia-infected patients using chest X-ray radiographs. We have implemented three different binary classifications with four classes (COVID-19, normal (healthy), viral pneumonia and bacterial pneumonia) by using five-fold cross-validation. Considering the performance results obtained, it has been seen that the pre-trained ResNet50 model provides the highest classification performance (96.1% accuracy for Dataset-1, 99.5% accuracy for Dataset-2 and 99.7% accuracy for Dataset-3) among the other four models used.

769 citations

Journal ArticleDOI
TL;DR: A COVID-19 Lung Infection Segmentation Deep Network (Inf-Net) is proposed to automatically identify infected regions from chest CT slices, where a parallel partial decoder is used to aggregate the high-level features and generate a global map.
Abstract: Coronavirus Disease 2019 (COVID-19) spread globally in early 2020, causing the world to face an existential health crisis. Automated detection of lung infections from computed tomography (CT) images offers a great potential to augment the traditional healthcare strategy for tackling COVID-19. However, segmenting infected regions from CT slices faces several challenges, including high variation in infection characteristics and low intensity contrast between infections and normal tissues. Further, collecting a large amount of data is impractical within a short time period, inhibiting the training of a deep model. To address these challenges, a novel COVID-19 Lung Infection Segmentation Deep Network (Inf-Net) is proposed to automatically identify infected regions from chest CT slices. In our Inf-Net, a parallel partial decoder is used to aggregate the high-level features and generate a global map. Then, implicit reverse attention and explicit edge attention are utilized to model the boundaries and enhance the representations. Moreover, to alleviate the shortage of labeled data, we present a semi-supervised segmentation framework based on a randomly selected propagation strategy, which requires only a few labeled images and leverages primarily unlabeled data. Our semi-supervised framework can improve the learning ability and achieve higher performance. Extensive experiments on our COVID-SemiSeg dataset and real CT volumes demonstrate that the proposed Inf-Net outperforms most cutting-edge segmentation models and advances the state-of-the-art performance.
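
The reverse attention mentioned above refines a coarse prediction by steering features toward regions the current map misses: the upsampled coarse map is inverted and used to reweight the features of the next decoder stage. The sketch below is a minimal PyTorch illustration of that idea under assumed tensor shapes, not the published Inf-Net code.

```python
import torch
import torch.nn.functional as F

def reverse_attention(features: torch.Tensor, coarse_map: torch.Tensor) -> torch.Tensor:
    """Reweight decoder features by the inverted (reversed) coarse prediction.

    features   : (B, C, H, W) feature maps of the current decoder stage
    coarse_map : (B, 1, h, w) logits of a coarser prediction of the infected region
    """
    # Upsample the coarse prediction to the feature resolution
    up = F.interpolate(coarse_map, size=features.shape[2:], mode="bilinear", align_corners=False)
    # Reverse attention: emphasize regions the coarse map does NOT yet cover
    attention = 1.0 - torch.sigmoid(up)
    return features * attention

# Example with assumed shapes
feats = torch.randn(2, 64, 44, 44)
coarse = torch.randn(2, 1, 11, 11)
refined = reverse_attention(feats, coarse)   # (2, 64, 44, 44)
```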

633 citations