scispace - formally typeset
Author

Zhenghan Fang

Other affiliations: Fudan University
Bio: Zhenghan Fang is an academic researcher from the University of North Carolina at Chapel Hill. The author has contributed to research in topics including computer science and segmentation, has an h-index of 7, and has co-authored 13 publications receiving 1,087 citations. Previous affiliations of Zhenghan Fang include Fudan University.

Papers
Journal ArticleDOI
TL;DR: A deep learning model was developed to extract visual features from volumetric chest CT scans for the detection of coronavirus disease 2019 (COVID-19) and to differentiate it from community-acquired pneumonia and other lung conditions.
Abstract: Background Coronavirus disease 2019 (COVID-19) has spread widely all over the world since the beginning of 2020. It is desirable to develop automatic and accurate detection of COVID-19 using chest CT. Purpose To develop a fully automatic framework to detect COVID-19 using chest CT and evaluate its performance. Materials and Methods In this retrospective and multicenter study, a deep learning model, the COVID-19 detection neural network (COVNet), was developed to extract visual features from volumetric chest CT scans for the detection of COVID-19. CT scans of community-acquired pneumonia (CAP) and other non-pneumonia abnormalities were included to test the robustness of the model. The datasets were collected from six hospitals between August 2016 and February 2020. Diagnostic performance was assessed with the area under the receiver operating characteristic curve, sensitivity, and specificity. Results The collected dataset consisted of 4352 chest CT scans from 3322 patients. The average patient age (±standard deviation) was 49 years ± 15, and there were slightly more men than women (1838 vs 1484, respectively; P = .29). The per-scan sensitivity and specificity for detecting COVID-19 in the independent test set were 90% (95% confidence interval [CI]: 83%, 94%; 114 of 127 scans) and 96% (95% CI: 93%, 98%; 294 of 307 scans), respectively, with an area under the receiver operating characteristic curve of 0.96 (P < .001). The per-scan sensitivity and specificity for detecting CAP in the independent test set were 87% (152 of 175 scans) and 92% (239 of 259 scans), respectively, with an area under the receiver operating characteristic curve of 0.95 (95% CI: 0.93, 0.97). Conclusion A deep learning model can accurately detect coronavirus disease 2019 (COVID-19) and differentiate it from community-acquired pneumonia and other lung conditions. © RSNA, 2020. Online supplemental material is available for this article.
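The published abstract does not detail COVNet's internals, but the described approach (extracting visual features from a volumetric scan and producing a three-class prediction over COVID-19, CAP, and non-pneumonia) can be sketched as slice-wise feature extraction followed by pooling across slices. The minimal sketch below uses a hypothetical linear stub in place of the real CNN backbone; all weights and shapes are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def slice_features(ct_slice, weights):
    """Stand-in for a 2D CNN backbone: map one CT slice to a feature vector.
    (A hypothetical linear + ReLU stub, not the actual COVNet backbone.)"""
    return np.maximum(ct_slice.reshape(-1) @ weights, 0.0)

def covnet_style_score(volume, backbone_w, head_w):
    """Extract per-slice features, max-pool across slices, then classify.
    Pooling makes the prediction invariant to the number of slices."""
    feats = np.stack([slice_features(s, backbone_w) for s in volume])  # (slices, feat)
    pooled = feats.max(axis=0)                                         # (feat,)
    logits = pooled @ head_w                                           # (classes,)
    e = np.exp(logits - logits.max())
    return e / e.sum()  # softmax over {COVID-19, CAP, non-pneumonia}

volume = rng.standard_normal((40, 16, 16))       # toy volume: 40 slices of 16x16
backbone_w = rng.standard_normal((16 * 16, 32))  # stub backbone weights
head_w = rng.standard_normal((32, 3))            # 3-class head
probs = covnet_style_score(volume, backbone_w, head_w)
print(probs.shape, float(probs.sum()))
```

The max-pooling step is one common way to aggregate a variable number of CT slices into a fixed-length representation before classification.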

1,505 citations

Journal ArticleDOI
TL;DR: A spatially constrained quantification method that uses the signals at multiple neighboring pixels to better estimate tissue properties at the central pixel is proposed and a unique two-step deep learning model is designed that learns the mapping from the observed signals to the desired properties for tissue quantification.
Abstract: Magnetic resonance fingerprinting (MRF) is a quantitative imaging technique that can simultaneously measure multiple important tissue properties of the human body. Although MRF has demonstrated improved scan efficiency as compared to conventional techniques, further acceleration is still desired for translation into routine clinical practice. The purpose of this paper is to accelerate MRF acquisition by developing a new tissue quantification method for MRF that allows accurate quantification with fewer sampling data. Most of the existing approaches use the MRF signal evolution at each individual pixel to estimate tissue properties, without considering the spatial association among neighboring pixels. In this paper, we propose a spatially constrained quantification method that uses the signals at multiple neighboring pixels to better estimate tissue properties at the central pixel. Specifically, we design a unique two-step deep learning model that learns the mapping from the observed signals to the desired properties for tissue quantification, i.e., 1) a feature extraction module that reduces the dimension of the signals by extracting a low-dimensional feature vector from the high-dimensional signal evolution, and 2) a spatially constrained quantification module that exploits the spatial information in the extracted feature maps to generate the final tissue property map. A corresponding two-step training strategy is developed for network training. The proposed method is tested on highly undersampled MRF data acquired from human brains. Experimental results demonstrate that our method can achieve accurate quantification of T1 and T2 relaxation times by using only 1/4 of the time points of the original sequence (i.e., fourfold acceleration of MRF acquisition).
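The two-step structure described above can be sketched in miniature: step 1 compresses each pixel's long signal evolution into a short feature vector, and step 2 estimates the property at each pixel from the features of its spatial neighborhood. The sketch below replaces both learned neural modules with hypothetical linear stubs; the grid size, signal length, neighborhood size, and all weights are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

H = W = 8   # toy image grid
T = 200     # time points in the (undersampled) MRF signal evolution
F = 10      # low-dimensional feature size

signals = rng.standard_normal((H, W, T))  # observed signal evolution per pixel

# Step 1: feature extraction -- compress each pixel's T-point signal
# to F features (a linear stub in place of the learned module).
proj = rng.standard_normal((T, F)) / np.sqrt(T)
features = signals @ proj                 # (H, W, F)

# Step 2: spatially constrained quantification -- estimate the property at
# each central pixel from the features of its 3x3 neighborhood.
readout = rng.standard_normal(9 * F)      # stub weights of the quantification module

def quantify(features, readout):
    h, w, f = features.shape
    padded = np.pad(features, ((1, 1), (1, 1), (0, 0)), mode="edge")
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            patch = padded[i:i + 3, j:j + 3, :].reshape(-1)  # 3x3 neighborhood
            out[i, j] = patch @ readout                      # e.g. a T1 or T2 estimate
    return out

t1_map = quantify(features, readout)
print(t1_map.shape)
```

The key design point is that the step-2 readout sees a full neighborhood of feature vectors rather than a single pixel, which is what distinguishes this spatially constrained formulation from per-pixel signal matching.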

90 citations

Journal ArticleDOI
TL;DR: Results of quantitative T1 and T2 maps demonstrate that improved tissue characterization can be achieved using the proposed method as compared to prior methods, and make high-resolution whole-brain quantitative MR imaging feasible for clinical applications.

49 citations

Book ChapterDOI
13 Oct 2019
TL;DR: A novel deep learning approach, namely residual channel attention U-Net (RCA-U-Net), to perform the tissue quantification task in MRF, which improves the accuracy of T2 quantification with MRF under high acceleration rates as compared to the state-of-the-art methods.
Abstract: Magnetic resonance fingerprinting (MRF) is a relatively new imaging framework that allows rapid and simultaneous quantification of multiple tissue properties, such as T1 and T2 relaxation times, in one acquisition. To accelerate the data sampling in MRF, a variety of methods have been proposed to extract tissue properties from highly accelerated MRF signals. While these methods have demonstrated promising results, further improvement in the accuracy, especially for T2 quantification, is needed. In this paper, we present a novel deep learning approach, namely residual channel attention U-Net (RCA-U-Net), to perform the tissue quantification task in MRF. The RCA-U-Net combines the U-Net structure with residual channel attention blocks, to make the network focus on more informative features and produce better quantification results. In addition, we improved the preprocessing of MRF data by masking out the noisy signals in the background for better quantification at tissue boundaries. Our experimental results on two in vivo brain datasets with different spatial resolutions demonstrate that the proposed method improves the accuracy of T2 quantification with MRF under high acceleration rates (i.e., 8 and 16) as compared to the state-of-the-art methods.
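A residual channel attention block, as commonly formulated in the image-restoration literature the abstract builds on, squeezes a feature map to one descriptor per channel, passes it through a small bottleneck, and uses a sigmoid gate to rescale channels before adding a residual connection. The sketch below is a minimal NumPy rendering of that pattern; the shapes, weights, and the exact placement of the residual are illustrative assumptions, not the paper's exact block.

```python
import numpy as np

def rca_block(x, w1, w2):
    """Residual channel attention on a (C, H, W) feature map:
    squeeze (global average pool) -> bottleneck FC + ReLU -> FC + sigmoid gate
    -> channel-wise rescale, with a residual connection around the block."""
    squeeze = x.mean(axis=(1, 2))                 # (C,) channel descriptor
    hidden = np.maximum(squeeze @ w1, 0.0)        # reduction FC + ReLU
    gate = 1.0 / (1.0 + np.exp(-(hidden @ w2)))   # expansion FC + sigmoid, in (0, 1)
    return x + x * gate[:, None, None]            # rescale channels, add residual

rng = np.random.default_rng(2)
x = rng.standard_normal((16, 8, 8))   # toy feature map: 16 channels
w1 = rng.standard_normal((16, 4))     # reduce 16 -> 4
w2 = rng.standard_normal((4, 16))     # expand 4 -> 16
y = rca_block(x, w1, w2)
print(y.shape)
```

Because the gate is learned per channel, the block can amplify informative channels and damp uninformative ones, which is the "focus on more informative features" behavior the abstract describes.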

36 citations

Journal ArticleDOI
TL;DR: Experimental results show that the proposed SSC U-Net is able to reconstruct ultrasound images with improved quality, preserving more details in the reconstructed images and improving the full width at half maximum (FWHM) of point targets by 3.23%.
Abstract: Pursuing better imaging quality and miniaturizing imaging devices are two trends in the current development of ultrasound imaging. While the first one leads to more complex and expensive imaging equipment, poor image quality is a common problem of portable ultrasound imaging systems. In this paper, an image reconstruction method was proposed to break through the imaging quality limitation of portable devices by introducing generative adversarial network (GAN) model into the field of ultrasound image reconstruction. We combined two GAN generator models, the encoder-decoder model and the U-Net model to build a sparse skip connection U-Net (SSC U-Net) to tackle this problem. To produce more realistic output, stabilize the training procedure, and improve spatial resolution in the reconstructed ultrasound images, a new loss function which combines adversarial loss, L1 loss, and differential loss was proposed. Three datasets including 50 pairs of simulation, 40 pairs of phantom, and 72 pairs of in vivo images were used to evaluate the reconstruction performance. Experimental results show that our SSC U-Net is able to reconstruct ultrasound images with improved quality. Compared with U-Net, our SSC U-Net is able to preserve more details in the reconstructed images and improve full width at half maximum (FWHM) of point targets by 3.23%.
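The combined generator objective described above (adversarial loss plus L1 loss plus a differential loss) can be sketched as a weighted sum, with the differential term implemented as a penalty on mismatched finite-difference gradients, which is one common way to counter blurring. The weights, the non-saturating form of the adversarial term, and the gradient formulation are all illustrative assumptions, not the paper's exact loss.

```python
import numpy as np

def combined_generator_loss(fake_logit, fake, real, w_l1=100.0, w_diff=10.0):
    """Generator objective combining three terms (weights are illustrative):
    - adversarial: fool the discriminator (softplus on its negated logit),
    - L1: pixel-wise fidelity to the target image,
    - differential: match horizontal/vertical finite-difference gradients,
      penalizing blur and encouraging sharp edges."""
    adv = np.log1p(np.exp(-fake_logit))                           # softplus(-logit)
    l1 = np.abs(fake - real).mean()
    dx = np.abs(np.diff(fake, axis=1) - np.diff(real, axis=1)).mean()
    dy = np.abs(np.diff(fake, axis=0) - np.diff(real, axis=0)).mean()
    return adv + w_l1 * l1 + w_diff * (dx + dy)

rng = np.random.default_rng(3)
real = rng.random((32, 32))
loss_perfect = combined_generator_loss(5.0, real, real)  # only the adversarial term remains
loss_noisy = combined_generator_loss(5.0, real + 0.1 * rng.standard_normal((32, 32)), real)
print(loss_perfect < loss_noisy)
```

With a perfect reconstruction the L1 and differential terms vanish, so the loss reduces to the adversarial term alone; any reconstruction error raises both fidelity terms.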

31 citations


Cited by
Journal ArticleDOI
TL;DR: COVID-Net is introduced, a deep convolutional neural network design tailored for the detection of COVID-19 cases from chest X-ray (CXR) images that is open source and available to the general public, along with COVIDx, an open access benchmark dataset comprising 13,975 CXR images across 13,870 patient cases.
Abstract: The Coronavirus Disease 2019 (COVID-19) pandemic continues to have a devastating effect on the health and well-being of the global population. A critical step in the fight against COVID-19 is effective screening of infected patients, with one of the key screening approaches being radiology examination using chest radiography. It was found in early studies that patients present abnormalities in chest radiography images that are characteristic of those infected with COVID-19. Motivated by this and inspired by the open source efforts of the research community, in this study we introduce COVID-Net, a deep convolutional neural network design tailored for the detection of COVID-19 cases from chest X-ray (CXR) images that is open source and available to the general public. To the best of the authors' knowledge, COVID-Net is one of the first open source network designs for COVID-19 detection from CXR images at the time of initial release. We also introduce COVIDx, an open access benchmark dataset that we generated comprising 13,975 CXR images across 13,870 patient cases, with the largest number of publicly available COVID-19 positive cases to the best of the authors' knowledge. Furthermore, we investigate how COVID-Net makes predictions using an explainability method in an attempt to not only gain deeper insights into critical factors associated with COVID-19 cases, which can aid clinicians in improved screening, but also audit COVID-Net in a responsible and transparent manner to validate that it is making decisions based on relevant information from the CXR images.
While by no means a production-ready solution, our hope is that the open access COVID-Net, along with the description of how to construct the open source COVIDx dataset, will be leveraged and built upon by researchers and citizen data scientists alike to accelerate the development of highly accurate yet practical deep learning solutions for detecting COVID-19 cases and to accelerate treatment of those who need it most.

2,193 citations

Journal ArticleDOI
07 Apr 2020-BMJ
TL;DR: Proposed models for covid-19 are poorly reported, at high risk of bias, and their reported performance is probably optimistic, according to a review of published and preprint reports.
Abstract: Objective To review and appraise the validity and usefulness of published and preprint reports of prediction models for diagnosing coronavirus disease 2019 (covid-19) in patients with suspected infection, for prognosis of patients with covid-19, and for detecting people in the general population at increased risk of covid-19 infection or being admitted to hospital with the disease. Design Living systematic review and critical appraisal by the COVID-PRECISE (Precise Risk Estimation to optimise covid-19 Care for Infected or Suspected patients in diverse sEttings) group. Data sources PubMed and Embase through Ovid, up to 1 July 2020, supplemented with arXiv, medRxiv, and bioRxiv up to 5 May 2020. Study selection Studies that developed or validated a multivariable covid-19 related prediction model. Data extraction At least two authors independently extracted data using the CHARMS (critical appraisal and data extraction for systematic reviews of prediction modelling studies) checklist; risk of bias was assessed using PROBAST (prediction model risk of bias assessment tool). Results 37 421 titles were screened, and 169 studies describing 232 prediction models were included. The review identified seven models for identifying people at risk in the general population; 118 diagnostic models for detecting covid-19 (75 were based on medical imaging, 10 to diagnose disease severity); and 107 prognostic models for predicting mortality risk, progression to severe disease, intensive care unit admission, ventilation, intubation, or length of hospital stay. The most frequent types of predictors included in the covid-19 prediction models are vital signs, age, comorbidities, and image features. Flu-like symptoms are frequently predictive in diagnostic models, while sex, C reactive protein, and lymphocyte counts are frequent prognostic factors. 
Reported C index estimates from the strongest form of validation available per model ranged from 0.71 to 0.99 in prediction models for the general population, from 0.65 to more than 0.99 in diagnostic models, and from 0.54 to 0.99 in prognostic models. All models were rated at high or unclear risk of bias, mostly because of non-representative selection of control patients, exclusion of patients who had not experienced the event of interest by the end of the study, high risk of model overfitting, and unclear reporting. Many models did not include a description of the target population (n=27, 12%) or care setting (n=75, 32%), and only 11 (5%) were externally validated by a calibration plot. The Jehi diagnostic model and the 4C mortality score were identified as promising models. Conclusion Prediction models for covid-19 are quickly entering the academic literature to support medical decision making at a time when they are urgently needed. This review indicates that almost all published prediction models are poorly reported, and at high risk of bias such that their reported predictive performance is probably optimistic. However, we have identified two (one diagnostic and one prognostic) promising models that should soon be validated in multiple cohorts, preferably through collaborative efforts and data sharing to also allow an investigation of the stability and heterogeneity in their performance across populations and settings. Details on all reviewed models are publicly available at https://www.covprecise.org/. Methodological guidance as provided in this paper should be followed because unreliable predictions could cause more harm than benefit in guiding clinical decisions. Finally, prediction model authors should adhere to the TRIPOD (transparent reporting of a multivariable prediction model for individual prognosis or diagnosis) reporting guideline. Systematic review registration Protocol https://osf.io/ehc47/, registration https://osf.io/wy245.
Readers’ note This article is a living systematic review that will be updated to reflect emerging evidence. Updates may occur for up to two years from the date of original publication. This version is update 3 of the original article published on 7 April 2020 (BMJ 2020;369:m1328). Previous updates can be found as data supplements (https://www.bmj.com/content/369/bmj.m1328/related#datasupp). When citing this paper please consider adding the update number and date of access for clarity.
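The C index reported throughout the review measures discrimination: the probability that a randomly chosen case with the outcome receives a higher predicted score than one without it. For a binary outcome it coincides with the area under the ROC curve. A minimal sketch of the pairwise computation, with made-up scores and outcomes:

```python
import numpy as np

def c_index(scores, outcomes):
    """Concordance (C) index for a binary outcome: the fraction of
    (event, non-event) pairs in which the event case got the higher score;
    ties count as half. For binary outcomes this equals the AUC."""
    scores = np.asarray(scores, dtype=float)
    outcomes = np.asarray(outcomes)
    pos = scores[outcomes == 1]                 # scores of cases with the event
    neg = scores[outcomes == 0]                 # scores of cases without it
    diffs = pos[:, None] - neg[None, :]         # all (event, non-event) pairs
    return ((diffs > 0).sum() + 0.5 * (diffs == 0).sum()) / diffs.size

scores = [0.9, 0.8, 0.6, 0.4, 0.3, 0.1]
outcomes = [1, 1, 0, 1, 0, 0]
print(c_index(scores, outcomes))  # 8 of 9 pairs concordant -> 0.888...
```

A C index of 0.5 corresponds to chance-level discrimination and 1.0 to perfect ranking, which is why the review treats the reported 0.54 to 0.99 range as spanning near-useless to near-perfect models (before accounting for bias).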

2,183 citations

Journal ArticleDOI
TL;DR: This review analyzes epidemiological, diagnostic, clinical, and therapeutic aspects of COVID-19, including perspectives on vaccines and preventive measures that have already been globally recommended to counter this pandemic virus; codon usage studies suggest that the virus was transferred from an animal source, such as bats.
Abstract: SUMMARYIn recent decades, several new diseases have emerged in different geographical areas, with pathogens including Ebola virus, Zika virus, Nipah virus, and coronaviruses (CoVs). Recently, a new type of viral infection emerged in Wuhan City, China, and initial genomic sequencing data of this virus do not match with previously sequenced CoVs, suggesting a novel CoV strain (2019-nCoV), which has now been termed severe acute respiratory syndrome CoV-2 (SARS-CoV-2). Although coronavirus disease 2019 (COVID-19) is suspected to originate from an animal host (zoonotic origin) followed by human-to-human transmission, the possibility of other routes should not be ruled out. Compared to diseases caused by previously known human CoVs, COVID-19 shows less severe pathogenesis but higher transmission competence, as is evident from the continuously increasing number of confirmed cases globally. Compared to other emerging viruses, such as Ebola virus, avian H7N9, SARS-CoV, and Middle East respiratory syndrome coronavirus (MERS-CoV), SARS-CoV-2 has shown relatively low pathogenicity and moderate transmissibility. Codon usage studies suggest that this novel virus has been transferred from an animal source, such as bats. Early diagnosis by real-time PCR and next-generation sequencing has facilitated the identification of the pathogen at an early stage. Since no antiviral drug or vaccine exists to treat or prevent SARS-CoV-2, potential therapeutic strategies that are currently being evaluated predominantly stem from previous experience with treating SARS-CoV, MERS-CoV, and other emerging viral diseases. In this review, we address epidemiological, diagnostic, clinical, and therapeutic aspects, including perspectives of vaccines and preventive measures that have already been globally recommended to counter this pandemic virus.

1,011 citations

Journal ArticleDOI
TL;DR: In this article, the authors provide a short overview of recent advances and some associated challenges in machine learning applied to medical image processing and image analysis, and provide a starting point for people interested in experimenting and perhaps contributing to the field of machine learning for medical imaging.
Abstract: What has happened in machine learning lately, and what does it mean for the future of medical image analysis? Machine learning has witnessed a tremendous amount of attention over the last few years. The current boom started around 2009 when so-called deep artificial neural networks began outperforming other established models on a number of important benchmarks. Deep neural networks are now the state-of-the-art machine learning models across a variety of areas, from image analysis to natural language processing, and are widely deployed in academia and industry. These developments have a huge potential for medical imaging technology, medical data analysis, medical diagnostics, and healthcare in general, a potential that is slowly being realized. We provide a short overview of recent advances and some associated challenges in machine learning applied to medical image processing and image analysis. As this has become a very broad and fast-expanding field, we will not survey the entire landscape of applications, but put particular focus on deep learning in MRI. Our aim is threefold: (i) give a brief introduction to deep learning with pointers to core references; (ii) indicate how deep learning has been applied to the entire MRI processing chain, from acquisition to image retrieval, from segmentation to disease prediction; (iii) provide a starting point for people interested in experimenting and perhaps contributing to the field of machine learning for medical imaging by pointing out good educational resources, state-of-the-art open-source code, and interesting sources of data and problems related to medical imaging.

991 citations

Journal ArticleDOI
TL;DR: This review paper covers the entire pipeline of medical imaging and analysis techniques involved with COVID-19, including image acquisition, segmentation, diagnosis, and follow-up, and particularly focuses on the integration of AI with X-ray and CT, both of which are widely used in the frontline hospitals.
Abstract: The pandemic of coronavirus disease 2019 (COVID-19) is spreading all over the world. Medical imaging such as X-ray and computed tomography (CT) plays an essential role in the global fight against COVID-19, whereas the recently emerging artificial intelligence (AI) technologies further strengthen the power of the imaging tools and help medical specialists. We hereby review the rapid responses in the community of medical imaging (empowered by AI) toward COVID-19. For example, AI-empowered image acquisition can significantly help automate the scanning procedure and also reshape the workflow with minimal contact to patients, providing the best protection to the imaging technicians. Also, AI can improve work efficiency by accurate delineation of infections in X-ray and CT images, facilitating subsequent quantification. Moreover, the computer-aided platforms help radiologists make clinical decisions, i.e., for disease diagnosis, tracking, and prognosis. In this review paper, we thus cover the entire pipeline of medical imaging and analysis techniques involved with COVID-19, including image acquisition, segmentation, diagnosis, and follow-up. We particularly focus on the integration of AI with X-ray and CT, both of which are widely used in the frontline hospitals, in order to depict the latest progress of medical imaging and radiology fighting against COVID-19.

916 citations