Author

Youbing Yin

Other affiliations: Tsinghua University
Bio: Youbing Yin is an academic researcher from the University of Iowa. The author has contributed to research in topics: Image registration & Tree structure. The author has an h-index of 21 and has co-authored 98 publications receiving 2,457 citations. Previous affiliations of Youbing Yin include Tsinghua University.


Papers
Journal ArticleDOI
TL;DR: A deep learning model was developed to extract visual features from volumetric chest CT scans for the detection of COVID-19 and its differentiation from community-acquired pneumonia and other lung conditions.
Abstract: Background Coronavirus disease 2019 (COVID-19) has widely spread all over the world since the beginning of 2020. It is desirable to develop automatic and accurate detection of COVID-19 using chest CT. Purpose To develop a fully automatic framework to detect COVID-19 using chest CT and evaluate its performance. Materials and Methods In this retrospective and multicenter study, a deep learning model, the COVID-19 detection neural network (COVNet), was developed to extract visual features from volumetric chest CT scans for the detection of COVID-19. CT scans of community-acquired pneumonia (CAP) and other non-pneumonia abnormalities were included to test the robustness of the model. The datasets were collected from six hospitals between August 2016 and February 2020. Diagnostic performance was assessed with the area under the receiver operating characteristic curve, sensitivity, and specificity. Results The collected dataset consisted of 4352 chest CT scans from 3322 patients. The average patient age (±standard deviation) was 49 years ± 15, and there were slightly more men than women (1838 vs 1484, respectively; P = .29). The per-scan sensitivity and specificity for detecting COVID-19 in the independent test set were 90% (95% confidence interval [CI]: 83%, 94%; 114 of 127 scans) and 96% (95% CI: 93%, 98%; 294 of 307 scans), respectively, with an area under the receiver operating characteristic curve of 0.96 (P < .001). The per-scan sensitivity and specificity for detecting CAP in the independent test set were 87% (152 of 175 scans) and 92% (239 of 259 scans), respectively, with an area under the receiver operating characteristic curve of 0.95 (95% CI: 0.93, 0.97). Conclusion A deep learning model can accurately detect COVID-19 and differentiate it from community-acquired pneumonia and other lung conditions. © RSNA, 2020. Online supplemental material is available for this article.

1,505 citations
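The headline numbers in the abstract above follow directly from the reported scan counts. A minimal sketch that reproduces the per-scan sensitivity and specificity, using a Wilson score interval as one plausible choice of 95% CI (the abstract does not state which interval method the authors used):

```python
# Reproduce per-scan sensitivity/specificity from the counts quoted in the
# abstract; the Wilson score interval is one illustrative CI choice only.
from math import sqrt

def wilson_ci(successes, total, z=1.96):
    """Wilson score interval for a binomial proportion."""
    p = successes / total
    denom = 1 + z**2 / total
    centre = (p + z**2 / (2 * total)) / denom
    half = z * sqrt(p * (1 - p) / total + z**2 / (4 * total**2)) / denom
    return centre - half, centre + half

# Per-scan counts for COVID-19 detection on the independent test set (from the abstract)
tp, n_pos = 114, 127    # correctly detected COVID-19 scans / all COVID-19 scans
tn, n_neg = 294, 307    # correctly rejected scans / all non-COVID-19 scans

for name, k, n in [("sensitivity", tp, n_pos), ("specificity", tn, n_neg)]:
    low, high = wilson_ci(k, n)
    print(f"{name}: {k / n:.0%} (95% CI {low:.0%}-{high:.0%})")
```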

Journal ArticleDOI
TL;DR: A new similarity criterion is introduced to take into account changes in reconstructed Hounsfield units (scaled attenuation coefficient, HU) with inflation to minimize the local tissue volume difference within the lungs between matched regions, thus preserving the tissue mass of the lungs if the tissue density is assumed to be relatively constant.
Abstract: The authors propose a nonrigid image registration approach to align two computed-tomography (CT)-derived lung datasets acquired during breath-holds at two inspiratory levels when the image distortion between the two volumes is large. The goal is to derive a three-dimensional warping function that can be used in association with computational fluid dynamics studies. In contrast to the sum of squared intensity difference (SSD), a new similarity criterion, the sum of squared tissue volume difference (SSTVD), is introduced to take into account changes in reconstructed Hounsfield units (scaled attenuation coefficient, HU) with inflation. This new criterion aims to minimize the local tissue volume difference within the lungs between matched regions, thus preserving the tissue mass of the lungs if the tissue density is assumed to be relatively constant. The local tissue volume difference is contributed by two factors: Change in the regional volume due to the deformation and change in the fractional tissue content in a region due to inflation. The change in the regional volume is calculated from the Jacobian value derived from the warping function and the change in the fractional tissue content is estimated from reconstructed HU based on quantitative CT measures. A composite of multilevel B-spline is adopted to deform images and a sufficient condition is imposed to ensure a one-to-one mapping even for a registration pair with large volume difference. Parameters of the transformation model are optimized by a limited-memory quasi-Newton minimization approach in a multiresolution framework. To evaluate the effectiveness of the new similarity measure, the authors performed registrations for six lung volume pairs. Over 100 annotated landmarks located at vessel bifurcations were generated using a semiautomatic system. The results show that the SSTVD method yields smaller average landmark errors than the SSD method across all six registration pairs.

218 citations
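A compact sketch of the SSTVD criterion described above: fractional tissue content is estimated from reconstructed HU, and the local volume change of the warped image is taken from the Jacobian of the transformation. The reference values HU_air = -1000 and HU_tissue = 55 and the discrete implementation are assumptions for illustration, not the authors' code:

```python
# Illustrative SSTVD (sum of squared tissue volume difference) evaluation.
import numpy as np

HU_AIR, HU_TISSUE = -1000.0, 55.0   # commonly used reference values (assumption)

def tissue_fraction(hu):
    """Fractional tissue content estimated from Hounsfield units, clipped to [0, 1]."""
    return np.clip((hu - HU_AIR) / (HU_TISSUE - HU_AIR), 0.0, 1.0)

def sstvd(fixed_hu, warped_moving_hu, jacobian, voxel_volume=1.0):
    """Sum of squared local tissue-volume differences on the fixed-image grid.

    fixed_hu, warped_moving_hu : HU arrays resampled to the fixed-image grid
    jacobian                   : Jacobian determinant of the warp at each voxel
    """
    v_fixed = voxel_volume * tissue_fraction(fixed_hu)
    v_moving = voxel_volume * jacobian * tissue_fraction(warped_moving_hu)
    return np.sum((v_fixed - v_moving) ** 2)

# Toy usage with random volumes (real use: registered CT lung volumes + warp Jacobian)
rng = np.random.default_rng(0)
fixed = rng.uniform(-1000, 0, size=(32, 32, 32))
moving = rng.uniform(-1000, 0, size=(32, 32, 32))
jac = np.ones_like(fixed)
print(sstvd(fixed, moving, jac))
```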

Journal ArticleDOI
TL;DR: A novel image-based technique is proposed to estimate a subject-specific boundary condition (BC) for computational fluid dynamics (CFD) simulation of pulmonary air flow; results show that the method predicts lobar volume changes consistent with metrics measured directly from the images.

148 citations
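Only the one-sentence summary above is given in this listing, so the following is a loose, hypothetical illustration of the general idea: apportioning outlet flow in a CFD airway model in proportion to image-measured lobar volume change. All names and numbers are placeholders:

```python
# Hypothetical illustration: distribute total inspiratory flow across lobes in
# proportion to image-measured lobar volume change, as one way to build a
# subject-specific outlet boundary condition for a CFD airway model.
def lobar_flow_fractions(lobe_volumes_end_exp, lobe_volumes_end_insp):
    """Fraction of total flow assigned to each lobe, from per-lobe volume change."""
    deltas = {lobe: lobe_volumes_end_insp[lobe] - lobe_volumes_end_exp[lobe]
              for lobe in lobe_volumes_end_exp}
    total = sum(deltas.values())
    return {lobe: dv / total for lobe, dv in deltas.items()}

# Example per-lobe volumes in mL (made-up numbers for illustration only)
v_exp  = {"RUL": 600, "RML": 250, "RLL": 700, "LUL": 650, "LLL": 680}
v_insp = {"RUL": 820, "RML": 330, "RLL": 1050, "LUL": 880, "LLL": 990}
print(lobar_flow_fractions(v_exp, v_insp))
```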

Journal ArticleDOI
TL;DR: The proposed CNN-RNN deep learning framework was able to accurately detect ICH and its subtypes with fast speed, suggesting its potential for assisting radiologists and physicians in their clinical diagnosis workflow.
Abstract: To evaluate the performance of a novel three-dimensional (3D) joint convolutional and recurrent neural network (CNN-RNN) for the detection of intracranial hemorrhage (ICH) and its five subtypes (cerebral parenchymal, intraventricular, subdural, epidural, and subarachnoid) in non-contrast head CT. A total of 2836 subjects (ICH/normal, 1836/1000) from three institutions were included in this ethically approved retrospective study, with a total of 76,621 slices from non-contrast head CT scans. ICH and its five subtypes were annotated by three independent experienced radiologists, with majority voting as the reference standard for both the subject level and the slice level. Ninety percent of the data was used for training and validation, and the remaining 10% for final evaluation. A joint CNN-RNN classification framework was proposed, with the flexibility to train when subject-level or slice-level labels are available. The predictions were compared with the interpretations from three junior radiology trainees and an additional senior radiologist. It took our algorithm less than 30 s on average to process a 3D CT scan. For the two-type classification task (predicting bleeding or not), our algorithm achieved excellent values (≥ 0.98) across all reporting metrics on the subject level. For the five-type classification task (predicting five subtypes), our algorithm achieved > 0.8 AUC across all subtypes. The performance of our algorithm was generally superior to the average performance of the junior radiology trainees for both two-type and five-type classification tasks. The proposed method was able to accurately detect ICH and its subtypes with fast speed, suggesting its potential for assisting radiologists and physicians in their clinical diagnosis workflow. • A 3D joint CNN-RNN deep learning framework was developed for ICH detection and subtype classification, which has the flexibility to train with either subject-level labels or slice-level labels. • This deep learning framework is fast and accurate at detecting ICH and its subtypes. • The performance of the automated algorithm was superior to the average performance of three junior radiology trainees in this work, suggesting its potential to reduce initial misinterpretations.

147 citations
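The abstract above describes a joint slice-level CNN and sequence-level RNN. A minimal sketch of that general arrangement (the ResNet-18 backbone, bidirectional GRU, layer sizes, and multi-label head are illustrative assumptions, not the authors' architecture):

```python
# Illustrative slice-CNN + RNN classifier: a 2D CNN encodes each CT slice and a
# recurrent layer aggregates the slice sequence into subject-level logits for
# "any hemorrhage" plus five subtypes. All details are placeholders.
import torch
import torch.nn as nn
from torchvision import models

class SliceCnnRnn(nn.Module):
    def __init__(self, num_labels=6, hidden=128):
        super().__init__()
        backbone = models.resnet18(weights=None)      # stand-in slice encoder
        backbone.fc = nn.Identity()                   # keep 512-d slice features
        self.encoder = backbone
        self.rnn = nn.GRU(512, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, num_labels)

    def forward(self, slices):                        # slices: (B, S, 1, H, W)
        b, s = slices.shape[:2]
        x = slices.view(b * s, 1, *slices.shape[3:]).repeat(1, 3, 1, 1)
        feats = self.encoder(x).view(b, s, -1)        # per-slice features
        seq, _ = self.rnn(feats)                      # aggregate across slices
        return self.head(seq.mean(dim=1))             # subject-level logits

model = SliceCnnRnn()
scan = torch.randn(2, 24, 1, 224, 224)               # 2 subjects × 24 slices
print(model(scan).shape)                              # torch.Size([2, 6])
```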

Journal ArticleDOI
TL;DR: PMBF was reduced in mild COPD, including in regions of lung without frank emphysema; this reduction may represent a pathological process distinct from small airways disease and may provide an imaging biomarker for therapeutic strategies targeting the pulmonary microvasculature.
Abstract: Rationale: Smoking-related microvascular loss causes end-organ damage in the kidneys, heart, and brain. Basic research suggests a similar process in the lungs, but no large studies have assessed pulmonary microvascular blood flow (PMBF) in early chronic lung disease. Objectives: To investigate whether PMBF is reduced in mild as well as more severe chronic obstructive pulmonary disease (COPD) and emphysema. Methods: PMBF was measured using gadolinium-enhanced magnetic resonance imaging (MRI) among smokers with COPD and control subjects aged 50 to 79 years without clinical cardiovascular disease. COPD severity was defined by standard criteria. Emphysema on computed tomography (CT) was defined by the percentage of lung regions below −950 Hounsfield units (−950 HU) and by radiologists using a standard protocol. We adjusted for potential confounders, including smoking, oxygenation, and left ventricular cardiac output. Measurements and Main Results: Among 144 participants, PMBF was reduced by 30% in mild COPD, by 2...

132 citations
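The CT emphysema index mentioned in the abstract (the percentage of lung below −950 HU, often written %LAA-950) is simple to compute once a lung segmentation is available; a short sketch, with the segmentation itself assumed given:

```python
# Percent emphysema (%LAA-950): fraction of lung voxels below -950 HU.
# Assumes `hu` is a CT volume in Hounsfield units and `lung_mask` a boolean
# lung segmentation (producing the segmentation is out of scope here).
import numpy as np

def percent_emphysema(hu, lung_mask, threshold=-950.0):
    lung_voxels = hu[lung_mask]
    return 100.0 * np.count_nonzero(lung_voxels < threshold) / lung_voxels.size

# Toy example with a synthetic volume
rng = np.random.default_rng(0)
hu = rng.normal(-850, 80, size=(64, 64, 64))
mask = np.ones_like(hu, dtype=bool)
print(f"{percent_emphysema(hu, mask):.1f}% of lung voxels below -950 HU")
```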


Cited by
Journal ArticleDOI
TL;DR: The software consists of a collection of algorithms that are commonly used to solve medical image registration problems, and allows the user to quickly configure, test, and compare different registration methods for a specific application.
Abstract: Medical image registration is an important task in medical image processing. It refers to the process of aligning data sets, possibly from different modalities (e.g., magnetic resonance and computed tomography), different time points (e.g., follow-up scans), and/or different subjects (in case of population studies). A large number of methods for image registration are described in the literature. Unfortunately, there is not one method that works for all applications. We have therefore developed elastix, a publicly available computer program for intensity-based medical image registration. The software consists of a collection of algorithms that are commonly used to solve medical image registration problems. The modular design of elastix allows the user to quickly configure, test, and compare different registration methods for a specific application. The command-line interface enables automated processing of large numbers of data sets, by means of scripting. The usage of elastix for comparing different registration methods is illustrated with three example experiments, in which individual components of the registration method are varied.

3,444 citations
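As the abstract notes, the command-line interface lends itself to scripted batch processing. A small sketch of such a wrapper, assuming elastix is installed and on PATH; the paths and parameter file are placeholders, while -f, -m, -p, and -out are elastix's standard options for the fixed image, moving image, parameter file, and output directory:

```python
# Sketch: batch intensity-based registration of many moving images to one fixed
# image by scripting the elastix command-line tool.
import subprocess
from pathlib import Path

fixed = Path("data/fixed.mhd")            # placeholder paths
params = Path("params/bspline.txt")       # e.g., a prepared B-spline configuration
moving_dir = Path("data/moving")
out_root = Path("results")

for moving in sorted(moving_dir.glob("*.mhd")):
    out_dir = out_root / moving.stem
    out_dir.mkdir(parents=True, exist_ok=True)
    subprocess.run(
        ["elastix",
         "-f", str(fixed),
         "-m", str(moving),
         "-p", str(params),
         "-out", str(out_dir)],
        check=True,
    )
```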

Journal ArticleDOI
TL;DR: COVID-Net is introduced, a deep convolutional neural network design tailored for the detection of COVID-19 cases from chest X-ray (CXR) images that is open source and available to the general public, along with COVIDx, an open access benchmark dataset comprising 13,975 CXR images across 13,870 patient cases.
Abstract: The Coronavirus Disease 2019 (COVID-19) pandemic continues to have a devastating effect on the health and well-being of the global population. A critical step in the fight against COVID-19 is effective screening of infected patients, with one of the key screening approaches being radiology examination using chest radiography. It was found in early studies that patients present abnormalities in chest radiography images that are characteristic of those infected with COVID-19. Motivated by this and inspired by the open source efforts of the research community, in this study we introduce COVID-Net, a deep convolutional neural network design tailored for the detection of COVID-19 cases from chest X-ray (CXR) images that is open source and available to the general public. To the best of the authors' knowledge, COVID-Net is one of the first open source network designs for COVID-19 detection from CXR images at the time of initial release. We also introduce COVIDx, an open access benchmark dataset that we generated comprising 13,975 CXR images across 13,870 patient cases, with the largest number of publicly available COVID-19 positive cases to the best of the authors' knowledge. Furthermore, we investigate how COVID-Net makes predictions using an explainability method in an attempt to not only gain deeper insights into critical factors associated with COVID cases, which can aid clinicians in improved screening, but also audit COVID-Net in a responsible and transparent manner to validate that it is making decisions based on relevant information from the CXR images. By no means a production-ready solution, the hope is that the open access COVID-Net, along with the description on constructing the open source COVIDx dataset, will be leveraged and built upon by both researchers and citizen data scientists alike to accelerate the development of highly accurate yet practical deep learning solutions for detecting COVID-19 cases and accelerate treatment of those who need it the most.

2,193 citations
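The abstract mentions an explainability method but this listing does not name it; the sketch below shows a generic Grad-CAM-style saliency map as one common way to audit which CXR regions drive a prediction. The ResNet-18 stand-in and all names are placeholders, not COVID-Net or the authors' method:

```python
# Generic Grad-CAM-style saliency for a 3-class chest X-ray classifier.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=None)                # stand-in backbone, not COVID-Net
model.fc = torch.nn.Linear(model.fc.in_features, 3)  # e.g., normal / pneumonia / COVID-19
model.eval()

activations, gradients = {}, {}

def save_activation(module, inp, out):
    activations["feat"] = out.detach()

def save_gradient(module, grad_input, grad_output):
    gradients["feat"] = grad_output[0].detach()

model.layer4.register_forward_hook(save_activation)
model.layer4.register_full_backward_hook(save_gradient)

x = torch.randn(1, 3, 224, 224)                      # placeholder CXR tensor
logits = model(x)
target = int(logits.argmax(dim=1))                   # explain the predicted class
model.zero_grad()
logits[0, target].backward()

# Weight each feature map by its mean gradient, combine, rectify, and upsample.
weights = gradients["feat"].mean(dim=(2, 3), keepdim=True)
cam = F.relu((weights * activations["feat"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)   # saliency map in [0, 1]
```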

Journal ArticleDOI
07 Apr 2020-BMJ
TL;DR: Proposed models for covid-19 are poorly reported, at high risk of bias, and their reported performance is probably optimistic, according to a review of published and preprint reports.
Abstract: Objective To review and appraise the validity and usefulness of published and preprint reports of prediction models for diagnosing coronavirus disease 2019 (covid-19) in patients with suspected infection, for prognosis of patients with covid-19, and for detecting people in the general population at increased risk of covid-19 infection or being admitted to hospital with the disease. Design Living systematic review and critical appraisal by the COVID-PRECISE (Precise Risk Estimation to optimise covid-19 Care for Infected or Suspected patients in diverse sEttings) group. Data sources PubMed and Embase through Ovid, up to 1 July 2020, supplemented with arXiv, medRxiv, and bioRxiv up to 5 May 2020. Study selection Studies that developed or validated a multivariable covid-19 related prediction model. Data extraction At least two authors independently extracted data using the CHARMS (critical appraisal and data extraction for systematic reviews of prediction modelling studies) checklist; risk of bias was assessed using PROBAST (prediction model risk of bias assessment tool). Results 37 421 titles were screened, and 169 studies describing 232 prediction models were included. The review identified seven models for identifying people at risk in the general population; 118 diagnostic models for detecting covid-19 (75 were based on medical imaging, 10 to diagnose disease severity); and 107 prognostic models for predicting mortality risk, progression to severe disease, intensive care unit admission, ventilation, intubation, or length of hospital stay. The most frequent types of predictors included in the covid-19 prediction models are vital signs, age, comorbidities, and image features. Flu-like symptoms are frequently predictive in diagnostic models, while sex, C reactive protein, and lymphocyte counts are frequent prognostic factors. Reported C index estimates from the strongest form of validation available per model ranged from 0.71 to 0.99 in prediction models for the general population, from 0.65 to more than 0.99 in diagnostic models, and from 0.54 to 0.99 in prognostic models. All models were rated at high or unclear risk of bias, mostly because of non-representative selection of control patients, exclusion of patients who had not experienced the event of interest by the end of the study, high risk of model overfitting, and unclear reporting. Many models did not include a description of the target population (n=27, 12%) or care setting (n=75, 32%), and only 11 (5%) were externally validated by a calibration plot. The Jehi diagnostic model and the 4C mortality score were identified as promising models. Conclusion Prediction models for covid-19 are quickly entering the academic literature to support medical decision making at a time when they are urgently needed. This review indicates that almost all published prediction models are poorly reported and at high risk of bias, such that their reported predictive performance is probably optimistic. However, we have identified two (one diagnostic and one prognostic) promising models that should soon be validated in multiple cohorts, preferably through collaborative efforts and data sharing to also allow an investigation of the stability and heterogeneity in their performance across populations and settings. Details on all reviewed models are publicly available at https://www.covprecise.org/.
Methodological guidance as provided in this paper should be followed because unreliable predictions could cause more harm than benefit in guiding clinical decisions. Finally, prediction model authors should adhere to the TRIPOD (transparent reporting of a multivariable prediction model for individual prognosis or diagnosis) reporting guideline. Systematic review registration Protocol https://osf.io/ehc47/, registration https://osf.io/wy245. Readers’ note This article is a living systematic review that will be updated to reflect emerging evidence. Updates may occur for up to two years from the date of original publication. This version is update 3 of the original article published on 7 April 2020 (BMJ 2020;369:m1328). Previous updates can be found as data supplements (https://www.bmj.com/content/369/bmj.m1328/related#datasupp). When citing this paper please consider adding the update number and date of access for clarity.

2,183 citations
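For the binary outcomes considered in the review, the reported C index is the probability that a randomly chosen case is assigned a higher predicted risk than a randomly chosen non-case, equivalent to the area under the ROC curve. A small illustrative implementation with made-up predictions:

```python
# Concordance (C) index for a binary outcome: the fraction of case/non-case
# pairs in which the case receives the higher predicted risk (ties count 0.5).
def c_index(risks, outcomes):
    cases = [r for r, y in zip(risks, outcomes) if y == 1]
    controls = [r for r, y in zip(risks, outcomes) if y == 0]
    pairs = concordant = 0.0
    for rc in cases:
        for rn in controls:
            pairs += 1
            if rc > rn:
                concordant += 1
            elif rc == rn:
                concordant += 0.5
    return concordant / pairs

# Made-up predicted risks and observed outcomes, for illustration only
risks    = [0.9, 0.8, 0.35, 0.6, 0.2, 0.1]
outcomes = [1,   1,   1,    0,   0,   0  ]
print(c_index(risks, outcomes))   # ≈ 0.889
```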

Journal ArticleDOI
TL;DR: A deep learning model was developed to extract visual features from volumetric chest CT scans for the detection of COVID-19 and its differentiation from community-acquired pneumonia and other lung conditions.
Abstract: Background Coronavirus disease 2019 (COVID-19) has widely spread all over the world since the beginning of 2020. It is desirable to develop automatic and accurate detection of COVID-19 using chest CT. Purpose To develop a fully automatic framework to detect COVID-19 using chest CT and evaluate its performance. Materials and Methods In this retrospective and multicenter study, a deep learning model, the COVID-19 detection neural network (COVNet), was developed to extract visual features from volumetric chest CT scans for the detection of COVID-19. CT scans of community-acquired pneumonia (CAP) and other non-pneumonia abnormalities were included to test the robustness of the model. The datasets were collected from six hospitals between August 2016 and February 2020. Diagnostic performance was assessed with the area under the receiver operating characteristic curve, sensitivity, and specificity. Results The collected dataset consisted of 4352 chest CT scans from 3322 patients. The average patient age (±standard deviation) was 49 years ± 15, and there were slightly more men than women (1838 vs 1484, respectively; P = .29). The per-scan sensitivity and specificity for detecting COVID-19 in the independent test set were 90% (95% confidence interval [CI]: 83%, 94%; 114 of 127 scans) and 96% (95% CI: 93%, 98%; 294 of 307 scans), respectively, with an area under the receiver operating characteristic curve of 0.96 (P < .001). The per-scan sensitivity and specificity for detecting CAP in the independent test set were 87% (152 of 175 scans) and 92% (239 of 259 scans), respectively, with an area under the receiver operating characteristic curve of 0.95 (95% CI: 0.93, 0.97). Conclusion A deep learning model can accurately detect COVID-19 and differentiate it from community-acquired pneumonia and other lung conditions. © RSNA, 2020. Online supplemental material is available for this article.

1,505 citations

Journal ArticleDOI
TL;DR: This paper attempts to give an overview of deformable registration methods, putting emphasis on the most recent advances in the domain, and provides an extensive account of registration techniques in a systematic manner.
Abstract: Deformable image registration is a fundamental task in medical image processing. Among its most important applications, one may cite: 1) multi-modality fusion, where information acquired by different imaging devices or protocols is fused to facilitate diagnosis and treatment planning; 2) longitudinal studies, where temporal structural or anatomical changes are investigated; and 3) population modeling and statistical atlases used to study normal anatomical variability. In this paper, we attempt to give an overview of deformable registration methods, putting emphasis on the most recent advances in the domain. Additional emphasis has been given to techniques applied to medical images. In order to study image registration methods in depth, their main components are identified and studied independently. The most recent techniques are presented in a systematic fashion. The contribution of this paper is to provide an extensive account of registration techniques in a systematic manner.

1,434 citations
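The survey above decomposes registration methods into a deformation model, a similarity measure, and an optimizer. A deliberately minimal, purely didactic sketch of those three components, using a dense 2D displacement field, an SSD similarity with a smoothness penalty, and gradient-descent optimization (all choices here are illustrative, not drawn from the paper):

```python
# Minimal dense deformable registration: optimize a 2D displacement field by
# gradient descent on a regularized sum-of-squared-differences objective.
import torch
import torch.nn.functional as F

def warp(moving, disp):
    """Warp a (1,1,H,W) image with a (1,2,H,W) displacement field (in pixels)."""
    _, _, h, w = moving.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    grid_x = (xs + disp[:, 0]) / (w - 1) * 2 - 1     # normalize to [-1, 1]
    grid_y = (ys + disp[:, 1]) / (h - 1) * 2 - 1
    grid = torch.stack((grid_x, grid_y), dim=-1)
    return F.grid_sample(moving, grid, align_corners=True)

fixed = torch.rand(1, 1, 64, 64)                      # placeholder images
moving = torch.rand(1, 1, 64, 64)
disp = torch.zeros(1, 2, 64, 64, requires_grad=True)  # deformation model
opt = torch.optim.Adam([disp], lr=0.1)                # optimizer

for _ in range(200):
    opt.zero_grad()
    ssd = ((warp(moving, disp) - fixed) ** 2).mean()  # similarity measure
    smooth = ((disp[..., 1:, :] - disp[..., :-1, :]).pow(2).mean()
              + (disp[..., :, 1:] - disp[..., :, :-1]).pow(2).mean())
    (ssd + 0.1 * smooth).backward()                   # regularized objective
    opt.step()
```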