Journal ArticleDOI

Segmentation and classification in MRI and US fetal imaging: Recent trends and future prospects.

TL;DR: This review covers state‐of‐the‐art segmentation and classification methodologies for the whole fetus and, more specifically, the fetal brain, lungs, liver, heart and placenta in magnetic resonance imaging and (3D) ultrasound for the first time.
About: This article was published in Medical Image Analysis on 2019-01-01. It has received 70 citations to date.
Citations
Journal ArticleDOI
TL;DR: CA-Net as mentioned in this paper proposes a joint spatial attention module to make the network focus more on the foreground region, and a novel channel attention module to adaptively recalibrate channel-wise feature responses and highlight the most relevant feature channels.
Abstract: Accurate medical image segmentation is essential for diagnosis and treatment planning of diseases. Convolutional Neural Networks (CNNs) have achieved state-of-the-art performance for automatic medical image segmentation. However, they are still challenged by complicated conditions where the segmentation target has large variations of position, shape and scale, and existing CNNs have poor explainability, which limits their application to clinical decisions. In this work, we make extensive use of multiple attentions in a CNN architecture and propose a comprehensive attention-based CNN (CA-Net) for more accurate and explainable medical image segmentation that is aware of the most important spatial positions, channels and scales at the same time. In particular, we first propose a joint spatial attention module to make the network focus more on the foreground region. Then, a novel channel attention module is proposed to adaptively recalibrate channel-wise feature responses and highlight the most relevant feature channels. Also, we propose a scale attention module implicitly emphasizing the most salient feature maps among multiple scales so that the CNN is adaptive to the size of an object. Extensive experiments on skin lesion segmentation from ISIC 2018 and multi-class segmentation of fetal MRI showed that, compared with U-Net, our proposed CA-Net significantly improved the average segmentation Dice score from 87.77% to 92.08% for skin lesions, from 84.79% to 87.08% for the placenta, and from 93.20% to 95.88% for the fetal brain. It also reduced the model size to around 15 times smaller than the state-of-the-art DeepLabv3+, with close or even better accuracy. In addition, it has much higher explainability than existing networks, as shown by visualizing the attention weight maps. Our code is available at https://github.com/HiLab-git/CA-Net .
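
As a minimal illustration of the channel-attention idea described in this abstract (squeeze-and-excitation-style recalibration of channel-wise responses), here is a small PyTorch sketch; the module name, parameter names, and reduction ratio are illustrative assumptions and are not taken from the authors' released CA-Net code.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Illustrative channel-attention block: recalibrates channel-wise feature
    responses so the most relevant channels are highlighted (SE-style sketch,
    not the authors' CA-Net implementation)."""
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)              # squeeze: global spatial average per channel
        self.fc = nn.Sequential(                         # excitation: learn per-channel weights
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        weights = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * weights                               # reweight channels of the input feature map

# usage: reweight a 32-channel feature map
feats = torch.randn(1, 32, 64, 64)
print(ChannelAttention(32)(feats).shape)                 # torch.Size([1, 32, 64, 64])
```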

205 citations


Cites background from "Segmentation and classification in ..."

  • ...and the placenta is important for fetal growth assessment and motion correction [41]....


Journal ArticleDOI
TL;DR: Deep features are extracted from the InceptionV3 model, from which a softmax score vector is acquired and supplied to the quantum variational classifier (QVR) to discriminate between glioma, meningioma, no tumor, and pituitary tumor, demonstrating the proposed model's effectiveness.
Abstract: A brain tumor is an abnormal growth of cells. Early detection of a brain tumor is critical for clinical practice and survival rates. Brain tumors arise in a variety of shapes, sizes, and features, with variable treatment options. Manual detection of tumors is difficult, time-consuming, and error-prone. Therefore, there is a significant need for computerized diagnostic systems for accurate brain tumor detection. In this research, deep features are extracted from the InceptionV3 model, from which a softmax score vector is acquired and supplied to the quantum variational classifier (QVR) to discriminate between glioma, meningioma, no tumor, and pituitary tumor. The classified tumor images are then passed to the proposed Seg-network, where the infected region is segmented to analyze the tumor severity level. The reported research is evaluated on three benchmark datasets: Kaggle, BraTS 2020, and locally collected images. The model achieved detection scores greater than 90%, demonstrating the proposed model's effectiveness.
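
To make the feature-extraction stage above concrete, the sketch below obtains a softmax score vector from a pretrained InceptionV3 using torchvision (the `weights` API assumes a recent torchvision release); the downstream quantum variational classifier is application-specific and is represented only by a placeholder, and the input file name is hypothetical.

```python
import torch
import torch.nn.functional as F
from PIL import Image
from torchvision import models, transforms

# Pretrained InceptionV3 used purely as a score-vector extractor.
model = models.inception_v3(weights=models.Inception_V3_Weights.DEFAULT)
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(342),
    transforms.CenterCrop(299),                      # InceptionV3 expects 299x299 inputs
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

def softmax_scores(image_path: str) -> torch.Tensor:
    """Return the softmax score vector that would feed the downstream classifier."""
    x = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        logits = model(x)                            # (1, 1000) class logits in eval mode
    return F.softmax(logits, dim=1)

def quantum_variational_classifier(scores: torch.Tensor) -> int:
    raise NotImplementedError("placeholder for the paper's QVR stage")

# scores = softmax_scores("brain_mri_slice.png")     # hypothetical input image
```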

22 citations

Journal ArticleDOI
TL;DR: Wang et al. as discussed by the authors proposed a simple yet effective residual learning diagnosis system (RLDS) for diagnosing fetal CHD to improve diagnostic accuracy, which adopts convolutional neural networks to extract discriminative features of the fetal cardiac anatomical structures.

20 citations

Journal ArticleDOI
TL;DR: The results suggest that the model has high potential to help cardiologists complete the initial screening for fetal congenital heart disease, and show a strong correlation between the predicted septal defects and ground truth, measured as mean average precision (mAP).
Abstract: Accurate screening for septal defects is important for supporting radiologists’ interpretative work. Some previous studies have proposed semantic segmentation and object detection approaches to carry out fetal heart detection; unfortunately, those models could not separate different objects of the same class. Semantic segmentation only segregates regions containing objects from the same class, whereas the fetal heart may contain multiple objects, such as the atria, ventricles, valves, and aorta. Besides, blurry boundaries (shadows) or a lack of consistency in ultrasound acquisition can cause wide variations. This study utilizes Mask-RCNN (MRCNN) to handle fetal ultrasonography images and employs it to detect and segment defects in heart walls containing multiple objects. To our knowledge, this is the first study involving a medical application for septal defect detection using instance segmentation. The use of the MRCNN architecture with a ResNet50 backbone and a 0.0001 learning rate allows the model to be trained on fetal heart images two times faster than other object detection methods, such as Faster-RCNN (FRCNN). We demonstrate a strong correlation between the predicted septal defects and ground truth as a mean average precision (mAP). As shown in the results, the proposed MRCNN model achieves good performance in multiclass detection of the heart chambers, with 97.59% for the right atrium, 99.67% for the left atrium, 86.17% for the left ventricle, 98.83% for the right ventricle, and 99.97% for the aorta. We also report competitive results for the detection of defects (holes) in the atria and ventricles via semantic and instance segmentation. The results show that the mAP is about 99.48% for MRCNN and 82% for FRCNN. We suggest that evaluation and prediction with our proposed model provide reliable detection of septal defects, including defects in the atria, ventricles, or both. These results suggest that the model has high potential to help cardiologists complete the initial screening for fetal congenital heart disease.
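
The sketch below shows one plausible way to set up the reported configuration with torchvision's Mask R-CNN (ResNet-50 FPN backbone, 0.0001 learning rate); the class list, class count, and choice of optimizer are assumptions for illustration, not the paper's exact training setup.

```python
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

# Assumed label set: background + right/left atrium, right/left ventricle, aorta, defect.
num_classes = 7

# Mask R-CNN with a ResNet-50 FPN backbone (the "weights" API assumes a recent torchvision).
model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")

# Replace the box and mask heads so they predict the assumed fetal-heart classes.
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)

in_features_mask = model.roi_heads.mask_predictor.conv5_mask.in_channels
model.roi_heads.mask_predictor = MaskRCNNPredictor(in_features_mask, 256, num_classes)

# Learning rate 0.0001 as reported in the abstract; the optimizer itself is an assumption.
optimizer = torch.optim.Adam([p for p in model.parameters() if p.requires_grad], lr=1e-4)
```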

19 citations


Cites background or methods from "Segmentation and classification in ..."

  • ...Unfortunately, such methods (with threshold-based techniques, for example) yield the best results when the regions of interest in an image exhibit a massive difference in strength from the background of the image, but this results in more similar images with problems, dramatically reducing the efficiency and decreasing the applicability of these methods [6], [27]....


  • ...It can aid doctors in making more accurate treatment plans [27]....


  • ...The segmentation process is the key to exploring fetal heart abnormalities, especially defect conditions [27]....


References
Journal ArticleDOI
TL;DR: New insights and technology have become available that can greatly advance the understanding of the genetic factors that contribute to CHD and the discovery of regulatory regions of key (heart) developmental genes and the occurrence of variations and mutations within, in the setting of CHD.
Abstract: Congenital heart disease (CHD) is the most common type of birth defect. The advent of corrective cardiac surgery and the increase in knowledge concerning the longitudinal care of patients with CHD has led to a spectacular increase in life expectancy. Therefore, >90% of children with CHD who survive the first year of life will live into adulthood. The etiology of CHD is complex and is associated with both environmental and genetic causes. CHD is a genetically heterogeneous disease that is associated with long-recognized chromosomal abnormalities, as well as with mutations in numerous (developmental) genes. Nevertheless, the genetic factors underlying CHD have remained largely elusive, and it is important to realize that in the vast majority of CHD patients no causal mutation or chromosomal abnormality is identified. However, new insights (alternative inheritance paradigms) and technology (next-generation sequencing) have become available that can greatly advance our understanding of the genetic factors that contribute to CHD; these will be discussed in this review. Moreover, we will focus on the discovery of regulatory regions of key (heart) developmental genes and the occurrence of variations and mutations within them, in the setting of CHD.

51 citations

Journal ArticleDOI
TL;DR: A framework for tracking the key variables that describe the content of each frame of freehand 2D ultrasound scanning videos of the healthy fetal heart is presented, an important first step towards developing tools that can assist with CHD detection in abnormal cases.

48 citations

Journal ArticleDOI
TL;DR: The authors report the design of the first automatic solution, called "intelligent scanning" (IS), for selecting SPGS and performing biometric measurements using real-time 2D US, and prove that the IS precision is in the range of interobserver variability.
Abstract: Purpose: To assist radiologists and decrease interobserver variability when using 2D ultrasonography (US) to locate the standardized plane of the early gestational sac (SPGS) and to perform gestational sac (GS) biometric measurements. Methods: In this paper, the authors report the design of the first automatic solution, called “intelligent scanning” (IS), for selecting the SPGS and performing biometric measurements using real-time 2D US. First, the GS is efficiently and precisely located in each ultrasound frame by exploiting a coarse-to-fine detection scheme based on the training of two cascade AdaBoost classifiers. Next, the SPGS are automatically selected by eliminating false positives. This is accomplished using local context information based on the relative position of anatomies in the image sequence. Finally, a database-guided multiscale normalized cuts algorithm is proposed to generate the initial contour of the GS, based on which the GS is automatically segmented for measurement by a modified snake model. Results: This system was validated on 31 ultrasound videos involving 31 pregnant volunteers. The differences between system performance and radiologist performance with respect to SPGS selection and length and depth (diameter) measurements are 7.5% ± 5.0%, 5.5% ± 5.2%, and 6.5% ± 4.6%, respectively. Additional validation shows that the IS precision is within the range of interobserver variability. Our system can display the SPGS along with biometric measurements approximately three seconds after the video ends when using a 1.9 GHz dual-core computer. Conclusions: IS of the GS from 2D real-time US is a practical, reproducible, and reliable approach.
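
To illustrate the final step (a snake model refining an initial gestational-sac contour), here is a short scikit-image sketch; the circular initialisation, file name, and parameter values are placeholders standing in for the paper's database-guided normalized-cuts initialisation and modified snake model.

```python
import numpy as np
from skimage import filters, io, segmentation

# Hypothetical ultrasound frame; in the paper the initial contour comes from a
# database-guided multiscale normalized cuts step, here it is simply a circle.
image = io.imread("ultrasound_frame.png", as_gray=True)
smoothed = filters.gaussian(image, sigma=3)              # smooth before the snake evolves

t = np.linspace(0, 2 * np.pi, 200)
rows = image.shape[0] / 2 + 80 * np.sin(t)
cols = image.shape[1] / 2 + 80 * np.cos(t)
init = np.stack([rows, cols], axis=1)                    # initial (row, col) contour points

# Active contour (snake) pulls the initial contour toward the gestational sac boundary.
snake = segmentation.active_contour(smoothed, init, alpha=0.015, beta=10, gamma=0.001)
print(snake.shape)                                       # refined (N, 2) contour
```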

48 citations

Proceedings ArticleDOI
28 Jun 2009
TL;DR: A fully automated segmentation method is proposed to localize the eyes and segment the skull bone content (SBC) of fetuses; validation of the proposed method demonstrated high accuracy for eye and SBC extraction.
Abstract: Recent improvements of fetal MRI acquisitions now allow three-dimensional segmentation of fetal structures, to extract biometrical measures for pregnancy follow-up. Automation of the segmentation process remains a difficult challenge, given the complexity of the fetal organs and their spatial organization. As a starting point, we propose in this paper a fully automated segmentation method to localize the eyes and segment the skull bone content (SBC). Priors, embedding contrast, morphological and biometrical information, are used to assist the segmentation process. A validation of the proposed segmentation method, on 24 MRI volumes of fetuses between 30 and 35 gestational weeks, demonstrated a high accuracy for eyes and SBC extraction.

47 citations

Proceedings ArticleDOI
18 Apr 2017
TL;DR: This work shows that feasible results compared to ground truth were obtained, which could form the basis of a fully automatic method for segmenting the placenta in 3D ultrasound.
Abstract: Placental volume measured with 3D ultrasound in the first trimester has been shown to be correlated with adverse pregnancy outcomes. This could potentially be used as a screening test to predict the “at risk” pregnancy. However, manual segmentation, whilst previously shown to be accurate and repeatable, is very time-consuming, and semi-automated methods still require operator input. To generate a screening tool, fully automated placental segmentation is required. In this work, a deep convolutional neural network (cNN), DeepMedic, was trained using the output of the semi-automated Random Walker method as ground truth. 300 3D ultrasound scans of first trimester placentas were used to train, validate and test the cNN. Compared against the semi-automated segmentation, the resultant median (1st quartile, 3rd quartile) Dice Similarity Coefficient was 0.73 (0.66, 0.76), and the median (1st quartile, 3rd quartile) Hausdorff distance was 27 mm (18 mm, 36 mm). We present the first attempt at using a deep cNN for segmentation of the placenta in 3D ultrasound. This work shows that feasible results compared to ground truth were obtained that could form the basis of a fully automatic segmentation method.
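
For reference, a minimal sketch of the two metrics reported above (Dice Similarity Coefficient and Hausdorff distance) for a pair of 3D binary masks, using NumPy and SciPy; the input masks are synthetic placeholders, and the distances are in voxels unless rescaled by the voxel spacing.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice Similarity Coefficient between two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum())

def hausdorff(pred: np.ndarray, truth: np.ndarray) -> float:
    """Symmetric Hausdorff distance between the foreground voxel sets of two masks."""
    p = np.argwhere(pred).astype(float)                  # (N, 3) foreground voxel coordinates
    t = np.argwhere(truth).astype(float)
    return max(directed_hausdorff(p, t)[0], directed_hausdorff(t, p)[0])

# Synthetic placeholder volumes standing in for predicted and ground-truth placentas.
pred = np.zeros((64, 64, 64), dtype=bool)
pred[20:40, 20:40, 20:40] = True
truth = np.zeros((64, 64, 64), dtype=bool)
truth[22:42, 22:42, 22:42] = True
print(round(dice(pred, truth), 3), round(hausdorff(pred, truth), 2))
```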

46 citations