Journal ArticleDOI

Segmentation and classification in MRI and US fetal imaging: Recent trends and future prospects.

TL;DR: This review covers, for the first time, state-of-the-art segmentation and classification methodologies for the whole fetus and, more specifically, the fetal brain, lungs, liver, heart and placenta in magnetic resonance imaging and (3D) ultrasound.
About: This article was published in Medical Image Analysis on 2019-01-01 and has received 70 citations to date.
Citations
Journal ArticleDOI
TL;DR: CA-Net as mentioned in this paper proposes a joint spatial attention module that makes the network focus on the foreground region, together with a novel channel attention module that adaptively recalibrates channel-wise feature responses and highlights the most relevant feature channels.
Abstract: Accurate medical image segmentation is essential for diagnosis and treatment planning of diseases. Convolutional Neural Networks (CNNs) have achieved state-of-the-art performance for automatic medical image segmentation. However, they are still challenged by complicated conditions where the segmentation target has large variations of position, shape and scale, and existing CNNs have poor explainability, which limits their application to clinical decisions. In this work, we make extensive use of multiple attentions in a CNN architecture and propose a comprehensive attention-based CNN (CA-Net) for more accurate and explainable medical image segmentation that is aware of the most important spatial positions, channels and scales at the same time. In particular, we first propose a joint spatial attention module to make the network focus more on the foreground region. Then, a novel channel attention module is proposed to adaptively recalibrate channel-wise feature responses and highlight the most relevant feature channels. Also, we propose a scale attention module implicitly emphasizing the most salient feature maps among multiple scales so that the CNN is adaptive to the size of an object. Extensive experiments on skin lesion segmentation from ISIC 2018 and multi-class segmentation of fetal MRI showed that, compared with U-Net, the proposed CA-Net significantly improved the average segmentation Dice score from 87.77% to 92.08% for skin lesions, from 84.79% to 87.08% for the placenta, and from 93.20% to 95.88% for the fetal brain. It also reduces the model size by around 15 times compared with the state-of-the-art DeepLabv3+ while achieving comparable or even better accuracy. In addition, it offers much higher explainability than existing networks by visualizing the attention weight maps. Our code is available at https://github.com/HiLab-git/CA-Net .
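The abstract above describes three attention mechanisms; the channel attention component can be illustrated with a minimal PyTorch sketch (global pooling followed by a small gating MLP). This is not the authors' CA-Net code (that is available at the GitHub link above); the class and parameter names below are purely illustrative.

import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Minimal channel-attention sketch: recalibrate channel-wise responses."""
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # squeeze: global spatial average per channel
        self.fc = nn.Sequential(             # excitation: bottleneck MLP producing gates
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),                    # per-channel gates in [0, 1]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        gates = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * gates                     # highlight the most relevant channels

# usage sketch: attn = ChannelAttention(64); y = attn(torch.randn(2, 64, 128, 128))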

205 citations

Cites background from "Segmentation and classification in ..."

  • ...and the placenta is important for fetal growth assessment and motion correction [41]....


Journal ArticleDOI
TL;DR: Deep features are extracted from the Inception-v3 model; the score vector obtained from the softmax layer is supplied to a quantum variational classifier (QVR) to discriminate between glioma, meningioma, no tumor, and pituitary tumor, demonstrating the proposed model's effectiveness.
Abstract: A brain tumor is an abnormal growth of cells that, if not properly diagnosed, can be life-threatening. Early detection of a brain tumor is critical for clinical practice and survival rates. Brain tumors arise in a variety of shapes, sizes, and features, with variable treatment options. Manual detection of tumors is difficult, time-consuming, and error-prone. There is therefore a significant need for computerized diagnostic systems that can detect brain tumors accurately. In this research, deep features are extracted from the Inception-v3 model; the score vector obtained from the softmax layer is supplied to a quantum variational classifier (QVR) to discriminate between glioma, meningioma, no tumor, and pituitary tumor. The classified tumor images are then passed to the proposed Seg-network, where the actual infected region is segmented to analyze the tumor severity level. The reported approach was evaluated on three benchmark datasets: Kaggle, 2020-BRATS, and locally collected images. The model achieved detection scores greater than 90%, demonstrating its effectiveness.
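As a rough illustration of the feature-extraction stage described above, the sketch below obtains softmax score vectors from a pretrained Inception-v3 in torchvision and hands them to a downstream classifier. The quantum variational classifier itself is not reproduced; a classical SVM stands in for it purely for illustration, and all names are assumptions rather than the paper's code.

import torch
import torchvision.models as models
import torchvision.transforms as T
from sklearn.svm import SVC

model = models.inception_v3(weights="DEFAULT")  # pretrained Inception-v3
model.eval()

preprocess = T.Compose([
    T.Resize(299), T.CenterCrop(299), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def score_vector(pil_image):
    # softmax over the network logits serves as the "score vector"
    x = preprocess(pil_image).unsqueeze(0)      # (1, 3, 299, 299)
    return torch.softmax(model(x), dim=1).squeeze(0).numpy()

# stand-in for the quantum variational classifier (illustrative only):
# clf = SVC().fit([score_vector(img) for img in train_images], train_labels)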

22 citations

Journal ArticleDOI
TL;DR: Wang et al. as discussed by the authors proposed a simple yet effective residual learning diagnosis system (RLDS) for diagnosing fetal CHD with improved diagnostic accuracy; it adopts convolutional neural networks to extract discriminative features of the fetal cardiac anatomical structures.
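The TL;DR above refers to residual learning; the generic building block behind that idea is sketched below in PyTorch (a convolutional block with a skip connection, so the block only has to learn a residual correction). This is a generic sketch, not the RLDS architecture itself.

import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Generic residual block: output = ReLU(x + F(x))."""
    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.relu(x + self.body(x))   # skip connection: learn the residual

# usage sketch: block = ResidualBlock(32); y = block(torch.randn(1, 32, 64, 64))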

20 citations

Journal ArticleDOI
TL;DR: The results suggest that the model has high potential to help cardiologists complete the initial screening for fetal congenital heart disease, showing a strong correlation between the predicted septal defects and ground truth, measured as mean average precision (mAP).
Abstract: Accurate screening for septal defects is important for supporting radiologists’ interpretative work. Some previous studies have proposed semantic segmentation and object detection approaches for fetal heart detection; unfortunately, those models could not segment different objects of the same class. The semantic segmentation method segregates regions that only contain objects from the same class, whereas the fetal heart contains multiple objects, such as the atria, ventricles, valves, and aorta. Besides, blurry boundaries (shadows) or a lack of consistency in ultrasonography acquisition can cause wide variations. This study utilizes Mask-RCNN (MRCNN) to handle fetal ultrasonography images and employs it to detect and segment defects in heart walls containing multiple objects. To our knowledge, this is the first study involving a medical application for septal defect detection using instance segmentation. The use of the MRCNN architecture with ResNet50 as a backbone and a 0.0001 learning rate allows the model to be trained two times faster on fetal heart images than other object detection methods, such as Faster-RCNN (FRCNN). We demonstrate a strong correlation between the predicted septal defects and ground truth, measured as mean average precision (mAP). As shown in the results, the proposed MRCNN model achieves good performance in multiclass detection of the heart chambers, with 97.59% for the right atrium, 99.67% for the left atrium, 86.17% for the left ventricle, 98.83% for the right ventricle, and 99.97% for the aorta. We also report competitive results for the detection of defects (holes) in the atria and ventricles via semantic and instance segmentation. The results show that the mAP is about 99.48% for MRCNN and 82% for FRCNN. We suggest that evaluation and prediction with our proposed model provide reliable detection of septal defects, including defects in the atria, ventricles, or both. These results suggest that the model has high potential to help cardiologists complete the initial screening for fetal congenital heart disease.
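To make the setup above concrete, the sketch below builds a Mask R-CNN with a ResNet-50 FPN backbone from torchvision, swaps in heads for a small set of fetal-heart classes, and uses the 1e-4 learning rate mentioned in the abstract. It is a minimal sketch under those assumptions, not the authors' training pipeline; the class count and variable names are illustrative.

import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

N_CLASSES = 6  # background + atria, ventricles, aorta (illustrative count)

model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")

# replace the box and mask heads for the fetal-heart classes
in_feats = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_feats, N_CLASSES)
in_feats_mask = model.roi_heads.mask_predictor.conv5_mask.in_channels
model.roi_heads.mask_predictor = MaskRCNNPredictor(in_feats_mask, 256, N_CLASSES)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# one illustrative training step (model.train()): images is a list of CHW tensors,
# targets a list of dicts with "boxes", "labels", "masks"
# losses = model(images, targets); sum(losses.values()).backward(); optimizer.step()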

19 citations


Cites background or methods from "Segmentation and classification in ..."

  • ...Unfortunately, such methods (with threshold-based techniques, for example) yield the best results when the regions of interest exhibit a large difference in intensity from the image background; when the regions are more similar, the efficiency and applicability of these methods drop dramatically [6], [27]....


  • ...It can aid doctors in making more accurate treatment plans [27]....


  • ...The segmentation process is the key to exploring fetal heart abnormalities, especially defect conditions [27]....


References
Journal ArticleDOI
TL;DR: A novel method for automatic detection of early fetal cardiac structures from ultrasound images combines a Rayleigh-trimmed filter with anisotropic diffusion in 3-dimensional space; this despeckling step suppresses speckle noise and emphasizes motion information for subsequent cardiac structure detection.
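The despeckling idea above pairs a Rayleigh-trimmed filter with anisotropic diffusion; a generic 2-D Perona-Malik anisotropic diffusion step (only the diffusion component, not the paper's Rayleigh-trimmed filtering or 3-D formulation) can be sketched in NumPy as follows.

import numpy as np

def anisotropic_diffusion(img, n_iter=20, kappa=30.0, gamma=0.15):
    """Smooth speckle while preserving edges (2-D Perona-Malik diffusion)."""
    u = img.astype(np.float64).copy()
    for _ in range(n_iter):
        # finite-difference gradients toward the four neighbours
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        # edge-stopping conductance: small across strong edges
        cn, cs = np.exp(-(dn / kappa) ** 2), np.exp(-(ds / kappa) ** 2)
        ce, cw = np.exp(-(de / kappa) ** 2), np.exp(-(dw / kappa) ** 2)
        u += gamma * (cn * dn + cs * ds + ce * de + cw * dw)
    return u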

15 citations

Book ChapterDOI
14 Sep 2017
TL;DR: A new random forest-based segmentation framework for fetal 3D ultrasound volumes efficiently integrates semantic and structural information in the classification process; it introduces a new semantic feature space that encodes spatial context via a generalized geodesic distance transform.
Abstract: Ultrasound is the primary imaging method for prenatal screening and diagnosis of fetal anomalies. Thanks to its non-invasive and non-ionizing properties, ultrasound allows quick, safe and detailed evaluation of the unborn baby, including the estimation of the gestational age and of brain and cranium development. However, the accuracy of traditional 2D fetal biometrics depends on operator expertise and on subjectivity in 2D plane finding and manual marking. 3D ultrasound has the potential to reduce this operator dependence. In this paper, we propose a new random forest-based segmentation framework for fetal 3D ultrasound volumes, able to efficiently integrate semantic and structural information in the classification process. We introduce a new semantic feature space able to encode spatial context via a generalized geodesic distance transform. Unlike alternative auto-context approaches, this new set of features is efficiently integrated into the same forest using contextual trees. Finally, we use a new structured label space, as an alternative to traditional atomic class labels, able to capture the morphological variability of the target organ. Here, we show the potential of this new general framework by segmenting the skull in 3D fetal ultrasound volumes, significantly outperforming alternative random forest-based approaches.
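A rough sketch of a per-voxel random-forest segmentation pipeline in the spirit of the framework above is given below. A Euclidean distance transform from foreground seeds stands in for the generalized geodesic distance transform, and the paper's contextual trees and structured labels are not reproduced; all names are illustrative.

import numpy as np
from scipy.ndimage import distance_transform_edt
from sklearn.ensemble import RandomForestClassifier

def voxel_features(volume, fg_seeds):
    """Stack raw intensity with a simple spatial-context channel per voxel."""
    # distance (in voxels) to the nearest foreground seed, as crude spatial context
    context = distance_transform_edt(~fg_seeds)
    return np.stack([volume.ravel(), context.ravel()], axis=1)

# volume: 3-D ultrasound array; fg_seeds: boolean seed mask; labels: integer mask
# X = voxel_features(volume, fg_seeds)
# clf = RandomForestClassifier(n_estimators=50).fit(X, labels.ravel())
# prediction = clf.predict(X).reshape(volume.shape)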

14 citations

Book ChapterDOI
17 Oct 2016
TL;DR: A generic Dynamically Balanced Online Random Forest (DyBa ORF) is proposed to deal with imbalanced training data and a changing imbalance ratio, with a combination of a dynamically balanced online Bagging method and a tree growing and shrinking strategy to update the random forests.
Abstract: Interactive scribble-and-learning-based segmentation is attractive for its good performance and reduced number of user interactions. Scribbles for foreground and background are often imbalanced, and with the arrival of new scribbles the imbalance ratio may change considerably. Failing to deal with imbalanced training data and a changing imbalance ratio may lead to decreased sensitivity and accuracy for segmentation. We propose a generic Dynamically Balanced Online Random Forest (DyBa ORF) to deal with these problems, combining a dynamically balanced online Bagging method with a tree growing and shrinking strategy to update the random forests. We validated DyBa ORF on UCI machine learning data sets and applied it to two different clinical applications: 2D segmentation of the placenta from fetal MRI and of adult lungs from radiographic images. Experiments show it outperforms traditional ORF in dealing with imbalanced data with a changing imbalance ratio, while maintaining comparable accuracy and higher efficiency compared with its offline counterpart. Our results demonstrate that DyBa ORF is more suitable than existing ORF for learning-based interactive image segmentation.
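The balancing idea described above can be sketched as follows: each incoming sample updates every tree k times, with k drawn from a Poisson distribution whose rate is scaled inversely with the running frequency of the sample's class, so minority-class samples are over-sampled. This is only a sketch of the balanced online Bagging rule under that assumption; OnlineTree stands for an assumed incremental tree learner, and DyBa ORF's tree growing/shrinking strategy is omitted.

import numpy as np
from collections import Counter

class BalancedOnlineBagging:
    def __init__(self, trees, rng=None):
        self.trees = trees                     # list of OnlineTree-like objects (assumed API)
        self.class_counts = Counter()          # running per-class sample counts
        self.rng = rng or np.random.default_rng()

    def update(self, x, y):
        self.class_counts[y] += 1
        total = sum(self.class_counts.values())
        n_classes = len(self.class_counts)
        # lambda > 1 for under-represented classes, < 1 for over-represented ones
        lam = total / (n_classes * self.class_counts[y])
        for tree in self.trees:
            for _ in range(self.rng.poisson(lam)):
                tree.update(x, y)              # assumed incremental update method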

14 citations

Journal ArticleDOI
01 Jan 2010
TL;DR: A new computational framework is proposed to generate 3D hybrid models of pregnant women, composed of fetus shapes segmented from medical images and a generic maternal body envelope representing a synthetic woman scaled to the dimension of the uterus.
Abstract: Purpose: Numerical simulations studying the interactions between radiation and biological tissues require the use of three-dimensional models of the human anatomy at various ages and in various positions. Several detailed and flexible models exist for adults and children and have been extensively used for dosimetry. On the other hand, progress of simulation studies focusing on pregnant women and the fetus has been limited by the fact that only a small number of models exist, with rather coarse anatomical details and a poor representation of the anatomical variability of the fetus shape and its position over the entire gestation.

14 citations

Book ChapterDOI
14 Sep 2014
TL;DR: An automatic method to localize the fetal abdominal standard plane (FASP) from US images, exploiting a deep convolutional neural network to automatically learn the latent representation and adopting a novel knowledge-transfer method that enhances learning performance by making use of knowledge obtained in another domain.
Abstract: Acquisition of the fetal abdominal standard plane (FASP) is crucial for prenatal ultrasound diagnosis. However, it requires a thorough knowledge of human anatomy and substantial experience. In this paper, we propose an automatic method to localize the FASP from US images. Unlike previous methods that consider simple low-level features such as Haar features, we exploit a deep convolutional neural network to automatically learn the latent representation. In addition, we adopt a novel knowledge transfer method to enhance learning performance by making use of knowledge obtained in another domain. Experimental results on 219 fetal abdomen videos showed that the classification accuracy of our method was above 90%, outperforming other methods by a significant margin.
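The transfer-learning idea above (reusing knowledge learned in another domain) can be sketched as fine-tuning a CNN pretrained on a source domain for the FASP / non-FASP decision. The backbone choice, layer names, and hyperparameters below are assumptions for illustration, not the paper's model.

import torch
import torch.nn as nn
import torchvision.models as models

model = models.resnet18(weights="DEFAULT")     # pretrained source-domain features
for p in model.parameters():
    p.requires_grad = False                    # keep transferred features frozen
model.fc = nn.Linear(model.fc.in_features, 2)  # new head: FASP vs non-FASP

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# one illustrative step on a batch of ultrasound frames and labels:
# logits = model(frames); loss = criterion(logits, labels)
# loss.backward(); optimizer.step()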

14 citations