Journal ArticleDOI

Segmentation and classification in MRI and US fetal imaging: Recent trends and future prospects.

TL;DR: This review covers state‐of‐the‐art segmentation and classification methodologies for the whole fetus and, more specifically, the fetal brain, lungs, liver, heart and placenta in magnetic resonance imaging and (3D) ultrasound for the first time.
About: This article was published in Medical Image Analysis on 2019-01-01. It has received 70 citations to date.
Citations
Journal ArticleDOI
TL;DR: CA-Net proposes a joint spatial attention module that makes the network focus more on the foreground region, together with a novel channel attention module that adaptively recalibrates channel-wise feature responses and highlights the most relevant feature channels.
Abstract: Accurate medical image segmentation is essential for diagnosis and treatment planning of diseases. Convolutional Neural Networks (CNNs) have achieved state-of-the-art performance for automatic medical image segmentation. However, they are still challenged by complicated conditions where the segmentation target has large variations of position, shape and scale, and existing CNNs have a poor explainability that limits their application to clinical decisions. In this work, we make extensive use of multiple attentions in a CNN architecture and propose a comprehensive attention-based CNN (CA-Net) for more accurate and explainable medical image segmentation that is aware of the most important spatial positions, channels and scales at the same time. In particular, we first propose a joint spatial attention module to make the network focus more on the foreground region. Then, a novel channel attention module is proposed to adaptively recalibrate channel-wise feature responses and highlight the most relevant feature channels. Also, we propose a scale attention module implicitly emphasizing the most salient feature maps among multiple scales so that the CNN is adaptive to the size of an object. Extensive experiments on skin lesion segmentation from ISIC 2018 and multi-class segmentation of fetal MRI found that our proposed CA-Net significantly improved the average segmentation Dice score from 87.77% to 92.08% for skin lesion, 84.79% to 87.08% for the placenta and 93.20% to 95.88% for the fetal brain respectively compared with U-Net. It reduced the model size to around 15 times smaller with close or even better accuracy compared with state-of-the-art DeepLabv3+. In addition, it has a much higher explainability than existing networks by visualizing the attention weight maps. Our code is available at https://github.com/HiLab-git/CA-Net .
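The channel attention idea summarised above can be illustrated with a minimal squeeze-and-excitation-style module in PyTorch. This is only a generic sketch, not the authors' CA-Net code (which is available at the linked repository); the class name, reduction ratio and tensor sizes are illustrative, and CA-Net additionally combines this kind of gating with spatial and scale attention.

```python
# Minimal squeeze-and-excitation-style channel attention sketch (PyTorch).
# NOT the CA-Net implementation; it only illustrates adaptively
# recalibrating channel-wise feature responses.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)           # squeeze: global spatial average
        self.fc = nn.Sequential(                      # excitation: per-channel gates
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                  # reweight feature channels

# Usage: recalibrate a batch of 32-channel feature maps.
feats = torch.randn(2, 32, 64, 64)
print(ChannelAttention(32)(feats).shape)              # torch.Size([2, 32, 64, 64])
```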

205 citations


Cites background from "Segmentation and classification in ..."

  • ...and the placenta is important for fetal growth assessment and motion correction [41]....


Journal ArticleDOI
TL;DR: Deep features are extracted from the InceptionV3 model; the score vector acquired from its softmax layer is supplied to the quantum variational classifier (QVR) to discriminate between glioma, meningioma, no tumor, and pituitary tumor, demonstrating the proposed model's effectiveness.
Abstract: A brain tumor is an abnormal enlargement of cells that poses a serious risk if not properly diagnosed. Early detection of a brain tumor is critical for clinical practice and survival rates. Brain tumors arise in a variety of shapes, sizes, and features, with variable treatment options. Manual detection of tumors is difficult, time-consuming, and error-prone, so there is a significant need for computerized diagnostic systems that detect brain tumors accurately. In this research, deep features are extracted from the InceptionV3 model; the score vector acquired from the softmax layer is supplied to the quantum variational classifier (QVR) to discriminate between glioma, meningioma, no tumor, and pituitary tumor. The classified tumor images are then passed to the proposed Seg-network, where the actual infected region is segmented to analyze the tumor severity level. The reported research was evaluated on three benchmark datasets: Kaggle, BraTS 2020, and locally collected images. The model achieved detection scores greater than 90%, demonstrating the proposed model's effectiveness.
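A rough sketch of the feature-extraction stage described above is shown below, assuming a torchvision InceptionV3 backbone. The quantum variational classifier is outside the scope of this sketch, so a plain linear head stands in for it; the head, class count and input size are illustrative rather than taken from the paper.

```python
# Sketch of the deep-feature stage only: extract an InceptionV3 feature vector
# and feed it to a stand-in classifier. The quantum variational classifier (QVR)
# from the paper is replaced by a simple linear layer purely for illustration.
# Assumes torchvision >= 0.13 for the `weights` argument.
import torch
import torch.nn as nn
from torchvision import models

backbone = models.inception_v3(weights=None)
backbone.fc = nn.Identity()          # drop the ImageNet head, keep 2048-d features
backbone.eval()

head = nn.Sequential(                # placeholder for the QVR
    nn.Linear(2048, 4),              # glioma, meningioma, no tumor, pituitary
    nn.Softmax(dim=1),
)

with torch.no_grad():
    x = torch.randn(1, 3, 299, 299)  # InceptionV3 expects 299x299 RGB input
    scores = head(backbone(x))
print(scores.shape)                  # torch.Size([1, 4])
```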

22 citations

Journal ArticleDOI
TL;DR: Wang et al. proposed a simple yet effective residual learning diagnosis system (RLDS) for diagnosing fetal CHD with improved diagnostic accuracy; the system adopts convolutional neural networks to extract discriminative features of the fetal cardiac anatomical structures.

20 citations

Journal ArticleDOI
TL;DR: The results show a strong correlation between the predicted septal defects and the ground truth in terms of mean average precision (mAP), suggesting that the model has high potential to help cardiologists complete the initial screening for fetal congenital heart disease.
Abstract: Accurate screening for septal defects is important for supporting radiologists’ interpretative work. Some previous studies have proposed semantic segmentation and object detection approaches to carry out fetal heart detection; unfortunately, those models could not segment different objects of the same class. The semantic segmentation method segregates regions that only contain objects from the same class, whereas the fetal heart contains multiple objects, such as the atria, ventricles, valves, and aorta. In addition, blurry boundaries (shadows) or a lack of consistency in ultrasound acquisition can cause wide variations. This study utilizes Mask-RCNN (MRCNN) to handle fetal ultrasonography images, employing it to detect and segment defects in heart walls containing multiple objects. To our knowledge, this is the first study involving a medical application for septal defect detection using instance segmentation. The use of the MRCNN architecture with ResNet50 as a backbone and a 0.0001 learning rate allows for two times faster training of the model on fetal heart images compared to other object detection methods, such as Faster-RCNN (FRCNN). We demonstrate a strong correlation between the predicted septal defects and ground truth in terms of mean average precision (mAP). As shown in the results, the proposed MRCNN model achieves good performance in multiclass detection of the heart chambers, with 97.59% for the right atrium, 99.67% for the left atrium, 86.17% for the left ventricle, 98.83% for the right ventricle, and 99.97% for the aorta. We also report competitive results for the detection of defects (holes) in the atria and ventricles via semantic and instance segmentation: the mAP is about 99.48% for MRCNN and 82% for FRCNN. We suggest that evaluation and prediction with our proposed model provide reliable detection of septal defects, including defects in the atria, ventricles, or both. These results suggest that the model has high potential to help cardiologists complete the initial screening for fetal congenital heart disease.
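For orientation, the setup described above (instance segmentation with a ResNet50-backed Mask-RCNN and a 0.0001 learning rate) can be sketched with torchvision's detection models. This is a generic illustration, not the study's training code; the class list and image size are assumptions.

```python
# Generic torchvision Mask R-CNN sketch mirroring the described setup
# (ResNet50 backbone, learning rate 1e-4); illustrative, not the study's code.
# Assumes torchvision >= 0.13 for the weights/weights_backbone arguments.
import torch
from torchvision.models.detection import maskrcnn_resnet50_fpn

# background + right/left atrium, right/left ventricle, aorta (illustrative labels)
NUM_CLASSES = 6

model = maskrcnn_resnet50_fpn(weights=None, weights_backbone=None,
                              num_classes=NUM_CLASSES)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# Inference on one dummy ultrasound frame (3-channel tensor scaled to [0, 1]).
model.eval()
with torch.no_grad():
    pred = model([torch.rand(3, 512, 512)])[0]
print(pred["boxes"].shape, pred["masks"].shape)  # per-instance boxes and masks
```

During training, torchvision detection models instead take a list of images plus per-image target dicts (boxes, labels, masks) and return a dictionary of losses to be summed and backpropagated.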

19 citations


Cites background or methods from "Segmentation and classification in ..."

  • ...Unfortunately, such methods (with threshold-based techniques, for example) yield the best results when the regions of interest in an image exhibit a massive difference in strength from the background of the image, but this results in more similar images with problems, dramatically reducing the efficiency and decreasing the applicability of these methods [6], [27]....


  • ...It can aid doctors in making more accurate treatment plans [27]....


  • ...The segmentation process is the key to exploring fetal heart abnormalities, especially defect conditions [27]....


References
Posted Content
TL;DR: A novel method based on convolutional neural networks is proposed, which can automatically detect 13 fetal standard views in freehand 2D ultrasound data and provide a localisation of the fetal structures via a bounding box, with a network architecture designed to operate in real time while providing optimal output for the localisation task.
Abstract: Identifying and interpreting fetal standard scan planes during 2D ultrasound mid-pregnancy examinations are highly complex tasks which require years of training. Apart from guiding the probe to the correct location, it can be equally difficult for a non-expert to identify relevant structures within the image. Automatic image processing can provide tools to help experienced as well as inexperienced operators with these tasks. In this paper, we propose a novel method based on convolutional neural networks which can automatically detect 13 fetal standard views in freehand 2D ultrasound data as well as provide a localisation of the fetal structures via a bounding box. An important contribution is that the network learns to localise the target anatomy using weak supervision based on image-level labels only. The network architecture is designed to operate in real-time while providing optimal output for the localisation task. We present results for real-time annotation, retrospective frame retrieval from saved videos, and localisation on a very large and challenging dataset consisting of images and video recordings of full clinical anomaly screenings. We found that the proposed method achieved an average F1-score of 0.798 in a realistic classification experiment modelling real-time detection, and obtained a 90.09% accuracy for retrospective frame retrieval. Moreover, an accuracy of 77.8% was achieved on the localisation task.
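The weak-supervision idea above (training on image-level labels only and recovering a coarse localisation from the network's activations) can be sketched with a small class-activation-map-style network. This is a toy illustration, not the architecture proposed in the paper; the layer sizes and input shape are arbitrary.

```python
# Generic class-activation-map sketch of weakly supervised localisation:
# train with image-level labels only, then read a coarse location for the
# predicted class from the last convolutional feature maps.
# Illustrative only; not the architecture proposed in the cited paper.
import torch
import torch.nn as nn

NUM_CLASSES = 13  # number of standard views considered in the cited work

class TinyCAMNet(nn.Module):
    def __init__(self, num_classes: int = NUM_CLASSES):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.classifier = nn.Conv2d(32, num_classes, 1)   # per-class score maps
        self.pool = nn.AdaptiveAvgPool2d(1)               # image-level prediction

    def forward(self, x):
        score_maps = self.classifier(self.features(x))    # (B, C, H', W')
        logits = self.pool(score_maps).flatten(1)          # (B, C)
        return logits, score_maps

net = TinyCAMNet()
logits, cams = net(torch.randn(1, 1, 128, 128))
cls = logits.argmax(1).item()
# The arg-max location in the winning class's score map gives a rough localisation.
print(cls, cams[0, cls].shape)
```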

98 citations

Journal ArticleDOI
TL;DR: This study supports biomodelling as a useful, and sometimes essential tool in the armamentarium of imaging techniques used for complex spinal surgery.
Abstract: Prior studies have suggested that biomodels enhance patient education, preoperative planning and intra-operative stereotaxy; however, the usefulness of biomodels compared to regular imaging modalities such as X-ray, CT and MR has not been quantified. Our objective was to quantify the surgeon’s perceptions on the usefulness of biomodels compared to standard visualisation modalities for preoperative planning and intra-operative anatomical reference. Physical biomodels were manufactured for a series of 26 consecutive patients with complex spinal pathologies using a stereolithographic technique based on CT data. The biomodels were used preoperatively for surgical planning and customising implants, and intra-operatively for anatomical reference. Following surgery, a detailed biomodel utility survey was completed by the surgeons, and informal telephone interviews were conducted with patients. Using biomodels, 21 deformity and 5 tumour cases were performed. Surgeons stated that the anatomical details were better visible on the biomodel than on other imaging modalities in 65% of cases, and exclusively visible on the biomodel in 11% of cases. Preoperative use of the biomodel led to a different decision regarding the choice of osteosynthetic materials used in 52% of cases, and the implantation site of osteosynthetic material in 74% of cases. Surgeons reported that the use of biomodels reduced operating time by a mean of 8% in tumour patients and 22% in deformity procedures. This study supports biomodelling as a useful, and sometimes essential tool in the armamentarium of imaging techniques used for complex spinal surgery.

92 citations

Book ChapterDOI
17 Oct 2016
TL;DR: This work considers a fully automated system based on convolutional neural networks which can detect twelve standard scan planes as defined by the UK fetal abnormality screening programme and can retrospectively retrieve correct scan planes with an accuracy of 71 % for cardiac views and 81 % for non-cardiac views.
Abstract: Fetal mid-pregnancy scans are typically carried out according to fixed protocols. Accurate detection of abnormalities and correct biometric measurements hinge on the correct acquisition of clearly defined standard scan planes. Locating these standard planes requires a high level of expertise. However, there is a worldwide shortage of expert sonographers. In this paper, we consider a fully automated system based on convolutional neural networks which can detect twelve standard scan planes as defined by the UK fetal abnormality screening programme. The network design allows real-time inference and can be naturally extended to provide an approximate localisation of the fetal anatomy in the image. Such a framework can be used to automate or assist with scan plane selection, or for the retrospective retrieval of scan planes from recorded videos. The method is evaluated on a large database of 1003 volunteer mid-pregnancy scans. We show that standard planes acquired in a clinical scenario are robustly detected with a precision and recall of 69 % and 80 %, which is superior to the current state-of-the-art. Furthermore, we show that it can retrospectively retrieve correct scan planes with an accuracy of 71 % for cardiac views and 81 % for non-cardiac views.
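As a reminder of how precision, recall and F1 figures like those quoted above are computed, here is a minimal scikit-learn sketch on made-up per-frame scan-plane labels; the label names are hypothetical.

```python
# Precision / recall / F1 on dummy per-frame scan-plane labels (illustrative only).
from sklearn.metrics import precision_recall_fscore_support

y_true = ["4CH", "brain", "abdominal", "4CH", "lips", "brain"]
y_pred = ["4CH", "brain", "4CH", "4CH", "lips", "abdominal"]

precision, recall, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, average="macro", zero_division=0)
print(f"precision={precision:.2f} recall={recall:.2f} F1={f1:.2f}")
```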

92 citations

Journal ArticleDOI
TL;DR: The development of tools to construct and investigate probabilistic maps of the adult human brain from magnetic resonance imaging (MRI) has led to advances in both basic neuroscience and clinical diagnosis.
Abstract: The development of tools to construct and investigate probabilistic maps of the adult human brain from magnetic resonance imaging (MRI) has led to advances in both basic neuroscience and clinical diagnosis. These tools are increasingly being applied to brain development in adolescence and childhood, and even to neonatal and premature neonatal imaging. Even earlier in development, parallel advances in clinical fetal MRI have led to its growing use as a tool in challenging medical conditions. This has motivated new engineering developments encompassing optimal fast MRI scans and techniques derived from computer vision, the combination of which allows full 3D imaging of the moving fetal brain in utero without sedation. These promise to provide a new and unprecedented window into early human brain growth. This article reviews the developments that have led us to this point, examines the current state of the art in the fields of fast fetal imaging and motion correction, and describes the tools to analyze dynamically changing fetal brain structure. New methods to deal with developmental tissue segmentation and the construction of spatiotemporal atlases are examined, together with techniques to map fetal brain growth patterns.
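The motion-correction step mentioned above ultimately rests on registering misaligned acquisitions into a common space. A minimal rigid-registration sketch with SimpleITK is given below, assuming two synthetic float32 volumes stand in for real fetal MRI data; real slice-to-volume reconstruction pipelines are considerably more involved.

```python
# Minimal rigid 3D registration sketch with SimpleITK, illustrating the kind
# of alignment step used inside fetal motion-correction pipelines.
# Synthetic volumes only; not a real slice-to-volume reconstruction.
import numpy as np
import SimpleITK as sitk

fixed = sitk.GetImageFromArray(np.random.rand(32, 64, 64).astype(np.float32))
moving = sitk.GetImageFromArray(np.random.rand(32, 64, 64).astype(np.float32))

reg = sitk.ImageRegistrationMethod()
reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=32)
reg.SetOptimizerAsRegularStepGradientDescent(
    learningRate=1.0, minStep=1e-4, numberOfIterations=50)
reg.SetInitialTransform(sitk.CenteredTransformInitializer(
    fixed, moving, sitk.Euler3DTransform(),
    sitk.CenteredTransformInitializerFilter.GEOMETRY))
reg.SetInterpolator(sitk.sitkLinear)

transform = reg.Execute(fixed, moving)           # estimate the rigid motion
aligned = sitk.Resample(moving, fixed, transform, sitk.sitkLinear, 0.0)
print(transform.GetParameters())                 # 3 rotations + 3 translations
```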

88 citations

Journal ArticleDOI
TL;DR: This work proposes an automatic method to localize and segment the brain of the fetus when the image data is acquired as stacks of 2D slices with anatomy misaligned due to fetal motion, and combines this segmentation process with a robust motion correction method, enabling the segmentation to be refined as the reconstruction proceeds.

86 citations