
Fast Fully Automatic Segmentation of the
Human Placenta from Motion Corrupted MRI
Amir Alansary¹, Konstantinos Kamnitsas¹, Alice Davidson², Rostislav Khlebnikov², Martin Rajchl¹, Christina Malamateniou², Mary Rutherford², Joseph V. Hajnal², Ben Glocker¹, Daniel Rueckert¹, and Bernhard Kainz¹

¹ Department of Computing, Imperial College London, UK
² King's College London, Division of Imaging Sciences, London, UK
Abstract. Recently, magnetic resonance imaging has proven to be important for evaluating the health of the placenta during pregnancy. Quantitative assessment of the placenta requires a segmentation, which proves challenging because of the high variability of its position, orientation, shape and appearance. Moreover, image acquisition is corrupted by motion artifacts from both fetal and maternal movements. In this paper we propose a fully automatic framework for segmenting the placenta from structural T2-weighted scans of the whole uterus, as well as an extension that provides an intuitive pre-natal view into this vital organ. We adopt a 3D multi-scale convolutional neural network to automatically identify placental candidate pixels. The resulting classification is subsequently refined by a 3D dense conditional random field, so that a high-resolution placental volume can be reconstructed from multiple overlapping stacks of slices. Our segmentation framework has been tested on 66 subjects at gestational ages of 20–38 weeks, achieving a Dice score of 71.95 ± 19.79% for healthy fetuses with a fixed scan sequence and 66.89 ± 15.35% for a cohort mixed with cases of intrauterine fetal growth restriction using varying scan parameters.
1 Introduction
The functions of the placenta affect fetal birth weight, growth, prematurity, and neuro-development, since it controls the transmission of nutrients from the maternal to the fetal circulatory system. Recent work [8] has shown that magnetic resonance imaging (MRI) can be used for the evaluation of the placenta during both normal and high-risk pregnancies. In particular, quantitative measurements such as placental volume and surface attachment to the uterine wall are required for identifying abnormalities. In addition, recording the structural appearance (e.g., placental cotyledons and shape) is essential for clinical qualitative analysis. Moreover, the placenta is usually examined after birth on a flat surface, providing a standard representation for obstetricians. Flat cutting planes, as common in radiology, show only a small part of the placenta. A 3D visualization is considered useful, in particular for cases that require preoperative planning or surgical navigation (e.g., treatment of twin-to-twin transfusion syndrome). Hence, fully automatic 3D segmentation, correction of motion artifacts, and visualization are highly desirable for an efficient pre-natal examination of the placenta in clinical practice.
Fast MRI acquisition techniques (single-shot fast spin echo, ssFSE) allow acquiring single 2D images of the moving uterus and fetus fast enough that motion does not affect image quality. However, 3D data acquisition and subsequent automatic segmentation are challenging because maternal respiratory motion and fetal movements displace the overall anatomy, causing motion artifacts between individual slices, as shown in Fig. 1. Furthermore, the high variability of the placenta's position, orientation, thickness, shape and appearance prevents conventional image analysis approaches from succeeding.
Fig. 1. Three orthogonal 2D planes, (a) axial, (b) sagittal and (c) coronal, from a motion-corrupted 3D stack of slices showing a delineated placenta. The native scan orientation (a) shows no motion artifacts, while (b) and (c) do.
Related work: To the best of our knowledge, fully automatic segmentation of the placenta from MRI has not been investigated before. Most previous work in fetal MRI has focused on brain segmentation [2] and has very recently been extended to localize other fetal organs [6]. These methods rely on engineering visual features for training a classifier such as a random forest. Stevenson et al. [9] present a semi-automatic approach for measuring the placental volume from motion-free 3D ultrasound with a random walker (RW) algorithm. Their method shows good inter-observer reproducibility but requires extensive user interaction and several minutes per segmentation. Even though ultrasound is fast enough to acquire a motion-free volume, the lack of structural information and weak tissue gradients make it useful only for volume measurements. Wang et al. [12] present an interactive method for the segmentation of the placenta from MR images, which requires user interaction to initialize the localization of the placenta. Their approach performs well on a small cohort of six subjects but shows a user-dependent variability in segmentation accuracy.
Contribution: In this paper we propose for the first time a fully automatic segmentation framework for the placenta from motion-corrupted fetal MRI. The proposed framework adopts convolutional neural networks (CNNs) as a strong classifier for image segmentation, followed by a conditional random field (CRF) for refinement. Our approach scales well to real clinical applications. We propose how to use the resulting placental mask as initialization for slice-to-volume registration (SVR) techniques to compensate for motion artifacts. We also show how the resulting reconstructed volume can be used to provide a novel standardized view into the placental structures by applying shape skeleton extraction and curved planar reformation for shape abstraction.
2 Method
The proposed approach combines a 3D multi-scale CNN architecture for segmentation with a 3D dense CRF for segmentation refinement. This approach can be extended to compensate for motion and to provide a clinically useful visualization. Figure 2 shows an overview of the proposed framework.
Fig. 2. The proposed framework for automatic placenta segmentation with extensions for motion correction and visualization.
Placenta segmentation: We adopt a 3D multi-scale CNN architecture [4] that is 11 layers deep and consists of two pathways to segment the placenta from the whole uterus. This multi-scale architecture has the advantage of capturing larger 3D contextual information, which is essential for detecting highly variable organs. The two pathways are complementary: the main pathway extracts local features, whereas the second extracts larger contextual features. Multi-scale features are integrated efficiently by down-sampling the input image and processing the two pathways in parallel. In order to deal with the variations of the placenta's appearance, we apply data augmentation during training by flipping the image around the main 3D axes (maternal orientation).
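The axis-flipping augmentation described above can be sketched as follows. The `flip_augment` helper and the (z, y, x) array layout are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def flip_augment(volume, label):
    """Yield the original (volume, label) pair plus one flipped copy per 3D axis.

    Sketch of axis-flipping augmentation; the (z, y, x) layout is an assumption.
    """
    yield volume, label
    for axis in range(3):
        # Flip image and segmentation mask together so they stay aligned.
        yield np.flip(volume, axis=axis), np.flip(label, axis=axis)

# A toy 4x4x4 stack yields 4 training samples (original + 3 flips).
vol = np.arange(64, dtype=np.float32).reshape(4, 4, 4)
lab = (vol > 32).astype(np.uint8)
samples = list(flip_augment(vol, lab))
```

Each flip produces an anatomically plausible variant because the placenta has no canonical orientation in the uterus.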
Despite the fact that the multi-scale architecture can interpret contextual information, inference is subject to misclassification errors. Hence, we apply a CRF to penalize inconsistencies in the segmentation by regularizing classification priors with the relational consistency of their neighbors. We use a 3D fully connected CRF model [7, 4], which applies a linear combination of Gaussian kernels to define the pairwise edge potentials. It is defined as

E(x) = \sum_{i \in N} U(x_i) + \sum_{i<j} V(x_i, x_j),

where i and j are pixel indices. The unary potential U is given by the probabilistic predictions of the CNN classification, whereas the pairwise potential V is defined by

V(x_i, x_j) = \mu(x_i, x_j) \sum_{m=1}^{K} \left( \omega_1 e^{-\frac{|p_i - p_j|^2}{2\theta_\alpha^2} - \frac{|I_i - I_j|^2}{2\theta_\beta^2}} + \omega_2 e^{-\frac{|p_i - p_j|^2}{2\theta_\gamma^2}} \right),

where I and p are intensity and position values. \mu(x_i, x_j) is a simple label compatibility function given by the Potts model [x_i \neq x_j]. Here, \omega_1 controls the importance of the appearance of nearby pixels to have similar labels, and \omega_2 controls the size of the smoothness kernel for removing isolated regions. \theta_\alpha, \theta_\beta and \theta_\gamma are used to adjust the degree of similarity and proximity. We have chosen the configuration parameters heuristically, similar to [4]. Although this tissue classification approach is capable of segmenting the placenta robustly, the segmentation is still subject to inter-slice motion artifacts.
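For a single voxel pair, the pairwise potential above can be evaluated directly. The following is a minimal sketch of the single-kernel-pair form; the parameter values are hypothetical placeholders (the paper sets them heuristically following [4]):

```python
import numpy as np

# Hypothetical parameter values, for illustration only.
w1, w2 = 1.0, 1.0
theta_alpha, theta_beta, theta_gamma = 10.0, 0.5, 3.0

def pairwise_potential(xi, xj, pi, pj, Ii, Ij):
    """Pairwise energy V(x_i, x_j) for one voxel pair: a Potts label-compatibility
    term times an appearance kernel (position + intensity) plus a smoothness
    kernel (position only)."""
    if xi == xj:  # Potts model [x_i != x_j]: no penalty for equal labels
        return 0.0
    d2 = np.sum((np.asarray(pi, float) - np.asarray(pj, float)) ** 2)
    appearance = w1 * np.exp(-d2 / (2 * theta_alpha**2)
                             - (Ii - Ij) ** 2 / (2 * theta_beta**2))
    smoothness = w2 * np.exp(-d2 / (2 * theta_gamma**2))
    return appearance + smoothness

# Adjacent voxels with similar intensity but different labels incur a larger
# penalty than distant ones, which is what smooths the CNN output.
v_near = pairwise_potential(0, 1, (0, 0, 0), (1, 0, 0), 0.5, 0.5)
v_far = pairwise_potential(0, 1, (0, 0, 0), (30, 0, 0), 0.5, 0.5)
```

In practice the model is fully connected over all voxel pairs, and efficient mean-field inference as in [7] is required rather than this brute-force evaluation.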
Placenta segmentation recovery: To tackle these motion artifacts caused by fetal and maternal movements, we combine our segmentation framework with a flexible motion compensation algorithm based on patch-to-volume registration (PVR) [3]. This technique requires multiple orthogonal stacks of 2D slices to provide a better reconstruction quality. It is based on splitting the input data into overlapping square patches or superpixels [1]. The motion-free 3D image is then reconstructed from the extracted patches using iterative super-resolution and 2D/3D registration steps. Motion-corrupted and misaligned patches are excluded during the reconstruction using an EM-based outlier rejection model. We extend this process to allow propagation of the placental mask to the final reconstruction through evaluating an MR-specific point spread function, a registration-based transformation, and the learned confidence weights.
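The patch decomposition step of PVR can be illustrated with a minimal sketch. The `extract_patches` helper, patch size and stride below are illustrative assumptions, not the settings of [3]:

```python
import numpy as np

def extract_patches(stack, size=8, stride=4):
    """Split each 2D slice of a (z, y, x) stack into overlapping square patches.

    Returns (origin, patch) pairs; overlap (stride < size) lets the later
    super-resolution step average redundant observations of each voxel.
    """
    patches = []
    for z in range(stack.shape[0]):
        for y in range(0, stack.shape[1] - size + 1, stride):
            for x in range(0, stack.shape[2] - size + 1, stride):
                patches.append(((z, y, x), stack[z, y:y + size, x:x + size]))
    return patches

# Two 16x16 slices with 8x8 patches at stride 4 give 3x3 patches per slice.
stack = np.zeros((2, 16, 16), dtype=np.float32)
patches = extract_patches(stack)
```

In the full method each patch is then registered independently to the current volume estimate, so that local fetal motion can be corrected even within a single slice.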
Placenta visualization: We present an extension of our placenta segmentation pipeline based on a novel application of shape abstraction using a flexible cutting plane. It is supported by a mean-curvature flow skeleton [10] generated from the triangulated polygonal mesh of the placenta segmentation and textured similarly to curved planar reformation [5]; see Figure 3. Although this part has not been evaluated thoroughly, clinicians indicated that such a representation is potentially desirable since it compares well to a flattened placenta after birth.
3 Experimental Results
Data: We test our approach on two dissimilar datasets that differ in health status and gestational ages and were acquired using different scanning parameters. All scans have been ethically approved. Dataset I contains 44 MR scans of healthy fetuses at gestational ages between 20–25 weeks. The data has been acquired on a Philips Achieva 1.5T, with the mother lying at a 20° tilt on the left side to avoid pressure on the inferior vena cava. ssFSE T2-weighted sequences are used to acquire stacks of images that are aligned to the main axes of the fetus. Usually three to six stacks are acquired for the whole womb and the placenta

Fig. 3. A native plane (a) cannot represent all structures of the placenta at once. Therefore, we use our segmentation method (b), correct the motion in this area using [3], project the placenta mask into the resulting isotropically resolved volume (c), extract the mean curvature flow skeleton [10] (black lines in (d)), use the resulting points to support a curved surface plane (e), and visualize this plane with curved planar reformation [5] (f). The plane in (f) covers only relevant areas; hence, the gray value mapping can be adjusted automatically to emphasize placental structures.
with a voxel size of 1.25 × 1.25 × 2.50 mm. Dataset II contains 22 MR scans of healthy fetuses and fetuses with intrauterine fetal growth restriction (IUGR) at gestational ages between 20–38 weeks. The data was acquired with a 1.5T Philips MRI system using ssFSE sequences and a voxel size of 0.8398 × 0.8398 × 4 mm. Ground truth labels for both datasets have been obtained manually, slice-by-slice in 2D views from the original motion-corrupted stacks, by a clinical expert.
Experiments: The proposed segmentation framework is evaluated using three main metrics: the Dice similarity coefficient to measure the accuracy of the segmentation, the absolute volume difference to measure the volumetric error between the segmented and the ground truth volumes, and the average Hausdorff distance as a distance error metric between the segmented and the ground truth surfaces.
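The first two metrics can be computed directly from binary masks. This is a generic sketch of the standard definitions, not the authors' evaluation code:

```python
import numpy as np

def dice(seg, gt):
    """Dice similarity coefficient between two binary masks, in percent."""
    inter = np.logical_and(seg, gt).sum()
    return 200.0 * inter / (seg.sum() + gt.sum())

def abs_volume_diff(seg, gt):
    """Absolute volume difference relative to the ground-truth volume, in percent."""
    return 100.0 * abs(int(seg.sum()) - int(gt.sum())) / gt.sum()

# Two 4x4x4 masks of 32 voxels each, overlapping in 16 voxels -> Dice = 50%.
seg = np.zeros((4, 4, 4), dtype=bool); seg[:, :2, :] = True
gt = np.zeros((4, 4, 4), dtype=bool); gt[:, 1:3, :] = True
```

The average Hausdorff distance additionally requires the mask surfaces and voxel spacing, so it is typically delegated to an image analysis library rather than computed by hand.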
In a first experiment [exp-1], we evaluate the automatic segmentation of the placenta on Dataset I using a 4-fold cross-validation (11 test patients and 33 training patients per fold). The main aim of this experiment is to evaluate the performance of our segmentation framework on a homogeneous dataset of healthy subjects. The results for this experiment are 71.95 ± 19.79% Dice, 30.92 ± 33.68% absolute volume difference, and 4.94 ± 6.93 mm average Hausdorff distance.
In a second experiment [exp-2], we train the CNN using all 44 subjects from Dataset I and test it on the 22 subjects from Dataset II. Datasets I and II are significantly different, having been acquired with different scanners and scanning parameters. In addition, the gestational age range of the fetuses in Dataset II is wider, which has a large influence on fetal body and placenta sizes. Hence, we test the performance of our framework when it is applied to data from a different environment. The results of this experiment are 56.78 ± 21.86% Dice, 48.19 ± 46.96% absolute volume difference, and 8.41 ± 7.1 mm average Hausdorff distance.
To resemble a realistic transfer learning application, we have designed a third experiment [exp-3] using both datasets. The network is evaluated with a 2-fold cross-validation, 10 test subjects from Dataset II, and 44+10 training subjects from Dataset I and Dataset II. This experiment yielded a Dice accuracy of 66.89 ± 15.35%, an absolute volume difference of 33.05 ± 30.71%, and an average Hausdorff distance of 5.8 ± 4.24 mm. Detailed results are shown in Fig. 4. Training one fold takes approximately 40 hours, and inference can be done within 2 minutes on an Nvidia Tesla K40.

References (as recoverable from this excerpt)
[1] SLIC Superpixels Compared to State-of-the-Art Superpixel Methods.
[4] Efficient Multi-Scale 3D CNN with Fully Connected CRF for Accurate Brain Lesion Segmentation.
[5] CPR: Curved Planar Reformation.
[7] Efficient Inference in Fully Connected CRFs with Gaussian Edge Potentials.
[10] Mean Curvature Skeletons.