
Proceedings ArticleDOI

Motion compensation of ultrasonic perfusion images

23 Feb 2012-Proceedings of SPIE (International Society for Optics and Photonics)-Vol. 8320, pp 283-290

TL;DR: An approach to register an ultrasonography sequence using a feature label map generated from the b-mode data sequence by a Markov-Random-Field-based analysis; it has proven more robust against noise than similarity calculation based on image intensities alone.


Summary (2 min read)


4. RESULTS AND DISCUSSION

  • For evaluation, three data sets showing the intestinal wall have been used with a spatial resolution ranging between 200 and 500 pixels in x- and y-direction and a temporal resolution between 350 and 1000 frames (≈ 10 frames per second for b-mode and contrast sequence, respectively).
  • To measure the quality of the registration result, two experiments have been performed.
  • In the authors' opinion this is a first indicator of improved contrast-signal correspondence over time, although the signal is still disturbed by noise and speckle artifacts and can therefore still distort the measured time course of contrast enhancement.
  • The perfusion-curve smoothness improved marginally by 0.6% after intensity-based registration, by 0.5% for label map-based registration without the MRF prior, and by 1.9% with the MRF prior enabled.
  • In dataset 3 the intestinal area (Fig. 5a) has very little variation resulting in a large label area in the label map (Fig. 5b, 5c).
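The standard-deviation criterion used for evaluation can be sketched as follows: within a fixed region mask, the temporal standard deviation of each pixel should drop once the frames are aligned. This is a minimal illustration on synthetic data, not the authors' evaluation code; the array shapes and the percentage formula are assumptions.

```python
import numpy as np

def temporal_std_reduction(seq_before, seq_after, mask):
    """Mean temporal standard deviation inside a region mask, before vs.
    after registration; returns the percentage decrease (higher is better).
    seq_*: arrays of shape (T, H, W); mask: boolean array of shape (H, W)."""
    std_before = seq_before[:, mask].std(axis=0).mean()
    std_after = seq_after[:, mask].std(axis=0).mean()
    return 100.0 * (std_before - std_after) / std_before

# toy check: a jittering sequence vs. the same scene held (almost) still
rng = np.random.default_rng(0)
base = rng.random((32, 32))
jittered = np.stack([np.roll(base, s, axis=1) for s in rng.integers(-3, 4, 50)])
aligned = np.stack([base] * 50) + 0.01 * rng.random((50, 32, 32))
mask = np.zeros((32, 32), dtype=bool)
mask[8:24, 8:24] = True
decrease = temporal_std_reduction(jittered, aligned, mask)
print(f"{decrease:.1f} % decrease")
```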


Motion compensation of ultrasonic perfusion images
Sebastian Schäfer(a), Kim Nylund(b,c), Odd H. Gilja(b,c) and Klaus D. Tönnies(a)

(a) Department of Simulation and Graphics, University of Magdeburg, Universitätsplatz 2, 39106 Magdeburg, Germany
(b) Institute of Medicine, University of Bergen, Bergen, Norway
(c) National Centre of Ultrasound in Gastroenterology, Haukeland University Hospital, Bergen, Norway
ABSTRACT
Contrast-enhanced ultrasound (CEUS) is a rapid and inexpensive medical imaging technique to assess tissue perfusion with a high temporal resolution. It is composed of a sequence of ultrasound brightness values and a contrast sequence acquired simultaneously. However, the image acquisition is disturbed by various motion influences. Registration is needed to obtain reliable information about spatial correspondence and to analyze perfusion characteristics over time. We present an approach to register an ultrasonography sequence by using a feature label map. This label map is generated from the b-mode data sequence by a Markov-Random-Field (MRF) based analysis, where each location is assigned to one of the user-defined regions according to its statistical parameters. The MRF reduces the chance that outliers are represented in the label map and provides stable feature labels over the time frames. A registration consisting of rigid and non-rigid transformations is determined consecutively, using the generated label map of the respective frames for similarity calculation. For evaluation, the standard deviation within specific regions of intestinal CEUS images has been measured before and after registration, resulting in an average decrease of 8.6%. Additionally, this technique has proven to be more robust against noise influence than similarity calculation based on image intensities only, which leads to only a 7.6% decrease of the standard deviation.
Keywords: Motion compensation, registration, 2D ultrasound, perfusion imaging, CEUS, MRF
1. INTRODUCTION
2D ultrasonography (US) is one of the most widely used medical imaging techniques. It enables immediate and inexpensive examinations with high spatial resolution. It is well suited for imaging abdominal and thoracic organs. There are no contraindications and the patient is not exposed to radiation. US is also used for perfusion imaging employing contrast agents (CA), consisting of gas-filled microbubbles that have a high degree of echogenicity [1] as they increase the US backscatter. By acquiring 2D multi-frame data, propagation and contrast uptake after the injection of the CA can be measured. This has become a suitable tool for delineating the vascular structure in normal and pathological tissue in order to detect primary tumors or metastases in various organs, or to assess disease activity in Crohn's disease [2, 3]. CEUS of the intestine is also included in the latest guidelines [4]. The manual perfusion analysis is performed by extracting and understanding the perfusion kinetics of the blood in the tissue of interest from the acquired multi-frame data. An essential advantage of CEUS imaging is the assessment of contrast enhancement with considerably higher temporal resolution compared to other perfusion imaging techniques [5].

During 2D contrast-enhanced US (CEUS) examinations studying perfusion, the sonographer normally holds the probe still in a particular position and orientation to image a suitable slice of the tissue of interest during CA administration. However, the data acquired with this examination methodology often contains significant motion. The reason is that patient movement through breathing and other intrinsically induced motion is present in addition to the motion caused by tilting or moving the US probe (extrinsically induced motion), since US imaging is normally performed hand-held.

Corresponding author information:
S. Schäfer: E-mail: sebastian.schaefer@ovgu.de, Telephone: +49 391 67 11441

Figure 1: A representative frame of a CEUS acquisition of a patient with a stenosis of the small bowel due to
Crohn’s disease, showing b-mode data on the left and contrast data on the right hand side.
While this motion can normally be interpreted by well-trained physicians [6], in computer-assisted analysis the different image frames of a time-dependent acquisition are required to be kept aligned in order to extract valid perfusion parameters over time.

The acquisition usually produces two parallel image sequences: standard b-mode* and the measured CA enhancement (Fig. 1). The frames are acquired alternatingly, resulting in around 10 frames per second. The CA only slightly affects the signal intensity in the b-mode data sequence, which therefore has an approximately constant brightness over time for a specific organ or tissue. This paper addresses motion compensation by image registration of US b-mode frames, taking intestinal images as an example. To this end, a statistically based Markov Random Field (MRF) segmentation [7] of the b-mode sequence is used to produce feature images which are less subject to noise influence over time. The Markov property provides local interaction between labels and compensates the influence of speckle and other artifacts. The generated feature images are then used to calculate the registration transformation.

A particular restriction of 2D image acquisition is that the observed region may be missed during part of the examination (out-of-plane motion), due to the three-dimensional nature of the motion described above. This is taken into account by defining temporal regions with groups of frames that show the same region of interest (ROI) [8]. Only frames within the same temporal region can be registered and thus provide valid pixel correspondences over time.
2. RELATED WORK
CEUS imaging has a time dependent component resulting in motion influence stemming from different sources. It
also comprises functional information in terms of blood perfusion leading to CA enhancement. As quantification
and visualization of functional signals require correct spatial correspondence of temporal samples the removal of
motion influence is necessary. As this is relatively new, the literature does not yet offer adapted methods for the
specific scenario of CEUS data registration.
However, intramodal registration in US images has been applied in other applications. Rohling et al.
9
pro-
posed the first application of automatic registration for US images. They combine multiple 3D US volumes
acquired from different angles to improve data quality through averaging. For accurate results a registration
brightness modulation: measured amplitudes are mapped onto intensity values

is required to align the volumes. Registration has been performed using correlation of 3D gradient informa-
tion which has been developed for MRI to CT registration originally, delivering improved segmentation and
visualization of the combined volume.
Shekhar et al. [10] used mutual information for the similarity calculation to register time-dependent 3D US data of the left ventricle. Calculation of the mutual information is preceded by median filtering the image data, dropping least-significant-bit information and using partial volume distribution interpolation when transforming the moving image. This results in a smoother objective function and reduces the probability that the optimizer ends up in a local maximum.
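Mutual information, the similarity measure used by Shekhar et al., can be estimated from a joint gray-value histogram. The sketch below shows only this core computation; the median filtering, bit dropping and partial-volume interpolation steps mentioned above are omitted, and the bin count is an arbitrary choice.

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information I(A;B) estimated from the joint histogram of two images."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()                    # joint probability
    px = pxy.sum(axis=1, keepdims=True)          # marginal of A
    py = pxy.sum(axis=0, keepdims=True)          # marginal of B
    nz = pxy > 0                                 # avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(1)
img = rng.random((64, 64))
mi_self = mutual_information(img, img)                     # identical images: high MI
mi_indep = mutual_information(img, rng.random((64, 64)))   # independent images: low MI
print(mi_self, mi_indep)
```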
To calculate and analyze the deformation of the human heart, Ledesma-Carbayo et al. [11] presented a combined spatio-temporal registration procedure. Similarity of the deformed 2D US frames was measured by the mean squared distance of all frames in the temporal sequence to a specified reference frame. Transformation parameters for B-splines are found for all frames simultaneously, restricting the parameters to be continuously smooth over time.
As opposed to the above-cited papers, Woo et al. [12] present a method where the calculation of similarity is not directly extracted from intensity values. Instead, the local phase information [13] is used as a feature for registration. The technique is suited to register consecutive frames and was tested on synthetic and cardiac US images.
The b-mode sequence of 2D contrast-enhanced US data, which can ideally be used to calculate the registration, is disturbed by many influences. US data has a low signal-to-noise ratio, resulting in poor data quality and artifacts. CA is detected with a different modulation, but it has a slight impact on the b-mode measured intensity. Moreover, 2D imaging depicts an image plane while displacement also occurs in 3D. This causes objects to move out of the imaging plane, leading to varying intensity levels at similar locations in the image over time. Although we try to register frames with in-plane motion only, it is impossible to eliminate all 3D motion influence. The aforementioned approaches have not been designed to manage these influences.

The approach described in this paper is targeted at generating more stable features over time (refer to Fig. 3) to overcome the above-mentioned limitations. Therefore, segmentation-based label maps are generated, which lead to a registration driven by object boundaries, as the label maps no longer provide information within homogeneous regions.
3. METHOD
The registration procedure consists of four main steps (Fig. 2). The first step is a frame selection which groups the frames into temporal regions and excludes frames from being regarded in registration. Frames within the same temporal region show almost the same ROI, ideally disturbed by motion in the image plane only. Exclusion of frames is enforced if there are not enough in-plane motion frames to be registered. For each temporal region, the frame with the highest average similarity to all frames of the region is defined as the reference frame for registration. This user-assisted technique is described in detail by Schäfer et al. [8]
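As a rough illustration of the reference-frame criterion (highest average similarity to all frames of the temporal region), the sketch below scores frames by their mean pairwise correlation. The actual selection is the user-assisted procedure of Schäfer et al. [8]; the use of Pearson correlation here is an assumption for illustration.

```python
import numpy as np

def reference_frame(frames):
    """Pick the index of the frame with the highest mean correlation to all
    frames of a temporal region. frames: array of shape (T, H, W)."""
    flat = frames.reshape(len(frames), -1).astype(float)
    flat -= flat.mean(axis=1, keepdims=True)          # center each frame
    flat /= np.linalg.norm(flat, axis=1, keepdims=True)
    corr = flat @ flat.T                              # pairwise Pearson correlations
    return int(corr.mean(axis=1).argmax())

# toy sequence: frame 2 is "central", the others are shifted copies of it
base = np.zeros((16, 16)); base[4:12, 4:12] = 1.0
frames = np.stack([np.roll(base, s, axis=1) for s in (-2, -1, 0, 1, 2)])
print(reference_frame(frames))  # → 2
```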
Figure 2: The four steps of the workflow: (1) frame selection to remove out-of-plane motion, handled according to Schäfer et al. [8]; (2) generation of the MRF label map, used as feature map for registration and representing the key part of this work; (3) rigid registration using translation and rotation; (4) non-rigid registration using B-splines.
The second step comprises the generation of features pursuing the objective of providing stable information over time at a specific location in the image. These features are derived from the brightness values of the b-mode sequence and are used to calculate the registration transformations. When b-mode data is used, brightness values at the same location should not vary over time, but noise influence will hamper the matching process. The only reason this assumption might not hold is that the temporal regions defined by step one do not exclude all parts with out-of-plane motion. Intensity-based segmentation generally produces a stable value at specific locations (see Fig. 5b). However, the result will still be influenced by noise, and this influence is likely to differ over time. Thus, in a second stage we introduce an MRF smoothness prior [7] to increase the probability that a site carries the same label as its neighbors. This smoothing of the initial labeling enhances the temporal robustness of the feature (compare Fig. 5c). It should be mentioned that the labeling is only intended as a feature for registration and does not represent structural semantics like tissue classes.

Figure 3: Two consecutive frames of the b-mode sequence (a, b), the corresponding label maps (d, e) and the corresponding absolute differences of both (c, f).
An initial labeling is obtained using predefined gray value classes (with mean and standard deviation) and calculating the energy of each pixel. To obtain predefined classes for a dataset, the user has to specify different areas, each exhibiting a particular average brightness μ and brightness variation σ. In practice, between 2 and 4 areas have proven suitable. The initial label map is created by determining the site energy E_site for each pixel in the image w.r.t. all labels. The label yielding the lowest energy is taken as the initial label [7]:

    E_site(i, L) = ln(√(2π) σ_L) + (i − μ_L)² / (2σ_L²),    (1)
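Equation (1) can be applied independently per pixel to obtain the initial label map: each pixel receives the label of the class with the lowest site energy. A minimal sketch (the class parameters below are made-up values, not from the paper):

```python
import numpy as np

def initial_labeling(image, classes):
    """Initial label map via Eq. (1): per pixel, pick the class L with the
    lowest site energy ln(sqrt(2*pi)*sigma_L) + (i - mu_L)^2 / (2*sigma_L^2).
    `classes` is a list of user-specified (mu, sigma) pairs."""
    energies = np.stack([
        np.log(np.sqrt(2 * np.pi) * sigma) + (image - mu) ** 2 / (2 * sigma ** 2)
        for mu, sigma in classes
    ])
    return energies.argmin(axis=0)

img = np.array([[10.0, 12.0, 200.0],
                [11.0, 190.0, 205.0]])
labels = initial_labeling(img, [(10.0, 5.0), (200.0, 10.0)])
print(labels)  # dark pixels get label 0, bright pixels label 1
```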
where i is the current site and L the current label class. Subsequently, the iterated conditional modes (ICM) method [14] is used to incorporate the Markov property, ensuring that the labels depend on their neighboring label values [7], resulting in an MRF formulation.
The local energy E_local for each pixel is calculated for each of the labels, and the label with the lowest energy is taken as the new label at this site:

    E_local(i, L) = E_site(i, L) + Σ_{{i,j} ∈ C_i} β γ(L_i, L_j),    (2)

where C_i is the set of all neighbors paired with i (the Moore neighborhood is used), β is a control parameter defining the influence of the MRF prior and thus the homogeneity of regions, and γ(·, ·) returns −1 if its arguments (label values) are equal and 1 otherwise.
After each iteration the global energy, consisting of the sum of E_local over all sites in the image, is calculated. The ICM terminates if convergence of the global energy is reached.
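The ICM update of Eq. (2) can be sketched as follows. This is a simplified synchronous variant with hand-picked class parameters and β, not the authors' implementation; it illustrates how the smoothness prior removes isolated speckle-like outliers from the label map:

```python
import numpy as np

def icm(image, classes, beta=1.0, max_iter=20):
    """Iterated conditional modes with the MRF smoothness prior of Eq. (2):
    E_local = E_site + beta * sum over the Moore neighborhood of
    gamma(L_i, L_j), where gamma = -1 for equal labels and +1 otherwise."""
    mus = np.array([m for m, _ in classes])
    sigmas = np.array([s for _, s in classes])
    e_site = (np.log(np.sqrt(2 * np.pi) * sigmas)[:, None, None]
              + (image[None] - mus[:, None, None]) ** 2
              / (2 * sigmas[:, None, None] ** 2))
    labels = e_site.argmin(axis=0)            # initial labeling via Eq. (1)
    prev_energy = np.inf
    for _ in range(max_iter):
        pad = np.pad(labels, 1, mode='edge')
        prior = np.zeros_like(e_site)
        for dy in (-1, 0, 1):                 # 8-neighborhood (Moore)
            for dx in (-1, 0, 1):
                if dy == 0 and dx == 0:
                    continue
                nb = pad[1 + dy:1 + dy + labels.shape[0],
                         1 + dx:1 + dx + labels.shape[1]]
                for L in range(len(classes)):
                    prior[L] += np.where(nb == L, -1.0, 1.0)
        e_local = e_site + beta * prior
        labels = e_local.argmin(axis=0)
        energy = e_local.min(axis=0).sum()    # global energy of this iteration
        if energy >= prev_energy:             # convergence of the global energy
            break
        prev_energy = energy
    return labels

# a noisy two-region image: ICM removes the speckle-like outliers
rng = np.random.default_rng(2)
img = np.where(np.arange(24)[None, :] < 12, 20.0, 200.0) + rng.normal(0, 30, (24, 24))
smooth = icm(img, [(20.0, 30.0), (200.0, 30.0)])
print(smooth[:, :12].mean(), smooth[:, 12:].mean())  # near 0 and near 1
```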
Calculation of the label map is designed to generate a more robust guidance of the registration. In Fig. 3 two
consecutive images with very little motion influence and the corresponding label maps are shown. The difference
images indicate a higher accordance using the label map and a lower risk of arbitrary dissimilarity due to noise.
Steps three and four of our registration system consist of the registration calculation itself, incorporating rigid transformations (step three) and consecutive B-spline-based transformations (step four). The goal is to remove coarse motion influence, mainly caused by extrinsic motion and patient breathing, with translation and rotation (rigid transformations), and the remaining non-linear motion with B-spline-controlled transformations on an 8 × 8 point grid (Fig. 4c, 4f). In both cases registration is performed using the label map of the b-mode sequence within the temporal regions defined in step one, i.e. each frame of a temporal region is registered to a fixed image frame. The fixed image frame is the frame of the respective temporal region with the highest average similarity to all other frames and is determined by the method described in Schäfer et al. [8]

Figure 4: (a) label map of the b-mode frame depicted in (b), where the area of interest for similarity calculation is shown (outlined in white). (d) shows anatomical annotations of the current frame. (e) is the corresponding contrast frame. (c) and (f) are the transformed b-mode and contrast frames after registration to a fixed image frame.

The difference between the label maps of the fixed and the moving image is measured by correlation. Rigid registration is performed in a multi-resolution setup with increasing accuracy at higher resolution steps. To optimize the 64 point locations (128 parameters) of the B-splines, a bounded limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS-B) algorithm is used, where we constrain each point to move only within half of the grid point spacing to prevent degeneration of the grid. The transformation parameters of each frame registration are initialized with the final transformation parameters from the preceding frame for reasons of stability and efficiency. After the registration has terminated, the final transformations are also applied to the contrast sequence.
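The similarity computation and the frame-to-frame initialization can be illustrated with a deliberately reduced sketch: correlation of label maps as the metric, and an exhaustive integer-translation search standing in for the rigid stage. The actual method uses a multi-resolution optimizer, rotation, and a subsequent B-spline stage, all of which are omitted here.

```python
import numpy as np

def label_correlation(a, b):
    """Pearson correlation of two label maps (the similarity measure)."""
    a = a.ravel() - a.mean()
    b = b.ravel() - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def register_translation(fixed, moving, search=4, init=(0, 0)):
    """Minimal stand-in for the rigid stage: exhaustive search over integer
    translations, started from the preceding frame's result `init`."""
    best, best_t = -np.inf, init
    for ty in range(init[0] - search, init[0] + search + 1):
        for tx in range(init[1] - search, init[1] + search + 1):
            c = label_correlation(fixed, np.roll(moving, (ty, tx), axis=(0, 1)))
            if c > best:
                best, best_t = c, (ty, tx)
    return best_t

fixed = np.zeros((20, 20)); fixed[6:14, 6:14] = 1.0
moving = np.roll(fixed, (2, -1), axis=(0, 1))   # simulated probe motion
print(register_translation(fixed, moving))       # → (-2, 1)
```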

References

Journal ArticleDOI
Julian Besag
Abstract: [Read before the Royal Statistical Society, May 7th, 1986, Professor A. F. M. Smith in the Chair] A continuous two-dimensional region is partitioned into a fine rectangular array of sites or "pixels", each pixel having a particular "colour" belonging to a prescribed finite set. The true colouring of the region is unknown but, associated with each pixel, there is a possibly multivariate record which conveys imperfect information about its colour according to a known statistical model. The aim is to reconstruct the true scene, with the additional knowledge that pixels close together tend to have the same or similar colours. In this paper, it is assumed that the local characteristics of the true scene can be represented by a nondegenerate Markov random field. Such information can be combined with the records by Bayes' theorem and the true scene can be estimated according to standard criteria. However, the computational burden is enormous and the reconstruction may reflect undesirable large-scale properties of the random field. Thus, a simple, iterative method of reconstruction is proposed, which does not depend on these large-scale characteristics. The method is illustrated by computer simulations in which the original scene is not directly related to the assumed random field. Some complications, including parameter estimation, are discussed. Potential applications are mentioned briefly.



Journal ArticleDOI
TL;DR: A look at progress in the field over the last 20 years is looked at and some of the challenges that remain for the years to come are suggested.
Abstract: The analysis of medical images has been woven into the fabric of the pattern analysis and machine intelligence (PAMI) community since the earliest days of these Transactions. Initially, the efforts in this area were seen as applying pattern analysis and computer vision techniques to another interesting dataset. However, over the last two to three decades, the unique nature of the problems presented within this area of study have led to the development of a new discipline in its own right. Examples of these include: the types of image information that are acquired, the fully three-dimensional image data, the nonrigid nature of object motion and deformation, and the statistical variation of both the underlying normal and abnormal ground truth. In this paper, we look at progress in the field over the last 20 years and suggest some of the challenges that remain for the years to come.



Journal ArticleDOI
Abstract: Authors F. Piscaglia1, C. Nolsøe2, C. F. Dietrich3, D. O. Cosgrove4, O. H. Gilja5, M. Bachmann Nielsen6, T. Albrecht7, L. Barozzi8, M. Bertolotto9, O. Catalano10, M. Claudon11, D. A. Clevert12, J. M. Correas13, M. D’Onofrio14, F. M. Drudi15, J. Eyding16, M. Giovannini17, M. Hocke18, A. Ignee19, E. M. Jung20, A. S. Klauser21, N. Lassau22, E. Leen23, G. Mathis24, A. Saftoiu25, G. Seidel26, P. S. Sidhu27, G. ter. Haar28, D. Timmerman29, H. P. Weskott30




Journal ArticleDOI
01 Feb 2008
Abstract: EFSUMB study group M. Claudon1, D. Cosgrove2, T. Albrecht3, L. Bolondi4, M. Bosio5, F. Calliada6, J.-M. Correas7, K. Darge8, C. Dietrich9, M. D'On ofrio10, D. H. Evans11, C. Filice12, L. Greiner13, K. Jäger14, N. de. Jong15, E. Leen16, R. Lencioni17, D. Lindsell18, A. Martegani19, S. Meairs20, C. Nolsøe21, F. Piscaglia22, P. Ricci23, G. Seidel24, B. Skjoldbye25, L. Solbiati26, L. Thorelius27, F. Tranquart28, H. P. Weskott29, T. Whittingham30





Journal ArticleDOI
TL;DR: A new spatio-temporal elastic registration algorithm for motion reconstruction from a series of images to estimate displacement fields from two-dimensional ultrasound sequences of the heart, which uses a multiresolution optimization strategy to obtain a higher speed and robustness.
Abstract: We propose a new spatio-temporal elastic registration algorithm for motion reconstruction from a series of images. The specific application is to estimate displacement fields from two-dimensional ultrasound sequences of the heart. The basic idea is to find a spatio-temporal deformation field that effectively compensates for the motion by minimizing a difference with respect to a reference frame. The key feature of our method is the use of a semi-local spatio-temporal parametric model for the deformation using splines, and the reformulation of the registration task as a global optimization problem. The scale of the spline model controls the smoothness of the displacement field. Our algorithm uses a multiresolution optimization strategy to obtain a higher speed and robustness. We evaluated the accuracy of our algorithm using a synthetic sequence generated with an ultrasound simulation package, together with a realistic cardiac motion model. We compared our new global multiframe approach with a previous method based on pairwise registration of consecutive frames to demonstrate the benefits of introducing temporal consistency. Finally, we applied the algorithm to the regional analysis of the left ventricle. Displacement and strain parameters were evaluated showing significant differences between the normal and pathological segments, thereby illustrating the clinical applicability of our method.




Frequently Asked Questions (2)
Q1. What are the contributions in "Motion compensation of ultrasonic perfusion images"?

The authors present an approach to register an ultrasonography sequence by using a feature label map. The MRF reduces the chance that outliers are represented in the label map and provides stable feature labels over the time frames. 

Q2. What future work do the authors suggest?

As future work the authors plan to update the label map at each evaluation step of the transformation parameters. In this context, it should be considered whether the calculation of the transformation parameters can be performed simultaneously for all time steps [11], or within a reasonable temporal window, so that temporal constraints can be incorporated to achieve smooth transitions over time.