
HAL Id: inserm-01349557
https://www.hal.inserm.fr/inserm-01349557
Submitted on 10 Nov 2016
A Bayesian Model to Assess T2 Values and Their
Changes Over Time in Quantitative MRI
Benoit Combès, Anne Kerbrat, Olivier Commowick, Christian Barillot
To cite this version:
Benoit Combès, Anne Kerbrat, Olivier Commowick, Christian Barillot. A Bayesian Model to Assess
T2 Values and Their Changes Over Time in Quantitative MRI. 19th International Conference on
Medical Image Computing and Computer Assisted Intervention (MICCAI), Oct 2016, Athens, Greece.
pp. 570-578, 10.1007/978-3-319-46726-9_66. inserm-01349557

A Bayesian Model to Assess $T_2$ Values and their Changes Over Time in Quantitative MRI

Benoit Combès¹, Anne Kerbrat², Olivier Commowick¹, Christian Barillot¹

¹ INRIA, INSERM, VisAGeS U746 Unit/Project, F-35042 Rennes, France
² Service de Neurologie, Rennes, France
Abstract. Quantifying $T_2$ and $T_2^*$ relaxation times from MRI is becoming a standard tool to assess modifications of biological tissues over time or differences between populations. However, due to the relationship between the relaxation time and the associated MR signals, such an analysis is subject to error. In this work, we provide a Bayesian analysis of this relationship. More specifically, we build posterior distributions relating the raw (spin or gradient echo) acquisitions to the relaxation time and its modifications over acquisitions. Such an analysis has three main merits. First, it allows building hierarchical models including prior information and regularisations over voxels. Second, it provides many estimators of the parameter distribution, including the mean and the α-credible intervals. Finally, as credible intervals are available, properly testing whether the relaxation time (or its modification) lies within a certain range at a given credible level is simple. We show the interest of this approach on synthetic datasets and on two real applications in multiple sclerosis.
1 Introduction
Relaxometry imaging provides a way to quantify modifications of biological tissues over time or differences between populations. In this context, the problem of estimating $T_2$ values from echo train acquisitions is discussed in many works [10, 12, 11]. Since we deal with quantitative values, being able to detect and assess significant differences and changes is then an important goal. However, to our knowledge, there is still a lack of statistical methods to analyse such data. In this work, we focus on the analysis of the $T_2$ or $T_2^*$ modification between two time-points (e.g. baseline versus 3 months later, or pre versus post contrast agent injection) for a given subject. A naive approach to perform such a task consists in first computing the $T_2$ maps for the pre and post acquisitions using an optimisation algorithm, and then comparing the variation level inside a region of interest (typically multiple sclerosis lesions) to the variation inside the normal appearing white matter (NAWM). However, this solution may lead to important issues. The reproducibility error of $T_2$ and $T_2^*$ maps is indeed significantly smaller in the NAWM than in regions with higher intensities. This makes the task, in the best case, complex and, in the worst, error prone, with many false positive detections in the higher intensity regions.

In fact, due to the form of the relationship relating the MR signal and the relaxation time, the uncertainty of the estimation increases with the relaxation time (see [7] for illustrating experiments on phantoms). In this work, we provide a Bayesian analysis of this relationship. More specifically, we build posterior distributions relating the raw (spin or gradient echo) acquisitions to the relaxation time and its modification over time. These posterior distributions extract the relevant information from the data and provide complete and coherent characterisations of the parameter distribution. Our approach has three main advantages over the existing $T_2$ and $T_2^*$ estimation methods. First, it allows building complex models including prior beliefs on parameters or regularisations over voxels. Second, it provides many estimators of the parameter distribution, including the mean and α-credible highest posterior density (HPD) intervals. Finally, once the credible intervals are estimated, properly testing whether the relaxation time (or its modification) lies within a certain range at a given credible level becomes simple.
The article is organized as follows. In Section 2, we describe a set of models to analyse the $T_2$ and $T_2^*$ relaxation times. More specifically, in Section 2.1, we give a posterior for the $T_2$ (or $T_2^*$) estimation. In Section 2.2, we give a procedure to assess differences of $T_2$ in a voxel between two time points at a given credible level. Then, in Section 2.3, we slightly modify the posterior so that the estimation is no longer performed voxel-wise but region-wise, leading to non-independent multivariate estimations and testings. In Section 2.4, we propose a prior to use the extended phase graph function instead of the exponential decay function used in the previous models. Then, in Section 3, we assess our method on synthetic data. In Section 4, we provide two examples of applications on multiple sclerosis data. Finally, in Section 5, we discuss this work and give perspectives.
2 Models
2.1 Bayesian analysis of $T_2$ relaxometry
For a given voxel in a volume, the MR signal $S_i$ for a given echo time $\tau_i$ can be related to the two (unknown) characteristics of the observed tissue, $T_2$ and $M$ (where $M$ accounts for a combination of several physical components), through:
$$S_i \,|\, T_2 = t_2, M = m, \sigma = \sigma \;\sim\; \mathcal{N}\big(f_{t_2,m}(\tau_i), \sigma^2\big), \qquad (1)$$
where $f_{t_2,m}(\tau_i) = m \cdot \exp(-\tau_i / t_2)$ and $\mathcal{N}(\mu, \sigma^2)$ is the normal distribution with mean $\mu$ and variance $\sigma^2$. The Gaussian error term accounts for measurement noise as well as for model inadequacy (due to e.g. multi-exponential decay of the true signal, partial volumes or misalignment). Then we consider that for all $i \neq j$, $(S_i \,|\, t_2, m, \sigma) \perp (S_j \,|\, t_2, m, \sigma)$ ($\perp$ standing for independence).
The associated reference prior [1] for $\sigma$ and $(M, T_2)$ in different groups writes:
$$\Pi_1(t_2, m, \sigma) = \Pi_{T_2}(t_2) \cdot \Pi_M(m) \cdot \Pi_\sigma(\sigma) \qquad (2)$$
$$\propto \frac{\sqrt{l_0(t_2)\, l_2(t_2) - l_1^2(t_2)}}{t_2^2}\, \mathbf{1}_{[t_{2\min},\, t_{2\max}]}(t_2) \;\cdot\; m\, \mathbf{1}_{[m_{\min},\, m_{\max}]}(m) \;\cdot\; \frac{1}{\sigma}\, \mathbf{1}_{\mathbb{R}^+}(\sigma),$$

where $l_k(t_2) = \sum_i \tau_i^k \exp(-2\tau_i / t_2)$ for $k = 0, 1, 2$ and where $\mathbf{1}_A(x) = 1$ if $x \in A$ and $0$ elsewhere. Notice that the upper limit for $M$ and the positive lower limit for $T_2$ in the prior support are needed to ensure posterior properness. This prior leads to invariance of the inference under reparametrisation of the exponential decay function. Moreover, as will be shown in Section 3, it provides satisfying performance whatever the actual value of $T_2$ (so, under normal as well as pathological conditions). Estimators for the resulting marginal posterior $p(T_2 \,|\, (s_i))$ can then be computed using a Markov chain Monte Carlo algorithm (details in Section 3).
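A minimal sketch of the corresponding unnormalised log-prior and log-posterior is given below, reusing `f` and `log_likelihood` from the previous sketch. The support bounds `t2_bounds` and `m_bounds` are placeholders introduced for illustration; the paper only requires a positive lower limit for $T_2$ and an upper limit for $M$. Any generic MCMC sampler (e.g. random-walk Metropolis) can then be run on `log_posterior`.

```python
import numpy as np

def l_k(t2, taus, k):
    """l_k(t2) = sum_i tau_i^k * exp(-2 tau_i / t2), for k = 0, 1, 2 (cf. Eq. 2)."""
    taus = np.asarray(taus, dtype=float)
    return np.sum(taus**k * np.exp(-2.0 * taus / t2))

def log_prior(t2, m, sigma, taus, t2_bounds=(1.0, 2000.0), m_bounds=(1e-6, 1e6)):
    """Unnormalised log of the reference prior of Eq. 2 (bounds are illustrative)."""
    if not (t2_bounds[0] <= t2 <= t2_bounds[1]):
        return -np.inf
    if not (m_bounds[0] <= m <= m_bounds[1]) or sigma <= 0.0:
        return -np.inf
    det = l_k(t2, taus, 0) * l_k(t2, taus, 2) - l_k(t2, taus, 1)**2
    return 0.5 * np.log(det) - 2.0 * np.log(t2) + np.log(m) - np.log(sigma)

def log_posterior(theta, s, taus):
    """Unnormalised log-posterior for theta = (t2, m, sigma); MCMC samples of it
    give the marginal posterior p(T2 | (s_i)) after discarding m and sigma."""
    t2, m, sigma = theta
    lp = log_prior(t2, m, sigma, taus)
    if not np.isfinite(lp):
        return -np.inf
    return lp + log_likelihood(s, taus, t2, m, sigma)
```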
2.2 Bayesian analysis of $T_2$ modification
We are now concerned with the following question: how to assess that the $T_2$ value associated with a given voxel has changed between two acquisitions with a given minimal credible level. Let us call $X_a$ the random variable associated with a quantity for the pre acquisition and $X_b$ for the post acquisition. We assume that the volumes are aligned and model, for the pre acquisition:
$$S_{a,i} \,|\, t_2, m_a, \sigma_a \;\sim\; \mathcal{N}\big(f_{t_2,m_a}(\tau_i), \sigma_a^2\big), \qquad (3)$$
and introduce $C$ as the $T_2$ modification between the two acquisitions through:
$$S_{b,i} \,|\, t_2, c, m_b, \sigma_b \;\sim\; \mathcal{N}\big(f_{t_2+c,\,m_b}(\tau_i), \sigma_b^2\big), \qquad (4)$$
where (in addition to the above independences) for all $i, j$, $(S_{b,i} \,|\, t_2, c, m_b, \sigma_b) \perp (S_{a,j} \,|\, t_2, m_a, \sigma_a)$. From Eq. 2 we can define the prior:
$$\Pi_2(c, t_2, m_a, m_b, \sigma_a, \sigma_b) \propto \Pi_{T_2}(t_2 + c)\, \Pi_{T_2}(t_2)\, \Pi_M(m_a)\, \Pi_M(m_b)\, \Pi_\sigma(\sigma_a)\, \Pi_\sigma(\sigma_b), \qquad (5)$$
which, together with Eqs. 3 and 4, defines the marginal posterior for (among others) the $T_2$ modification, $p(C \,|\, (s_{a,i}), (s_{b,i}))$. Then a voxel can be defined as negatively (resp. positively) altered at level α if the α-credible HPD interval for $C$ does not contain any positive (resp. negative) value (see [8] for a testing perspective).

The previous model of variation, $T_2^b = T_2^a + C$, was dedicated to $T_2$ modification. Another important alternative model of variation states that, when adding a contrast agent to a biological tissue, the effect on its $T_2$ property is additive in the rate $1/T_2$, i.e. $1/T_2^b = 1/T_2^a + C_R$, and that $C_R$ (we use this notation to distinguish it from the above $C$) is proportional to the contrast agent concentration. From the posterior of $T_2$ and $C$ designed above, its posterior writes:
$$p\big(C_R = c_R \,\big|\, (s_{a,i}), (s_{b,i})\big) = p\!\left(\frac{-C}{T_2\,(C + T_2)} = c_R \,\middle|\, (s_{a,i}), (s_{b,i})\right).$$
2.3 Region-wise analysis
The models proposed previously allow a voxel-wise analysis where each voxel is processed independently of the others. Performing a grouped inference for all the voxels of a given region (e.g. a lesion) can be achieved by adding a supplemental layer to the model. Let us use $j$ to index voxels; then one can replace each prior $\Pi_2(c_j)$ by, for example, $\Pi_2(c_j \,|\, \mu_C, \sigma_C) = \psi_{\mathcal{N}}(\mu_C - c_j, \sigma_C^2)$ ($\psi_{\mathcal{N}}$ being the normal kernel). We then assume that $\forall\, i_1, i_2$ and $j_1 \neq j_2$, $(S_{i_1}^{j_1} \,|\, t_2^{j_1}, m^{j_1}, \sigma^{j_1}) \perp (S_{i_2}^{j_2} \,|\, t_2^{j_2}, m^{j_2}, \sigma^{j_2})$ (in particular, we consider the errors as independent between voxels). For the two hyperparameters $\mu_C$ and $\sigma_C$, we use the weakly informative priors (see [5] for details) $\mu_C \sim \mathcal{N}(0, 10^6)$ (approximating the uniform density over $\mathbb{R}$) and $\sigma_C \sim \mathrm{Cauchy}(x_0 = 0, \gamma = 100)\, I_{\mathbb{R}^+}$ (allowing $\sigma_C$ to go well below 400 ms), where $I_{\mathbb{R}^+}$ denotes a left-truncation. Such a model allows the set of inferences over the $(C_j)$ to be performed non-independently, thus dealing in a natural way with multiple comparisons by shrinking exceptional $C_j$ towards their estimated region-wise distribution [6] and improving the overall inference. Depending on the expected regularity of the $C_j$ within the regions, we can alternatively opt for a Cauchy density and/or add an informative prior for the error variances $\sigma_{a,i}^2$ and $\sigma_{b,i}^2$ to favor goodness of fit. In parallel, a spatial Markovian regularisation could also be considered.
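As a sketch of this extra layer (again our own illustration, not the authors' code), the region-wise coupling amounts to adding the terms below to the voxel-wise log-posterior; `mu_c` and `sigma_c` are then sampled jointly with the per-voxel parameters.

```python
import numpy as np

def log_hyperprior(mu_c, sigma_c):
    """Weakly informative hyperpriors of Section 2.3, up to additive constants:
    mu_C ~ N(0, 1e6) and sigma_C ~ Cauchy(x0=0, gamma=100) truncated to R+."""
    if sigma_c <= 0.0:
        return -np.inf
    log_p_mu = -0.5 * mu_c**2 / 1e6
    log_p_sigma = -np.log(1.0 + (sigma_c / 100.0)**2)
    return log_p_mu + log_p_sigma

def log_region_layer(c_voxels, mu_c, sigma_c):
    """Normal kernel psi_N(mu_C - c_j, sigma_C^2) applied to every voxel j of the
    region, plus the hyperpriors; this couples the C_j and shrinks outlying
    values towards the region-wise distribution."""
    lp = log_hyperprior(mu_c, sigma_c)
    if not np.isfinite(lp):
        return -np.inf
    c = np.asarray(c_voxels, dtype=float)
    lp += np.sum(-0.5 * np.log(2.0 * np.pi * sigma_c**2)
                 - 0.5 * (mu_c - c)**2 / sigma_c**2)
    return lp
```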
2.4 Using the extended phase graph
When the signals $(s_i)_{i=1:N}$ are obtained using sequences of multiple spin echoes (e.g. a CPMG sequence), the exponential decay function is only a rough approximation of the relation between $T_2$ and the MR signals. Some solutions to adapt it to this situation exist [10] and could easily be added to our model. In the following, we propose a broader solution that consists in replacing the exponential decay by the Extended Phase Graph (EPG) function [9], which relates the signal $S_i$ to two other quantities (in addition to $T_2$ and $M$) so that $(S_i) = \mathrm{EPG}(T_2, M, T_1, B_1, (\tau_i)) + \epsilon$ ($T_1$ being the spin-lattice relaxation time, $B_1$ the field inhomogeneity, i.e. the multiplicative departure from the nominal flip angle, and $\epsilon$ representing the noise term of Eq. 1). This function is complicated (a product of $N$ 3×3 matrices involving the different parameters non-linearly) and deriving a dedicated reference prior would be cumbersome. Nevertheless, it consists of small departures from the exponential function and mainly depends on $M$ and $T_2$. Thus, a reasonable choice consists in using the same priors as those derived for the exponential function. Then, an informative prior is designed for $B_1$. In practice, $B_1$ typically takes 95% of its values in $[0.4, 1.6]$ (we deliberately use this symmetric form to avoid $B_1 = 1$ being a boundary of the prior support), so we set $B_1 \sim \mathrm{Gamma}(k = 8.5, \theta = 0.1)$. We did the same for $T_1$ by choosing a gamma distribution with 95% of its density in $[20, 2000]$ ($T_1 \sim \mathrm{Gamma}(2.3, 120)$). In practice, $T_1$ has a very small impact on the EPG and on the inference. Then the EPG model can be used by simply replacing $f_{t_2,m}(\tau_i)$ by $\mathrm{EPG}(t_2, m, t_1, b_1, (\tau_i))$ (prior sensitivity analyses for $T_1$ and $B_1$ give satisfying results).
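The two informative priors can be inspected numerically. The short check below is our addition; it uses SciPy's gamma parameterisation (shape `a`, scale `scale`) and simply prints the central 95% intervals so that they can be compared with the ranges quoted above.

```python
from scipy.stats import gamma

# B1 ~ Gamma(k=8.5, theta=0.1): central 95% interval, to compare with the
# [0.4, 1.6] working range quoted for the B1 inhomogeneity.
print(gamma(a=8.5, scale=0.1).interval(0.95))

# T1 ~ Gamma(2.3, 120): central 95% interval, to compare with the broad
# [20, 2000] range quoted for T1.
print(gamma(a=2.3, scale=120.0).interval(0.95))
```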
3 Results on synthetic data
3.1 Implementation, datasets and convergence diagnosis
For the sake of concision, for the previous models, we only exhibited the likelihoods, the priors and the independence assumptions. The resulting posteriors