
A Bayesian Model to Assess T_2 Values and Their Changes Over Time in Quantitative MRI

TL;DR: This work builds posterior distributions relating the raw (spin or gradient echo) acquisitions and the relaxation time and its modifications over acquisitions and provides many estimators of the parameters distribution including the mean and the \(\alpha \)-credible intervals.
Abstract: Quantifying \(T_2\) and \(T_2^*\) relaxation times from MRI becomes a standard tool to assess modifications of biological tissues over time or differences between populations. However, due to the relationship between the relaxation time and the associated MR signals such an analysis is subject to error. In this work, we provide a Bayesian analysis of this relationship. More specifically, we build posterior distributions relating the raw (spin or gradient echo) acquisitions and the relaxation time and its modifications over acquisitions. Such an analysis has three main merits. First, it allows to build hierarchical models including prior information and regularisations over voxels. Second, it provides many estimators of the parameters distribution including the mean and the \(\alpha \)-credible intervals. Finally, as credible intervals are available, testing properly whether the relaxation time (or its modification) lies within a certain range with a given credible level is simple. We show the interest of this approach on synthetic datasets and on two real applications in multiple sclerosis.

Summary (3 min read)

1 Introduction

  • Relaxometry imaging provides a way to quantify modifications of biological tissues over time or differences between populations.
  • The problem of estimating T2 values from echo train acquisitions is discussed in many works [10, 12, 11].
  • This makes the task, in the best case, complex and, in the worst, error prone, with many false positive detections in regions of higher intensity.
  • These posterior distributions extract the relevant information from the data and provide complete and coherent characterisations of the parameters distribution.
  • In Section 2.4, the authors propose a prior to use the extended phase graph function instead of the exponential decay function used in the previous models.

2.1 Bayesian analysis of T2 relaxometry

  • The Gaussian error term accounts for measurement noise as well as for model inadequacy (due e.g. to multi-exponential decay of the true signal, partial volumes or misalignment).
  • Notice that the upper limit for M and positive lower limit for T2 in the prior support are needed to ensure posterior properness.
  • This prior leads to invariance of the inference under reparametrisation of the exponential decay function.
  • Estimators for the resulting marginal posterior p(T2|(si)) can then be computed using a Markov Chain Monte Carlo algorithm (details in Section 3).
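The generative model in the bullets above can be sketched in a few lines of Python. This is a minimal illustration, not the authors' code; the echo times, T2 value and noise level below are arbitrary choices, and the log-likelihood is only fixed up to an additive constant.

```python
import numpy as np

def decay(tau, t2, m):
    """Mono-exponential decay model: f_{t2,m}(tau) = m * exp(-tau / t2)."""
    return m * np.exp(-tau / t2)

def log_likelihood(signals, tau, t2, m, sigma):
    """Gaussian log-likelihood of the echo signals given (t2, m, sigma),
    up to an additive constant."""
    resid = signals - decay(tau, t2, m)
    return -0.5 * np.sum(resid**2) / sigma**2 - len(tau) * np.log(sigma)

# Simulated 10-echo acquisition: echo spacing 10 ms, T2 = 80 ms, M = 1000.
rng = np.random.default_rng(0)
tau = 10.0 * np.arange(1, 11)
signals = decay(tau, t2=80.0, m=1000.0) + rng.normal(0.0, 5.0, size=tau.size)
```

In the full Bayesian treatment this likelihood is combined with the reference prior described in Section 2.1; here it only fixes the notation of Eq. 1.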

2.2 Bayesian analysis of T2 modification

  • The authors are now concerned with the following question: how to assess that the T2 value associated with a given voxel has changed between two acquisitions, with a given minimal credible level.
  • Let Xa denote the random variable associated with a quantity for the pre acquisition and Xb for the post acquisition.
  • Then a voxel can be defined as negatively (resp. positively) altered at α level, if the α-credible HPD interval for C does not contain any positive (resp. negative) value (see [8] for a testing perspective).
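The detection rule in the bullet above is easy to state in code. Below is a generic sample-based HPD computation (assuming roughly unimodal posterior samples of C, e.g. drawn by MCMC); the function names are ours, not the authors'.

```python
import numpy as np

def hpd_interval(samples, alpha=0.05):
    """Narrowest interval containing a fraction (1 - alpha) of the samples."""
    x = np.sort(np.asarray(samples))
    n_keep = int(np.ceil((1.0 - alpha) * len(x)))
    widths = x[n_keep - 1:] - x[:len(x) - n_keep + 1]
    i = int(np.argmin(widths))
    return x[i], x[i + n_keep - 1]

def altered(c_samples, alpha=0.05):
    """'negative'/'positive' if the alpha-credible HPD interval for C
    excludes 0, None otherwise (the detection rule described above)."""
    lo, hi = hpd_interval(c_samples, alpha)
    if hi < 0.0:
        return "negative"
    if lo > 0.0:
        return "positive"
    return None

rng = np.random.default_rng(1)
print(altered(rng.normal(-12.0, 3.0, 20000)))  # a clear T2 decrease: "negative"
```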

2.3 Region-wise analysis

  • The models proposed previously allow a voxel-wise analysis where each voxel is processed independently from others.
  • Performing a grouped inference for all the voxels of a given region (e.g. lesion) can be performed by adding a supplemental layer to the model.
  • Such a model allows the set of inferences over the (Cj) to be performed non-independently, thus dealing in a natural way with multiple comparisons by shrinking exceptional Cj toward the estimated region-wise distribution [6] and improving the overall inference.
  • Depending on the expected regularity of the Cjs within the regions, the authors can alternatively opt for a Cauchy density and/or add an informative prior for the error variances σ2a,i and σ2b,i to favor goodness of fit.
  • In parallel, a spatial Markovian regularisation can also be considered.
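The shrinkage effect described above can be illustrated with a deliberately simplified conjugate version of the region-wise layer: a known noise variance s2 and fixed hyperparameters, unlike the paper's fully Bayesian treatment where these are inferred.

```python
import numpy as np

def shrink(c_hat, s2, mu_c, sigma2_c):
    """Posterior mean of C_j when C_j ~ N(mu_c, sigma2_c) and
    c_hat_j | C_j ~ N(C_j, s2): a precision-weighted average pulling
    each voxel estimate toward the region-wise mean."""
    w = (1.0 / s2) / (1.0 / s2 + 1.0 / sigma2_c)
    return w * np.asarray(c_hat) + (1.0 - w) * mu_c

c_hat = np.array([2.0, -1.0, 30.0])  # raw per-voxel T2 changes (ms); one outlier
shrunk = shrink(c_hat, s2=25.0, mu_c=float(c_hat.mean()), sigma2_c=9.0)
```

The exceptional value (30 ms) is pulled toward the region mean, which is exactly how the hierarchical layer mitigates multiple comparisons.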

2.4 Using the extended phase graph

  • When the signals (si)i=1:N are obtained using multi-echo spin echo sequences (e.g. a CPMG sequence), the exponential decay function is only a rough approximation of the relation between T2 and the MR signals.
  • Nevertheless, the EPG function departs only slightly from the exponential function and depends mainly on M and T2.
  • Thus a reasonable choice consists in using the same priors as those derived for the exponential function.
  • Then the EPG model can be used by simply replacing ft2,m(τi) by EPG(t2, m, t1, b1, (τi)) (prior sensitivity analyses for T1 and B1 give satisfying results).

3.1 Implementation, datasets and convergence diagnosis

  • For the sake of concision, the authors only exhibited the likelihoods, the priors and the independence assumptions for the previous models.
  • The resulting posteriors can be obtained using the Bayes rule.
  • The authors used 10k samples (40k for the region-wise model) after discarding the first 5k (20k for the region-wise model).
  • Convergence has been assessed using the Geweke score [2].
  • Because the function EPG(t2, t1, b1, (τi)) is time consuming and many calls are needed, it is tabulated on a fine grid for different values of t2, t1 and b1.
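The tabulation trick in the last bullet amounts to precomputing the expensive function once and interpolating afterwards. Here is a 1-D sketch of that idea; the paper tabulates EPG over a grid in t2, t1 and b1, and `slow_signal` below is a hypothetical cheap stand-in for the costly call.

```python
import numpy as np

def slow_signal(t2, tau=10.0):
    """Hypothetical stand-in for an expensive model evaluation such as EPG."""
    return np.exp(-tau / t2)

# Tabulate once on a fine grid ...
grid = np.linspace(1.0, 500.0, 5000)
table = slow_signal(grid)

# ... then answer any later query by cheap linear interpolation.
def fast_signal(t2):
    return np.interp(t2, grid, table)
```

For a smooth function and a fine grid, the interpolation error is negligible compared with the acquisition noise.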

3.2 Results

  • The authors ran 400 estimations for different configurations of T2 and σ (realistic values being randomly given to T1, B1 and M).
  • Moreover, these intervals exhibit excellent coverage properties, illustrating the interest of the derived priors in the absence of prior information for T2.
  • For different configurations of T2, σ and C, the authors analyse the specificity (denoted p−) and sensitivity (denoted p+) of the estimator of the modification C (Section 2.2) for α=0.05.
  • Results are summarized in Table 2 and illustrate that, for the acquisition protocol used, detecting small T2 modifications at high T2 values is limited at a 0.95 specificity level.
  • The authors also observe that the region-wise model leads to strong improvements of the performances.

4 Two applications on real data

  • The present method has been developed in a clinical context including a dataset of about 50 pre-post acquisitions from relapsing-remitting multiple sclerosis patients.
  • The sequence parameters are the same as those used in Section 3.
  • For each patient, all volumes are rigidly co-registered using a robust blockmatching method [3].

4.1 Assessing USPIO enhancement in MS lesion

  • Ultra-small superparamagnetic iron oxide (USPIO) is a contrast agent used to assess macrophagic activity; it reduces the T2 relaxation of the surrounding tissues [4].
  • Figure 1 displays detection with a 5% level for a patient acquired twice without USPIO injection and for a patient with potentially active lesions.
  • The pre-post T2 relaxation maps illustrate the difficulty of labelling the lesions as enhanced or not by simply using the point estimates.
  • The map for the patient without USPIO injection, for which the method does not detect any voxels, highlights this point.
  • More generally, on these data the authors never detected more than 5% of the voxels.

4.2 Assessing T2 recovery in an active MS lesion

  • To illustrate the interest of the credible intervals, the authors display the minimal plausible C values for the 0.05 level, i.e. the lower bound of the 0.05-credible interval for [m3,m0] (Fig 2.a) and the upper one for [m6,m3] (Fig 2.b).
  • These maps offer a quantitative scale to compare lesion recovery among and within patients.
  • More precisely, for lesion 3, the minimal admissible C is positive for [m3,m0] (thus the interval does not contain 0) demonstrating that the lesion is recovering during this period.
  • The authors also give the minimal USPIO concentration, i.e. CR changes, for the 5% level (Fig 2.c): they observe a good agreement between USPIO concentration and T2 change at [m0,m3].

5 Discussion and Conclusion

  • The authors showed the interesting properties of their models on synthetic datasets.
  • Finally, the authors exemplified the interest of the obtained credible intervals with two applications.
  • As illustrated, when available, interval estimates can be more powerful than point ones to address naturally complex issues in medical imaging.
  • The voxel-wise B1 values are correlated in space, which could be accounted for by adding a supplemental layer to the model (similarly to what is proposed in Section 2.3).
  • The method is computationally intensive (about 2s per voxel for the EPG region-wise approach) and for more demanding applications, other strategies e.g. maximising over nuisance parameters could be investigated.


HAL Id: inserm-01349557
https://www.hal.inserm.fr/inserm-01349557
Submitted on 10 Nov 2016
A Bayesian Model to Assess T2 Values and Their
Changes Over Time in Quantitative MRI
Benoit Combès, Anne Kerbrat, Olivier Commowick, Christian Barillot
To cite this version:
Benoit Combès, Anne Kerbrat, Olivier Commowick, Christian Barillot. A Bayesian Model to Assess
T2 Values and Their Changes Over Time in Quantitative MRI. 19th International Conference on
Medical Image Computing and Computer Assisted Intervention (MICCAI), Oct 2016, Athens, Greece.
pp.570 - 578, �10.1007/978-3-319-46726-9_66�. �inserm-01349557�

A Bayesian Model to Assess T2 Values and their Changes Over Time in Quantitative MRI

Benoit Combès¹, Anne Kerbrat², Olivier Commowick¹, Christian Barillot¹
¹ INRIA, INSERM, VisAGeS U746 Unit/Project, F-35042 Rennes, France
² Service de Neurologie, Rennes, France
Abstract. Quantifying T2 and T2* relaxation times from MRI becomes a standard tool to assess modifications of biological tissues over time or differences between populations. However, due to the relationship between the relaxation time and the associated MR signals such an analysis is subject to error. In this work, we provide a Bayesian analysis of this relationship. More specifically, we build posterior distributions relating the raw (spin or gradient echo) acquisitions and the relaxation time and its modifications over acquisitions. Such an analysis has three main merits. First, it allows to build hierarchical models including prior information and regularisations over voxels. Second, it provides many estimators of the parameters distribution including the mean and the α-credible intervals. Finally, as credible intervals are available, testing properly whether the relaxation time (or its modification) lies within a certain range with a given credible level is simple. We show the interest of this approach on synthetic datasets and on two real applications in multiple sclerosis.
1 Introduction

Relaxometry imaging provides a way to quantify modifications of biological tissues over time or differences between populations. In this context, the problem of estimating T2 values from echo train acquisitions is discussed in many works [10, 12, 11]. Since we deal with quantitative values, being able to then detect and assess significant differences and changes seems an important goal to achieve. However, to our knowledge, there is still a lack of statistical methods to analyse such data. In this work, we focus on the analysis of the T2 or T2* modification between two time-points (e.g. baseline versus 3 months later, or pre versus post contrast agent injection) for a given subject. A naive approach to perform such a task consists in first computing the T2 maps for the pre and post acquisitions using an optimisation algorithm and then comparing the variation level inside a region of interest (typically multiple sclerosis lesions) to the variation inside the normal appearing white matter (NAWM). However, this solution may lead to important issues. The reproducibility error of T2 and T2* maps is indeed significantly smaller in the NAWM than in regions with higher intensities. This makes the task, in the best case, complex and, in the worst, error prone, with many false positive detections in regions of higher intensity.

In fact, due to the form of the relationship relating the MR signal and the relaxation time, the uncertainty of estimation increases with the relaxation time (see [7] for illustrating experiments on phantoms). In this work, we provide a Bayesian analysis of this relationship. More specifically, we build posterior distributions relating the raw (spin or gradient echo) acquisitions and the relaxation time and its modification over time. These posterior distributions extract the relevant information from the data and provide complete and coherent characterisations of the parameter distributions. Our approach has three main advantages over the existing T2 and T2* estimation methods. First, it makes it possible to build complex models including prior beliefs on parameters or regularisations over voxels. Second, it provides many estimators of the parameter distributions, including the mean and α-credible highest posterior density (HPD) intervals. Finally, once the credible intervals are estimated, properly testing whether the relaxation time (or its modification) lies within a certain range at a given credible level becomes simple.

The article is organized as follows. In Section 2, we describe a set of models to analyse the T2 and T2* relaxation times. More specifically, in Section 2.1, we give a posterior for the T2 (or T2*) estimation. In Section 2.2, we give a procedure to assess differences of T2 in a voxel between two time points at a given credible level. Then, in Section 2.3, we slightly modify the posterior so that the estimation is no longer performed voxel-wise but region-wise, leading to non-independent multivariate estimations and testings. In Section 2.4, we propose a prior to use the extended phase graph function instead of the exponential decay function used in the previous models. Then, in Section 3, we assess our method on synthetic data. In Section 4, we provide two examples of applications on Multiple Sclerosis data. Finally, in Section 5, we discuss this work and give perspectives.
2 Models

2.1 Bayesian analysis of T2 relaxometry

For a given voxel in a volume, the MR signal S_i for a given echo time τ_i can be related to the two (unknown) characteristics of the observed tissue, T2 and M (where M accounts for a combination of several physical components), through:

\[
S_i \mid T_2 = t_2,\; M = m,\; \sigma \;\sim\; \mathcal{N}\!\left(f_{t_2,m}(\tau_i),\, \sigma^2\right), \tag{1}
\]

where \(f_{t_2,m}(\tau_i) = m \exp(-\tau_i / t_2)\) and \(\mathcal{N}(\mu, \sigma^2)\) is the normal distribution with mean \(\mu\) and variance \(\sigma^2\). The Gaussian error term accounts for measurement noise as well as for model inadequacy (due e.g. to multi-exponential decay of the true signal, partial volumes or misalignment). Then we consider that for all \(i \neq j\), \((S_i \mid t_2, m, \sigma) \perp (S_j \mid t_2, m, \sigma)\) (\(\perp\) standing for independence). The associated reference prior [1] for \(\sigma\) and \((M, T_2)\) in different groups writes:

\[
\Pi_1(t_2, m, \sigma) = \Pi_{T_2}(t_2) \cdot \Pi_M(m) \cdot \Pi_\sigma(\sigma) \tag{2}
\]
\[
\propto \frac{\sqrt{l_0(t_2)\, l_2(t_2) - l_1^2(t_2)}}{t_2^2}\, \mathbf{1}_{[t_{2\min},\, t_{2\max}]}(t_2) \;\cdot\; m\, \mathbf{1}_{[m_{\min},\, m_{\max}]}(m) \;\cdot\; \frac{1}{\sigma}\, \mathbf{1}_{\mathbb{R}^+}(\sigma),
\]

where \(l_k(t_2) = \sum_i \tau_i^k \exp(-2\tau_i / t_2)\) for \(k = 0, 1, 2\) and where \(\mathbf{1}_A(x) = 1\) if \(x \in A\) and 0 otherwise. Notice that the upper limit for M and the positive lower limit for T2 in the prior support are needed to ensure posterior properness. This prior leads to invariance of the inference under reparametrisation of the exponential decay function. Moreover, as will be shown in Section 3, it provides satisfying performance whatever the actual value of T2 (thus under normal as well as pathological conditions). Estimators for the resulting marginal posterior \(p(T_2 \mid (s_i))\) can then be computed using a Markov Chain Monte Carlo algorithm (details in Section 3).
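A minimal random-walk sampler for such a posterior can be sketched as follows. This is a simplified illustration, not the authors' implementation: a flat prior on a bounded support and a known noise level stand in for the reference prior, and the step sizes are ad hoc rather than adaptively tuned.

```python
import numpy as np

def log_post(t2, m, sigma, signals, tau):
    """Log-posterior up to a constant: Gaussian likelihood of Eq. 1 with a flat
    prior on a bounded support (a simplification of the reference prior)."""
    if not (1.0 < t2 < 1000.0 and 0.0 < m < 1e5):
        return -np.inf
    resid = signals - m * np.exp(-tau / t2)
    return -0.5 * np.sum(resid**2) / sigma**2

def metropolis(signals, tau, sigma, n_iter=20000, seed=0):
    """One-variable-at-a-time random-walk Metropolis; returns T2 samples."""
    rng = np.random.default_rng(seed)
    t2, m = 100.0, float(signals.max())      # crude initialisation
    lp = log_post(t2, m, sigma, signals, tau)
    chain = np.empty(n_iter)
    for k in range(n_iter):
        for step, idx in ((5.0, 0), (20.0, 1)):
            prop = [t2, m]
            prop[idx] += rng.normal(0.0, step)
            lp_new = log_post(prop[0], prop[1], sigma, signals, tau)
            if np.log(rng.uniform()) < lp_new - lp:   # Metropolis acceptance
                t2, m, lp = prop[0], prop[1], lp_new
        chain[k] = t2
    return chain[n_iter // 4:]               # discard burn-in

# Synthetic single-voxel acquisition: T2 = 80 ms, M = 1000, sigma = 5.
rng = np.random.default_rng(42)
tau = 10.0 * np.arange(1, 11)
signals = 1000.0 * np.exp(-tau / 80.0) + rng.normal(0.0, 5.0, tau.size)
samples = metropolis(signals, tau, sigma=5.0)
```

The retained samples approximate the marginal posterior of T2; posterior means and HPD intervals are then read off directly from them.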
2.2 Bayesian analysis of T2 modification

We are now concerned with the following question: how to assess that the T2 value associated with a given voxel has changed between two acquisitions, with a given minimal credible level. Let \(X_a\) denote the random variable associated with a quantity for the pre acquisition and \(X_b\) for the post acquisition. We assume that the volumes are aligned and model, for the pre acquisition:

\[
S_{a,i} \mid t_2, m_a, \sigma_a \;\sim\; \mathcal{N}\!\left(f_{t_2,m_a}(\tau_i),\, \sigma_a^2\right), \tag{3}
\]

and introduce C as the T2 modification between the two acquisitions through:

\[
S_{b,i} \mid t_2, c, m_b, \sigma_b \;\sim\; \mathcal{N}\!\left(f_{t_2+c,\,m_b}(\tau_i),\, \sigma_b^2\right), \tag{4}
\]

where (in addition to the above independences) for all \(i, j\), \((S_{b,i} \mid t_2, c, m_b, \sigma_b) \perp (S_{a,j} \mid t_2, m_a, \sigma_a)\). From Eq. 2 we can define the prior:

\[
\Pi_2(c, t_2, m_a, m_b, \sigma_a, \sigma_b) \;\propto\; \Pi_{T_2}(t_2 + c)\, \Pi_{T_2}(t_2)\, \Pi_M(m_a)\, \Pi_M(m_b)\, \Pi_\sigma(\sigma_a)\, \Pi_\sigma(\sigma_b), \tag{5}
\]

which defines, together with Eqs. 3 and 4, the marginal posterior for (among others) the T2 modification, \(p(C \mid (s_{a,i}), (s_{b,i}))\). A voxel can then be defined as negatively (resp. positively) altered at the α level if the α-credible HPD interval for C does not contain any positive (resp. negative) value (see [8] for a testing perspective).

The previous model of variation, \(T_2^b = T_2^a + C\), was dedicated to T2 modification. An important alternative model of variation states that, when adding a contrast agent to a biological tissue, the effect on its T2 property is additive in the rate \(1/T_2\): \(1/T_2^b = 1/T_2^a + C_R\), and that \(C_R\) (we use this notation to distinguish it from the above C) is proportional to the contrast agent concentration. From the posterior of \(T_2\) and \(C\) designed above, its posterior writes:

\[
p\!\left(C_R = c_R \mid (s_{a,i}), (s_{b,i})\right) = p\!\left(\frac{C}{T_2\,(C + T_2)} = c_R \;\middle|\; (s_{a,i}), (s_{b,i})\right).
\]
2.3 Region-wise analysis

The models proposed previously allow a voxel-wise analysis where each voxel is processed independently from the others. A grouped inference for all the voxels of a given region (e.g. a lesion) can be performed by adding a supplemental layer to the model. Let us use j to index voxels; one can then replace each prior \(\Pi_2(c_j)\) by, for example, \(\Pi_2(c_j \mid \mu_C, \sigma_C) = \psi_N(\mu_C - c_j,\, \sigma_C^2)\) (\(\psi_N\) being the normal kernel). We then assume that for all \(i_1, i_2\) and \(j_1 \neq j_2\), \((S^{j_1}_{i_1} \mid t_2^{j_1}, m^{j_1}, \sigma^{j_1}) \perp (S^{j_2}_{i_2} \mid t_2^{j_2}, m^{j_2}, \sigma^{j_2})\) (in particular, we consider the errors as independent between voxels). For the two hyperparameters \(\mu_C\) and \(\sigma_C\), we use the weakly informative priors (see [5] for details) \(\mu_C \sim \mathcal{N}(0, 10^6)\) (approximating the uniform density over \(\mathbb{R}\)) and \(\sigma_C \sim \text{Cauchy}(x_0 = 0, \gamma = 100)\,\mathbf{1}_{\mathbb{R}^+}\) (allowing \(\sigma_C\) to go well below 400 ms), where \(\mathbf{1}_{\mathbb{R}^+}\) denotes a left-truncation. Such a model allows the set of inferences over the \((C_j)\) to be performed non-independently, thus dealing in a natural way with multiple comparisons by shrinking exceptional \(C_j\) toward the estimated region-wise distribution [6] and improving the overall inference. Depending on the expected regularity of the \(C_j\)s within the regions, we can alternatively opt for a Cauchy density and/or add an informative prior for the error variances \(\sigma^2_{a,i}\) and \(\sigma^2_{b,i}\) to favor goodness of fit. In parallel, a spatial Markovian regularisation can also be considered.
2.4 Using the extended phase graph

When the signals \((s_i)_{i=1:N}\) are obtained using multi-echo spin echo sequences (e.g. a CPMG sequence), the exponential decay function is only a rough approximation of the relation between T2 and the MR signals. Some solutions to adapt it to this situation exist [10] and could easily be added to our model. In the following, we propose a broader solution that consists in replacing the exponential decay by the Extended Phase Graph function (EPG) [9], which relates the signal \(S_i\) to two other quantities (in addition to T2 and M) so that \((S_i) = EPG(T_2, M, T_1, B_1, (\tau_i)) + \epsilon\) (\(T_1\) being the spin-lattice relaxation time, \(B_1\) the field inhomogeneity, i.e. the multiplicative departure from the nominal flip angle, and \(\epsilon\) representing the noise term of Eq. 1). This function is complicated (a product of N 3×3 matrices involving the different parameters non-linearly) and deriving a dedicated reference prior would be cumbersome. Nevertheless, it consists of small departures from the exponential function and mainly depends on M and T2. Thus a reasonable choice consists in using the same priors as those derived for the exponential function. Then, an informative prior is designed for \(B_1\). In practice, \(B_1\) typically takes 95% of its values in [0.4, 1.6] (we deliberately use this symmetric form to avoid \(B_1 = 1\) being a boundary of the prior support), so we set \(B_1 \sim \text{Gamma}(k = 8.5, \theta = 0.1)\). We did the same for \(T_1\) by choosing a gamma distribution with 95% of its density in [20, 2000] (\(T_1 \sim \text{Gamma}(2.3, 120)\)). In practice, \(T_1\) has a very small impact on the EPG and on the inference. The EPG model can then be used by simply replacing \(f_{t_2,m}(\tau_i)\) by \(EPG(t_2, m, t_1, b_1, (\tau_i))\) (prior sensitivity analyses for \(T_1\) and \(B_1\) give satisfying results).
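The calibration of the informative B1 prior is easy to double-check numerically. This is a Monte Carlo check we add for illustration; note that numpy's gamma sampler is parameterised by shape and scale, matching Gamma(k = 8.5, θ = 0.1).

```python
import numpy as np

# Fraction of the Gamma(k=8.5, theta=0.1) prior mass falling in [0.4, 1.6];
# per the stated calibration, this should be roughly 95%.
rng = np.random.default_rng(0)
b1 = rng.gamma(shape=8.5, scale=0.1, size=1_000_000)
frac_b1 = float(np.mean((b1 > 0.4) & (b1 < 1.6)))
print(round(frac_b1, 3))  # roughly 0.95
```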
3 Results on synthetic data

3.1 Implementation, datasets and convergence diagnosis

For the sake of concision, we only exhibited the likelihoods, the priors and the independence assumptions for the previous models. The resulting posteriors can be obtained using the Bayes rule.
References

The following cited works are listed on the source page; they are identified here by their in-text citation numbers:

  • [1] (cited in Section 2.1 for the reference prior on σ and (M, T2)): gives a rigorous general definition of reference priors, which produce objective Bayesian inference, and shows how an explicit expression for the reference prior can be obtained under very weak regularity conditions.
  • [5] (cited in Section 2.3 for the weakly informative hyperpriors on μC and σC): proposes "conservative" prior distributions for variance components of Bayesian hierarchical models, deliberately giving more weight to smaller values; appropriate for investigators who are skeptical about the presence of variability in the second-stage parameters (random effects).
  • [6] (cited in Section 2.3 for handling multiple comparisons by shrinkage): argues that the multiple comparisons problem can disappear entirely when viewed from a hierarchical Bayesian perspective; multilevel models perform partial pooling and yield more efficient estimates, especially in settings with low group-level variation.
  • [8] (cited in Section 2.2 for the testing perspective): explains and evaluates two Bayesian approaches to assessing null values: Bayesian model comparison using Bayes factors, and Bayesian parameter estimation assessing whether the null value falls among the most credible values.
  • [13] (cited in Section 3 for the "logarithm scaling" adaptive one-variable-at-a-time Metropolis-Hastings algorithm): investigates adaptive MCMC algorithms that automatically tune the Markov chain parameters during a run; simulations indicate they perform very well compared to non-adaptive algorithms, even in high dimension.