
Confidence Intervals for the Overall Effect Size in
Random-Effects Meta-Analysis
Julio Sánchez-Meca and Fulgencio Marín-Martínez
University of Murcia
One of the main objectives in meta-analysis is to estimate the overall effect size by calculating a confidence interval (CI). The usual procedure consists of assuming a standard normal distribution and a sampling variance defined as the inverse of the sum of the estimated weights of the effect sizes. But this procedure does not take into account the uncertainty due to the fact that the heterogeneity variance (τ²) and the within-study variances have to be estimated, leading to CIs that are too narrow, with the consequence that the actual coverage probability is smaller than the nominal confidence level. In this article, the performances of 3 alternatives to the standard CI procedure are examined under a random-effects model and 8 different τ² estimators to estimate the weights: the t distribution CI, the weighted variance CI (with an improved variance), and the quantile approximation method (recently proposed). The results of a Monte Carlo simulation showed that the weighted variance CI outperformed the other methods regardless of the τ² estimator, the value of τ², the number of studies, and the sample size.
Keywords: meta-analysis, random-effects model, confidence intervals, heterogeneity variance, standardized mean difference
Meta-analysis is a research methodology that aims to
integrate, by applying statistical methods, the results of a set
of empirical studies about a given topic. To accomplish its
purpose, a meta-analysis requires a thorough search of the
relevant studies, and the results of each individual study
have to be translated into the same metric (Cooper, 1998;
Lipsey & Wilson, 2001). Depending on such study charac-
teristics as the design type and how the variables implied
were measured, the meta-analyst has to select one of the
different effect-size indices and apply it to all of the studies
of the meta-analysis (Grissom & Kim, 2005). So, when the
dependent variable is continuous and the purpose of each
study is to compare the performance between two groups,
the standardized mean difference is the most usual effect-
size index (Cooper, 1998; Hedges & Olkin, 1985). If the
dependent variable is dichotomous or has been dichoto-
mized, then effect-size indices such as an odds ratio (or its
log transformation), a risk ratio (or its log transformation),
or a risk difference can be applied (Egger, Smith, & Altman,
2001; Haddock, Rindskopf, & Shadish, 1998; Sánchez-Meca, Marín-Martínez, & Chacón-Moscoso, 2003). If all of
the variables are continuous, then an effect-size index from
the r family can be applied, such as the Pearson correlation
coefficient or its Fisher’s Z transformation (Hunter &
Schmidt, 2004; Rosenthal, 1991; Rosenthal, Rosnow, &
Rubin, 2000).
In general, the statistical analysis usually applied in meta-
analysis has three main objectives: (a) to estimate the over-
all effect size of the population to which the studies pertain;
(b) to assess if the heterogeneity found among the effect
estimates can be explained by chance alone or if, on the
contrary, the individual studies exhibited true heterogeneity,
that is, variability produced by real differences among the
population effect sizes; and, (c) if heterogeneity cannot be
explained by sampling error alone, to search for study
characteristics that could operate as moderator variables of
the effect estimates. Our focus in this article was the first
objective, that is, to estimate the population effect size.
To estimate the population effect size from a set of
individual studies, an average of the effect estimates is
calculated by weighting each one of them by its inverse
variance, and a confidence interval (CI) is thus obtained
Julio Sánchez-Meca and Fulgencio Marín-Martínez, Department of Basic Psychology and Methodology, Faculty of Psychology, Espinardo Campus, University of Murcia, Murcia, Spain.
This article was supported by a grant from the Ministerio de Educación y Ciencia of the Spanish Government and by Fondo Europeo de Desarrollo Regional funds for Project No. SEJ2004-07278/PSIC.
Correspondence concerning this article should be addressed to Julio Sánchez-Meca, Department of Basic Psychology and Methodology, Faculty of Psychology, Espinardo Campus, University of Murcia, 30100-Murcia, Spain. E-mail: jsmeca@um.es
Psychological Methods
2008, Vol. 13, No. 1, 31–48
Copyright 2008 by the American Psychological Association
1082-989X/08/$12.00 DOI: 10.1037/1082-989X.13.1.31

around it. Most of the effect-size indices usually applied in
meta-analysis are approximately normally distributed and
their sampling variances can be easily estimated by simple
algebraic formulas (Fleiss, 1994; Rosenthal, 1994; Shadish
& Haddock, 1994). As a consequence, meta-analyses typi-
cally calculate a CI for the overall effect size assuming a
standard normal distribution to estimate the population ef-
fect size, with the sampling variance estimated as the in-
verse of the sum of the estimated weights. This procedure
performs well when the effect estimates obtained in the
studies differ among themselves only by sampling error,
that is, when the effect estimates assume a fixed-effects
model or the heterogeneity variance is small. However,
when the underlying statistical model in the meta-analysis is
a random-effects model, the empirical coverage probability
of this CI for the average effect size systematically under-
estimates the nominal confidence level (Brockwell & Gor-
don, 2001, 2007; Sidik & Jonkman, 2002).
In recent years, the random-effects model has been con-
sidered the most realistic statistical model in meta-analysis
(Field, 2001, 2003; Hedges & Vevea, 1998; Overton, 1998;
Raudenbush, 1994). Therefore, to obtain CIs for the overall
effect size with a good coverage probability is an important
issue. Our purpose in writing this article was to compare the
performances of three alternative CI procedures with that
based on the standard normal distribution to estimate the
overall effect size when the underlying statistical model is a
random-effects model. Moreover, we also examined
whether different heterogeneity variance estimators affect
the coverage probability of the CIs for the overall effect
size. Thus, we started from the idea that a good CI proce-
dure to estimate an overall effect size should offer good
coverage, that is, close to nominal, and the coverage should
not be affected by the value of the heterogeneity variance,
by the heterogeneity variance estimator used in the meta-
analysis, or by the number of studies. The four CI proce-
dures analyzed here are very simple to calculate, not requir-
ing iterative numerical computation. Other methods of
obtaining CIs that are computationally more complex and
are not addressed here are those of Biggerstaff and Tweedie
(1997) or the profile likelihood method of Hardy and
Thompson (1996).
The Random-Effects Model
Let k be the number of independent empirical studies about a given topic and θ̂ᵢ be the effect-size estimate obtained in the ith study. The underlying statistical model can be represented as

θ̂ᵢ = θᵢ + eᵢ, (1)

where eᵢ is the sampling error of θ̂ᵢ. Usually eᵢ is assumed to be normally distributed, eᵢ ~ N(0, σᵢ²), with σᵢ² being the within-study variance. The random-effects model assumes that each single study estimates its own parametric effect size θᵢ and, as a consequence, θᵢ constitutes a random variable with mean μ and between-studies variance τ². The between-studies variance, also named heterogeneity variance, represents the variability between the estimated effect sizes due not to within-study sampling error but to true heterogeneity among the studies. In other words, the heterogeneity variance represents the variability produced by the influence of the differential characteristics of the studies, such as the design quality, the characteristics of the subjects in the samples, or differences in the program implementation. This implies that each parametric effect size, θᵢ, can be decomposed as

θᵢ = μ + εᵢ, (2)

with εᵢ representing the difference between the parametric effect size of the ith study, θᵢ, and the parametric mean, μ. The errors εᵢ are usually assumed to be normally distributed, with heterogeneity variance τ², εᵢ ~ N(0, τ²). It is also assumed that the errors eᵢ and εᵢ are independent. So, combining Equations 1 and 2 enables us to formulate the random-effects model as

θ̂ᵢ = μ + eᵢ + εᵢ, (3)

and, as a consequence, the estimated effect sizes θ̂ᵢ are assumed to be normally distributed with mean μ and variance τ² + σᵢ², θ̂ᵢ ~ N(μ, τ² + σᵢ²).
When there is no true heterogeneity, the between-studies variance is zero, τ² = 0, and the random-effects model becomes a fixed-effects model; that is, all of the individual studies estimate the same parametric effect size, θ₁ = θ₂ = . . . = θₖ = μ = θ. In this case, Equation 3 simplifies to θ̂ᵢ = θ + eᵢ, and the effect estimates θ̂ᵢ are assumed to be normally distributed with mean θ and variance σᵢ², θ̂ᵢ ~ N(θ, σᵢ²). Thus, the fixed-effects model can be considered a particular case of the random-effects model when differences among the effect estimates are only due to sampling error. Both models, those of random and fixed effects, can be extended to include moderator variables. They are not presented here, however, as our purpose is to compare the performance of different procedures to calculate a CI around the overall effect size.
CIs for the Overall Effect Size
One of the main objectives in meta-analysis is to obtain an average effect-size estimate from a set of independent effect-size estimates and to calculate a CI around it to estimate the parametric effect size, μ. In practice, the studies included in a meta-analysis have different sample sizes and, as a consequence, the precision of the effect-size estimates varies among them. A good estimator of the mean parametric effect size should take into account the precision of the effect estimates. The most usual procedure to achieve this objective consists of weighting each effect-size estimate by its inverse variance. In a random-effects model, the uniformly minimum variance unbiased estimator (UMVU) of μ is given by

μ̂_UMVU = Σᵢ wᵢ θ̂ᵢ / Σᵢ wᵢ (4)

(Viechtbauer, 2005), with wᵢ being the optimal or true weights, wᵢ = 1/(τ² + σᵢ²). The sampling variance of μ̂_UMVU is given by

V(μ̂_UMVU) = 1 / Σᵢ wᵢ. (5)

If, in a meta-analysis, the population sampling variance of each study, σᵢ², and the population heterogeneity variance, τ², are known, then μ̂_UMVU can be calculated and, as it is asymptotically normally distributed, a 100(1 − α)% CI assuming a standard normal distribution can be calculated by

μ̂_UMVU ± z_{1−α/2} √V(μ̂_UMVU), (6)

where z_{1−α/2} is the 100(1 − α/2) percentile of the standard normal distribution, α being the significance level.
The z Distribution CI
In practice, neither the parametric heterogeneity variance, τ², nor the parametric sampling variances of the single studies, σᵢ², are known. Therefore, they have to be estimated from the data reported in the studies. This means that Equation 6 cannot ever be applied. For most of the effect-size indices usually applied in meta-analysis, unbiased estimators of the sampling variance, σ̂ᵢ², have been derived, and several estimators can be found in the literature to estimate the heterogeneity variance in a meta-analysis, τ̂² (Sidik & Jonkman, 2007; Viechtbauer, 2005).
Once we have an unbiased sampling variance estimator, σ̂ᵢ², to be applied in each study and a heterogeneity variance estimator, τ̂², the optimal weights, wᵢ, can be estimated by ŵᵢ = 1/(τ̂² + σ̂ᵢ²). Therefore, the formula for estimating the parametric mean effect size, μ, in meta-analysis is given by

μ̂ = Σᵢ ŵᵢ θ̂ᵢ / Σᵢ ŵᵢ, (7)

and its sampling variance is usually estimated as

V̂(μ̂) = 1 / Σᵢ ŵᵢ. (8)

The typical procedure to calculate a CI around an overall effect size assumes a standard normal distribution and estimates the sampling variance of μ̂ by Equation 8. Here we refer to this procedure as the z distribution CI, which is obtained by

μ̂ ± z_{1−α/2} √V̂(μ̂). (9)
However, this procedure does not take into account the uncertainty produced by the fact that the within-study and the between-studies variances have to be estimated (Biggerstaff & Tweedie, 1997). As Sidik and Jonkman (2003) have contended, “The normality assumption for μ̂ is not strictly true in practice (nor is V̂(μ̂) the true variance), because the ŵᵢ values are estimates. Nonetheless, this is the commonly used practice for constructing CIs” (p. 1196). The main consequence of assuming a standard normal distribution to obtain a CI for μ̂ with Equation 9 is that its actual coverage probability is smaller than the nominal confidence level, the width of the CI being too narrow. As Viechtbauer (2005) has shown, estimating the optimal weights, wᵢ, using unbiased estimates of τ² and σᵢ²

results in an estimate of the sampling variance of μ̂ that is negatively biased. As a consequence of this negative bias, the sampling variance of μ̂ will be underestimated on average, and researchers will attribute unwarranted precision to their estimate of μ. (p. 263)

Moreover, several Monte Carlo studies have shown that the underestimation of the nominal confidence level with the z distribution CI is more severe as the between-studies variance increases and as the number of studies decreases. The z distribution CI only presents good coverage probability in meta-analyses with a large number of studies and very little or zero heterogeneity variance (Brockwell & Gordon, 2001, 2007; Follmann & Proschan, 1999; Hartung & Makambi, 2003; Makambi, 2004; Sidik & Jonkman, 2002, 2003, 2005, 2006).
The t Distribution CI
To solve the problems of coverage probability with the z distribution CI, it has been proposed in the literature (Follmann & Proschan, 1999; Hartung & Makambi, 2002) to assume a Student t reference distribution with k − 1 degrees of freedom, instead of the standard normal distribution, and to estimate the sampling variance of μ̂ with Equation 8:

μ̂ ± t_{k−1,1−α/2} √V̂(μ̂), (10)

with t_{k−1,1−α/2} being the 100(1 − α/2) percentile of the t distribution with k − 1 degrees of freedom. Here we refer to this procedure as the t distribution CI. Using a t distribution produces CIs that are wider than those of the standard normal distribution, in particular for meta-analyses with a small number of studies, and, consequently, this should improve the coverage probability, as Follmann and Proschan (1999) have found.
The Weighted Variance CI
One procedure that has not yet been widely used in meta-analysis is that proposed by Hartung (1999), which consists of calculating a CI for the overall effect size assuming a Student t distribution with k − 1 degrees of freedom and estimating the sampling variance of μ̂ with a weighted extension of the usual formula, V̂_w(μ̂):

V̂_w(μ̂) = Σᵢ ŵᵢ (θ̂ᵢ − μ̂)² / [(k − 1) Σᵢ ŵᵢ], (11)

where ŵᵢ = 1/(τ̂² + σ̂ᵢ²) and μ̂ is the overall effect size defined in Equation 7 assuming a random-effects model. It can be shown that the statistic (μ̂ − μ)/√V̂_w(μ̂) is approximately distributed as a t distribution with k − 1 degrees of freedom (Hartung, 1999; Sidik & Jonkman, 2002). Therefore, a CI around the overall effect size can be computed by

μ̂ ± t_{k−1,1−α/2} √V̂_w(μ̂). (12)

Following Sidik and Jonkman (2003, 2006), here we refer to this procedure as the weighted variance CI. Previous simulations seem to show good coverage for this procedure when the effect-size index is the log odds ratio (Makambi, 2004; Sidik & Jonkman, 2002, 2006), the standardized mean difference (Sidik & Jonkman, 2003), and the unstandardized mean difference and the risk difference (Hartung & Makambi, 2003). In particular, the weighted variance CI offers a better coverage probability than the z distribution CI except when the between-studies variance is zero, τ² = 0 (Hartung, 1999; Hartung & Makambi, 2003; Sidik & Jonkman, 2002, 2003).
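The weighted variance CI of Equations 11 and 12 is equally simple to compute. The following Python sketch (our own naming; the t quantile comes from scipy) mirrors the z distribution example above, changing only the variance formula and the reference distribution.

```python
import numpy as np
from scipy.stats import t

def weighted_variance_ci(theta_hat, var_within, tau2_hat, alpha=0.05):
    """Hartung's (1999) weighted variance CI (Equations 11 and 12)."""
    k = len(theta_hat)
    w_hat = 1.0 / (tau2_hat + var_within)
    mu_hat = np.sum(w_hat * theta_hat) / np.sum(w_hat)   # Equation 7
    # Equation 11: weighted extension of the usual variance formula
    var_w = np.sum(w_hat * (theta_hat - mu_hat) ** 2) / ((k - 1) * np.sum(w_hat))
    half = t.ppf(1 - alpha / 2, k - 1) * np.sqrt(var_w)  # Equation 12
    return mu_hat - half, mu_hat + half
```

Because V̂_w(μ̂) grows with the observed dispersion of the θ̂ᵢ around μ̂, the interval automatically widens when the studies are heterogeneous, which is what drives its better coverage.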
The Quantile Approximation (QA) Method
The fourth method of calculating a CI for the overall effect size that is included in this study has been recently proposed by Brockwell and Gordon (2007). The method consists of approximating, by means of intensive computation, the quantiles of the distribution of the statistic M = (μ̂ − μ)/√V̂(μ̂) and then using the 100(1 − α/2)% percentile of the M distribution to calculate a CI for the overall effect size by

μ̂ ± b_{1−α/2} √V̂(μ̂) (13)

(Brockwell & Gordon, 2007, p. 4538), where V̂(μ̂) is the usual formula to estimate the sampling variance of μ̂, defined in Equation 8, and b_{1−α/2} is the 100(1 − α/2)% percentile of the distribution of M empirically approached by Monte Carlo simulation. Unlike the other three procedures for calculating a CI for the overall effect size in a random-effects meta-analysis, the critical values in the Brockwell and Gordon (2007) method are obtained by simulating thousands of meta-analyses from a random-effects model and varying the number of studies between 2 and 30 and the heterogeneity variance between 0 and 0.5. The effect-size index that they used in the simulations was the log odds ratio, as it is a very common effect estimator in the medical literature. Once Brockwell and Gordon (2007) obtained the observed values for the quantiles 100(α/2)% and 100(1 − α/2)% of the M statistic, they adjusted a regression equation for the quantiles as a function of the number of studies, k:

b_{1−α/2} = 2.061 + 4.902/k + 0.756/√k − 0.958/ln(k) (14)

(Brockwell & Gordon, 2007, p. 4538). Thus, the critical values, b_{1−α/2}, to be used in the CI formula (Equation 13) of Brockwell and Gordon (2007) are estimated from Equation 14. For example, if a meta-analysis has k = 10 studies, then the corresponding critical value for a 95% nominal confidence level is b_.975 = 2.374. Here we refer to this procedure as the QA method. Brockwell and Gordon (2007) have found a better performance of this procedure than those of the z and t distribution CIs, using the DerSimonian and Laird (1986) estimator of the heterogeneity variance, but they did not compare the QA method with the weighted variance CI.
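Equation 14 is a plain closed-form expression, so the QA critical value needs only a few lines of Python (the function name is ours). Evaluating it at k = 10 reproduces the worked example in the text.

```python
import math

def qa_critical_value_975(k):
    """b_{.975} from Brockwell and Gordon's fitted regression (Equation 14),
    for a 95% nominal confidence level."""
    return 2.061 + 4.902 / k + 0.756 / math.sqrt(k) - 0.958 / math.log(k)

round(qa_critical_value_975(10), 3)  # 2.374, the worked example in the text
```

As expected for a t-like quantile, the critical value shrinks as the number of studies grows, approaching the normal quantile for large k.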
Heterogeneity Variance Estimators
To calculate a CI around the overall effect size in a
meta-analysis where a random-effects model is assumed, an
estimate of the heterogeneity variance is needed. Although
meta-analyses typically use the heterogeneity variance es-
timator proposed by DerSimonian and Laird (1986), alter-
native estimators have been proposed that seem to offer
better properties than the usual estimator. Some of the
alternatives are based on noniterative estimation proce-
dures, whereas others require iterative computations. Dif-
ferent heterogeneity variance estimators differ in respect to
such statistical properties as bias and mean square error
(Sidik & Jonkman, 2007; Viechtbauer, 2005, 2007), and an
issue that has not yet been widely studied is whether the
selection of the heterogeneity variance estimator has an
effect on the performance of different CIs for the overall
effect size. Next, we present formulas to calculate eight
different heterogeneity variance estimators that could be
used to obtain CIs for the overall effect size under a ran-
dom-effects model.
Hunter and Schmidt (HS) Estimator
Hunter and Schmidt (1990, pp. 285–286; see also Hunter & Schmidt, 2004, pp. 287–288) proposed to estimate the heterogeneity variance by calculating the difference between the total variance of the effect estimates and an average of the estimated within-study variances, σ̂ᵢ². A simplified formula of this estimator is given by

τ̂²_HS = (Q − k) / Σᵢ ŵᵢ^FE, (15)

where ŵᵢ^FE = 1/σ̂ᵢ² is the inverse variance of the ith study assuming a fixed-effects model, with σ̂ᵢ² being the within-study variance estimate for the ith study. Q is the heterogeneity statistic usually applied to test the homogeneity hypothesis (Hedges & Olkin, 1985):

Q = Σᵢ ŵᵢ^FE (θ̂ᵢ − μ̂_FE)², (16)

with μ̂_FE being the mean effect size, assuming a fixed-effects model; that is,

μ̂_FE = Σᵢ ŵᵢ^FE θ̂ᵢ / Σᵢ ŵᵢ^FE. (17)

If Q < k, then τ̂²_HS is negative and, as a consequence, it has to be truncated to zero.
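Equations 15–17 can be sketched in Python as follows (function name and inputs are ours; `var_within` holds the σ̂ᵢ² values).

```python
import numpy as np

def tau2_hs(theta_hat, var_within):
    """Hunter-Schmidt heterogeneity variance estimator (Equations 15-17)."""
    w_fe = 1.0 / var_within                          # fixed-effects weights, 1/sigma_hat_i^2
    mu_fe = np.sum(w_fe * theta_hat) / np.sum(w_fe)  # fixed-effects mean (Equation 17)
    q = np.sum(w_fe * (theta_hat - mu_fe) ** 2)      # Q statistic (Equation 16)
    k = len(theta_hat)
    return max((q - k) / np.sum(w_fe), 0.0)          # Equation 15, truncated at zero
```

Note the truncation: with perfectly homogeneous estimates, Q = 0 < k and the estimator returns zero rather than a negative variance.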
Hedges (HE) Estimator
The HE estimator of the population heterogeneity variance consists of calculating the difference between an unweighted estimate of the total variance of the effect sizes and an unweighted estimate of the average within-study variance (Hedges, 1983, p. 391; see also Hedges & Olkin, 1985, p. 194):

τ̂²_HE = Σᵢ (θ̂ᵢ − θ̂_UW)² / (k − 1) − (1/k) Σᵢ σ̂ᵢ², (18)

where θ̂_UW is an unweighted mean of the effect sizes,

θ̂_UW = Σᵢ θ̂ᵢ / k. (19)

As τ̂²_HE is not a nonnegative heterogeneity variance estimator, it has to be truncated to zero when τ̂²_HE < 0.
DerSimonian and Laird (DL) Estimator
The heterogeneity variance estimator usually applied in the meta-analytic literature is that proposed by DerSimonian and Laird (1986). This estimator, which is based on the method of moments, consists of estimating the population heterogeneity variance by

τ̂²_DL = [Q − (k − 1)] / c, (20)

where Q is the heterogeneity statistic defined in Equation 16 and c is given by

c = Σᵢ ŵᵢ^FE − Σᵢ (ŵᵢ^FE)² / Σᵢ ŵᵢ^FE. (21)

When Q < (k − 1), then τ̂²_DL is negative and, like τ̂²_HS and τ̂²_HE, it has to be truncated to zero.
Malzahn, Bo¨hning, and Holling (MBH) Estimator
Malzahn, Böhning, and Holling (2000) proposed a moment-based nonparametric estimator of the population heterogeneity variance specifically designed to be used only with the standardized mean difference, d. It is also based on the difference of an estimate of the total variance of the d indices and an estimate of the average within-study variance of the d indices. It is obtained by

τ̂²_MBH = Σᵢ (1 − λᵢ)(θ̂ᵢ − μ̂_FE)² / (k − 1) − (1/k) Σᵢ Nᵢ/(n_Ei n_Ci) − (1/k) Σᵢ λᵢ θ̂ᵢ² (22)

(Malzahn et al., 2000, p. 622; see also Malzahn, 2003), with Nᵢ = n_Ei + n_Ci being the total sample size of the ith study; μ̂_FE was defined in Equation 17, θ̂ᵢ is the d index for the ith study, and λᵢ is given by

λᵢ = 1 − (Nᵢ − 4) / {[c(mᵢ)]² (Nᵢ − 2)}, (23)

with c(mᵢ) being the correction factor of the d index for small sample sizes, defined in Equation 33. Applications of this estimator are limited to meta-analyses where the effect-size index is the d index. When τ̂²_MBH has a negative value, it is truncated to zero.