Journal ArticleDOI

Conducting Meta-Analyses in R with the metafor Package

05 Aug 2010-Journal of Statistical Software (University of California Press)-Vol. 36, Iss: 3, pp 1-48
TL;DR: The metafor package provides functions for conducting meta-analyses in R and includes functions for fitting the meta-analytic fixed- and random-effects models and allows for the inclusion of moderator variables (study-level covariates) in these models.
Abstract: The metafor package provides functions for conducting meta-analyses in R. The package includes functions for fitting the meta-analytic fixed- and random-effects models and allows for the inclusion of moderator variables (study-level covariates) in these models. Meta-regression analyses with continuous and categorical moderators can be conducted in this way. Functions for the Mantel-Haenszel and Peto's one-step method for meta-analyses of 2 x 2 table data are also available. Finally, the package provides various plot functions (for example, for forest, funnel, and radial plots) and functions for assessing the model fit, for obtaining case diagnostics, and for tests of publication bias.
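The fixed-effect model mentioned in the abstract pools study estimates by inverse-variance weighting. The paper itself works in R with metafor; the following is only a minimal illustrative sketch in Python, with made-up effect sizes and a hypothetical function name:

```python
import math

def fixed_effect_pool(yi, vi):
    """Inverse-variance fixed-effect pooling (illustrative sketch).

    yi: study effect estimates; vi: their sampling variances.
    Returns the pooled estimate, its standard error, and a 95% CI.
    """
    w = [1.0 / v for v in vi]                       # inverse-variance weights
    est = sum(wi * y for wi, y in zip(w, yi)) / sum(w)
    se = math.sqrt(1.0 / sum(w))                    # SE of the weighted mean
    ci = (est - 1.96 * se, est + 1.96 * se)
    return est, se, ci

# small worked example with invented effect sizes and variances
est, se, ci = fixed_effect_pool([0.3, 0.5, 0.4], [0.04, 0.09, 0.05])
```

In metafor the analogous fit would come from its model-fitting routines rather than hand-rolled weights; the sketch only shows the arithmetic behind the weighting.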
Citations
Journal ArticleDOI
28 Aug 2015-Science
TL;DR: A large-scale assessment suggests that experimental reproducibility in psychology leaves a lot to be desired, and correlational tests suggest that replication success was better predicted by the strength of original evidence than by characteristics of the original and replication teams.
Abstract: Reproducibility is a defining feature of science, but the extent to which it characterizes current research is unknown. We conducted replications of 100 experimental and correlational studies published in three psychology journals using high-powered designs and original materials when available. Replication effects were half the magnitude of original effects, representing a substantial decline. Ninety-seven percent of original studies had statistically significant results. Thirty-six percent of replications had statistically significant results; 47% of original effect sizes were in the 95% confidence interval of the replication effect size; 39% of effects were subjectively rated to have replicated the original result; and if no bias in original results is assumed, combining original and replication results left 68% with statistically significant effects. Correlational tests suggest that replication success was better predicted by the strength of original evidence than by characteristics of the original and replication teams.

5,532 citations


Cites methods from "Conducting Meta-Analyses in R with ..."

  • ...We conducted fixed-effect meta-analyses using the R package metafor (27) on Fisher-transformed correlations for all study-pairs in subset MA and on study-pairs with the odds ratio as the dependent variable....

    [...]
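The quoted methods pool Fisher-transformed correlations. A minimal sketch of that transformation and a fixed-effect pool on the z scale, using invented correlations and sample sizes (not the Reproducibility Project's data):

```python
import math

def fisher_z(r):
    """Fisher's variance-stabilizing transformation of a correlation."""
    return 0.5 * math.log((1 + r) / (1 - r))

def pool_correlations(rs, ns):
    """Fixed-effect pooling of correlations on the Fisher-z scale.

    rs: correlations; ns: sample sizes. Var(z) is approximately
    1/(n - 3), so the inverse-variance weight is n - 3.
    Returns the pooled correlation, back-transformed to the r scale.
    """
    zs = [fisher_z(r) for r in rs]
    ws = [n - 3 for n in ns]
    z_pool = sum(w * z for w, z in zip(ws, zs)) / sum(ws)
    return math.tanh(z_pool)            # inverse of the Fisher transform

r_pooled = pool_correlations([0.2, 0.35, 0.15], [50, 80, 40])
```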

Journal ArticleDOI
TL;DR: A review of 13 years of research into antecedents of university students' grade point average (GPA) scores generated a comprehensive, conceptual map of known correlates of tertiary GPA; assessment of the magnitude of average, weighted correlations with GPA; and tests of multivariate models of GPA correlates within and across research domains.
Abstract: A review of 13 years of research into antecedents of university students' grade point average (GPA) scores generated the following: a comprehensive, conceptual map of known correlates of tertiary GPA; assessment of the magnitude of average, weighted correlations with GPA; and tests of multivariate models of GPA correlates within and across research domains. A systematic search of PsycINFO and Web of Knowledge databases between 1997 and 2010 identified 7,167 English-language articles yielding 241 data sets, which reported on 50 conceptually distinct correlates of GPA, including 3 demographic factors and 5 traditional measures of cognitive capacity or prior academic performance. In addition, 42 non-intellective constructs were identified from 5 conceptually overlapping but distinct research domains: (a) personality traits, (b) motivational factors, (c) self-regulatory learning strategies, (d) students' approaches to learning, and (e) psychosocial contextual influences. We retrieved 1,105 independent correlations and analyzed data using hypothesis-driven, random-effects meta-analyses. Significant average, weighted correlations were found for 41 of 50 measures. Univariate analyses revealed that demographic and psychosocial contextual factors generated, at best, small correlations with GPA. Medium-sized correlations were observed for high school GPA, SAT, ACT, and A level scores. Three non-intellective constructs also showed medium-sized correlations with GPA: academic self-efficacy, grade goal, and effort regulation. A large correlation was observed for performance self-efficacy, which was the strongest correlate (of 50 measures) followed by high school GPA, ACT, and grade goal. Implications for future research, student assessment, and intervention design are discussed.

2,370 citations


Cites methods from "Conducting Meta-Analyses in R with ..."

  • ...For all analyses, we used the package Metafor in R (Viechtbauer, 2010), Field and Gillett’s (2010) macros, and Cheung’s (2009) LISREL syntax generator....

    [...]

Journal ArticleDOI
TL;DR: The findings suggest that mental disorders affect a significant number of children and adolescents worldwide, and the pooled prevalence estimates and the identification of sources of heterogeneity have important implications for service, training, and research planning around the world.
Abstract: Background The literature on the prevalence of mental disorders affecting children and adolescents has expanded significantly over the last three decades around the world. Despite the field having matured significantly, there has been no meta-analysis to calculate a worldwide-pooled prevalence and to empirically assess the sources of heterogeneity of estimates. Methods We conducted a systematic review of the literature searching in PubMed, PsycINFO, and EMBASE for prevalence studies of mental disorders investigating probabilistic community samples of children and adolescents with standardized assessment methods that derive diagnoses according to the DSM or ICD. Meta-analytical techniques were used to estimate the prevalence rates of any mental disorder and individual diagnostic groups. A meta-regression analysis was performed to estimate the effect of population and sample characteristics, study methods, assessment procedures, and case definition in determining the heterogeneity of estimates. Results We included 41 studies conducted in 27 countries from every world region. The worldwide-pooled prevalence of mental disorders was 13.4% (CI 95% 11.3–15.9). The worldwide prevalence of any anxiety disorder was 6.5% (CI 95% 4.7–9.1), any depressive disorder was 2.6% (CI 95% 1.7–3.9), attention-deficit hyperactivity disorder was 3.4% (CI 95% 2.6–4.5), and any disruptive disorder was 5.7% (CI 95% 4.0–8.1). Significant heterogeneity was detected for all pooled estimates. The multivariate meta-regression analyses indicated that sample representativeness, sample frame, and diagnostic interview were significant moderators of prevalence estimates. Estimates did not vary as a function of geographic location of studies and year of data collection. The multivariate model explained 88.89% of prevalence heterogeneity, but residual heterogeneity was still significant.
Additional meta-analysis detected a significant pooled difference in prevalence rates according to whether functional impairment was required for the diagnosis of mental disorders. Conclusions Our findings suggest that mental disorders affect a significant number of children and adolescents worldwide. The pooled prevalence estimates and the identification of sources of heterogeneity have important implications for service, training, and research planning around the world.

2,219 citations

References
Journal ArticleDOI
13 Sep 1997-BMJ
TL;DR: Funnel plots, plots of the trials' effect estimates against sample size, are skewed and asymmetrical in the presence of publication bias and other biases Funnel plot asymmetry, measured by regression analysis, predicts discordance of results when meta-analyses are compared with single large trials.
Abstract: Objective: Funnel plots (plots of effect estimates against sample size) may be useful to detect bias in meta-analyses that were later contradicted by large trials. We examined whether a simple test of asymmetry of funnel plots predicts discordance of results when meta-analyses are compared to large trials, and we assessed the prevalence of bias in published meta-analyses. Design: Medline search to identify pairs consisting of a meta-analysis and a single large trial (concordance of results was assumed if effects were in the same direction and the meta-analytic estimate was within 30% of the trial); analysis of funnel plots from 37 meta-analyses identified from a hand search of four leading general medicine journals 1993-6 and 38 meta-analyses from the second 1996 issue of the Cochrane Database of Systematic Reviews. Main outcome measure: Degree of funnel plot asymmetry as measured by the intercept from regression of standard normal deviates against precision. Results: In the eight pairs of meta-analysis and large trial that were identified (five from cardiovascular medicine, one from diabetic medicine, one from geriatric medicine, one from perinatal medicine) there were four concordant and four discordant pairs. In all cases discordance was due to meta-analyses showing larger effects. Funnel plot asymmetry was present in three out of four discordant pairs but in none of the concordant pairs. In 14 (38%) journal meta-analyses and 5 (13%) Cochrane reviews, funnel plot asymmetry indicated that there was bias. Conclusions: A simple analysis of funnel plots provides a useful test for the likely presence of bias in meta-analyses, but as the capacity to detect bias will be limited when meta-analyses are based on a limited number of small trials, the results from such analyses should be treated with considerable caution.
Key messages:
  • Systematic reviews of randomised trials are the best strategy for appraising evidence; however, the findings of some meta-analyses were later contradicted by large trials
  • Funnel plots, plots of the trials' effect estimates against sample size, are skewed and asymmetrical in the presence of publication bias and other biases
  • Funnel plot asymmetry, measured by regression analysis, predicts discordance of results when meta-analyses are compared with single large trials
  • Funnel plot asymmetry was found in 38% of meta-analyses published in leading general medicine journals and in 13% of reviews from the Cochrane Database of Systematic Reviews
  • Critical examination of systematic reviews for publication and related biases should be considered a routine procedure
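The asymmetry measure described above is the intercept from regressing each trial's standard normal deviate (effect divided by its standard error) on its precision (the reciprocal of the standard error). A minimal sketch with invented effects and standard errors, using the closed-form simple-regression intercept:

```python
def egger_intercept(yi, sei):
    """Funnel-plot asymmetry in the style of this regression test:
    regress z = y/se on precision p = 1/se; an intercept far from
    zero suggests asymmetry. Illustrative sketch, simple OLS only
    (no standard error or p-value for the intercept is computed).
    """
    z = [y / s for y, s in zip(yi, sei)]   # standard normal deviates
    p = [1.0 / s for s in sei]             # precisions
    n = len(z)
    pbar = sum(p) / n
    zbar = sum(z) / n
    slope = (sum((pi - pbar) * (zi - zbar) for pi, zi in zip(p, z))
             / sum((pi - pbar) ** 2 for pi in p))
    return zbar - slope * pbar             # the regression intercept

b0 = egger_intercept([0.8, 0.5, 0.3, 0.25], [0.4, 0.25, 0.15, 0.1])
```

In a symmetric funnel, small imprecise studies scatter evenly around the pooled effect and the intercept stays near zero; here the smallest studies carry the largest effects, so the intercept is pulled upward.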

37,989 citations

Journal ArticleDOI
TL;DR: This paper examines eight published reviews each reporting results from several related trials in order to evaluate the efficacy of a certain treatment for a specified medical condition and suggests a simple noniterative procedure for characterizing the distribution of treatment effects in a series of studies.

33,234 citations


"Conducting Meta-Analyses in R with ..." refers methods in this paper

  • ...…Hunter-Schmidt estimator (Hunter and Schmidt 2004), the Hedges estimator (Hedges and Olkin 1985; Raudenbush 2009), the DerSimonian-Laird estimator (DerSimonian and Laird 1986; Raudenbush 2009), the Sidik-Jonkman estimator (Sidik and Jonkman 2005a,b), the maximum-likelihood or restricted…...

    [...]
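Among the between-study variance estimators listed in the quoted passage, the DerSimonian-Laird moment estimator is the simple noniterative one. A minimal sketch with invented data (metafor exposes these estimators through its own fitting interface; this only shows the formula):

```python
def dersimonian_laird_tau2(yi, vi):
    """DerSimonian-Laird moment estimator of between-study variance.

    Uses Cochran's Q under fixed-effect weights w = 1/v:
    tau2 = max(0, (Q - df) / (sum(w) - sum(w^2)/sum(w))).
    """
    w = [1.0 / v for v in vi]
    ybar = sum(wi * y for wi, y in zip(w, yi)) / sum(w)
    q = sum(wi * (y - ybar) ** 2 for wi, y in zip(w, yi))
    df = len(yi) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    return max(0.0, (q - df) / c)          # truncate at zero

tau2 = dersimonian_laird_tau2([0.1, 0.6, 0.4, 0.9],
                              [0.04, 0.05, 0.03, 0.06])
```

The random-effects pooled estimate then reuses inverse-variance weighting with weights 1/(v_i + tau2) instead of 1/v_i.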

Journal ArticleDOI
TL;DR: It is concluded that H and I2, which can usually be calculated for published meta-analyses, are particularly useful summaries of the impact of heterogeneity, and one or both should be presented in published meta-analyses in preference to the test for heterogeneity.
Abstract: The extent of heterogeneity in a meta-analysis partly determines the difficulty in drawing overall conclusions. This extent may be measured by estimating a between-study variance, but interpretation is then specific to a particular treatment effect metric. A test for the existence of heterogeneity exists, but depends on the number of studies in the meta-analysis. We develop measures of the impact of heterogeneity on a meta-analysis, from mathematical criteria, that are independent of the number of studies and the treatment effect metric. We derive and propose three suitable statistics: H is the square root of the chi2 heterogeneity statistic divided by its degrees of freedom; R is the ratio of the standard error of the underlying mean from a random effects meta-analysis to the standard error of a fixed effect meta-analytic estimate, and I2 is a transformation of (H) that describes the proportion of total variation in study estimates that is due to heterogeneity. We discuss interpretation, interval estimates and other properties of these measures and examine them in five example data sets showing different amounts of heterogeneity. We conclude that H and I2, which can usually be calculated for published meta-analyses, are particularly useful summaries of the impact of heterogeneity. One or both should be presented in published meta-analyses in preference to the test for heterogeneity.

25,460 citations


"Conducting Meta-Analyses in R with ..." refers background in this paper

  • ...Various measures for facilitating the interpretation of the estimated amount of heterogeneity were suggested by Higgins and Thompson (2002). The I2 statistic estimates (in percent) how much of the total variability in the effect size estimates (which is composed of heterogeneity and sampling variability) can be attributed to heterogeneity among the true effects (τ̂2 = 0 therefore implies I2 = 0%)....

    [...]
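Both H and I2, as defined in the abstract above, follow directly from Cochran's Q statistic and its degrees of freedom. A minimal sketch (Q and df supplied as invented inputs rather than computed from study data):

```python
import math

def heterogeneity_measures(q, df):
    """Heterogeneity summaries from Cochran's Q (Higgins-Thompson).

    H  = sqrt(Q / df): how much Q exceeds its expectation under
         homogeneity (H = 1 means no excess).
    I2 = (Q - df) / Q as a percentage: the share of total variation
         attributable to heterogeneity, truncated at 0 when Q < df.
    """
    h = math.sqrt(q / df)
    i2 = max(0.0, (q - df) / q) * 100.0
    return h, i2

h, i2 = heterogeneity_measures(q=30.0, df=9)
```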

Journal ArticleDOI
TL;DR: In this paper, the role and limitations of retrospective investigations of factors possibly associated with the occurrence of a disease are discussed and their relationship to forward-type studies emphasized, and examples of situations in which misleading associations could arise through the use of inappropriate control groups are presented.
Abstract: The role and limitations of retrospective investigations of factors possibly associated with the occurrence of a disease are discussed and their relationship to forward-type studies emphasized. Examples of situations in which misleading associations could arise through the use of inappropriate control groups are presented. The possibility of misleading associations may be minimized by controlling or matching on factors which could produce such associations; the statistical analysis will then be modified. Statistical methodology is presented for analyzing retrospective study data, including chi-square measures of statistical significance of the observed association between the disease and the factor under study, and measures for interpreting the association in terms of an increased relative risk of disease. An extension of the chi-square test to the situation where data are subclassified by factors controlled in the analysis is given. A summary relative risk formula, R, is presented and discussed in connection with the problem of weighting the individual subcategory relative risks according to their importance or their precision. Alternative relative-risk formulas, R1, R2, R3, and R4, which require the calculation of subcategory-adjusted proportions of the study factor among diseased persons and controls for the computation of relative risks, are discussed. While these latter formulas may be useful in many instances, they may be biased or inconsistent and are not, in fact, averages of the relative risks observed in the separate subcategories. Only the relative-risk formula, R, of those presented, can be viewed as such an average. The relationship of the matched-sample method to the subclassification approach is indicated. The statistical methodology presented is illustrated with examples from a study of women with epidermoid and undifferentiated pulmonary carcinoma. J. Nat. Cancer Inst. 22: 719-748, 1959.

14,433 citations


"Conducting Meta-Analyses in R with ..." refers methods in this paper

  • ...When analyzing odds ratios, the Cochran-Mantel-Haenszel test (Mantel and Haenszel 1959; Cochran 1985) and Tarone’s test for heterogeneity (Tarone 1985) are also provided....

    [...]

  • ...Alternative methods for fitting the fixed-effects model for 2 × 2 table data are the Mantel-Haenszel and Peto’s one-step method (Mantel and Haenszel 1959; Yusuf et al. 1985)....

    [...]
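The Mantel-Haenszel summary measure referenced in both quotes weights each stratum's odds ratio by a term of the form b*c/n. A minimal sketch of the pooled odds ratio over 2x2 tables, with invented counts:

```python
def mantel_haenszel_or(tables):
    """Mantel-Haenszel pooled odds ratio across 2x2 tables.

    Each table is (a, b, c, d): exposed cases, exposed controls,
    unexposed cases, unexposed controls, with stratum total
    n = a + b + c + d.  OR_MH = sum(a*d/n) / sum(b*c/n).
    """
    num = sum(a * d / (a + b + c + d) for a, b, c, d in tables)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in tables)
    return num / den

# two invented strata of 2x2 counts
or_mh = mantel_haenszel_or([(10, 20, 5, 40), (4, 16, 2, 28)])
```

Unlike a crude odds ratio computed on the collapsed table, this estimator keeps the stratification, which is exactly the protection against confounding the abstract argues for.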

Journal ArticleDOI
TL;DR: In this paper, an adjusted rank correlation test is proposed as a technique for identifying publication bias in a meta-analysis, and its operating characteristics are evaluated via simulations, and the test statistic is a direct statistical analogue of the popular funnel-graph.
Abstract: An adjusted rank correlation test is proposed as a technique for identifying publication bias in a meta-analysis, and its operating characteristics are evaluated via simulations. The test statistic is a direct statistical analogue of the popular "funnel-graph." The number of component studies in the meta-analysis, the nature of the selection mechanism, the range of variances of the effect size estimates, and the true underlying effect size are all observed to be influential in determining the power of the test. The test is fairly powerful for large meta-analyses with 75 component studies, but has only moderate power for meta-analyses with 25 component studies. However, in many of the configurations in which there is low power, there is also relatively little bias in the summary effect size estimate. Nonetheless, the test must be interpreted with caution in small meta-analyses. In particular, bias cannot be ruled out if the test is not significant. The proposed technique has potential utility as an exploratory tool for meta-analysts, as a formal procedure to complement the funnel-graph.

13,373 citations
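The adjusted rank correlation described in this abstract is, in the Begg-Mazumdar formulation, a Kendall correlation between standardized effect deviates and their sampling variances. A minimal sketch with invented effects and variances (no tie correction or normal approximation for the p-value):

```python
import math

def begg_rank_correlation(yi, vi):
    """Adjusted rank correlation for publication bias (Begg-Mazumdar
    style sketch). Each effect is standardized against the
    fixed-effect mean using v_i* = v_i - 1/sum(1/v_j); Kendall's tau
    is then taken between the standardized deviates and variances.
    """
    w = [1.0 / v for v in vi]
    ybar = sum(wi * y for wi, y in zip(w, yi)) / sum(w)
    vstar = [v - 1.0 / sum(w) for v in vi]
    t = [(y - ybar) / math.sqrt(vs) for y, vs in zip(yi, vstar)]
    conc = disc = 0                     # concordant / discordant pairs
    for i in range(len(t)):
        for j in range(i + 1, len(t)):
            s = (t[i] - t[j]) * (vi[i] - vi[j])
            if s > 0:
                conc += 1
            elif s < 0:
                disc += 1
    n = len(t)
    return (conc - disc) / (n * (n - 1) / 2)

# invented pattern: larger effects in the less precise studies
tau = begg_rank_correlation([0.2, 0.4, 0.7, 0.9],
                            [0.01, 0.04, 0.09, 0.16])
```

A tau near zero is consistent with a symmetric funnel; in this contrived example effect size rises with variance, so every pair is concordant and tau is 1.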