Journal ArticleDOI

Operating characteristics of a rank correlation test for publication bias.

01 Dec 1994-Biometrics-Vol. 50, Iss. 4, pp. 1088-1101
TL;DR: In this paper, an adjusted rank correlation test is proposed as a technique for identifying publication bias in a meta-analysis, and its operating characteristics are evaluated via simulations; the test statistic is a direct statistical analogue of the popular funnel graph.
Abstract: An adjusted rank correlation test is proposed as a technique for identifying publication bias in a meta-analysis, and its operating characteristics are evaluated via simulations. The test statistic is a direct statistical analogue of the popular "funnel-graph." The number of component studies in the meta-analysis, the nature of the selection mechanism, the range of variances of the effect size estimates, and the true underlying effect size are all observed to be influential in determining the power of the test. The test is fairly powerful for large meta-analyses with 75 component studies, but has only moderate power for meta-analyses with 25 component studies. However, in many of the configurations in which there is low power, there is also relatively little bias in the summary effect size estimate. Nonetheless, the test must be interpreted with caution in small meta-analyses. In particular, bias cannot be ruled out if the test is not significant. The proposed technique has potential utility as an exploratory tool for meta-analysts, as a formal procedure to complement the funnel-graph.
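The adjusted rank correlation test described in the abstract can be sketched in a few lines of Python. This is an illustrative reconstruction, not the authors' code: each effect estimate is standardized against the variance-weighted pooled mean, and Kendall's statistic between the standardized effects and their variances is referred to a normal approximation. Function and variable names are assumptions.

```python
import math

def begg_test(effects, variances):
    """Adjusted rank correlation (funnel-plot analogue) test sketch.

    effects:   per-study effect size estimates
    variances: per-study variances of those estimates
    Returns a z statistic and a two-sided p-value.
    """
    n = len(effects)
    weights = [1.0 / v for v in variances]
    # Variance-weighted pooled estimate of the overall effect.
    pooled = sum(w * t for w, t in zip(weights, effects)) / sum(weights)
    # Variance of each study's deviation from the pooled estimate.
    var_dev = [v - 1.0 / sum(weights) for v in variances]
    std = [(t - pooled) / math.sqrt(vd) for t, vd in zip(effects, var_dev)]
    # Kendall statistic: concordant minus discordant pairs between the
    # standardized effects and their variances.
    s = 0
    for i in range(n):
        for j in range(i + 1, n):
            prod = (std[j] - std[i]) * (variances[j] - variances[i])
            s += (prod > 0) - (prod < 0)
    # Normal approximation to the null distribution (assumes no ties).
    z = s / math.sqrt(n * (n - 1) * (2 * n + 5) / 18.0)
    p = math.erfc(abs(z) / math.sqrt(2.0))  # two-sided p-value
    return z, p
```

A significant positive correlation between standardized effect and variance suggests that imprecise studies report systematically different effects, i.e. funnel-plot asymmetry.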
Citations
Journal ArticleDOI
13 Sep 1997-BMJ
TL;DR: Funnel plots, plots of the trials' effect estimates against sample size, are skewed and asymmetrical in the presence of publication bias and other biases. Funnel plot asymmetry, measured by regression analysis, predicts discordance of results when meta-analyses are compared with single large trials.
Abstract: Objective: Funnel plots (plots of effect estimates against sample size) may be useful to detect bias in meta-analyses that were later contradicted by large trials. We examined whether a simple test of asymmetry of funnel plots predicts discordance of results when meta-analyses are compared with large trials, and we assessed the prevalence of bias in published meta-analyses. Design: Medline search to identify pairs consisting of a meta-analysis and a single large trial (concordance of results was assumed if effects were in the same direction and the meta-analytic estimate was within 30% of the trial); analysis of funnel plots from 37 meta-analyses identified from a hand search of four leading general medicine journals 1993-6 and 38 meta-analyses from the second 1996 issue of the Cochrane Database of Systematic Reviews. Main outcome measure: Degree of funnel plot asymmetry as measured by the intercept from regression of standard normal deviates against precision. Results: In the eight pairs of meta-analysis and large trial that were identified (five from cardiovascular medicine, one from diabetic medicine, one from geriatric medicine, one from perinatal medicine) there were four concordant and four discordant pairs. In all cases discordance was due to meta-analyses showing larger effects. Funnel plot asymmetry was present in three of the four discordant pairs but in none of the concordant pairs. In 14 (38%) journal meta-analyses and 5 (13%) Cochrane reviews, funnel plot asymmetry indicated that there was bias. Conclusions: A simple analysis of funnel plots provides a useful test for the likely presence of bias in meta-analyses, but as the capacity to detect bias will be limited when meta-analyses are based on a limited number of small trials, the results from such analyses should be treated with considerable caution.
Key messages: Systematic reviews of randomised trials are the best strategy for appraising evidence; however, the findings of some meta-analyses were later contradicted by large trials. Funnel plots, plots of the trials' effect estimates against sample size, are skewed and asymmetrical in the presence of publication bias and other biases. Funnel plot asymmetry, measured by regression analysis, predicts discordance of results when meta-analyses are compared with single large trials. Funnel plot asymmetry was found in 38% of meta-analyses published in leading general medicine journals and in 13% of reviews from the Cochrane Database of Systematic Reviews. Critical examination of systematic reviews for publication and related biases should be considered a routine procedure.
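The asymmetry measure used in this study, the intercept from regressing each trial's standard normal deviate (effect divided by its standard error) on its precision (one over the standard error), can be sketched as follows. This is an illustrative Python reconstruction under those definitions, not the authors' code; names are assumptions.

```python
def egger_intercept(effects, std_errors):
    """Intercept from OLS regression of standard normal deviates
    (effect / SE) on precision (1 / SE).

    An intercept far from zero indicates funnel-plot asymmetry.
    """
    snd = [t / se for t, se in zip(effects, std_errors)]   # standard normal deviates
    prec = [1.0 / se for se in std_errors]                  # precisions
    n = len(snd)
    mean_x = sum(prec) / n
    mean_y = sum(snd) / n
    # Ordinary least squares slope and intercept.
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(prec, snd))
             / sum((x - mean_x) ** 2 for x in prec))
    return mean_y - slope * mean_x
```

In a symmetric funnel the regression line passes near the origin (small trials scatter around the same underlying effect), so the intercept estimates the asymmetry attributable to small-study effects.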

37,989 citations

Journal ArticleDOI
TL;DR: In this review the usual methods applied in systematic reviews and meta-analyses are outlined, and the most common procedures for combining studies with binary outcomes are described, illustrating how they can be done using Stata commands.

31,656 citations

Book
23 Sep 2019
TL;DR: The Cochrane Handbook for Systematic Reviews of Interventions is the official document that describes in detail the process of preparing and maintaining Cochrane systematic reviews on the effects of healthcare interventions.
Abstract: The Cochrane Handbook for Systematic Reviews of Interventions is the official document that describes in detail the process of preparing and maintaining Cochrane systematic reviews on the effects of healthcare interventions.

21,235 citations

Journal ArticleDOI
TL;DR: The metafor package provides functions for conducting meta-analyses in R, includes functions for fitting the meta-analytic fixed- and random-effects models, and allows for the inclusion of moderator variables (study-level covariates) in these models.
Abstract: The metafor package provides functions for conducting meta-analyses in R. The package includes functions for fitting the meta-analytic fixed- and random-effects models and allows for the inclusion of moderator variables (study-level covariates) in these models. Meta-regression analyses with continuous and categorical moderators can be conducted in this way. Functions for the Mantel-Haenszel and Peto's one-step method for meta-analyses of 2 x 2 table data are also available. Finally, the package provides various plot functions (for example, for forest, funnel, and radial plots) and functions for assessing the model fit, for obtaining case diagnostics, and for tests of publication bias.
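The random-effects model that packages like metafor fit is commonly estimated with the DerSimonian-Laird method: a between-study variance is estimated from Cochran's Q and folded into the pooling weights. The minimal Python sketch below is illustrative of that estimator only; it is not metafor's R API, and the names are assumptions.

```python
def dersimonian_laird(effects, variances):
    """DerSimonian-Laird random-effects pooling sketch.

    Returns (pooled effect, its standard error, between-study variance).
    """
    w = [1.0 / v for v in variances]                      # fixed-effect weights
    fixed = sum(wi * t for wi, t in zip(w, effects)) / sum(w)
    # Cochran's Q heterogeneity statistic.
    q = sum(wi * (t - fixed) ** 2 for wi, t in zip(w, effects))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    # Method-of-moments between-study variance, truncated at zero.
    tau2 = max((q - (len(effects) - 1)) / c, 0.0)
    # Random-effects weights fold tau2 into each study's variance.
    w_star = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * t for wi, t in zip(w_star, effects)) / sum(w_star)
    se = (1.0 / sum(w_star)) ** 0.5
    return pooled, se, tau2
```

When the studies are homogeneous (Q below its degrees of freedom), tau2 truncates to zero and the estimate reduces to the fixed-effect pooled mean.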

11,237 citations

Journal ArticleDOI
TL;DR: In this paper, a rank-based data augmentation technique is proposed for estimating the number of missing studies that might exist in a meta-analysis and the effect that these studies might have had on its outcome.
Abstract: We study recently developed nonparametric methods for estimating the number of missing studies that might exist in a meta-analysis and the effect that these studies might have had on its outcome. These are simple rank-based data augmentation techniques, which formalize the use of funnel plots. We show that they provide effective and relatively powerful tests for evaluating the existence of such publication bias. After adjusting for missing studies, we find that the point estimate of the overall effect size is approximately correct and coverage of the effect size confidence intervals is substantially improved, in many cases recovering the nominal confidence levels entirely. We illustrate the trim and fill method on existing meta-analyses of studies in clinical trials and psychometrics.
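One step of the rank-based estimation described in this abstract can be sketched as follows. This is a simplified illustration of an L0-style estimator of the number of suppressed studies, assuming an unweighted center and no ties; the full trim-and-fill procedure iterates trimming, re-centering, and re-estimating, which is not shown here.

```python
def l0_missing(effects):
    """Sketch of a rank-based estimate of suppressed studies.

    Ranks the absolute deviations from the center and compares the
    rank sum on the positive side of the funnel with its expectation
    under symmetry. Returns a non-negative integer estimate.
    """
    n = len(effects)
    center = sum(effects) / n              # simple (unweighted) center
    dev = [t - center for t in effects]
    # Rank the absolute deviations, smallest = rank 1 (stable order).
    order = sorted(range(n), key=lambda i: abs(dev[i]))
    ranks = [0] * n
    for r, i in enumerate(order, start=1):
        ranks[i] = r
    # Sum of ranks on the positive (right-hand) side of the funnel.
    t_n = sum(r for r, d in zip(ranks, dev) if d > 0)
    # Excess of the positive-side rank sum over its symmetric expectation.
    l0 = (4.0 * t_n - n * (n + 1)) / (2.0 * n - 1.0)
    return max(int(round(l0)), 0)
```

For a funnel that is symmetric about its center, the positive-side rank sum is near half the total and the estimate is zero; a pile-up of high ranks on one side drives the estimate up.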

9,163 citations

References
Journal ArticleDOI
TL;DR: The presence of publication bias in a cohort of clinical research studies is confirmed and it is suggested that conclusions based only on a review of published data should be interpreted cautiously, especially for observational studies.

2,800 citations

Book
24 Jan 1984
TL;DR: A guide to reviewing research that covers organizing a reviewing strategy, quantitative procedures, and the division of labor between numbers and narrative, closing with a checklist for evaluating reviews.
Abstract (contents): 1. Introduction. 2. Organizing a Reviewing Strategy. 3. Quantitative Procedures. 4. Numbers and Narrative: The Division of Labor. 5. What We Have Learned. 6. A Checklist for Evaluating Reviews. Reference Index.

1,480 citations

Journal ArticleDOI
TL;DR: A summing up of the science of reviewing research.

1,316 citations

Journal ArticleDOI
15 Jan 1992-JAMA
TL;DR: There was evidence of publication bias: for both institutional review boards, there was an association between results reported to be significant and publication. Contrary to popular opinion, publication bias originates primarily with investigators, not journal editors.
Abstract: Objective. —To investigate factors associated with the publication of research findings, in particular, the association between "significant" results and publication. Design. —Follow-up study. Setting. —Studies approved in 1980 or prior to 1980 by the two institutional review boards that serve The Johns Hopkins Health Institutions—one that serves the School of Medicine and Hospital and the other that serves the School of Hygiene and Public Health. Population. —A total of 737 studies were followed up. Results. —Of the studies for which analyses had been reported as having been performed at the time of interview, 81% from the School of Medicine and Hospital and 66% from the School of Hygiene and Public Health had been published. Publication was not associated with sample size, presence of a comparison group, or type of study (eg, observational study vs clinical trial). External funding and multiple data collection sites were positively associated with publication. There was evidence of publication bias in that for both institutional review boards there was an association between results reported to be significant and publication (adjusted odds ratio, 2.54; 95% confidence interval, 1.63 to 3.94). Contrary to popular opinion, publication bias originates primarily with investigators, not journal editors: only six of the 124 studies not published were reported to have been rejected for publication. Conclusion. —There is a statistically significant association between significant results and publication. (JAMA. 1992;267:374-378)

838 citations

Journal ArticleDOI
TL;DR: In this paper, the authors review the available research, discuss alternative suggestions for conducting unbiased meta-analysis and suggest some scientific policy measures which could improve the quality of published data in the long term.
Abstract: Publication bias, the phenomenon in which studies with positive results are more likely to be published than studies with negative results, is a serious problem in the interpretation of scientific research. Various hypothetical models have been studied which clarify the potential for bias and highlight characteristics which make a study especially susceptible to bias. Empirical investigations have supported the hypothesis that bias exists and have provided a quantitative assessment of the magnitude of the problem. The use of meta‐analysis as a research tool has focused attention on the issue, since naive methodologies in this area are especially susceptible to bias. In this paper we review the available research, discuss alternative suggestions for conducting unbiased meta‐analysis and suggest some scientific policy measures which could improve the quality of published data in the long term.

744 citations