Journal ArticleDOI

Systematic Review of the Empirical Evidence of Study Publication Bias and Outcome Reporting Bias — An Updated Review

05 Jul 2013-PLOS ONE (Public Library of Science)-Vol. 8, Iss: 7
TL;DR: Direct empirical evidence for the existence of study publication bias and outcome reporting bias is shown, and there is strong evidence of an association between significant results and publication: studies that report positive or significant results are more likely to be published, and outcomes that are statistically significant have higher odds of being fully reported.
Abstract: Background The increased use of meta-analysis in systematic reviews of healthcare interventions has highlighted several types of bias that can arise during the completion of a randomised controlled trial. Study publication bias and outcome reporting bias have been recognised as a potential threat to the validity of meta-analysis and can make the readily available evidence unreliable for decision making. Methodology/Principal Findings In this update, we review and summarise the evidence from cohort studies that have assessed study publication bias or outcome reporting bias in randomised controlled trials. Twenty studies were eligible, of which four were newly identified in this update. Only two followed the cohort all the way through from protocol approval to information regarding publication of outcomes. Fifteen of the studies investigated study publication bias and five investigated outcome reporting bias. Three studies found that statistically significant outcomes had higher odds of being fully reported than non-significant outcomes (range of odds ratios: 2.2 to 4.7). In comparing trial publications to protocols, we found that 40–62% of studies had at least one primary outcome that was changed, introduced, or omitted. We decided not to undertake meta-analysis due to the differences between studies. Conclusions This update does not change the conclusions of the review in which 16 studies were included. Direct empirical evidence for the existence of study publication bias and outcome reporting bias is shown. There is strong evidence of an association between significant results and publication; studies that report positive or significant results are more likely to be published, and outcomes that are statistically significant have higher odds of being fully reported. Publications have been found to be inconsistent with their protocols.
Researchers need to be aware of the problems of both types of bias and efforts should be concentrated on improving the reporting of trials.
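The odds ratios quoted above (e.g. the 2.2 to 4.7 range for full reporting of significant outcomes) are standard 2×2 contingency-table statistics. A minimal sketch of how such an odds ratio and its 95% confidence interval (Woolf/log method) are computed, using hypothetical counts rather than any data from the review:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and 95% CI (Woolf/log method) for a 2x2 table:
    a = significant outcomes fully reported, b = significant not fully reported,
    c = non-significant fully reported,      d = non-significant not fully reported.
    """
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(or_) - z * se_log)
    upper = math.exp(math.log(or_) + z * se_log)
    return or_, lower, upper

# Hypothetical counts for illustration only (not taken from the review):
print(odds_ratio_ci(18, 7, 12, 24))
```

The CI is computed on the log scale because the sampling distribution of the log odds ratio is approximately normal, then exponentiated back.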
Citations
Journal ArticleDOI
TL;DR: A reporting guideline is described, the Preferred Reporting Items for Systematic reviews and Meta-Analyses for Protocols 2015 (PRISMA-P 2015), which consists of a 17-item checklist intended to facilitate the preparation and reporting of a robust protocol for the systematic review.
Abstract: Systematic reviews should build on a protocol that describes the rationale, hypothesis, and planned methods of the review; few reviews report whether a protocol exists. Detailed, well-described protocols can facilitate the understanding and appraisal of the review methods, as well as the detection of modifications to methods and selective reporting in completed reviews. We describe the development of a reporting guideline, the Preferred Reporting Items for Systematic reviews and Meta-Analyses for Protocols 2015 (PRISMA-P 2015). PRISMA-P consists of a 17-item checklist intended to facilitate the preparation and reporting of a robust protocol for the systematic review. Funders and those commissioning reviews might consider mandating the use of the checklist to facilitate the submission of relevant protocol information in funding applications. Similarly, peer reviewers and editors can use the guidance to gauge the completeness and transparency of a systematic review protocol submitted for publication in a journal or other medium.

14,708 citations

Journal ArticleDOI
02 Jan 2015-BMJ
TL;DR: The PRISMA-P checklist as mentioned in this paper provides 17 items considered to be essential and minimum components of a systematic review or meta-analysis protocol, as well as a model example from an existing published protocol.
Abstract: Protocols of systematic reviews and meta-analyses allow for planning and documentation of review methods, act as a guard against arbitrary decision making during review conduct, enable readers to assess for the presence of selective reporting against completed reviews, and, when made publicly available, reduce duplication of efforts and potentially prompt collaboration. Evidence documenting the existence of selective reporting and excessive duplication of reviews on the same or similar topics is accumulating and many calls have been made in support of the documentation and public availability of review protocols. Several efforts have emerged in recent years to rectify these problems, including development of an international register for prospective reviews (PROSPERO) and launch of the first open access journal dedicated to the exclusive publication of systematic review products, including protocols (BioMed Central's Systematic Reviews). Furthering these efforts and building on the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-analyses) guidelines, an international group of experts has created a guideline to improve the transparency, accuracy, completeness, and frequency of documented systematic review and meta-analysis protocols: PRISMA-P (for protocols) 2015. The PRISMA-P checklist contains 17 items considered to be essential and minimum components of a systematic review or meta-analysis protocol. This PRISMA-P 2015 Explanation and Elaboration paper provides readers with a full understanding of and evidence about the necessity of each item as well as a model example from an existing published protocol. This paper should be read together with the PRISMA-P 2015 statement. Systematic review authors and assessors are strongly encouraged to make use of PRISMA-P when drafting and appraising review protocols.

9,361 citations

Journal ArticleDOI
29 Mar 2021-BMJ
TL;DR: The preferred reporting items for systematic reviews and meta-analyses (PRISMA 2020) as mentioned in this paper was developed to facilitate transparent and complete reporting of systematic reviews, and has been updated to reflect recent advances in systematic review methodology and terminology.
Abstract: The methods and results of systematic reviews should be reported in sufficient detail to allow users to assess the trustworthiness and applicability of the review findings. The Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) statement was developed to facilitate transparent and complete reporting of systematic reviews and has been updated (to PRISMA 2020) to reflect recent advances in systematic review methodology and terminology. Here, we present the explanation and elaboration paper for PRISMA 2020, where we explain why reporting of each item is recommended, present bullet points that detail the reporting recommendations, and present examples from published reviews. We hope that changes to the content and structure of PRISMA 2020 will facilitate uptake of the guideline and lead to more transparent, complete, and accurate reporting of systematic reviews.

2,217 citations

Journal ArticleDOI
TL;DR: Misinterpretation and abuse of statistical tests, confidence intervals, and statistical power have been decried for decades, yet remain rampant as discussed by the authors, and there are no interpretations of these concepts that are at once simple, intuitive, correct, and foolproof. Instead, correct use and interpretation of these statistics requires an attention to detail which seems to tax the patience of working scientists.
Abstract: Misinterpretation and abuse of statistical tests, confidence intervals, and statistical power have been decried for decades, yet remain rampant. A key problem is that there are no interpretations of these concepts that are at once simple, intuitive, correct, and foolproof. Instead, correct use and interpretation of these statistics requires an attention to detail which seems to tax the patience of working scientists. This high cognitive demand has led to an epidemic of shortcut definitions and interpretations that are simply wrong, sometimes disastrously so, and yet these misinterpretations dominate much of the scientific literature. In light of this problem, we provide definitions and a discussion of basic statistics that are more general and critical than typically found in traditional introductory expositions. Our goal is to provide a resource for instructors, researchers, and consumers of statistics whose knowledge of statistical theory and technique may be limited but who wish to avoid and spot misinterpretations. We emphasize how violation of often unstated analysis protocols (such as selecting analyses for presentation based on the P values they produce) can lead to small P values even if the declared test hypothesis is correct, and can lead to large P values even if that hypothesis is incorrect. We then provide an explanatory list of 25 misinterpretations of P values, confidence intervals, and power. We conclude with guidelines for improving statistical interpretation and reporting.

1,584 citations

References
Journal ArticleDOI
TL;DR: The presence of publication bias in a cohort of clinical research studies is confirmed and it is suggested that conclusions based only on a review of published data should be interpreted cautiously, especially for observational studies.

2,800 citations


"Systematic Review of the Empirical ..." refers to this work for background, methods, or results

  • ...Easterbrook et al [24] found that, compared with unfunded studies, government-funded studies were more likely to yield statistically significant results; however, government sponsorship was not found to have a statistically significant effect on the likelihood of publication, and company-sponsored trials were less likely to be published or presented....


  • ...No information other than the study report was available for one empirical study [24] due to its age....


  • ...Status of approved protocols for Easterbrook 1991 study [24]....


  • ...Easterbrook et al [24] also found that study publication bias was greater with observational and laboratory-based experimental studies (Odds Ratio (OR) 3.79, 95% CI; 1.47, 9.76) than with RCTs (OR 0.84, 95% CI; 0.34, 2.09)....



Book
30 Nov 2000
TL;DR: The second edition of this best-selling book has been thoroughly revised and expanded to reflect the significant changes and advances made in systematic reviewing.
Abstract: The second edition of this best-selling book has been thoroughly revised and expanded to reflect the significant changes and advances made in systematic reviewing. New features include discussion on the rationale, meta-analyses of prognostic and diagnostic studies and software, and the use of systematic reviews in practice.

2,601 citations

Journal ArticleDOI
TL;DR: A systematic literature search found that among 74 FDA-registered studies, 31%, accounting for 3449 study participants, were not published, and the increase in effect size ranged from 11 to 69% for individual drugs and was 32% overall.
Abstract: Background Evidence-based medicine is valuable to the extent that the evidence base is complete and unbiased. Selective publication of clinical trials — and the outcomes within those trials — can lead to unrealistic estimates of drug effectiveness and alter the apparent risk–benefit ratio. Methods We obtained reviews from the Food and Drug Administration (FDA) for studies of 12 antidepressant agents involving 12,564 patients. We conducted a systematic literature search to identify matching publications. For trials that were reported in the literature, we compared the published outcomes with the FDA outcomes. We also compared the effect size derived from the published reports with the effect size derived from the entire FDA data set. Results Among 74 FDA-registered studies, 31%, accounting for 3449 study participants, were not published. Whether and how the studies were published were associated with the study outcome. A total of 37 studies viewed by the FDA as having positive results were published; 1 stu...

2,176 citations


Additional excerpts

  • ...Several studies investigated a cohort of trials submitted to drug licensing authorities [38,40,41,65] and all found that many of these trials remain unpublished, with one study demonstrating that trials with positive outcomes resulted more often in submission of a final report to the regulatory authority [40]....


BookDOI
05 May 2006
TL;DR: An edited volume on publication bias in meta-analysis, covering its origins, statistical methods for detecting it, and advanced approaches, with chapters including "Beyond Conventional Publication Bias: Other Determinants of Data Suppression" (Scott D Halpern and Jesse A Berlin) and "Differentiating Biases from Genuine Heterogeneity: Distinguishing Artifactual from Substantive Effects" (John PA Ioannidis).
Abstract: Contents:
Preface; Acknowledgements; Notes on Contributors
Chapter 1: Publication Bias in Meta-Analysis (Hannah R Rothstein, Alexander J Sutton and Michael Borenstein)
Part A: Publication bias in context
Chapter 2: Publication Bias: Recognizing the Problem, Understanding Its Origins and Scope, and Preventing Harm (Kay Dickersin)
Chapter 3: Preventing Publication Bias: Registries and Prospective Meta-Analysis (Jesse A Berlin and Davina Ghersi)
Chapter 4: Grey Literature and Systematic Reviews (Sally Hopewell, Mike Clarke and Sue Mallett)
Part B: Statistical methods for assessing publication bias
Chapter 5: The Funnel Plot (Jonathan AC Sterne, Betsy Jane Becker and Matthias Egger)
Chapter 6: Regression Methods to Detect Publication and Other Bias in Meta-Analysis (Jonathan AC Sterne and Matthias Egger)
Chapter 7: Failsafe N or File-Drawer Number (Betsy Jane Becker)
Chapter 8: The Trim and Fill Method (Sue Duval)
Chapter 9: Selection Method Approaches (Larry V Hedges and Jack Vevea)
Chapter 10: Evidence Concerning the Consequences of Publication and Related Biases (Alexander J Sutton)
Chapter 11: Software for Publication Bias (Michael Borenstein)
Part C: Advanced and emerging approaches
Chapter 12: Bias in Meta-Analysis Induced by Incompletely Reported Studies (Alexander J Sutton and Therese D Pigott)
Chapter 13: Assessing the Evolution of Effect Sizes over Time (Thomas A Trikalinos and John PA Ioannidis)
Chapter 14: Do Systematic Reviews Based on Individual Patient Data Offer a Means of Circumventing Biases Associated with Trial Publications? (Lesley Stewart, Jayne Tierney and Sarah Burdett)
Chapter 15: Differentiating Biases from Genuine Heterogeneity: Distinguishing Artifactual from Substantive Effects (John PA Ioannidis)
Chapter 16: Beyond Conventional Publication Bias: Other Determinants of Data Suppression (Scott D Halpern and Jesse A Berlin)
Appendix A: Data Sets; Appendix B: Annotated Bibliography (Hannah R Rothstein and Ashley Busing)
Glossary; Index
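Among the statistical methods the volume surveys, Chapter 6 covers regression methods for detecting funnel-plot asymmetry, such as Egger's test. A rough sketch of the core idea, not the book's implementation: regress each study's standardized effect (effect/SE) on its precision (1/SE); an intercept far from zero suggests small-study asymmetry consistent with publication bias.

```python
def egger_intercept(effects, ses):
    """Egger-style regression: standardized effect against precision.

    Returns (intercept, slope) from ordinary least squares of
    y_i = effect_i / se_i on x_i = 1 / se_i. A non-zero intercept
    hints at funnel-plot asymmetry.
    """
    xs = [1 / s for s in ses]
    ys = [e / s for e, s in zip(effects, ses)]
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    sxx = sum((x - mean_x) ** 2 for x in xs)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    slope = sxy / sxx
    intercept = mean_y - slope * mean_x
    return intercept, slope

# Symmetric toy data: every study estimates the same effect (0.5),
# so the fitted line passes through the origin (intercept ~ 0).
ses = [0.1, 0.2, 0.3, 0.4]
effects = [0.5] * len(ses)
print(egger_intercept(effects, ses))
```

In practice the intercept's standard error and a t-test would be used to judge significance; this sketch only shows the regression itself.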

1,876 citations


"Systematic Review of the Empirical ..." refers to this work for background

  • ...While much effort has been invested in trying to identify the former [2], it is equally important to understand the nature and frequency of missing data from the latter level....


  • ...Study publication bias arises when studies are published or not depending on their results; it has received much attention [1,2]....


Journal ArticleDOI
26 May 2004-JAMA
TL;DR: The reporting of trial outcomes is not only frequently incomplete but also biased and inconsistent with protocols. Published articles, as well as reviews that incorporate them, may therefore be unreliable and overestimate the benefits of an intervention.
Abstract: ContextSelective reporting of outcomes within published studies based on the nature or direction of their results has been widely suspected, but direct evidence of such bias is currently limited to case reports.ObjectiveTo study empirically the extent and nature of outcome reporting bias in a cohort of randomized trials.DesignCohort study using protocols and published reports of randomized trials approved by the Scientific-Ethical Committees for Copenhagen and Frederiksberg, Denmark, in 1994-1995. The number and characteristics of reported and unreported trial outcomes were recorded from protocols, journal articles, and a survey of trialists. An outcome was considered incompletely reported if insufficient data were presented in the published articles for meta-analysis. Odds ratios relating the completeness of outcome reporting to statistical significance were calculated for each trial and then pooled to provide an overall estimate of bias. Protocols and published articles were also compared to identify discrepancies in primary outcomes.Main Outcome MeasuresCompleteness of reporting of efficacy and harm outcomes and of statistically significant vs nonsignificant outcomes; consistency between primary outcomes defined in the most recent protocols and those defined in published articles.ResultsOne hundred two trials with 122 published journal articles and 3736 outcomes were identified. Overall, 50% of efficacy and 65% of harm outcomes per trial were incompletely reported. Statistically significant outcomes had a higher odds of being fully reported compared with nonsignificant outcomes for both efficacy (pooled odds ratio, 2.4; 95% confidence interval [CI], 1.4-4.0) and harm (pooled odds ratio, 4.7; 95% CI, 1.8-12.0) data. In comparing published articles with protocols, 62% of trials had at least 1 primary outcome that was changed, introduced, or omitted. 
Eighty-six percent of survey responders (42/49) denied the existence of unreported outcomes despite clear evidence to the contrary.ConclusionsThe reporting of trial outcomes is not only frequently incomplete but also biased and inconsistent with protocols. Published articles, as well as reviews that incorporate them, may therefore be unreliable and overestimate the benefits of an intervention. To ensure transparency, planned trials should be registered and protocols should be made publicly available prior to trial completion.
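The pooled odds ratios reported above (2.4 for efficacy, 4.7 for harm) combine per-trial estimates into a single summary. A generic inverse-variance, fixed-effect pooling of odds ratios on the log scale, sketched with made-up inputs (the paper's own pooling method may differ in detail):

```python
import math

def pool_odds_ratios(ors, ses):
    """Inverse-variance fixed-effect pooling of odds ratios.

    ors: per-trial odds ratios; ses: standard errors of the log odds ratios.
    Returns (pooled OR, 95% CI lower, 95% CI upper).
    """
    weights = [1 / s**2 for s in ses]
    pooled_log = sum(w * math.log(o) for w, o in zip(weights, ors)) / sum(weights)
    pooled_se = 1 / math.sqrt(sum(weights))
    return (math.exp(pooled_log),
            math.exp(pooled_log - 1.96 * pooled_se),
            math.exp(pooled_log + 1.96 * pooled_se))

# Made-up trial-level estimates for illustration only:
print(pool_odds_ratios([2.0, 3.0, 2.5], [0.4, 0.5, 0.3]))
```

Pooling on the log scale weights each trial by the inverse of its variance, so more precise trials pull the summary estimate harder.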

1,638 citations


"Systematic Review of the Empirical ..." refers to this work for background or methods

  • ...The main objective of the study by Blumle et al [31] was to consider how eligibility criteria stated in protocols were reported in subsequent reports; in doing so, they noted that 52% of studies in their cohort were published, decreasing to 48% for RCTs only....


  • ...Four cohorts included only RCTs [17,18,25,29]; in the remaining cohort [26] the proportion of included RCTs was 13%....


  • ...Those studies containing exclusively non-RCTs were excluded....


  • ...Four of the empirical studies [17,25,26,29] assessed protocols approved by ethics committees and one empirical study [18] assessed those approved by a health institute....


  • ...Both cohorts containing exclusively RCTs or containing a mix of RCTs and non-RCTs were eligible....
