Journal ArticleDOI

Does the inclusion of grey literature influence estimates of intervention effectiveness reported in meta-analyses?

07 Oct 2000-The Lancet (Elsevier)-Vol. 356, Iss: 9237, pp 1228-1231
TL;DR: Whether exclusion of grey literature, compared with its inclusion in meta-analysis, provides different estimates of the effectiveness of interventions assessed in randomised trials is examined.
About: This article was published in The Lancet on 2000-10-07 and has received 709 citations to date. The article focuses on the topic: Grey literature.
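The comparison at the heart of the paper can be made concrete with a small sketch: pool trial effect estimates by fixed-effect inverse-variance weighting, once over published trials only and once with grey-literature trials added, then compare the pooled odds ratios. The numbers below are invented for illustration and are not data from the paper.

```python
# Minimal sketch (invented numbers, not data from the paper): fixed-effect
# inverse-variance pooling of log odds ratios, with and without grey literature.
import math

# (log odds ratio, standard error) per trial
published = [(-0.40, 0.15), (-0.35, 0.20), (-0.50, 0.25)]
grey = [(-0.10, 0.30), (0.05, 0.35)]  # grey trials often report weaker effects

def pool(trials):
    """Fixed-effect inverse-variance pooled log OR and its standard error."""
    weights = [1 / se ** 2 for _, se in trials]
    pooled = sum(w * y for w, (y, _) in zip(weights, trials)) / sum(weights)
    return pooled, math.sqrt(1 / sum(weights))

for label, trials in [("published only", published),
                      ("published + grey", published + grey)]:
    y, se = pool(trials)
    print(f"{label}: OR = {math.exp(y):.2f} "
          f"(95% CI {math.exp(y - 1.96 * se):.2f} to {math.exp(y + 1.96 * se):.2f})")
```

With these invented inputs the pooled effect moves toward the null once the grey trials are included, which is the kind of shift the paper set out to measure.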
Citations
Book
23 Sep 2019
TL;DR: The Cochrane Handbook for Systematic Reviews of Interventions is the official document that describes in detail the process of preparing and maintaining Cochrane systematic reviews on the effects of healthcare interventions.

21,235 citations

Journal ArticleDOI
TL;DR: A measurement tool for the 'assessment of multiple systematic reviews' (AMSTAR) was developed that consists of 11 items and has good face and content validity for measuring the methodological quality of systematic reviews.
Abstract: Our objective was to develop an instrument to assess the methodological quality of systematic reviews, building upon previous tools, empirical evidence and expert consensus. A 37-item assessment tool was formed by combining 1) the enhanced Overview Quality Assessment Questionnaire (OQAQ), 2) a checklist created by Sacks, and 3) three additional items recently judged to be of methodological importance. This tool was applied to 99 paper-based and 52 electronic systematic reviews. Exploratory factor analysis was used to identify underlying components. The results were considered by methodological experts using a nominal group technique aimed at item reduction and design of an assessment tool with face and content validity. The factor analysis identified 11 components. From each component, one item was selected by the nominal group. The resulting instrument was judged to have face and content validity. A measurement tool for the 'assessment of multiple systematic reviews' (AMSTAR) was developed. The tool consists of 11 items and has good face and content validity for measuring the methodological quality of systematic reviews. Additional studies are needed with a focus on the reproducibility and construct validity of AMSTAR, before strong recommendations can be made on its use.
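The item-reduction step described above (factor analysis, then one item per component) can be sketched as follows; the scikit-learn library and the simulated ratings matrix are assumptions for illustration, not the authors' actual procedure.

```python
# Hedged sketch of the item-reduction idea: run exploratory factor analysis on
# a reviews x items quality-rating matrix and keep the highest-loading item per
# factor. Data and library choice (scikit-learn) are illustrative assumptions.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
ratings = rng.integers(0, 2, size=(151, 37)).astype(float)  # 151 reviews, 37 items

fa = FactorAnalysis(n_components=11, random_state=0).fit(ratings)
loadings = fa.components_  # shape: (11 factors, 37 items)

# One representative item per factor: the item with the largest absolute loading.
selected = sorted({int(np.argmax(np.abs(row))) for row in loadings})
print("candidate items for the short instrument:", selected)
```

In the actual study the final selection was made by a nominal group of experts rather than mechanically; the sketch only shows how factor loadings narrow the candidate set.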

3,583 citations


Cites background from "Does the inclusion of grey literatu..."

  • ...The importance of including grey literature in all systematic reviews has been discussed [21]....

    [...]

Journal ArticleDOI
TL;DR: The inability of case-mix adjustment methods to compensate for selection bias and the inability to identify non-randomised studies that are free of selection bias indicate that non-randomised studies should only be undertaken when RCTs are infeasible or unethical.
Abstract: OBJECTIVES: To consider methods and related evidence for evaluating bias in non-randomised intervention studies. DATA SOURCES: Systematic reviews and methodological papers were identified from a search of electronic databases, handsearches of key medical journals, and contact with experts working in the field. New empirical studies were conducted using data from two large randomised clinical trials. METHODS: Three systematic reviews and new empirical investigations were conducted. The reviews considered, in regard to non-randomised studies: (1) the existing evidence of bias; (2) the content of quality assessment tools; and (3) the ways that study quality has been assessed and addressed. The empirical investigations were conducted by generating non-randomised studies from two large, multicentre randomised controlled trials (RCTs) and selectively resampling trial participants according to allocated treatment, centre and period. RESULTS: In the systematic reviews, eight studies compared results of randomised and non-randomised studies across multiple interventions using meta-epidemiological techniques. A total of 194 tools were identified that could be or had been used to assess non-randomised studies. Sixty tools covered at least five of six pre-specified internal validity domains. Fourteen tools covered three of four core items of particular importance for non-randomised studies. Six tools were thought suitable for use in systematic reviews. Of 511 systematic reviews that included non-randomised studies, only 169 (33%) assessed study quality. Sixty-nine reviews investigated the impact of quality on study results in a quantitative manner. The new empirical studies estimated the bias associated with non-random allocation and found that the bias could lead to consistent over- or underestimations of treatment effects; the bias also increased variation in results for both historical and concurrent controls, owing to haphazard differences in case-mix between groups. The biases were large enough to lead studies falsely to conclude significant findings of benefit or harm. Four strategies for case-mix adjustment were evaluated: none adequately adjusted for bias in historically and concurrently controlled studies. Logistic regression on average increased bias. Propensity score methods performed better, but were not satisfactory in most situations. Detailed investigation revealed that adequate adjustment can only be achieved in the unrealistic situation when selection depends on a single factor. CONCLUSIONS: Results of non-randomised studies sometimes, but not always, differ from results of randomised studies of the same intervention. Non-randomised studies may still give seriously misleading results when treated and control groups appear similar in key prognostic factors. Standard methods of case-mix adjustment do not guarantee removal of bias. Residual confounding may be high even when good prognostic data are available, and in some situations adjusted results may appear more biased than unadjusted results. Although many quality assessment tools exist and have been used for appraising non-randomised studies, most omit key quality domains. Healthcare policies based upon non-randomised studies or systematic reviews of non-randomised studies may need re-evaluation if the uncertainty in the true evidence base was not fully appreciated when policies were made.
The inability of case-mix adjustment methods to compensate for selection bias and our inability to identify non-randomised studies that are free of selection bias indicate that non-randomised studies should only be undertaken when RCTs are infeasible or unethical. Recommendations for further research include: applying the resampling methodology in other clinical areas to ascertain whether the biases described are typical; developing or refining existing quality assessment tools for non-randomised studies; investigating how quality assessments of non-randomised studies can be incorporated into reviews and the implications of individual quality features for interpretation of a review's results; examination of the reasons for the apparent failure of case-mix adjustment methods; and further evaluation of the role of the propensity score.
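As a rough illustration of one case-mix adjustment strategy evaluated in the report, here is a minimal propensity-score sketch: estimate each participant's probability of treatment from covariates with logistic regression, then reweight the outcome comparison. The data are simulated and the library choice (scikit-learn) is an assumption, not the study's actual analysis code.

```python
# Minimal sketch of propensity-score adjustment: logistic-regression propensity
# scores followed by inverse-probability weighting. Simulated data only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2000
x = rng.normal(size=(n, 3))                               # prognostic covariates (case mix)
p_treat = 1 / (1 + np.exp(-(x[:, 0] + 0.5 * x[:, 1])))    # selection depends on case mix
treated = rng.random(n) < p_treat
outcome = 0.3 * treated + x[:, 0] + rng.normal(size=n)    # true effect = 0.3

ps = LogisticRegression().fit(x, treated).predict_proba(x)[:, 1]
w = np.where(treated, 1 / ps, 1 / (1 - ps))               # IPW weights

naive = outcome[treated].mean() - outcome[~treated].mean()
ipw = (np.average(outcome[treated], weights=w[treated])
       - np.average(outcome[~treated], weights=w[~treated]))
print(f"naive difference: {naive:.2f}  IPW-adjusted: {ipw:.2f}  truth: 0.30")
```

Here selection depends on observed covariates, so the adjustment works; the report's point is that with real non-randomised data, selection on unobserved factors leaves residual bias that no such adjustment can be guaranteed to remove.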

2,651 citations

Journal ArticleDOI
13 Mar 2008-BMJ
TL;DR: The average bias associated with defects in the conduct of randomised trials varies with the type of outcome; systematic reviewers should routinely assess the risk of bias in the results of trials and should report meta-analyses restricted to trials at low risk of bias.
Abstract: OBJECTIVE: To examine whether the association of inadequate or unclear allocation concealment and lack of blinding with biased estimates of intervention effects varies with the nature of the intervention or outcome. DESIGN: Combined analysis of data from three meta-epidemiological studies based on collections of meta-analyses. DATA SOURCES: 146 meta-analyses including 1346 trials examining a wide range of interventions and outcomes. MAIN OUTCOME MEASURES: Ratios of odds ratios quantifying the degree of bias associated with inadequate or unclear allocation concealment, and lack of blinding, for trials with different types of intervention and outcome. A ratio of odds ratios <1 implies that inadequately concealed or non-blinded trials exaggerate intervention effect estimates. RESULTS: In trials with subjective outcomes effect estimates were exaggerated when there was inadequate or unclear allocation concealment (ratio of odds ratios 0.69 (95% CI 0.59 to 0.82)) or lack of blinding (0.75 (0.61 to 0.93)). In contrast, there was little evidence of bias in trials with objective outcomes: ratios of odds ratios 0.91 (0.80 to 1.03) for inadequate or unclear allocation concealment and 1.01 (0.92 to 1.10) for lack of blinding. There was little evidence for a difference between trials of drug and non-drug interventions. Except for trials with all cause mortality as the outcome, the magnitude of bias varied between meta-analyses. CONCLUSIONS: The average bias associated with defects in the conduct of randomised trials varies with the type of outcome. Systematic reviewers should routinely assess the risk of bias in the results of trials, and should report meta-analyses restricted to trials at low risk of bias either as the primary analysis or in conjunction with less restrictive analyses.
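A short worked sketch of the ratio-of-odds-ratios metric used above, with invented numbers: divide the pooled OR from trials with a design flaw by the pooled OR from trials without it; values below 1 indicate exaggeration of benefit.

```python
# Illustrative sketch (invented numbers): within one meta-analysis, compute the
# ratio of odds ratios comparing trials with a design flaw to trials without it.
import math

or_flawed = 0.55  # pooled OR, trials with inadequate/unclear concealment
or_sound = 0.80   # pooled OR, adequately concealed trials

ror = or_flawed / or_sound
print(f"ratio of odds ratios = {ror:.2f}")  # 0.69: flawed trials exaggerate benefit

# Across many meta-analyses, per-meta-analysis RORs are combined on the log scale:
rors = [0.69, 0.75, 0.91, 1.01]             # invented per-meta-analysis RORs
mean_log = sum(math.log(r) for r in rors) / len(rors)
print(f"unweighted summary ROR = {math.exp(mean_log):.2f}")
```

The actual meta-epidemiological analysis weights each meta-analysis and models between-meta-analysis variation; the sketch shows only the core arithmetic of the metric.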

2,093 citations

Journal ArticleDOI
TL;DR: In this paper, a statistical meta-analysis was performed to evaluate the relationship between biochar and crop productivity (either yield or above-ground biomass); it found an overall small but statistically significant benefit of biochar application to soils on crop productivity, with a grand mean increase of 10%.

1,762 citations

References
Journal ArticleDOI
01 Feb 1995-JAMA
TL;DR: Empirical evidence is provided that inadequate methodological approaches in controlled trials, particularly those representing poor allocation concealment, are associated with bias.
Abstract: Objective. —To determine if inadequate approaches to randomized controlled trial design and execution are associated with evidence of bias in estimating treatment effects. Design. —An observational study in which we assessed the methodological quality of 250 controlled trials from 33 meta-analyses and then analyzed, using multiple logistic regression models, the associations between those assessments and estimated treatment effects. Data Sources. —Meta-analyses from the Cochrane Pregnancy and Childbirth Database. Main Outcome Measures. —The associations between estimates of treatment effects and inadequate allocation concealment, exclusions after randomization, and lack of double-blinding. Results. —Compared with trials in which authors reported adequately concealed treatment allocation, trials in which concealment was either inadequate or unclear (did not report or incompletely reported a concealment approach) yielded larger estimates of treatment effects (P<.001); odds ratios were exaggerated by 41% for inadequately concealed trials and by 30% for unclearly concealed trials. Trials that were not double-blind also yielded larger estimates of effects (P=.01), with odds ratios being exaggerated by 17%. Conclusions. —This study provides empirical evidence that inadequate methodological approaches in controlled trials, particularly those representing poor allocation concealment, are associated with bias. Readers of trial reports should be wary of these pitfalls, and investigators must improve their design, execution, and reporting of trials. (JAMA. 1995;273:408-412)
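The "exaggerated by 17%" phrasing is conventionally read as a ratio of odds ratios of 0.83; a tiny worked example (with an invented baseline OR) shows the arithmetic.

```python
# How "odds ratios exaggerated by 17%" is conventionally read in this
# literature: the ratio of odds ratios (flawed / adequate trials) is 0.83,
# i.e. flawed trials' ORs are on average 17% further from the null.
or_adequate = 0.80                  # pooled OR in double-blind trials (invented)
ror = 0.83                          # ratio of odds ratios for lack of blinding
or_non_blinded = or_adequate * ror  # 0.66: an apparently stronger benefit
exaggeration = (1 - ror) * 100
print(f"non-blinded pooled OR: {or_non_blinded:.2f}; exaggeration: {exaggeration:.0f}%")
```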

5,765 citations

Journal ArticleDOI
TL;DR: Studies of low methodological quality in which the estimate of quality is incorporated into the meta-analyses can alter the interpretation of the benefit of intervention, whether a scale or component approach is used in the assessment of trial quality.

3,129 citations

Journal ArticleDOI
TL;DR: The presence of publication bias in a cohort of clinical research studies is confirmed, and it is suggested that conclusions based only on a review of published data should be interpreted cautiously, especially for observational studies.

2,800 citations

Journal ArticleDOI
13 Sep 1997-BMJ
TL;DR: The study results support the need for prospective registration of clinical research projects to avoid publication bias and also support restricting the selection of trials to those started before a common date in undertaking systematic reviews.
Abstract: Objectives: To determine the extent to which publication is influenced by study outcome. Design: A cohort of studies submitted to a hospital ethics committee over 10 years was examined retrospectively by reviewing the protocols and by questionnaire. The primary method of analysis was Cox's proportional hazards model. Setting: University hospital, Sydney, Australia. Studies: 748 eligible studies submitted to Royal Prince Alfred Hospital Ethics Committee between 1979 and 1988. Main outcome measures: Time to publication. Results: Response to the questionnaire was received for 520 (70%) of the eligible studies. Of the 218 studies analysed with tests of significance, those with positive results (P<0.05) were much more likely to be published than those with negative results (P≥0.10) (hazard ratio 2.32 (95% confidence interval 1.47 to 3.66), P=0.0003), with a significantly shorter time to publication (median 4.8 v 8.0 years). This finding was even stronger for the group of 130 clinical trials (hazard ratio 3.13 (1.76 to 5.58), P=0.0001), with median times to publication of 4.7 and 8.0 years respectively. These results were not materially changed after adjusting for other significant predictors of publication. Studies with indefinite conclusions (0.05 ≤ P < 0.10) tended to have an even lower publication rate than studies with negative results. Conclusions: This study confirms the evidence of publication bias found in other studies and identifies delay in publication as an additional important factor. The study results support the need for prospective registration of trials to avoid publication bias and also support restricting the selection of trials to those started before a common date in undertaking systematic reviews.
Key messages:
  • This retrospective cohort study of clinical research projects confirms the findings of publication bias found in previous studies.
  • Delay in the publication of studies with negative results has been identified as an additional important factor in publication bias.
  • With the recognised importance of evidence based medicine, these results have important implications for the selection of studies included in systematic reviews.
  • Prospective registration of clinical research projects will avoid many of the problems associated with publication bias.
  • However, it is also important to restrict inclusion in systematic reviews to studies started before a certain date to allow for the delay in completing studies with negative results.
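The time-to-publication analysis can be sketched with a Cox proportional hazards model in which unpublished studies are treated as censored; the lifelines library and the toy data below are assumptions for illustration, not the study's actual code or data.

```python
# Hedged sketch of the paper's analytic approach: time to publication modelled
# with Cox proportional hazards; unpublished studies enter as censored records.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "years_to_publication": [2.1, 4.8, 8.0, 6.5, 3.0, 9.0],
    "published":            [1,   1,   1,   0,   1,   0],  # 0 = censored (unpublished)
    "positive_result":      [1,   1,   0,   0,   1,   0],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="years_to_publication", event_col="published")
cph.print_summary()  # hazard ratio > 1 for positive_result => faster publication
```

Treating non-publication as censoring rather than dropping unpublished studies is what lets the model capture delay, the paper's key addition to earlier publication-bias evidence.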

779 citations

Journal ArticleDOI
13 Jul 1994-JAMA
TL;DR: The pattern over time in the level of statistical power and the reporting of sample size calculations in published randomized controlled trials (RCTs) with negative results is described and few trials discussed whether the observed differences were clinically important.
Abstract: Objective. —To describe the pattern over time in the level of statistical power and the reporting of sample size calculations in published randomized controlled trials (RCTs) with negative results. Design. —Our study was a descriptive survey. Power to detect 25% and 50% relative differences was calculated for the subset of trials with negative results in which a simple two-group parallel design was used. Criteria were developed both to classify trial results as positive or negative and to identify the primary outcomes. Power calculations were based on results from the primary outcomes reported in the trials. Population. —We reviewed all 383 RCTs published in JAMA, Lancet, and the New England Journal of Medicine in 1975, 1980, 1985, and 1990. Results. —Twenty-seven percent of the 383 RCTs (n=102) were classified as having negative results. The number of published RCTs more than doubled from 1975 to 1990, with the proportion of trials with negative results remaining fairly stable. Of the simple two-group parallel design trials having negative results with dichotomous or continuous primary outcomes (n=70), only 16% and 36% had sufficient statistical power (80%) to detect a 25% or 50% relative difference, respectively. These percentages did not consistently increase over time. Overall, only 32% of the trials with negative results reported sample size calculations, but the percentage doing so has improved over time from 0% in 1975 to 43% in 1990. Only 20 of the 102 reports made any statement related to the clinical significance of the observed differences. Conclusions. —Most trials with negative results did not have large enough sample sizes to detect a 25% or a 50% relative difference. This result has not changed over time. Few trials discussed whether the observed differences were clinically important. There are important reasons to change this practice. The reporting of statistical power and sample size also needs to be improved. (JAMA. 1994;272:122-124)
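The power calculations described above can be reproduced in outline: for a dichotomous primary outcome, compute the power of a two-group trial to detect a 25% relative difference. The statsmodels library and the event rates below are illustrative assumptions, not figures from the survey.

```python
# Hedged sketch of the survey's power calculation: power of a two-group trial
# to detect a 25% relative reduction in a dichotomous outcome (toy numbers).
from statsmodels.stats.proportion import proportion_effectsize
from statsmodels.stats.power import NormalIndPower

p_control = 0.40                 # control-group event rate (invented)
p_treat = p_control * (1 - 0.25) # 25% relative difference -> 0.30

es = proportion_effectsize(p_control, p_treat)  # Cohen's h
power = NormalIndPower().power(effect_size=es, nobs1=100, alpha=0.05,
                               ratio=1.0, alternative="two-sided")
print(f"power with 100 per group: {power:.0%}")  # well under the usual 80% target
```

A trial of this size has roughly one-in-three power for a 25% relative difference, which illustrates why most of the surveyed negative trials could not rule out clinically important effects.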

544 citations