Author

Deborah M Caldwell

Other affiliations: King's College London
Bio: Deborah M Caldwell is an academic researcher at the University of Bristol. She has contributed to research on meta-analysis and psychological interventions, has an h-index of 39, and has co-authored 113 publications receiving 10,623 citations. Her previous affiliations include King's College London.


Papers
Journal ArticleDOI
TL;DR: The process of developing specific advice for the reporting of systematic reviews that incorporate network meta-analyses is described, and the guidance generated from this process is presented.
Abstract: The PRISMA statement is a reporting guideline designed to improve the completeness of reporting of systematic reviews and meta-analyses. Authors have used this guideline worldwide to prepare their reviews for publication. In the past, these reports typically compared two treatment alternatives. With the evolution of systematic reviews that compare multiple treatments, some of them only indirectly, authors face novel challenges in conducting and reporting their reviews. This extension of the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) statement was developed specifically to improve the reporting of systematic reviews incorporating network meta-analyses.

3,932 citations

Journal ArticleDOI
TL;DR: A hierarchical Bayesian approach to MTC implemented using WinBUGS and R is taken and it is shown that both methods are useful in identifying potential inconsistencies in different types of network and that they illustrate how the direct and indirect evidence combine to produce the posterior MTC estimates of relative treatment effects.
Abstract: Pooling of direct and indirect evidence from randomized trials, known as mixed treatment comparisons (MTC), is becoming increasingly common in the clinical literature. MTC allows coherent judgements on which of several treatments is the most effective and produces estimates of the relative effects of each treatment compared with every other treatment in a network. We introduce two methods for checking the consistency of direct and indirect evidence. The first method (back-calculation) infers the contribution of indirect evidence from the direct evidence and the output of an MTC analysis, and is useful when the only available data consist of pooled summaries of the pairwise contrasts. The second, more general but computationally intensive, method is based on 'node-splitting', which separates the evidence on a particular comparison (node) into 'direct' and 'indirect' and can be applied to networks where trial-level data are available. The methods are illustrated with examples from the literature. We take a hierarchical Bayesian approach to MTC implemented using WinBUGS and R. We show that both methods are useful in identifying potential inconsistencies in different types of network, and that they illustrate how the direct and indirect evidence combine to produce the posterior MTC estimates of relative treatment effects. This allows users to understand how the MTC synthesis is pooling the data, and what is 'driving' the final estimates. We end with some considerations on the modelling assumptions being made and the problems with extending the back-calculation method to trial-level data, and discuss our methods in the context of the existing literature.
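The core idea of comparing direct and indirect evidence can be illustrated with a minimal Bucher-style sketch. This is not the paper's method (which is a hierarchical Bayesian model in WinBUGS and R); it is a simplified frequentist analogue, and all the effect estimates and variances below are made up for illustration:

```python
import math

def indirect_estimate(d_ac, var_ac, d_bc, var_bc):
    """Indirect estimate of A vs B via a common comparator C.

    d_ac, d_bc are pooled direct effect estimates (e.g. log odds ratios);
    var_ac, var_bc are their variances. The A-vs-C and B-vs-C trials are
    assumed independent, so the variances add.
    """
    return d_ac - d_bc, var_ac + var_bc

def inconsistency(d_direct, var_direct, d_indirect, var_indirect):
    """Difference between direct and indirect evidence on one comparison,
    with an approximate z statistic. Conceptually, this splits the
    'node' A-vs-B into its direct and indirect parts, as node-splitting
    does within the full Bayesian model.
    """
    w = d_direct - d_indirect
    se_w = math.sqrt(var_direct + var_indirect)
    return w, w / se_w

# Illustrative (made-up) log odds ratios and variances:
d_ac, var_ac = -0.50, 0.04   # A vs C, direct evidence
d_bc, var_bc = -0.20, 0.05   # B vs C, direct evidence
d_ab, var_ab = -0.35, 0.06   # A vs B, direct evidence

d_ind, var_ind = indirect_estimate(d_ac, var_ac, d_bc, var_bc)
w, z = inconsistency(d_ab, var_ab, d_ind, var_ind)
print(f"indirect A vs B: {d_ind:.2f} (variance {var_ind:.2f})")
print(f"inconsistency w = {w:.2f}, z = {z:.2f}")
```

Here the indirect estimate (-0.30) is close to the direct one (-0.35), so the z statistic is small and there is no signal of inconsistency; a large |z| would flag a loop of evidence worth investigating.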

1,559 citations

Journal ArticleDOI
13 Oct 2005-BMJ
TL;DR: Describes comparisons of three or more treatments, based on pairwise or multi-arm comparative studies, as a multiple treatment comparison evidence structure for determining the best treatment.
Abstract: How can policy makers decide which of five treatments is the best? Standard meta-analysis provides little help but evidence based decisions are possible

1,377 citations

Journal ArticleDOI
TL;DR: ROBIS is the first rigorously developed tool designed specifically to assess the risk of bias in systematic reviews, and is currently aimed at four broad categories of reviews, mainly within health care settings.

1,083 citations

Journal ArticleDOI
03 Jul 2014-PLOS ONE
TL;DR: An approach to determining confidence in the output of a network meta-analysis is proposed based on methodology developed by the Grading of Recommendations Assessment, Development and Evaluation Working Group for pairwise meta-analyses and applied to a systematic review comparing topical antibiotics without steroids for chronically discharging ears with underlying eardrum perforations.
Abstract: Systematic reviews that collate data about the relative effects of multiple interventions via network meta-analysis are highly informative for decision-making purposes. A network meta-analysis provides two types of findings for a specific outcome: the relative treatment effect for all pairwise comparisons, and a ranking of the treatments. It is important to consider the confidence with which these two types of results can enable clinicians, policy makers and patients to make informed decisions. We propose an approach to determining confidence in the output of a network meta-analysis. Our proposed approach is based on methodology developed by the Grading of Recommendations Assessment, Development and Evaluation (GRADE) Working Group for pairwise meta-analyses. The suggested framework for evaluating a network meta-analysis acknowledges (i) the key role of indirect comparisons; (ii) the contributions of each piece of direct evidence to the network meta-analysis estimates of effect size; (iii) the importance of the transitivity assumption to the validity of network meta-analysis; and (iv) the possibility of disagreement between direct evidence and indirect evidence. We apply our proposed strategy to a systematic review comparing topical antibiotics without steroids for chronically discharging ears with underlying eardrum perforations. The proposed framework can be used to determine confidence in the results from a network meta-analysis. Judgements about evidence from a network meta-analysis can be different from those made about evidence from pairwise meta-analyses.

853 citations


Cited by
Book
23 Sep 2019
TL;DR: The Cochrane Handbook for Systematic Reviews of Interventions is the official document that describes in detail the process of preparing and maintaining Cochrane systematic reviews on the effects of healthcare interventions.

21,235 citations

Journal ArticleDOI
29 Mar 2021-BMJ
TL;DR: The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement was designed to help systematic reviewers transparently report why the review was done, what the authors did, and what they found.
Abstract: The Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) statement, published in 2009, was designed to help systematic reviewers transparently report why the review was done, what the authors did, and what they found. Over the past decade, advances in systematic review methodology and terminology have necessitated an update to the guideline. The PRISMA 2020 statement replaces the 2009 statement and includes new reporting guidance that reflects advances in methods to identify, select, appraise, and synthesise studies. The structure and presentation of the items have been modified to facilitate implementation. In this article, we present the PRISMA 2020 27-item checklist, an expanded checklist that details reporting recommendations for each item, the PRISMA 2020 abstract checklist, and the revised flow diagrams for original and updated reviews.

16,613 citations

Journal ArticleDOI
12 Oct 2016-BMJ
TL;DR: ROBINS-I (Risk Of Bias In Non-randomised Studies - of Interventions) is a new tool for evaluating risk of bias in estimates of the comparative effectiveness of interventions from studies that did not use randomisation to allocate units or clusters of individuals to comparison groups.
Abstract: Non-randomised studies of the effects of interventions are critical to many areas of healthcare evaluation, but their results may be biased. It is therefore important to understand and appraise their strengths and weaknesses. We developed ROBINS-I (“Risk Of Bias In Non-randomised Studies - of Interventions”), a new tool for evaluating risk of bias in estimates of the comparative effectiveness (harm or benefit) of interventions from studies that did not use randomisation to allocate units (individuals or clusters of individuals) to comparison groups. The tool will be particularly useful to those undertaking systematic reviews that include non-randomised studies.

8,028 citations

Journal ArticleDOI
TL;DR: The content of these European Society of Cardiology (ESC) Guidelines has been published for personal and educational use only; no commercial use is authorised.
Abstract: Supplementary Table 9, column 'Edoxaban', row 'eGFR category', '95 mL/min' (page 15). The cell should be coloured green instead of yellow. It should also read "60 mg" instead of "60 mg (use with caution in 'supranormal' renal function)." In the above-indicated cell, a footnote has also been added to state: "Edoxaban should be used in patients with high creatinine clearance only after a careful evaluation of the individual thromboembolic and bleeding risk."

Supplementary Table 9, column 'Edoxaban', row 'Dose reduction in selected patients' (page 16). The cell should read "Edoxaban 60 mg reduced to 30 mg once daily if any of the following: creatinine clearance 15-50 mL/min, body weight <60 kg, concomitant use of dronedarone, erythromycin, ciclosporine or ketoconazole" instead of "Edoxaban 60 mg reduced to 30 mg once daily, and edoxaban 30 mg reduced to 15 mg once daily, if any of the following: creatinine clearance of 30-50 mL/min, body weight <60 kg, concomitant use of verapamil or quinidine or dronedarone."

4,285 citations

Journal ArticleDOI
21 Sep 2017-BMJ
TL;DR: This paper reports on the updating of AMSTAR and its adaptation to enable more detailed assessment of systematic reviews that include randomised or non-randomised studies of healthcare interventions, or both.
Abstract: The number of published systematic reviews of studies of healthcare interventions has increased rapidly and these are used extensively for clinical and policy decisions. Systematic reviews are subject to a range of biases and increasingly include non-randomised studies of interventions. It is important that users can distinguish high quality reviews. Many instruments have been designed to evaluate different aspects of reviews, but there are few comprehensive critical appraisal instruments. AMSTAR was developed to evaluate systematic reviews of randomised trials. In this paper, we report on the updating of AMSTAR and its adaptation to enable more detailed assessment of systematic reviews that include randomised or non-randomised studies of healthcare interventions, or both. With moves to base more decisions on real world observational evidence we believe that AMSTAR 2 will assist decision makers in the identification of high quality systematic reviews, including those based on non-randomised studies of healthcare interventions.

4,208 citations