Journal ArticleDOI

Assessing the quality of reports of randomized clinical trials: is blinding necessary?

01 Feb 1996-Controlled Clinical Trials (Elsevier)-Vol. 17, Iss: 1, pp 1-12
TL;DR: An instrument to assess the quality of reports of randomized clinical trials (RCTs) in pain research is described, along with its use to determine the effect of rater blinding on assessments of quality.
About: This article is published in Controlled Clinical Trials. The article was published on 1996-02-01 and has received 15,740 citations to date. The article focuses on the topics: Blinding & Jadad scale.
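The instrument in question is the Jadad scale, which scores a trial report from 0 to 5 according to how randomization, blinding, and withdrawals/dropouts are reported. As a rough illustration, the sketch below encodes the commonly cited scoring rules in Python; the function name and argument handling are my own, not the paper's wording.

```python
# Minimal sketch of the commonly cited 0-5 scoring rules associated with the
# Jadad scale (illustrative; not the paper's exact item wording).
from typing import Optional


def jadad_score(randomized: bool,
                randomization_method_adequate: Optional[bool],
                double_blind: bool,
                blinding_method_adequate: Optional[bool],
                withdrawals_described: bool) -> int:
    """Score a trial report from 0 to 5.

    The *_adequate arguments are None when the report gives no detail on the
    method, True when the method is described and appropriate, and False when
    it is described but inappropriate.
    """
    score = 0
    if randomized:
        score += 1
        if randomization_method_adequate is True:
            score += 1      # e.g. computer-generated random numbers
        elif randomization_method_adequate is False:
            score -= 1      # e.g. allocation by date of birth
    if double_blind:
        score += 1
        if blinding_method_adequate is True:
            score += 1      # e.g. identical-looking placebo
        elif blinding_method_adequate is False:
            score -= 1      # blinding claimed but clearly not maintained
    if withdrawals_described:
        score += 1
    return score


# Randomized with an adequate method, described as double-blind without
# further detail, withdrawals reported -> 4 points.
print(jadad_score(True, True, True, None, True))
```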
Citations
Journal ArticleDOI
TL;DR: In this review the usual methods applied in systematic reviews and meta-analyses are outlined, and the most common procedures for combining studies with binary outcomes are described, illustrating how they can be done using Stata commands.

31,656 citations
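For readers who want to see what "combining studies with binary outcomes" amounts to, here is a minimal Python sketch of fixed-effect inverse-variance pooling of log odds ratios, one of the standard procedures such reviews walk through (the cited review uses Stata commands rather than Python); the 2x2 tables are toy data.

```python
# Fixed-effect inverse-variance pooling of odds ratios from 2x2 tables
# (events/total in treatment and control arms). Toy data only.
import math

# (events_treat, n_treat, events_ctrl, n_ctrl) for each study.
studies = [(12, 100, 20, 100),
           (8,  50,  15, 50),
           (30, 200, 45, 200)]

weights, log_ors = [], []
for a, n1, c, n2 in studies:
    b, d = n1 - a, n2 - c                 # non-events in each arm
    log_or = math.log((a * d) / (b * c))  # study-level log odds ratio
    var = 1/a + 1/b + 1/c + 1/d           # Woolf variance of the log OR
    weights.append(1 / var)               # inverse-variance weight
    log_ors.append(log_or)

pooled = sum(w * y for w, y in zip(weights, log_ors)) / sum(weights)
se = math.sqrt(1 / sum(weights))
print(f"Pooled OR {math.exp(pooled):.2f} "
      f"(95% CI {math.exp(pooled - 1.96*se):.2f} to {math.exp(pooled + 1.96*se):.2f})")
```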

Journal ArticleDOI
TL;DR: An Explanation and Elaboration of the PRISMA Statement is presented, along with updated guidelines for the reporting of systematic reviews and meta-analyses.
Abstract: Systematic reviews and meta-analyses are essential to summarize evidence relating to efficacy and safety of health care interventions accurately and reliably. The clarity and transparency of these reports, however, is not optimal. Poor reporting of systematic reviews diminishes their value to clinicians, policy makers, and other users. Since the development of the QUOROM (QUality Of Reporting Of Meta-analysis) Statement—a reporting guideline published in 1999—there have been several conceptual, methodological, and practical advances regarding the conduct and reporting of systematic reviews and meta-analyses. Also, reviews of published systematic reviews have found that key information about these studies is often poorly reported. Realizing these issues, an international group that included experienced authors and methodologists developed PRISMA (Preferred Reporting Items for Systematic reviews and Meta-Analyses) as an evolution of the original QUOROM guideline for systematic reviews and meta-analyses of evaluations of health care interventions. The PRISMA Statement consists of a 27-item checklist and a four-phase flow diagram. The checklist includes items deemed essential for transparent reporting of a systematic review. In this Explanation and Elaboration document, we explain the meaning and rationale for each checklist item. For each item, we include an example of good reporting and, where possible, references to relevant empirical studies and methodological literature. The PRISMA Statement, this document, and the associated Web site (http://www.prisma-statement.org/) should be helpful resources to improve reporting of systematic reviews and meta-analyses.

25,711 citations
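The four-phase flow diagram mentioned above tracks records from identification through screening and eligibility to inclusion. The sketch below is an illustrative Python rendering of that bookkeeping; the field names are assumptions, not terms mandated by the PRISMA Statement.

```python
# Illustrative bookkeeping behind a PRISMA-style four-phase flow diagram.
from dataclasses import dataclass


@dataclass
class PrismaFlow:
    records_identified: int               # database searches + other sources
    duplicates_removed: int
    records_excluded_on_screening: int
    full_texts_excluded: int              # reasons reported separately

    @property
    def records_screened(self) -> int:
        return self.records_identified - self.duplicates_removed

    @property
    def full_texts_assessed(self) -> int:
        return self.records_screened - self.records_excluded_on_screening

    @property
    def studies_included(self) -> int:
        return self.full_texts_assessed - self.full_texts_excluded


flow = PrismaFlow(records_identified=1200, duplicates_removed=250,
                  records_excluded_on_screening=820, full_texts_excluded=95)
print(flow.records_screened, flow.full_texts_assessed, flow.studies_included)
# -> 950 130 35
```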

Book
23 Sep 2019
TL;DR: The Cochrane Handbook for Systematic Reviews of Interventions is the official document that describes in detail the process of preparing and maintaining Cochrane systematic reviews on the effects of healthcare interventions.
Abstract: The Cochrane Handbook for Systematic Reviews of Interventions is the official document that describes in detail the process of preparing and maintaining Cochrane systematic reviews on the effects of healthcare interventions.

21,235 citations

Journal ArticleDOI
21 Jul 2009-BMJ
TL;DR: The meaning and rationale for each checklist item are explained; for each item, an example of good reporting and, where possible, references to relevant empirical studies and methodological literature are included.
Abstract: Systematic reviews and meta-analyses are essential to summarise evidence relating to efficacy and safety of healthcare interventions accurately and reliably. The clarity and transparency of these reports, however, are not optimal. Poor reporting of systematic reviews diminishes their value to clinicians, policy makers, and other users. Since the development of the QUOROM (quality of reporting of meta-analysis) statement—a reporting guideline published in 1999—there have been several conceptual, methodological, and practical advances regarding the conduct and reporting of systematic reviews and meta-analyses. Also, reviews of published systematic reviews have found that key information about these studies is often poorly reported. Realising these issues, an international group that included experienced authors and methodologists developed PRISMA (preferred reporting items for systematic reviews and meta-analyses) as an evolution of the original QUOROM guideline for systematic reviews and meta-analyses of evaluations of health care interventions. The PRISMA statement consists of a 27-item checklist and a four-phase flow diagram. The checklist includes items deemed essential for transparent reporting of a systematic review. In this explanation and elaboration document, we explain the meaning and rationale for each checklist item. For each item, we include an example of good reporting and, where possible, references to relevant empirical studies and methodological literature. The PRISMA statement, this document, and the associated website (www.prisma-statement.org/) should be helpful resources to improve reporting of systematic reviews and meta-analyses.

13,813 citations

Journal ArticleDOI
02 Jan 2015-BMJ
TL;DR: The PRISMA-P checklist provides 17 items considered to be essential and minimum components of a systematic review or meta-analysis protocol, along with a model example from an existing published protocol.
Abstract: Protocols of systematic reviews and meta-analyses allow for planning and documentation of review methods, act as a guard against arbitrary decision making during review conduct, enable readers to assess for the presence of selective reporting against completed reviews, and, when made publicly available, reduce duplication of efforts and potentially prompt collaboration. Evidence documenting the existence of selective reporting and excessive duplication of reviews on the same or similar topics is accumulating and many calls have been made in support of the documentation and public availability of review protocols. Several efforts have emerged in recent years to rectify these problems, including development of an international register for prospective reviews (PROSPERO) and launch of the first open access journal dedicated to the exclusive publication of systematic review products, including protocols (BioMed Central's Systematic Reviews). Furthering these efforts and building on the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-analyses) guidelines, an international group of experts has created a guideline to improve the transparency, accuracy, completeness, and frequency of documented systematic review and meta-analysis protocols: PRISMA-P (for protocols) 2015. The PRISMA-P checklist contains 17 items considered to be essential and minimum components of a systematic review or meta-analysis protocol. This PRISMA-P 2015 Explanation and Elaboration paper provides readers with a full understanding of and evidence about the necessity of each item as well as a model example from an existing published protocol. This paper should be read together with the PRISMA-P 2015 statement. Systematic review authors and assessors are strongly encouraged to make use of PRISMA-P when drafting and appraising review protocols.

9,361 citations

References
Journal ArticleDOI
TL;DR: In this article, the authors present guidelines for choosing among six different forms of the intraclass correlation for reliability studies in which n targets are rated by k judges, and the confidence intervals for each of the forms are reviewed.
Abstract: Reliability coefficients often take the form of intraclass correlation coefficients. In this article, guidelines are given for choosing among six different forms of the intraclass correlation for reliability studies in which n targets are rated by k judges. Relevant to the choice of the coefficient are the appropriate statistical model for the reliability study and the application to be made of the reliability results. Confidence intervals for each of the forms are reviewed.

21,185 citations
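To make the choice among forms concrete, here is a minimal Python sketch of one of the six Shrout-Fleiss forms, ICC(2,1) (two-way random effects, single rater, absolute agreement), computed from ANOVA mean squares. The ratings matrix is illustrative toy data.

```python
# ICC(2,1) from a two-way ANOVA decomposition of an n-targets x k-judges
# ratings matrix. Rows = targets, columns = judges (toy data).
ratings = [
    [9, 2, 5, 8],
    [6, 1, 3, 2],
    [8, 4, 6, 8],
    [7, 1, 2, 6],
    [10, 5, 6, 9],
    [6, 2, 4, 7],
]
n, k = len(ratings), len(ratings[0])
grand = sum(sum(row) for row in ratings) / (n * k)
row_means = [sum(row) / k for row in ratings]
col_means = [sum(row[j] for row in ratings) / n for j in range(k)]

ss_total = sum((x - grand) ** 2 for row in ratings for x in row)
ss_rows = k * sum((m - grand) ** 2 for m in row_means)
ss_cols = n * sum((m - grand) ** 2 for m in col_means)
ss_err = ss_total - ss_rows - ss_cols

msr = ss_rows / (n - 1)              # between-target mean square
msc = ss_cols / (k - 1)              # between-judge mean square
mse = ss_err / ((n - 1) * (k - 1))   # residual mean square

icc_2_1 = (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
print(f"ICC(2,1) = {icc_2_1:.3f}")
```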

Book
07 Dec 1989
TL;DR: The book covers how measurement scales are constructed and evaluated: devising the items, scaling responses, selecting the items, moving from items to scales, and assessing reliability and validity.
Abstract: 1. Introduction 2. Basic concepts 3. Devising the items 4. Scaling responses 5. Selecting the items 6. Biases in responding 7. From items to scales 8. Reliability 9. Generalizability theory 10. Validity 11. Measuring change 12. Item response theory 13. Methods of administration 14. Ethical considerations 15. Reporting test results Appendices

9,316 citations

Journal ArticleDOI
01 Feb 1995-JAMA
TL;DR: Empirical evidence is provided that inadequate methodological approaches in controlled trials, particularly those representing poor allocation concealment, are associated with bias.
Abstract: Objective. —To determine if inadequate approaches to randomized controlled trial design and execution are associated with evidence of bias in estimating treatment effects. Design. —An observational study in which we assessed the methodological quality of 250 controlled trials from 33 meta-analyses and then analyzed, using multiple logistic regression models, the associations between those assessments and estimated treatment effects. Data Sources. —Meta-analyses from the Cochrane Pregnancy and Childbirth Database. Main Outcome Measures. —The associations between estimates of treatment effects and inadequate allocation concealment, exclusions after randomization, and lack of double-blinding. Results. —Compared with trials in which authors reported adequately concealed treatment allocation, trials in which concealment was either inadequate or unclear (did not report or incompletely reported a concealment approach) yielded larger estimates of treatment effects (P<.001). Odds ratios were exaggerated by 41% for inadequately concealed trials and by 30% for unclearly concealed trials. Trials that were not double-blind also yielded larger estimates of effects (P=.01), with odds ratios being exaggerated by 17%. Conclusions. —This study provides empirical evidence that inadequate methodological approaches in controlled trials, particularly those representing poor allocation concealment, are associated with bias. Readers of trial reports should be wary of these pitfalls, and investigators must improve their design, execution, and reporting of trials. (JAMA. 1995;273:408-412)

5,765 citations
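The "exaggerated by 17%" figure is a ratio of odds ratios (ROR) comparing trials with and without a given flaw. A short sketch of that arithmetic, with made-up odds ratios rather than the paper's data:

```python
# Illustrative arithmetic for "odds ratios exaggerated by X%": the comparison
# is a ratio of odds ratios (ROR) between flawed and adequately conducted
# trials. The ORs below are invented for the example.
or_flawed = 0.50     # average treatment OR in trials lacking double-blinding
or_adequate = 0.60   # average treatment OR in adequately blinded trials

ror = or_flawed / or_adequate       # ratio of odds ratios
exaggeration = (1 - ror) * 100      # percent exaggeration of apparent benefit
print(f"ROR = {ror:.2f} -> effect exaggerated by about {exaggeration:.0f}%")
# -> ROR = 0.83 -> effect exaggerated by about 17%
```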

Journal ArticleDOI
TL;DR: The characteristics of several major methods (Delphi, Nominal Group, and models developed by the National Institutes of Health and Glaser) are surveyed and guidelines for those who want to use the techniques are provided.
Abstract: Consensus methods are being used increasingly to solve problems in medicine and health. Their main purpose is to define levels of agreement on controversial subjects. Advocates suggest that, when properly employed, consensus strategies can create structured environments in which experts are given the best available information, allowing their solutions to problems to be more justifiable and credible than otherwise. This paper surveys the characteristics of several major methods (Delphi, Nominal Group, and models developed by the National Institutes of Health and Glaser) and provides guidelines for those who want to use the techniques. Among the concerns these guidelines address are selecting problems, choosing members for consensus panels, specifying acceptable levels of agreement, properly using empirical data, obtaining professional and political support, and disseminating results.

1,825 citations
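One of the guideline concerns listed above, specifying acceptable levels of agreement, is often operationalised as a pre-specified proportion of panellists rating an item at or above a cut-off. The sketch below illustrates that idea; the 1-9 scale, the 75% threshold, and the item names are illustrative assumptions, not recommendations from the paper.

```python
# Illustrative consensus check for a Delphi-style panel: keep an item when a
# pre-specified share of panellists rate it at or above a cut-off score.
from typing import Dict, List


def consensus_items(ratings: Dict[str, List[int]],
                    cutoff: int = 7,
                    threshold: float = 0.75) -> List[str]:
    """Return items on which the panel reaches the agreement threshold.

    ratings: item -> list of panellist scores on a 1-9 scale (assumed).
    """
    kept = []
    for item, scores in ratings.items():
        agreement = sum(s >= cutoff for s in scores) / len(scores)
        if agreement >= threshold:
            kept.append(item)
    return kept


panel = {
    "Concealment of allocation": [9, 8, 7, 9, 8, 6],
    "Raw data available":        [5, 6, 4, 7, 5, 3],
}
print(consensus_items(panel))   # -> ['Concealment of allocation']
```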

Journal ArticleDOI
TL;DR: A reasonable standard for the design and conduct of trials would facilitate the interpretation of those with conflicting results and help in making valid combinations of undersized trials.

1,364 citations


Additional excerpts

  • ...Random allocation (5); Blinding (5); Clear/validated outcomes (5); Description of withdrawals and dropouts (5); Clear hypothesis and objectives (4); Clear inclusion/exclusion criteria (4); Power calculation (4); Appropriate size (3); Intention to treat (3); Single observer (3); Adequate follow-up (3); Negative/positive controls (3); Controlled cointerventions (3); Appropriate analysis (3); Randomization method explained (2); Description of investigators and assessors (2); Description of interventions (2); Raw data available (2); Compliance check (2); Adverse effects documented clearly (2); Comparable groups (2); Clinical relevance (1); Protocol is followed (1); Informed consent (1); Adequate analysis (1); Appropriate outcome measures (1); Data supporting conclusions (1); Paper clear and simple to understand (1); Ethical approval (1); Appropriate study (1); Independent study (1); Overall impression (1); Prospective study (1); More than 1 assessment time (1); Attempt to demonstrate dose response with new agents (1); Appropriate duration of study (1); Description of selection method (1); Definition of method to record adverse effects (1); Definition of methods for adverse effect management (1); Objective outcome measurements (0); Avoidance of data unrelated to the question addressed (1); Representative sample (1); Statistics, central tendency, and dispersion measures reported (1); Blinding testing (0); Results of randomization reported (1); Analysis of impact of withdrawals (1); Clear tables (0); Clear figures (0); Clear retrospective analysis (1)...

    [...]