Journal ArticleDOI

RAMESES publication standards: realist syntheses

TL;DR: This project used multiple sources to develop and draw together evidence and expertise in realist synthesis; the authors hope these standards will act as a resource that contributes to improving the reporting of realist syntheses.
Abstract: Background: There is growing interest in realist synthesis as an alternative systematic review method. This approach offers the potential to expand the knowledge base in policy-relevant areas - for example, by explaining the success, failure or mixed fortunes of complex interventions. No previous publication standards exist for reporting realist syntheses. This standard was developed as part of the RAMESES (Realist And MEta-narrative Evidence Syntheses: Evolving Standards) project. The project’s aim is to produce preliminary publication standards for realist systematic reviews. Methods: We (a) collated and summarized existing literature on the principles of good practice in realist syntheses; (b) considered the extent to which these principles had been followed by published syntheses, thereby identifying how rigor may be lost and how existing methods could be improved; (c) used a three-round online Delphi method with an interdisciplinary panel of national and international experts in evidence synthesis, realist research, policy and/or publishing to produce and iteratively refine a draft set of methodological steps and publication standards; (d) provided real-time support to ongoing realist syntheses and ran the open-access RAMESES online discussion list so as to capture problems and questions as they arose; and (e) synthesized expert input, evidence syntheses and real-time problem analysis into a definitive set of standards. Results: We identified 35 published realist syntheses, provided real-time support to 9 ongoing syntheses and captured questions raised in the RAMESES discussion list. Through analysis and discussion within the project team, we summarized the published literature and common questions and challenges into briefing materials for the Delphi panel, which comprised 37 members. Within three rounds this panel reached consensus on 19 key publication standards, with an overall response rate of 91%. Conclusion: This project used multiple sources to develop and draw together evidence and expertise in realist synthesis. For each item we have included an explanation of why it is important and guidance on how it might be reported. Realist synthesis is a relatively new method for evidence synthesis and, as experience accumulates and the method develops, we anticipate that these standards will evolve. We hope that these standards will act as a resource that contributes to improving the reporting of realist syntheses. To encourage dissemination of the RAMESES publication standards, this article is co-published in the Journal of Advanced Nursing and is freely accessible on Wiley Online Library (http://www.wileyonlinelibrary.com/journal/jan). Please see related articles http://www.biomedcentral.com/1741-7015/11/20 and http://www.biomedcentral.com/1741-7015/11/22


Citations
Journal ArticleDOI
TL;DR: It is argued that disaggregating the concept of mechanism into its constituent parts helps to distinguish between the resources offered by the intervention and the ways in which these change the reasoning of participants, and underlines the importance of conceptualising mechanisms as operating on a continuum rather than as an ‘on/off’ switch.
Abstract: The idea that underlying, generative mechanisms give rise to causal regularities has become a guiding principle across many social and natural science disciplines. A specific form of this enquiry, realist evaluation, is gaining momentum in the evaluation of complex social interventions. It focuses on ‘what works, how, in which conditions and for whom’, using context, mechanism and outcome configurations, as opposed to asking whether an intervention ‘works’. Realist evaluation can be difficult to codify and requires considerable researcher reflection and creativity; as such, there is often confusion when operationalising the method in practice. This article aims to clarify and further develop the concept of mechanism in realist evaluation and, in doing so, aid the learning of those operationalising the methodology. Using a social science illustration, we argue that disaggregating the concept of mechanism into its constituent parts helps to understand the difference between the resources offered by the intervention and the ways in which this changes the reasoning of participants. This in turn helps to distinguish between a context and a mechanism. The notion of mechanisms ‘firing’ in social science research is explored, with discussion of how this idea may stifle researchers’ realist thinking. We underline the importance of conceptualising mechanisms as operating on a continuum, rather than as an ‘on/off’ switch. We hope the discussions in this article will help to progress and operationalise realist methods; such development is timely given the methodology’s infancy and its recently increased profile and use in social science research. The arguments we present have been tested, and are explained throughout the article using a social science illustration, evidencing their usability and value.
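
To make the disaggregation concrete, here is a minimal, hypothetical Python sketch of how a context-mechanism-outcome (CMO) configuration could be represented with the mechanism split into its resource and reasoning components, and with the response expressed on a continuum rather than as an on/off switch. All names and values are our own illustrative assumptions, not code or data from the article.

```python
from dataclasses import dataclass

@dataclass
class Mechanism:
    """A mechanism disaggregated into intervention resources and participant reasoning.

    reasoning_response is a value in [0.0, 1.0], reflecting the argument that
    mechanisms operate on a continuum rather than as an 'on/off' switch.
    """
    resource: str              # what the intervention offers
    reasoning: str             # how participants' reasoning changes in response
    reasoning_response: float  # strength of that response on a 0-1 continuum

@dataclass
class CMOConfiguration:
    """A context-mechanism-outcome configuration as used in realist evaluation."""
    context: str
    mechanism: Mechanism
    outcome: str

# Hypothetical example: mentoring (resource) partially builds confidence
# (reasoning) in a supportive ward context.
example = CMOConfiguration(
    context="supportive ward culture",
    mechanism=Mechanism(
        resource="mentoring sessions offered to junior staff",
        reasoning="junior staff feel more confident to raise concerns",
        reasoning_response=0.6,  # partially activated, not all-or-nothing
    ),
    outcome="increased incident reporting",
)
print(example)
```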

515 citations

Journal ArticleDOI
TL;DR: These reporting standards for realist evaluations were developed by drawing on a range of sources; it is hoped that they will lead to greater consistency and rigour of reporting and make realist evaluation reports more accessible, usable and helpful to different stakeholders.
Abstract: Realist evaluation is increasingly used in health services and other fields of research and evaluation. No previous standards exist for reporting realist evaluations. This standard was developed as part of the RAMESES II project. The project’s aim is to produce initial reporting standards for realist evaluations. We purposively recruited a maximum variety sample of an international group of experts in realist evaluation to our online Delphi panel. Panel members came from a variety of disciplines, sectors and policy fields. We prepared the briefing materials for our Delphi panel by summarising the most recent literature on realist evaluations to identify how and why rigour had been demonstrated and where gaps in expertise and rigour were evident. We also drew on our collective experience as realist evaluators, in training and supporting realist evaluations, and on the RAMESES email list to help us develop the briefing materials. Through discussion within the project team, we developed a list of issues related to quality that needed to be addressed when carrying out realist evaluations. These were then shared with the panel members and their feedback was sought. Once the panel members had provided their feedback on our briefing materials, we constructed a set of items for potential inclusion in the reporting standards and circulated these online to panel members. Panel members were asked to rank each potential item twice on a 7-point Likert scale, once for relevance and once for validity. They were also encouraged to provide free text comments. We recruited 35 panel members from 27 organisations across six countries from nine different disciplines. Within three rounds our Delphi panel was able to reach consensus on 20 items that should be included in the reporting standards for realist evaluations. The overall response rates for all items for rounds 1, 2 and 3 were 94 %, 76 % and 80 %, respectively. These reporting standards for realist evaluations have been developed by drawing on a range of sources. We hope that these standards will lead to greater consistency and rigour of reporting and make realist evaluation reports more accessible, usable and helpful to different stakeholders.
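
As a rough illustration of the consensus step described above, the sketch below scores hypothetical 7-point Likert ratings for candidate items; the consensus rule, thresholds and data are our own assumptions for illustration, not the RAMESES II project's actual decision criteria.

```python
from statistics import median

# Hypothetical panel ratings: each list holds one score (1-7) per panel member,
# with each candidate item rated separately for relevance and for validity.
ratings = {
    "item_1": {"relevance": [6, 7, 5, 6, 7], "validity": [6, 6, 7, 5, 6]},
    "item_2": {"relevance": [3, 4, 2, 5, 3], "validity": [4, 3, 3, 4, 2]},
}

def has_consensus(scores, median_threshold=5.5, min_agreement=0.75):
    """Assumed rule: high median and at least 75% of the panel scoring 5 or more."""
    agreement = sum(s >= 5 for s in scores) / len(scores)
    return median(scores) >= median_threshold and agreement >= min_agreement

for item, dimensions in ratings.items():
    decision = all(has_consensus(scores) for scores in dimensions.values())
    print(f"{item}: {'include' if decision else 'revise and re-circulate next round'}")
```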

405 citations

Journal ArticleDOI
TL;DR: It is argued that systematic reviews and narrative reviews serve different purposes and should be viewed as complementary.
Abstract: Systematic reviews are generally placed above narrative reviews in an assumed hierarchy of secondary research evidence. We argue that systematic reviews and narrative reviews serve different purposes and should be viewed as complementary. Conventional systematic reviews address narrowly focused questions; their key contribution is summarising data. Narrative reviews provide interpretation and critique; their key contribution is deepening understanding.

402 citations

Journal ArticleDOI
TL;DR: Pooled results from studies assessing adherence suggest that reporting of many items in the PRISMA Statement is suboptimal, even in the 2382 SRs published after 2009, where nine items were adhered to by fewer than 67% of SRs.
Abstract: The PRISMA Statement is a reporting guideline designed to improve the transparency of systematic reviews (SRs) and meta-analyses. Seven extensions to the PRISMA Statement have been published to address the reporting of different types or aspects of SRs, and another eight are in development. We performed a scoping review to map the research that has been conducted to evaluate the uptake and impact of the PRISMA Statement and its extensions. We also synthesised studies evaluating how well SRs published after the PRISMA Statement was disseminated adhere to its recommendations. We searched MEDLINE® from inception to 31 July 2017 for meta-research studies investigating some component of the PRISMA Statement or extensions (e.g. SR adherence to PRISMA, journal endorsement of PRISMA). One author screened all records and classified the types of evidence available in the studies. We pooled data on SR adherence to individual PRISMA items across all SRs in the included studies and across SRs published after 2009 (the year PRISMA was disseminated). We included 100 meta-research studies. The most common type of evidence available was data on SR adherence to the PRISMA Statement, which has been evaluated in 57 studies assessing 6487 SRs. The pooled results of these studies suggest that reporting of many items in the PRISMA Statement is suboptimal, even in the 2382 SRs published after 2009 (where nine items were adhered to by fewer than 67% of SRs). Few meta-research studies have evaluated the adherence of SRs to the PRISMA extensions or strategies to increase adherence to the PRISMA Statement and extensions. Many studies have evaluated how well SRs adhere to the PRISMA Statement, and the pooled results of these suggest that reporting of many items is suboptimal. An update of the PRISMA Statement, along with a toolkit of strategies to help journals endorse and implement the updated guideline, may improve the transparency of SRs.
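
To make the pooling step concrete, here is a minimal sketch, with invented figures, of how adherence to a single PRISMA item might be pooled across adherence studies by weighting each study by the number of SRs it assessed. This is a simple proportion-pooling illustration under our own assumptions, not the authors' exact method.

```python
# Each tuple: (number of SRs assessed in a study,
#              number of those SRs adhering to a given PRISMA item).
# All figures are invented for illustration.
studies = [(120, 70), (300, 180), (85, 40)]

total_srs = sum(n for n, _ in studies)
total_adherent = sum(k for _, k in studies)
pooled_adherence = total_adherent / total_srs

print(f"Pooled adherence: {pooled_adherence:.1%} across {total_srs} SRs")
if pooled_adherence < 0.67:
    print("Below the 67% level highlighted in the review: reporting is suboptimal.")
```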

365 citations

Journal ArticleDOI
TL;DR: The principal conclusion of the evaluation of studies that call themselves “evidence maps” is that the implied definition of an evidence map is a systematic search of a broad field, undertaken to identify gaps in knowledge and/or future research needs, whose results are presented in a user-friendly format, often a visual figure or graph or a searchable database.
Abstract: The need for systematic methods for reviewing evidence is continuously increasing. Evidence mapping is one emerging method. There are no authoritative recommendations for what constitutes an evidence map or what methods should be used, and anecdotal evidence suggests heterogeneity in both. Our objectives are to identify published evidence maps and to compare and contrast the presented definitions of evidence mapping, the domains used to classify data in evidence maps, and the form the evidence map takes. We conducted a systematic review of publications that presented results with a process termed “evidence mapping” or included a figure called an “evidence map.” We identified publications from searches of ten databases through 21 August 2015, reference mining, and consulting topic experts. We abstracted the research question, the unit of analysis, the search methods and search period covered, and the country of origin. Data were narratively synthesized. Thirty-nine publications met inclusion criteria. Published evidence maps varied in their definition and the form of the evidence map. Of the 31 definitions provided, 67% described the purpose as identification of gaps and 58% referenced a stakeholder engagement process or user-friendly product. All evidence maps explicitly used a systematic approach to evidence synthesis. Twenty-six publications referred to a figure or table explicitly called an “evidence map,” eight referred to an online database as the evidence map, and five stated they used a mapping methodology but did not present a visual depiction of the evidence. The principal conclusion of our evaluation of studies that call themselves “evidence maps” is that the implied definition of an evidence map is a systematic search of a broad field, undertaken to identify gaps in knowledge and/or future research needs, whose results are presented in a user-friendly format, often a visual figure or graph or a searchable database. Foundational work is needed to better standardize the methods and products of an evidence map so that researchers and policymakers will know what to expect of this new type of evidence review. Although an a priori protocol was developed, no registration was completed; this review did not fit the PROSPERO format.

324 citations

References
Journal ArticleDOI
TL;DR: An Explanation and Elaboration of the PRISMA Statement is presented, along with updated guidelines for the reporting of systematic reviews and meta-analyses.
Abstract: Systematic reviews and meta-analyses are essential to summarize evidence relating to efficacy and safety of health care interventions accurately and reliably. The clarity and transparency of these reports, however, are not optimal. Poor reporting of systematic reviews diminishes their value to clinicians, policy makers, and other users. Since the development of the QUOROM (QUality Of Reporting Of Meta-analysis) Statement—a reporting guideline published in 1999—there have been several conceptual, methodological, and practical advances regarding the conduct and reporting of systematic reviews and meta-analyses. Also, reviews of published systematic reviews have found that key information about these studies is often poorly reported. Realizing these issues, an international group that included experienced authors and methodologists developed PRISMA (Preferred Reporting Items for Systematic reviews and Meta-Analyses) as an evolution of the original QUOROM guideline for systematic reviews and meta-analyses of evaluations of health care interventions. The PRISMA Statement consists of a 27-item checklist and a four-phase flow diagram. The checklist includes items deemed essential for transparent reporting of a systematic review. In this Explanation and Elaboration document, we explain the meaning and rationale for each checklist item. For each item, we include an example of good reporting and, where possible, references to relevant empirical studies and methodological literature. The PRISMA Statement, this document, and the associated Web site (http://www.prisma-statement.org/) should be helpful resources to improve reporting of systematic reviews and meta-analyses.

25,711 citations

Journal ArticleDOI
21 Jul 2009-BMJ
TL;DR: The meaning and rationale for each checklist item are explained, with an example of good reporting and, where possible, references to relevant empirical studies and methodological literature.
Abstract: Systematic reviews and meta-analyses are essential to summarise evidence relating to efficacy and safety of healthcare interventions accurately and reliably. The clarity and transparency of these reports, however, are not optimal. Poor reporting of systematic reviews diminishes their value to clinicians, policy makers, and other users. Since the development of the QUOROM (quality of reporting of meta-analysis) statement—a reporting guideline published in 1999—there have been several conceptual, methodological, and practical advances regarding the conduct and reporting of systematic reviews and meta-analyses. Also, reviews of published systematic reviews have found that key information about these studies is often poorly reported. Realising these issues, an international group that included experienced authors and methodologists developed PRISMA (preferred reporting items for systematic reviews and meta-analyses) as an evolution of the original QUOROM guideline for systematic reviews and meta-analyses of evaluations of health care interventions. The PRISMA statement consists of a 27-item checklist and a four-phase flow diagram. The checklist includes items deemed essential for transparent reporting of a systematic review. In this explanation and elaboration document, we explain the meaning and rationale for each checklist item. For each item, we include an example of good reporting and, where possible, references to relevant empirical studies and methodological literature. The PRISMA statement, this document, and the associated website (www.prisma-statement.org/) should be helpful resources to improve reporting of systematic reviews and meta-analyses.

13,813 citations


"RAMESES publication standards: real..." refers methods in this paper

  • ...Publication standards are common (and, increasingly, expected) in health services research - see, for example, CONSORT for randomized controlled trials [16], AGREE for clinical guidelines [17], PRISMA for Cochrane-style systematic reviews [18] and SQUIRE for quality improvement studies [19]....


  • ...The layout of this document has drawn on previous methodological publications and, in particular, on the ‘Explanations and Elaborations’ document of the PRISMA statement [18]....


Journal ArticleDOI
24 Mar 2010-BMJ
TL;DR: This update of the CONSORT statement improves the wording and clarity of the previous checklist and incorporates recommendations related to topics that have only recently received recognition, such as selective outcome reporting bias.
Abstract: Overwhelming evidence shows the quality of reporting of randomised controlled trials (RCTs) is not optimal. Without transparent reporting, readers cannot judge the reliability and validity of trial findings nor extract information for systematic reviews. Recent methodological analyses indicate that inadequate reporting and design are associated with biased estimates of treatment effects. Such systematic error is seriously damaging to RCTs, which are considered the gold standard for evaluating interventions because of their ability to minimise or avoid bias. A group of scientists and editors developed the CONSORT (Consolidated Standards of Reporting Trials) statement to improve the quality of reporting of RCTs. It was first published in 1996 and updated in 2001. The statement consists of a checklist and flow diagram that authors can use for reporting an RCT. Many leading medical journals and major international editorial groups have endorsed the CONSORT statement. The statement facilitates critical appraisal and interpretation of RCTs. During the 2001 CONSORT revision, it became clear that explanation and elaboration of the principles underlying the CONSORT statement would help investigators and others to write or appraise trial reports. A CONSORT explanation and elaboration article was published in 2001 alongside the 2001 version of the CONSORT statement. After an expert meeting in January 2007, the CONSORT statement has been further revised and is published as the CONSORT 2010 Statement. This update improves the wording and clarity of the previous checklist and incorporates recommendations related to topics that have only recently received recognition, such as selective outcome reporting bias. This explanation and elaboration document - intended to enhance the use, understanding, and dissemination of the CONSORT statement - has also been extensively revised. It presents the meaning and rationale for each new and updated checklist item, providing examples of good reporting and, where possible, references to relevant empirical studies. Several examples of flow diagrams are included. The CONSORT 2010 Statement, this revised explanatory and elaboration document, and the associated website (www.consort-statement.org) should be helpful resources to improve reporting of randomised trials.

5,957 citations

Journal ArticleDOI
TL;DR: A strategy was needed to differentiate among clinical practice guidelines, which vary widely in quality.
Abstract: Clinical practice guidelines, which are systematically developed statements aimed at helping people make clinical, policy-related and system-related decisions,[1,2] frequently vary widely in quality.[3,4] A strategy was needed to differentiate among guidelines and ensure that those of high quality can be identified.

2,616 citations


"RAMESES publication standards: real..." refers methods in this paper

  • ...Publication standards are common (and, increasingly, expected) in health services research - see, for example, CONSORT for randomized controlled trials [16], AGREE for clinical guidelines [17], PRISMA for Cochrane-style systematic reviews [18] and SQUIRE for quality improvement studies [19]....


Journal ArticleDOI
TL;DR: This paper offers a model of research synthesis designed to work with complex social interventions or programmes, based on the emerging ‘realist’ approach to evaluation, which aims to enable decision-makers to reach a deeper understanding of the intervention and how it can be made to work most effectively.
Abstract: Evidence-based policy is a dominant theme in contemporary public services, but the practical realities and challenges involved in using evidence in policy-making are formidable. Part of the problem is one of complexity. In health services and other public services, we are dealing with complex social interventions which act on complex social systems - things like league tables, performance measures, regulation and inspection, or funding reforms. These are not 'magic bullets' which will always hit their target, but programmes whose effects are crucially dependent on context and implementation. Traditional methods of review focus on measuring and reporting on programme effectiveness, often find that the evidence is mixed or conflicting, and provide little or no clue as to why the intervention worked or did not work when applied in different contexts or circumstances, deployed by different stakeholders, or used for different purposes. This paper offers a model of research synthesis which is designed to work with complex social interventions or programmes, and which is based on the emerging 'realist' approach to evaluation. It provides an explanatory analysis aimed at discerning what works for whom, in what circumstances, in what respects and how. The first step is to make explicit the programme theory (or theories) - the underlying assumptions about how an intervention is meant to work and what impacts it is expected to have. We then look for empirical evidence to populate this theoretical framework, supporting, contradicting or modifying the programme theories as the review proceeds. The results of the review combine theoretical understanding and empirical evidence, and focus on explaining the relationship between the context in which the intervention is applied, the mechanisms by which it works and the outcomes which are produced. The aim is to enable decision-makers to reach a deeper understanding of the intervention and how it can be made to work most effectively. Realist review does not provide simple answers to complex questions. It will not tell policy-makers or managers whether something works or not, but will provide the policy and practice community with the kind of rich, detailed and highly practical understanding of complex social interventions which is likely to be of much more use to them when planning and implementing programmes at a national, regional or local level.
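
The iterative logic described above (make the programme theory explicit, then use empirical evidence to support, contradict or modify it) can be sketched as a loop. The code below is our own schematic of that cycle, not code from the paper; appraise is an assumed, user-supplied helper.

```python
def realist_review(programme_theories, evidence_sources, appraise):
    """Schematic realist review cycle: evidence supports, contradicts,
    or modifies each candidate programme theory.

    appraise(theory, evidence) is an assumed helper returning a verdict
    ('supports', 'contradicts' or 'modifies') and an optional revised theory.
    """
    refined = []
    for theory in programme_theories:
        for evidence in evidence_sources:
            verdict, revision = appraise(theory, evidence)
            if verdict == "modifies" and revision is not None:
                theory = revision  # revise the candidate theory and keep testing
        refined.append(theory)
    # The output is an explanatory account of what works for whom,
    # in what circumstances, in what respects, and how.
    return refined

# Toy demonstration with a trivial, purely illustrative appraisal rule.
def appraise(theory, evidence):
    if evidence.get("contradicts") == theory:
        return "modifies", theory + " (only in supportive contexts)"
    return "supports", None

theories = ["performance feedback improves practice"]
evidence = [{"contradicts": "performance feedback improves practice"}]
print(realist_review(theories, evidence, appraise))
```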

2,297 citations


"RAMESES publication standards: real..." refers background or methods in this paper

  • ...“Realist synthesis” was first described by Ray Pawson in 2002 [13], updated in an ESRC (Economic and Social Research Council) commissioned monograph in 2004 [14], published as a book in 2006 [1] and summarized in a short methods paper in 2005 [15]....


  • ...They are not intended to provide detailed guidance on how to conduct such a synthesis; for this, we direct interested readers to summary articles [15,22] or various publications on methods [1,11,14,23]....
