Open access · Journal Article · DOI: 10.1177/0272989X21996622

Basing Information on Comprehensive, Critically Appraised, and Up-to-Date Syntheses of the Scientific Evidence: An Update from the International Patient Decision Aid Standards.

04 Mar 2021 · Medical Decision Making (SAGE Publications: Los Angeles, CA) · Vol. 41, Iss. 7, pp. 755-767
Abstract: BackgroundPatients and clinicians expect the information in patient decision aids to be based on the best available research evidence. The objectives of this International Patient Decision Aid Stan...


Citations

5 results found


Open access · Journal Article · DOI: 10.1177/0272989X211037946
Abstract: BackgroundPatient decision aids should help people make evidence-informed decisions aligned with their values. There is limited guidance about how to achieve such alignment.PurposeTo describe the r...


3 Citations



Open access · Journal Article · DOI: 10.1111/HEX.13244
Abstract: Background Patient decision aids (PDAs) should provide evidence-based information so patients can make informed decisions. Yet, PDA developers do not have an agreed-upon process to select, synthesize and present evidence in PDAs. Objective To reach the consensus on an evidence summarization process for PDAs. Design A two-round modified Delphi survey. Setting and participants A group of international experts in PDA development invited developers, scientific networks, patient groups and listservs to complete Delphi surveys. Data collection We emailed participants the study description and a link to the online survey. Participants were asked to rate each potential criterion (omit, possible, desirable, essential) and provide qualitative feedback. Analysis Criteria in each round were retained if rated by >80% of participants as desirable or essential. If two or more participants suggested rewording, reordering or merging, the steering group considered the suggestion. Results Following two Delphi survey rounds, the evidence summarization process included defining the decision, reporting the processes and policies of the evidence summarization process, assembling the editorial team and managing (collect, manage, report) their conflicts of interest, conducting a systematic search, selecting and appraising the evidence, presenting the harms and benefits in plain language, and describing the method of seeking external review and the plan for updating the evidence (search, selection and appraisal of new evidence). Conclusion A multidisciplinary stakeholder group reached consensus on an evidence summarization process to guide the creation of high-quality PDAs. Patient contribution A patient partner was part of the steering group and involved in the development of the Delphi survey.
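The retention rule in the Delphi survey above (a criterion survives a round only if more than 80% of participants rate it "desirable" or "essential") can be sketched in a few lines. This is an illustrative reconstruction of the rule as stated in the abstract, not code from the study; the function and variable names are hypothetical.

```python
# Sketch of the Delphi retention rule described in the abstract: a criterion
# is retained when >80% of participants rate it "desirable" or "essential".
# Names and example data are illustrative only.

def retain_criterion(ratings, threshold=0.80):
    """Return True if more than `threshold` of ratings are supportive."""
    supportive = sum(r in ("desirable", "essential") for r in ratings)
    return supportive / len(ratings) > threshold

ratings = ["essential"] * 17 + ["desirable"] * 2 + ["possible"]  # 19/20 = 95%
print(retain_criterion(ratings))  # → True
```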


Topics: Delphi method (57%), Automatic summarization (57%), Evidence-based medicine (54%)


Open access · Posted Content · DOI: 10.1101/2021.11.08.21266077
09 Nov 2021 · medRxiv
Abstract: Background: Multiple randomized controlled trials have shown that it is safe and effective to treat appendicitis with antibiotics or surgery. There are no tools available to assist surgeons and their patients in choosing the optimal treatment for each individual patient. Here we describe the development of a new decision support tool (DST) for acute appendicitis and place it in the context of international guidelines for decision aid development. Methods: The stakeholder engagement and development process for the DST is described. The DST and its development process are placed in the context of the International Patient Decision Aid Standards (IPDAS) and the DEVELOPTOOLS checklist for a user-centered design process. Results: A diverse group of over 60 stakeholders was involved in the needs assessment, development, and evaluation of the DST. The development process met 11/11 of the scored items on the DEVELOPTOOLS checklist. Of the 34 applicable IPDAS items, the current version of the DST meets 31, including 6/6 qualifying criteria, 6/6 certification criteria, and 18/22 quality criteria. Conclusions: The novel appendicitis DST was developed with the input of multiple stakeholders. The development process and the tool itself comply with best practices recommended by the IPDAS.


References

33 results found


Open access · Journal Article · DOI: 10.1186/S13643-016-0384-4
05 Dec 2016 · Systematic Reviews
Abstract: Synthesis of multiple randomized controlled trials (RCTs) in a systematic review can summarize the effects of individual outcomes and provide numerical answers about the effectiveness of interventions. Filtering of searches is time consuming, and no single method fulfills the principal requirements of speed with accuracy. Automation of systematic reviews is driven by a necessity to expedite the availability of current best evidence for policy and clinical decision-making. We developed Rayyan ( http://rayyan.qcri.org ), a free web and mobile app that helps expedite the initial screening of abstracts and titles using a process of semi-automation while incorporating a high level of usability. For the beta testing phase, we used two published Cochrane reviews in which included studies had been selected manually. Their searches, with 1030 records and 273 records, were uploaded to Rayyan. Different features of Rayyan were tested using these two reviews. We also conducted a survey of Rayyan's users and collected feedback through a built-in feature. Pilot testing of Rayyan focused on usability, accuracy against manual methods, and the added value of the prediction feature. The "taster" review (273 records) allowed a quick overview of Rayyan for early comments on usability. The second review (1030 records) required several iterations to identify the previously identified 11 trials. The "suggestions" and "hints," based on the "prediction model," appeared as testing progressed beyond five included studies. Post-rollout user experiences and a reflexive response by the developers enabled real-time modifications and improvements. The survey respondents reported 40% average time savings when using Rayyan compared with other tools, with 34% of the respondents reporting more than 50% time savings. In addition, around 75% of the respondents identified screening and labeling studies, and collaborating on reviews, as the two most important features of Rayyan.
As of November 2016, Rayyan users exceed 2000 from over 60 countries conducting hundreds of reviews totaling more than 1.6M citations. Feedback from users, obtained mostly through the app web site and a recent survey, has highlighted the ease in exploration of searches, the time saved, and simplicity in sharing and comparing include-exclude decisions. The strongest features of the app, identified and reported in user feedback, were its ability to help in screening and collaboration as well as the time savings it affords to users. Rayyan is responsive and intuitive in use with significant potential to lighten the load of reviewers.


Topics: Systematic review (56%), Usability (55%)

2,923 Citations


Open access · Journal Article · DOI: 10.1136/BMJ.38926.629329.AE
Glyn Elwyn, Annette M. O'Connor, Dawn Stacey, Robert J. Volk, +18 more · Institutions (4)
24 Aug 2006 · BMJ
Abstract: Objective To develop a set of quality criteria for patient decision support technologies (decision aids). Design and setting Two-stage web-based Delphi process using an online rating process to enable international collaboration. Participants Individuals from four stakeholder groups (researchers, practitioners, patients, policy makers) representing 14 countries reviewed evidence summaries and rated the importance of 80 criteria in 12 quality domains on a 1 to 9 scale. Second-round participants received feedback from the first round and repeated their assessment of the 80 criteria plus three new ones. Main outcome measure Aggregate ratings for each criterion calculated using medians weighted to compensate for different numbers in stakeholder groups; criteria rated between 7 and 9 were retained. Results 212 nominated people were invited to participate. Of those invited, 122 participated in the first round (77 researchers, 21 patients, 10 practitioners, 14 policy makers); 104/122 (85%) participated in the second round. 74 of 83 criteria were retained in the following domains: systematic development process (9/9 criteria); providing information about options (13/13); presenting probabilities (11/13); clarifying and expressing values (3/3); using patient stories (2/5); guiding/coaching (3/5); disclosing conflicts of interest (5/5); providing internet access (6/6); balanced presentation of options (3/3); using plain language (4/6); basing information on up-to-date evidence (7/7); and establishing effectiveness (8/8). Conclusions Criteria were given the highest ratings where evidence existed, and these were retained. Gaps in research were highlighted. Developers, users, and purchasers of patient decision aids now have a checklist for appraising quality. An instrument for measuring quality of decision aids is being developed.
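The aggregation step in the abstract above (medians weighted to compensate for unequal stakeholder-group sizes, with criteria retained at an aggregate rating of 7 to 9) can be sketched roughly. The abstract does not spell out the exact weighting scheme, so the sketch below makes one plausible assumption: take the median within each group first, so a large group cannot dominate, then take the median of the group medians. All names and data are illustrative.

```python
from statistics import median

# Illustrative sketch of a group-size-compensated median, an assumed reading
# of the weighting the abstract describes (not the paper's exact method).
# A criterion is retained when the aggregate rating falls between 7 and 9.

def aggregate_rating(ratings_by_group):
    """Median of per-group medians, so unequal group sizes carry equal weight."""
    group_medians = [median(r) for r in ratings_by_group.values()]
    return median(group_medians)

def retained(ratings_by_group):
    return 7 <= aggregate_rating(ratings_by_group) <= 9

groups = {
    "researchers": [8, 9, 7, 8, 9, 8, 7],  # largest group, median 8
    "patients": [9, 8],                    # median 8.5
    "practitioners": [7],                  # median 7
    "policy_makers": [8, 7],               # median 7.5
}
print(aggregate_rating(groups), retained(groups))  # → 7.75 True
```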


Topics: Delphi method (53%), Decision aids (53%), Patient participation (52%)

1,329 Citations


Open access · Journal Article · DOI: 10.1371/JOURNAL.PONE.0004705
Glyn Elwyn, Annette M. O'Connor, Carol Bennett, Robert G. Newcombe, +20 more · Institutions (13)
04 Mar 2009 · PLOS ONE
Abstract: Objectives To describe the development, validation and inter-rater reliability of an instrument to measure the quality of patient decision support technologies (decision aids). Design Scale development study, involving construct, item and scale development, validation and reliability testing. Setting There has been increasing use of decision support technologies – adjuncts to the discussions clinicians have with patients about difficult decisions. A global interest in developing these interventions exists among both for-profit and not-for-profit organisations. It is therefore essential to have internationally accepted standards to assess the quality of their development, process, content, potential bias and method of field testing and evaluation. Methods Scale development study, involving construct, item and scale development, validation and reliability testing. Participants Twenty-five researcher-members of the International Patient Decision Aid Standards Collaboration worked together to develop the instrument (IPDASi). In the fourth Stage (reliability study), eight raters assessed thirty randomly selected decision support technologies. Results IPDASi measures quality in 10 dimensions, using 47 items, and provides an overall quality score (scaled from 0 to 100) for each intervention. Overall IPDASi scores ranged from 33 to 82 across the decision support technologies sampled (n = 30), enabling discrimination. The inter-rater intraclass correlation for the overall quality score was 0.80. Correlations of dimension scores with the overall score were all positive (0.31 to 0.68). Cronbach's alpha values for the 8 raters ranged from 0.72 to 0.93. Cronbach's alphas based on the dimension means ranged from 0.50 to 0.81, indicating that the dimensions, although well correlated, measure different aspects of decision support technology quality. 
A short version (19 items) was also developed that had very similar mean scores to IPDASi and a high correlation between the short score and the overall score (0.87; CI 0.79 to 0.92). Conclusions This work demonstrates that IPDASi has the ability to assess the quality of decision support technologies. The existing IPDASi provides an assessment of the quality of a DST's components and will be used as a tool to provide formative advice to DST developers and summative assessments for those who want to compare their tools against an existing benchmark.
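The scoring scheme the IPDASi abstract describes (47 items grouped into 10 dimensions, rolled up into an overall quality score rescaled to 0-100) can be sketched as follows. The item rating scale used here (1 to 4) is an assumption for illustration; the abstract does not state IPDASi's actual item scale, and the function names are hypothetical.

```python
# Hedged sketch of a dimension-based quality score rescaled to 0-100, in the
# spirit of the IPDASi scoring the abstract describes. The 1-4 item scale
# below is an assumption, not necessarily the instrument's real scale.

def overall_score(dimension_items, lo=1, hi=4):
    """Mean item rating across all dimensions, rescaled to the 0-100 range."""
    ratings = [r for items in dimension_items.values() for r in items]
    mean = sum(ratings) / len(ratings)
    return 100 * (mean - lo) / (hi - lo)

dims = {
    "development_process": [4, 3, 4],
    "information_about_options": [3, 3, 2, 4],
    "presenting_probabilities": [2, 3],
}
print(round(overall_score(dims), 1))  # → 70.4
```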


Topics: Decision support system (58%), Cronbach's alpha (55%), Decision aids (55%)

366 Citations


Open access · Journal Article · DOI: 10.1186/S13012-017-0688-3
Simon Lewin, Andrew Booth, Claire Glenton, +11 more · Institutions (9)
Abstract: The GRADE-CERQual (‘Confidence in the Evidence from Reviews of Qualitative research’) approach provides guidance for assessing how much confidence to place in findings from systematic reviews of qualitative research (or qualitative evidence syntheses). The approach has been developed to support the use of findings from qualitative evidence syntheses in decision-making, including guideline development and policy formulation. Confidence in the evidence from qualitative evidence syntheses is an assessment of the extent to which a review finding is a reasonable representation of the phenomenon of interest. CERQual provides a systematic and transparent framework for assessing confidence in individual review findings, based on consideration of four components: (1) methodological limitations, (2) coherence, (3) adequacy of data, and (4) relevance. A fifth component, dissemination (or publication) bias, may also be important and is being explored. As with the GRADE (Grading of Recommendations Assessment, Development, and Evaluation) approach for effectiveness evidence, CERQual suggests summarising evidence in succinct, transparent, and informative Summary of Qualitative Findings tables. These tables are designed to communicate the review findings and the CERQual assessment of confidence in each finding. This article is the first of a seven-part series providing guidance on how to apply the CERQual approach. In this paper, we describe the rationale and conceptual basis for CERQual, the aims of the approach, how the approach was developed, and its main components. We also outline the purpose and structure of this series and discuss the growing role for qualitative evidence in decision-making. Papers 3, 4, 5, 6, and 7 in this series discuss each CERQual component, including the rationale for including the component in the approach, how the component is conceptualised, and how it should be assessed. 
Paper 2 discusses how to make an overall assessment of confidence in a review finding and how to create a Summary of Qualitative Findings table. The series is intended primarily for those undertaking qualitative evidence syntheses or using their findings in decision-making processes but is also relevant to guideline development agencies, primary qualitative researchers, and implementation scientists and practitioners.


309 Citations


Open access · Journal Article · DOI: 10.1136/BMJ.H870
Alfonso Iorio, Frederick A. Spencer, Maicon Falavigna, C. Alba, +13 more · Institutions (13)
16 Mar 2015 · BMJ
Abstract: Introduction The term prognosis refers to the likelihood of future health outcomes in people with a given disease or health condition or with particular characteristics such as age, sex, or genetic profile. Patients and healthcare providers may be interested in prognosis for several reasons, so prognostic studies may have a variety of purposes,1–4 including establishing typical prognosis in a broad population, establishing the effect of patients’ characteristics on prognosis, and developing a prognostic model (often referred to as a clinical prediction rule) (Table 1). Considerations in determining the trustworthiness of estimates of prognosis arising from these types of studies differ. This article covers studies answering questions about the prognosis of a typical patient from a broadly defined population; we will consider prognostic studies assessing risk factors and clinical prediction guides in subsequent papers. Knowing the likely course of their disease may help patients to come to terms with, and plan for, the future. Knowledge of the risk of adverse outcomes or the likelihood of spontaneous resolution of symptoms is critical in predicting the likely effect of treatment and planning diagnostic investigations.5 If the probability of facing an adverse outcome is very low or the spontaneous remission of the disease is high (“good prognosis”), the possible absolute benefits of treatment will inevitably be low and serious adverse effects related to treatment or invasive diagnostic tests, even if rare, will loom large in any decision. If instead the probability of an adverse outcome is high (“bad prognosis”), the impact of new diagnostic information or of effective treatment may be large and patients may be ready to accept higher risks of diagnostic investigation and treatment related adverse effects. Inquiry into the credibility or trustworthiness of prognostic estimates has, to date, largely focused on individual studies of prognosis. 
Systematic reviews of the highest quality evidence including all the prognostic studies assessing a particular clinical situation are, however, gaining increasing attention, including the Cochrane Collaboration's work (in progress) to define a template for reviews of prognostic studies (http://prognosismethods.cochrane.org/scope-ourwork). Trustworthy systematic reviews will not only ensure comprehensive collection, summarization, and critique of the primary studies but will also conduct optimal analyses. Matters that warrant consideration in such analyses include the method used to pool rates and whether analyses account for all the relevant covariates; the literature provides guidance on both questions.6 7 In this article, we consider how to establish the degree of confidence in estimates from such bodies of evidence. The guidance in this article is directed primarily at researchers conducting systematic reviews of prognostic studies. It will also be useful to anyone interested in prognostic estimates and their associated confidence (including guideline developers) when evaluating a body of evidence (for example, a guideline panel using baseline risk estimates to estimate the absolute effect of ...


299 Citations