Journal ArticleDOI

Assessing the Quality of Decision Support Technologies Using the International Patient Decision Aid Standards instrument (IPDASi)

TL;DR: The existing IPDASi provides an assessment of the quality of a DST's components and will be used as a tool to provide formative advice to DST developers and summative assessments for those who want to compare their tools against an existing benchmark.
Abstract: Objectives To describe the development, validation and inter-rater reliability of an instrument to measure the quality of patient decision support technologies (decision aids). Design Scale development study, involving construct, item and scale development, validation and reliability testing. Setting There has been increasing use of decision support technologies – adjuncts to the discussions clinicians have with patients about difficult decisions. A global interest in developing these interventions exists among both for-profit and not-for-profit organisations. It is therefore essential to have internationally accepted standards to assess the quality of their development, process, content, potential bias and method of field testing and evaluation. Methods Scale development study, involving construct, item and scale development, validation and reliability testing. Participants Twenty-five researcher-members of the International Patient Decision Aid Standards Collaboration worked together to develop the instrument (IPDASi). In the fourth stage (reliability study), eight raters assessed thirty randomly selected decision support technologies. Results IPDASi measures quality in 10 dimensions, using 47 items, and provides an overall quality score (scaled from 0 to 100) for each intervention. Overall IPDASi scores ranged from 33 to 82 across the decision support technologies sampled (n = 30), enabling discrimination. The inter-rater intraclass correlation for the overall quality score was 0.80. Correlations of dimension scores with the overall score were all positive (0.31 to 0.68). Cronbach's alpha values for the 8 raters ranged from 0.72 to 0.93. Cronbach's alphas based on the dimension means ranged from 0.50 to 0.81, indicating that the dimensions, although well correlated, measure different aspects of decision support technology quality. A short version (19 items) was also developed that had very similar mean scores to IPDASi and a high correlation between the short and overall scores (0.87; CI 0.79 to 0.92). Conclusions This work demonstrates that IPDASi has the ability to assess the quality of decision support technologies. The existing IPDASi provides an assessment of the quality of a DST's components and will be used as a tool to provide formative advice to DST developers and summative assessments for those who want to compare their tools against an existing benchmark.
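
The scoring and reliability figures above (item scores rolled up into 10 dimension scores, an overall 0 to 100 score, Cronbach's alpha) can be illustrated with a short sketch. This is not the authors' scoring code: the 1-4 item scale, the split of the 47 items across dimensions, the mean-then-rescale aggregation rule and the simulated data are all assumptions made for illustration only.

```python
# Minimal sketch of IPDASi-style aggregation and internal consistency.
# The 1-4 item scale, the item split across dimensions and the simulated data
# are illustrative assumptions, not the published scoring algorithm.
import numpy as np

rng = np.random.default_rng(0)

n_items_per_dim = [5, 4, 6, 5, 4, 5, 4, 5, 5, 4]    # 10 dimensions, 47 items in total (assumed split)
ratings = [rng.integers(1, 5, size=n) for n in n_items_per_dim]   # one rater's item scores, 1-4

def dimension_score(items, lo=1, hi=4):
    """Mean item score rescaled to the 0-100 range."""
    return 100 * (np.mean(items) - lo) / (hi - lo)

dim_scores = np.array([dimension_score(d) for d in ratings])
overall = dim_scores.mean()                          # overall quality score on the 0-100 scale
print(f"dimension scores: {np.round(dim_scores, 1)}")
print(f"overall score:    {overall:.1f}")

def cronbach_alpha(matrix):
    """alpha = k/(k-1) * (1 - sum of column variances / variance of row totals)."""
    k = matrix.shape[1]
    item_vars = matrix.var(axis=0, ddof=1).sum()
    total_var = matrix.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Internal consistency over dimension means: rows = rated DSTs, columns = dimensions.
# Placeholder table: a shared per-DST effect plus dimension-specific noise.
dim_means_by_dst = 60 + rng.normal(0, 8, (30, 1)) + rng.normal(0, 8, (30, 10))
print(f"Cronbach's alpha over dimension means: {cronbach_alpha(dim_means_by_dst):.2f}")
```

With real rating data, the same alpha function can be applied with raters as columns to reproduce the kind of per-rater consistency check reported above.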


Citations
Journal ArticleDOI
14 Oct 2010-BMJ
TL;DR: Creation of a platform of tools to provide information to doctors and patients should be the first step in giving patients choice about their treatment, say Glyn Elwyn and colleagues.
Abstract: Creation of a platform of tools to provide information to doctors and patients should be the first step in giving patients choice about their treatment, say Glyn Elwyn and colleagues.

788 citations

Journal ArticleDOI
TL;DR: The results point to significant challenges to the implementation of patient decision support using a referral model, including indifference on the part of health care professionals; the lack of incentives that reward the use of these interventions also needs to be considered a significant impediment.
Abstract: Two decades of research has established the positive effect of using patient-targeted decision support interventions: patients gain knowledge, greater understanding of probabilities and increased confidence in decisions. Yet, despite their efficacy, the effectiveness of these decision support interventions in routine practice has yet to be established; widespread adoption has not occurred. The aim of this review was to search for and analyze the findings of published peer-reviewed studies that investigated the success levels of strategies or methods where attempts were made to implement patient-targeted decision support interventions into routine clinical settings. An electronic search strategy was devised and adapted for the following databases: ASSIA, CINAHL, Embase, HMIC, Medline, Medline-in-process, OpenSIGLE, PsycINFO, Scopus, Social Services Abstracts, and the Web of Science. In addition, we used snowballing techniques. Studies were included after dual independent assessment. After assessment, 5322 abstracts yielded 51 articles for consideration. After examining full-texts, 17 studies were included and subjected to data extraction. The approach used in all studies was one where clinicians and their staff used a referral model, asking eligible patients to use decision support. The results point to significant challenges to the implementation of patient decision support using this model, including indifference on the part of health care professionals. This indifference stemmed from a reported lack of confidence in the content of decision support interventions and concern about disruption to established workflows, ultimately contributing to organizational inertia regarding their adoption. It seems too early to make firm recommendations about how best to implement patient decision support into routine practice because approaches that use a ‘referral model’ consistently report difficulties. We sense that the underlying issues that militate against the use of patient decision support and, more generally, limit the adoption of shared decision making, are under-investigated and under-specified. Future reports from implementation studies could be improved by following guidelines, for example the SQUIRE proposals, and by adopting methods that would be able to go beyond the ‘barriers’ and ‘facilitators’ approach to understand more about the nature of professional and organizational resistance to these tools. The lack of incentives that reward the use of these interventions needs to be considered as a significant impediment.

399 citations


Cites methods from "Assessing the Quality of Decision S..."

  • ...The stimulus for this review arose from work being undertaken by the International Patient Decision Aid Standards (IPDAS) Collaboration, which has produced a checklist [15] and an instrument to assess the quality of these interventions [16]....

Journal ArticleDOI
TL;DR: A modified Delphi consensus process is reported to agree on IPDASi (v3.0) items that should be considered as minimum standards for PDA certification, for inclusion in the refined IPDASi (v4.0).
Abstract: Objective. The IPDAS Collaboration has developed a checklist and an instrument (IPDASi v3.0) to assess the quality of patient decision aids (PDAs) in terms of their development process and shared decision-making design components. Certification of PDAs is of growing interest in the US and elsewhere. We report a modified Delphi consensus process to agree on IPDASi (v3.0) items that should be considered as minimum standards for PDA certification, for inclusion in the refined IPDASi (v4.0). Methods. A 2-stage Delphi voting process considered the inclusion of IPDASi (v3.0) items as minimum standards. Item scores and qualitative comments were analyzed, followed by expert group discussion. Results. One hundred and one people voted in round 1; 87 in round 2. Forty-seven items were reduced to 44 items across 3 new categories: 1) qualifying criteria, which are required in order for an intervention to be considered a decision aid (6 items); 2) certification criteria, without which a decision aid is judged to have a high risk of harmful bias (10 items); and 3) quality criteria, believed to strengthen a decision aid but whose omission does not present a high risk of harmful bias (28 items). Conclusions. This study provides preliminary certification criteria for PDAs. Scoring and rating processes need to be tested and finalized. However, the process of appraising the quality of the clinical evidence reported by the PDA should be used to complement these criteria; the proposed standards are designed to rate the quality of the development process and shared decision-making design elements, not the quality of the PDA’s clinical content.

358 citations


Cites background or methods from "Assessing the Quality of Decision S..."

  • ...IPDASi criteria have provided a good measure for assessing the quality of patient decision aids and have proven internal reliability.(6) However, no decision has been made about the level of score at which decision aids would be considered good quality, appropriate, and of a necessary standard for use....

  • ...(v3.0).(6) The aim of this study was to develop the IPDASi (v4.0)....

  • ...Last, and most relevant to setting certification criteria for decision aids, the International Patient Decision Aids Standards (IPDAS) Collaboration has worked since 2003 on developing a quality criteria checklist(5) and a quantitative assessment tool based on this checklist (IPDASi).(6) The IPDAS checklist(5) can be used to assess the quality of decision aids across 12 dimensions, using 74 specific criteria....

  • ...Strengths and Weaknesses: The current work builds upon a strong foundation of previous work focusing on the assessment of decision aid quality.(5,6) The appropriate use of the Delphi method ensured that a wide range of international researchers were involved in the selection process, many of whom are experts in this field....

Journal ArticleDOI
TL;DR: Fifteen suggestions that are likely to improve the effectiveness of feedback across a range of contexts are identified; these suggestions remain underutilized in the literature, given that their specific mechanisms of effectiveness have seldom been explored in detail.
Abstract: Electronic practice data are increasingly being used to provide feedback to encourage practice improvement. However, evidence suggests that despite decades of experience, the effects of such interventions vary greatly and are not improving over time. Guidance on providing more effective feedback does exist, but it is distributed across a wide range of disciplines and theoretical perspectives. Through expert interviews; systematic reviews; and experience with providing, evaluating, and receiving practice feedback, 15 suggestions that are believed to be associated with effective feedback interventions have been identified. These suggestions are intended to provide practical guidance to quality improvement professionals, information technology developers, educators, administrators, and practitioners who receive such interventions. Designing interventions with these suggestions in mind should improve their effect, and studying the mechanisms underlying these suggestions will advance a stagnant literature.

284 citations


Cites background from "Assessing the Quality of Decision S..."

  • ...Techniques for enhancing perceived credibility of health information include characterizing the quality of the data underlying the feedback, disclosing and highlighting the credibility of the source of the feedback (52), explicitly addressing possible issues with conflicts of interest, and clarifying the extent to which the feedback applies specifically to the provider's individual practice....

Journal ArticleDOI
26 Oct 2010-BMJ
TL;DR: Tailored decision support information can be effective in supporting informed choices and greater involvement in decisions about faecal occult blood testing among adults with low levels of education, without increasing anxiety or worry about developing bowel cancer.
Abstract: Objective To determine whether a decision aid designed for adults with low education and literacy can support informed choice and involvement in decisions about screening for bowel cancer. Design Randomised controlled trial. Setting Areas in New South Wales, Australia identified as socioeconomically disadvantaged (low education attainment, high unemployment, and unskilled occupations). Participants 572 adults aged between 55 and 64 with low educational attainment, eligible for bowel cancer screening. Intervention Patient decision aid comprising a paper based interactive booklet (with and without a question prompt list) and a DVD, presenting quantitative risk information on the possible outcomes of screening using faecal occult blood testing compared with no testing. The control group received standard information developed for the Australian national bowel screening programme. All materials and a faecal occult blood test kit were posted directly to people's homes. Main outcome measures Informed choice (adequate knowledge and consistency between attitudes and screening behaviour) and preferences for involvement in screening decisions. Results Participants who received the decision aid showed higher levels of knowledge than the controls; the mean score (maximum score 12) for the decision aid group was 6.50 (95% confidence interval 6.15 to 6.84) and for the control group was 4.10 (3.85 to 4.36). Conclusions Tailored decision support information can be effective in supporting informed choices and greater involvement in decisions about faecal occult blood testing among adults with low levels of education, without increasing anxiety or worry about developing bowel cancer. Using a decision aid to make an informed choice may, however, lead to lower uptake of screening. Trial registration ClinicalTrials.gov NCT00765869 and Australian New Zealand Clinical Trials Registry 12608000011381.

222 citations


Cites background from "Assessing the Quality of Decision S..."

  • ...We think it is likely the decision aid will be understood by a better educated group and will support informed choice. However, we cannot predict the effect on screening behaviour or attitudes among a better educated population. Some researchers have suggested that information about harms may differentially dissuade lower education groups compared with their higher educated counterparts from carrying out preventive health behaviour since it encourages a focus on immediate harmful consequences and may bias participants with lower education away from valuing future benefits....

References
Book
07 Dec 1989
TL;DR: In this book, the authors cover the basic concepts of scale development: devising the items, scaling responses, selecting the items, moving from items to scales, and assessing reliability and validity.
Abstract: 1. Introduction 2. Basic concepts 3. Devising the items 4. Scaling responses 5. Selecting the items 6. Biases in responding 7. From items to scales 8. Reliability 9. Generalizability theory 10. Validity 11. Measuring change 12. Item response theory 13. Methods of administration 14. Ethical considerations 15. Reporting test results Appendices

9,316 citations

Journal ArticleDOI
TL;DR: This paper discusses how and why various modern computing concepts, such as object-orientation and run-time linking, feature in the software's design, and how the framework may be extended.
Abstract: WinBUGS is a fully extensible modular framework for constructing and analysing Bayesian full probability models. Models may be specified either textually via the BUGS language or pictorially using a graphical interface called DoodleBUGS. WinBUGS processes the model specification and constructs an object-oriented representation of the model. The software offers a user-interface, based on dialogue boxes and menu commands, through which the model may then be analysed using Markov chain Monte Carlo techniques. In this paper we discuss how and why various modern computing concepts, such as object-orientation and run-time linking, feature in the software's design. We also discuss how the framework may be extended. It is possible to write specific applications that form an apparently seamless interface with WinBUGS for users with specialized requirements. It is also possible to interface with WinBUGS at a lower level by incorporating new object types that may be used by WinBUGS without knowledge of the modules in which they are implemented. Neither of these types of extension require access to, or even recompilation of, the WinBUGS source-code.

5,620 citations


"Assessing the Quality of Decision S..." refers methods in this paper

  • ...To achieve this, components of variation were determined by Bayesian modelling (Markov chain Monte Carlo) using WinBugs software [15], to arrive at estimated confidence interval half-widths for differing future rating situations....

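The excerpt above describes estimating components of variation in order to project confidence-interval half-widths for different future rating situations. The original analysis fitted a Bayesian (MCMC) model in WinBUGS; the sketch below is a simpler frequentist stand-in that derives the same kind of quantities from a fully crossed raters-by-DSTs score table using ANOVA mean squares. The simulated data, the two-way layout and the 1.96 normal approximation are all assumptions.

```python
# Sketch: variance components for a crossed DSTs-by-raters table of overall scores,
# and the implied 95% CI half-width for a future mean rating with k raters.
# ANOVA-based stand-in for the study's Bayesian (WinBUGS) model; data are simulated.
import numpy as np

rng = np.random.default_rng(1)
n_dst, n_raters = 30, 8

scores = (60
          + rng.normal(0, 12, (n_dst, 1))          # true differences between DSTs
          + rng.normal(0, 4, (1, n_raters))        # systematic rater leniency/severity
          + rng.normal(0, 6, (n_dst, n_raters)))   # residual disagreement

grand = scores.mean()
ss_dst = n_raters * ((scores.mean(axis=1) - grand) ** 2).sum()
ss_rater = n_dst * ((scores.mean(axis=0) - grand) ** 2).sum()
ss_error = ((scores - grand) ** 2).sum() - ss_dst - ss_rater

ms_dst = ss_dst / (n_dst - 1)
ms_rater = ss_rater / (n_raters - 1)
ms_error = ss_error / ((n_dst - 1) * (n_raters - 1))

var_error = ms_error
var_rater = max((ms_rater - ms_error) / n_dst, 0.0)
var_dst = max((ms_dst - ms_error) / n_raters, 0.0)

# Single-rater intraclass correlation (two-way random effects, absolute agreement).
icc = var_dst / (var_dst + var_rater + var_error)
print(f"variance components  DST={var_dst:.1f}  rater={var_rater:.1f}  error={var_error:.1f}")
print(f"single-rater ICC: {icc:.2f}")

# Approximate 95% CI half-width for a DST's mean score under differing numbers of raters.
for k in (1, 2, 4, 8):
    half_width = 1.96 * np.sqrt((var_rater + var_error) / k)
    print(f"{k} rater(s): +/- {half_width:.1f} points")
```

In this sketch the projected half-width shrinks roughly with the square root of the number of raters, which is the kind of trade-off the original modelling was used to quantify.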

Journal ArticleDOI
TL;DR: Decision aids reduced the proportion of undecided participants and appeared to have a positive effect on patient-clinician communication, and those exposed to a decision aid were either equally or more satisfied with their decision, the decision-making process, and the preparation for decision making compared to usual care.
Abstract: Background Decision aids are intended to help people participate in decisions that involve weighing the benefits and harms of treatment options often with scientific uncertainty. Objectives To assess the effects of decision aids for people facing treatment or screening decisions. Search methods For this update, we searched from 2009 to June 2012 in MEDLINE; CENTRAL; EMBASE; PsycINFO; and grey literature. Cumulatively, we have searched each database since its start date including CINAHL (to September 2008). Selection criteria We included published randomized controlled trials of decision aids, which are interventions designed to support patients' decision making by making explicit the decision, providing information about treatment or screening options and their associated outcomes, compared to usual care and/or alternative interventions. We excluded studies of participants making hypothetical decisions. Data collection and analysis Two review authors independently screened citations for inclusion, extracted data, and assessed risk of bias. The primary outcomes, based on the International Patient Decision Aid Standards (IPDAS), were: A) 'choice made' attributes; B) 'decision-making process' attributes. Secondary outcomes were behavioral, health, and health-system effects. We pooled results using mean differences (MD) and relative risks (RR), applying a random-effects model. Main results This update includes 33 new studies for a total of 115 studies involving 34,444 participants. For risk of bias, selective outcome reporting and blinding of participants and personnel were mostly rated as unclear due to inadequate reporting. Based on 7 items, 8 of 115 studies had high risk of bias for 1 or 2 items each. Of 115 included studies, 88 (76.5%) used at least one of the IPDAS effectiveness criteria: A) 'choice made' attributes criteria: knowledge scores (76 studies); accurate risk perceptions (25 studies); and informed value-based choice (20 studies); and B) 'decision-making process' attributes criteria: feeling informed (34 studies) and feeling clear about values (29 studies). A) Criteria involving 'choice made' attributes: Compared to usual care, decision aids increased knowledge (MD 13.34 out of 100; 95% confidence interval (CI) 11.17 to 15.51; n = 42). When more detailed decision aids were compared to simple decision aids, the relative improvement in knowledge was significant (MD 5.52 out of 100; 95% CI 3.90 to 7.15; n = 19). Exposure to a decision aid with expressed probabilities resulted in a higher proportion of people with accurate risk perceptions (RR 1.82; 95% CI 1.52 to 2.16; n = 19). Exposure to a decision aid with explicit values clarification resulted in a higher proportion of patients choosing an option congruent with their values (RR 1.51; 95% CI 1.17 to 1.96; n = 13). B) Criteria involving 'decision-making process' attributes: Decision aids compared to usual care interventions resulted in: a) lower decisional conflict related to feeling uninformed (MD -7.26 of 100; 95% CI -9.73 to -4.78; n = 22) and feeling unclear about personal values (MD -6.09; 95% CI -8.50 to -3.67; n = 18); b) reduced proportions of people who were passive in decision making (RR 0.66; 95% CI 0.53 to 0.81; n = 14); and c) reduced proportions of people who remained undecided post-intervention (RR 0.59; 95% CI 0.47 to 0.72; n = 18). Decision aids appeared to have a positive effect on patient-practitioner communication in all nine studies that measured this outcome. 
For satisfaction with the decision (n = 20), decision-making process (n = 17), and/or preparation for decision making (n = 3), those exposed to a decision aid were either more satisfied, or there was no difference between the decision aid and comparison interventions. No studies evaluated decision-making process attributes for helping patients to recognize that a decision needs to be made, or understanding that values affect the choice. C) Secondary outcomes Exposure to decision aids compared to usual care reduced the number of people choosing major elective invasive surgery in favour of more conservative options (RR 0.79; 95% CI 0.68 to 0.93; n = 15). Exposure to decision aids compared to usual care reduced the number of people choosing to have prostate-specific antigen screening (RR 0.87; 95% CI 0.77 to 0.98; n = 9). When detailed compared to simple decision aids were used, fewer people chose menopausal hormone therapy (RR 0.73; 95% CI 0.55 to 0.98; n = 3). For other decisions, the effect on choices was variable. The effect of decision aids on length of consultation varied from 8 minutes shorter to 23 minutes longer (median 2.55 minutes longer) with 2 studies indicating statistically significantly longer, 1 study shorter, and 6 studies reporting no difference in consultation length. Groups of patients receiving decision aids do not appear to differ from comparison groups in terms of anxiety (n = 30), general health outcomes (n = 11), and condition-specific health outcomes (n = 11). The effects of decision aids on other outcomes (adherence to the decision, costs/resource use) were inconclusive. Authors' conclusions There is high-quality evidence that decision aids compared to usual care improve people's knowledge regarding options, and reduce their decisional conflict related to feeling uninformed and unclear about their personal values. There is moderate-quality evidence that decision aids compared to usual care stimulate people to take a more active role in decision making, and improve accurate risk perceptions when probabilities are included in decision aids, compared to not being included. There is low-quality evidence that decision aids improve congruence between the chosen option and the patient's values. New for this updated review is further evidence indicating more informed, values-based choices, and improved patient-practitioner communication. There is a variable effect of decision aids on length of consultation. Consistent with findings from the previous review, decision aids have a variable effect on choices. They reduce the number of people choosing discretionary surgery and have no apparent adverse effects on health outcomes or satisfaction. The effects on adherence with the chosen option, cost-effectiveness, use with lower literacy populations, and level of detail needed in decision aids need further evaluation. Little is known about the degree of detail that decision aids need in order to have a positive effect on attributes of the choice made, or the decision-making process.
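
The pooling step mentioned in the review's methods ("We pooled results using mean differences (MD) and relative risks (RR), applying a random-effects model") can be illustrated with a DerSimonian-Laird random-effects calculation, one common choice for such pooling; the review does not specify its estimator here, the study values below are invented, and the 1.96 normal approximation for the 95% CI is an assumption.

```python
# Sketch of DerSimonian-Laird random-effects pooling; example data are hypothetical.
import numpy as np

def dersimonian_laird(effects, variances):
    """Pool study effect sizes under a random-effects model (DerSimonian-Laird tau^2)."""
    y = np.asarray(effects, dtype=float)
    v = np.asarray(variances, dtype=float)
    w = 1.0 / v                                   # fixed-effect (inverse-variance) weights
    fixed = (w * y).sum() / w.sum()
    q = (w * (y - fixed) ** 2).sum()              # Cochran's Q heterogeneity statistic
    df = len(y) - 1
    tau2 = max(0.0, (q - df) / (w.sum() - (w ** 2).sum() / w.sum()))
    w_star = 1.0 / (v + tau2)                     # random-effects weights
    pooled = (w_star * y).sum() / w_star.sum()
    se = np.sqrt(1.0 / w_star.sum())
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se), tau2

# Hypothetical knowledge-score mean differences (0-100 scale) and their variances.
md = [15.0, 10.5, 12.0, 18.2, 9.8]
var = [4.0, 6.5, 3.2, 8.1, 5.5]
pooled, ci, tau2 = dersimonian_laird(md, var)
print(f"pooled MD = {pooled:.2f} (95% CI {ci[0]:.2f} to {ci[1]:.2f}), tau^2 = {tau2:.2f}")
```

Relative risks would be pooled in the same way on the log scale and then exponentiated.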

5,042 citations

Journal ArticleDOI
16 Sep 2000-BMJ
TL;DR: The design and execution of research required to address the additional problems resulting from evaluation of complex interventions, those “made up of various interconnecting parts,” are examined.
Abstract: Randomised controlled trials are widely accepted as the most reliable method of determining effectiveness, but most trials have evaluated the effects of a single intervention such as a drug. Recognition is increasing that other, non-pharmacological interventions should also be rigorously evaluated.1-3 This paper examines the design and execution of research required to address the additional problems resulting from evaluation of complex interventions—that is, those "made up of various interconnecting parts."4 The issues dealt with are discussed in a longer Medical Research Council paper (www.mrc.ac.uk/complex_packages.html). We focus on randomised trials but believe that this approach could be adapted to other designs when they are more appropriate. Summary points: Complex interventions are those that include several components. The evaluation of complex interventions is difficult because of problems of developing, identifying, documenting, and reproducing the intervention. A phased approach to the development and evaluation of complex interventions is proposed to help researchers define clearly where they are in the research process. Evaluation of complex interventions requires use of qualitative and quantitative evidence. There are specific difficulties in defining, developing, documenting, and reproducing complex interventions that are subject to more variation than a drug. A typical example would be the design of a trial to evaluate the benefits of specialist stroke units. Such a trial would have to consider the expertise of various health professionals as well as investigations, drugs, treatment guidelines, and arrangements for discharge and follow up. Stroke units may also vary in terms of organisation, management, and skill mix. The active components of the stroke unit may be difficult to specify, making it difficult to replicate the intervention. The box gives other examples of complex interventions. Examples of complex interventions: service delivery and organisation (stroke units, hospital at home); interventions directed at health professionals' behaviour (strategies for implementing guidelines, computerised decision support); community interventions (community …).

3,235 citations


Additional excerpts

  • ...The IPDAS collaboration and the resulting instruments (IPDASi and IPDASi-SF) need to meet the following challenges: How can new dimensions and items be considered? How are valid ‘option menus’ in DSTs derived and agreed when there are complex debates about equity, economics and evidence? Should there be items that assess the use of theory in the development of these methods, given that these are examples of ‘complex interventions’ and deserve attention to frameworks of design and mode of action [22]....

Journal ArticleDOI
TL;DR: The specialty of obstetrics and gynaecology will benefit from several related groups already working within the Cochrane Collaboration, and it is hoped that the ‘wooden spoon’ can be discarded from the authors' ranks for good.
Abstract: Summary In the current era of patients seeking better information, managers seeking cost-effective treatments, clinicians struggling to keep up with the expanding medical literature, and professional groups requiring continuing medical education, there is a clear need for up-to-date and relevant systematic reviews of the effectiveness of treatment within our specialty. Such reviews will play an increasing role in the education of health professionals and lay people, in the evolution of the health service and in the direction of future research. The Cochrane Collaboration provides the infrastructure for the development and dissemination of these reviews. The specialty of obstetrics and gynaecology will benefit from several related groups already working within the Cochrane Collaboration (Pregnancy and Childbirth, Subfertility, Menstrual Disorders and Incontinence). Other groups are in the process of, or likely to, register in the near future (Fertility Control, Gynaecological Cancer). However, the need and demand for a large number of systematic reviews exceeds the current capacity of those who have committed themselves to prepare and maintain such reviews, and substantial challenges remain. However, there is every reason to believe that a concerted effort over many years will be worth while. Earlier in this commentary, obstetrics and gynaecology was referred to as the specialty most deserving of the ‘wooden spoon’ for its lack of evidence-based practice. With the development of various gynaecological groups within the Collaboration, we hope that the ‘wooden spoon’ can be discarded from our ranks for good.

2,561 citations
