Journal ArticleDOI

Evaluating the effectiveness of a tailored multifaceted performance feedback intervention to improve the quality of care: Protocol for a cluster randomized trial in intensive care

TL;DR: The authors will conduct a study to assess the impact of the InFoQI program on patient outcome and organizational process measures of care, and to gain insight into barriers and success factors that affected the program's impact.
Abstract: Background: Feedback is potentially effective in improving the quality of care. However, merely sending reports is no guarantee that performance data are used as input for systematic quality improvement (QI). Therefore, we developed a multifaceted intervention tailored to prospectively analyzed barriers to using indicators: the Information Feedback on Quality Indicators (InFoQI) program. This program aims to promote the use of performance indicator data as input for local systematic QI. We will conduct a study to assess the impact of the InFoQI program on patient outcome and organizational process measures of care, and to gain insight into barriers and success factors that affected the program’s impact. The study will be executed in the context of intensive care. This paper presents the study’s protocol. Methods/design: We will conduct a cluster randomized controlled trial with intensive care units (ICUs) in the Netherlands. We will include ICUs that submit indicator data to the Dutch National Intensive Care Evaluation (NICE) quality registry and that agree to allocate at least one intensivist and one ICU nurse for implementation of the intervention. Eligible ICUs (clusters) will be randomized to receive basic NICE registry feedback (control arm) or to participate in the InFoQI program (intervention arm). The InFoQI program consists of comprehensive feedback, establishing a local, multidisciplinary QI team, and educational outreach visits. The primary outcome measures will be length of ICU stay and the proportion of shifts with a bed occupancy rate above 80%. We will also conduct a process evaluation involving ICUs in the intervention arm to investigate their actual exposure to and experiences with the InFoQI program. Discussion: The results of this study will inform those involved in providing ICU care on the feasibility of a tailored multifaceted performance feedback intervention and its ability to accelerate systematic and local quality improvement. 
Although our study will be conducted within the domain of intensive care, we believe our conclusions will be generalizable to other settings that have a quality registry including an indicator set available.
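The allocation step described in the protocol (randomizing eligible ICUs as clusters, rather than individual patients, to the control or intervention arm) can be sketched as follows. The unit names, the total of 30 units, and the 1:1 allocation ratio are illustrative assumptions, not details taken from the protocol:

```python
import random

# Hypothetical list of eligible ICUs (clusters); a real trial would use
# the registry's own unit identifiers.
icus = [f"ICU_{i:02d}" for i in range(1, 31)]

# Cluster randomization: whole units are allocated to an arm.
# A fixed seed makes the allocation reproducible and auditable.
rng = random.Random(2009)
shuffled = icus[:]
rng.shuffle(shuffled)

half = len(shuffled) // 2
allocation = {
    unit: ("InFoQI program" if i < half else "basic NICE feedback")
    for i, unit in enumerate(shuffled)
}

n_intervention = sum(arm == "InFoQI program" for arm in allocation.values())
print(f"{n_intervention} intervention units, "
      f"{len(icus) - n_intervention} control units")
```

Because every patient in a unit shares that unit's arm, outcomes within a unit are correlated, which is why the analysis plan below relies on cluster-aware methods.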


Citations
Journal ArticleDOI
TL;DR: It is suggested that a multifaceted feedback program stimulates clinicians to use indicators as input for QI, and is a promising first step to integrating systematic QI in daily care.
Abstract: Background In multisite trials evaluating a complex quality improvement (QI) strategy, the 'same' intervention may be implemented and adopted in different ways. In this study we therefore investigated the exposure to and experiences with a multifaceted intervention aimed at improving the quality of intensive care, and explored potential explanations for why the intervention was or was not effective. Methods We conducted a process evaluation investigating the effect of a multifaceted improvement intervention, including establishment of a local multidisciplinary QI team, educational outreach visits and periodic indicator feedback on performance measures such as intensive care unit length of stay, mechanical ventilation duration and glucose regulation. Data were collected among participants receiving the intervention. We used standardised forms to record time investment, and a questionnaire and focus group to collect data on perceived barriers and satisfaction. Results The monthly time invested per QI team member ranged from 0.6 to 8.1 h. Persistent problems were: not sharing feedback with other staff; lack of normative standards and benchmarks; inadequate case-mix adjustment; lack of knowledge on how to apply the intervention for QI; and insufficient allocated time and staff. The intervention effectively targeted the lack of trust in data quality, and was reported to motivate participants to use indicators for QI activities. Conclusions Time and resource constraints, difficulties in translating feedback into effective actions and insufficient involvement of other staff members hampered the impact of the intervention. However, our study suggests that a multifaceted feedback program stimulates clinicians to use indicators as input for QI, and is a promising first step towards integrating systematic QI into daily care.

36 citations

Journal ArticleDOI
TL;DR: This paper describes some of the quality improvement techniques used in medicine, which include selecting the 'package' of clinical actions to implement, identifying subsidiary actions to achieve the improvement aim, designing the implementation strategy and ways to incentivise QI.
Abstract: Modern medicine is complex and delivered by interdependent teams. Conscious redesign of the way in which these teams interact can contribute to improving the quality of care by reducing practice variation. This requires techniques that are different to those used for individual patient care. In this paper, we describe some of these quality improvement (QI) techniques. The first section deals with the identification of practice variation as the starting point of a systematic QI endeavour. This involves collecting data in multiple centres on a set of quality indicators as well as on case-mix variables that are thought to affect those indicators. Reporting the collected indicator data in longitudinal run charts supports teams in monitoring the effect of their QI effort. After identifying the opportunities for improvement, the second section discusses how to reduce practice variation. This includes selecting the ‘package’ of clinical actions to implement, identifying subsidiary actions to achieve the improvement aim, designing the implementation strategy and ways to incentivise QI.

34 citations

Journal ArticleDOI
TL;DR: How unit-specific dashboards are being used to monitor performance and drive quality improvement efforts from the perspectives of nurses and unit managers in Toronto, Canada is highlighted.
Abstract: Introduction Performance data can be used to monitor and guide interventions aimed at improving the quality and safety of patient care. To use performance data effectively, nurses need to understand how to interpret and use data in meaningful ways to guide practice. Dashboards are interactive computerised tools that display performance data. In one large, urban teaching hospital in Toronto, Canada, unit-specific dashboards were implemented across the organisation. Methods A qualitative study was undertaken to explore the perceptions and experiences of front-line nurses and managers associated with the implementation of a unit-level dashboard. Six units were selected to participate in the study. Data were analysed using a directed content analysis approach. Results The sample included 56 study participants: 51 front-line nurses and 5 unit managers. Three key themes emerged around nurses' and unit managers' perspectives on the implementation of unit-specific dashboards. Nurses and managers described the Care Utilising Evidence dashboard as a visual tool that displayed data on the impact of the nursing care provided to patients. The tool was also used by nurses and managers to keep track of processes of care and patient outcomes and experiences at a unit level. Further, nurses were able to use performance data to identify quality care improvements specific to their unit. Conclusions The results highlight how unit-specific dashboards are being used to monitor performance and drive quality improvement efforts from the perspectives of nurses and unit managers. In practice, nurse leaders may consider investing in dashboards as a quality improvement strategy to optimise the use of performance data at their organisations.

33 citations

Journal ArticleDOI
TL;DR: Audit and feedback helps health professionals to work on aspects for which improvement is recommended, and the limited effects typically found by audit and feedback studies are likely predominantly caused by barriers to translation of intentions into actual change in clinical practice.
Abstract: Audit and feedback aims to guide health professionals in improving aspects of their practice that need it most. Evidence suggests that feedback fails to increase the accuracy of professional perceptions about clinical performance, which likely reduces audit and feedback effectiveness. This study investigates health professionals' perceptions about their clinical performance and the influence of feedback on their intentions to change practice. We conducted an online laboratory experiment guided by Control Theory with 72 intensive care professionals from 21 units. For each of four new pain management indicators, we collected professionals' perceptions about their clinical performance; peer performance; targets; and improvement intentions before and after receiving first-time feedback. An electronic audit and feedback dashboard provided the ICU's own performance, median and top 10% peer performance, and improvement recommendations. The experiment took place approximately 1 month before units enrolled into a cluster-randomised trial assessing the impact of adding a toolbox with suggested actions and materials to improve intensive care pain management. During the experiment, the toolbox was inaccessible; all participants accessed the same version of the dashboard. We analysed 288 observations. In 53.8% of cases, intensive care professionals overestimated their clinical performance, but in only 13.5% did they underestimate it. On average, performance was overestimated by 22.9% (on a 0–100% scale). Professionals similarly overestimated peer performance, and set targets 20.3% higher than the top performance benchmarks. In 68.4% of cases, intentions to improve practice were consistent with actual gaps in performance, even before professionals had received feedback; this increased to 79.9% after receiving feedback (odds ratio, 2.41; 95% CI, 1.53 to 3.78). However, in 56.3% of cases, professionals still wanted to improve care aspects at which they were already top performers. Conversely, in 8.3% of cases, they lacked improvement intentions because they did not consider indicators important, did not trust the data, or deemed benchmarks unrealistic. Audit and feedback helps health professionals to work on aspects for which improvement is recommended. Given the abundance of professionals' prior good improvement intentions, the limited effects typically found by audit and feedback studies are likely predominantly caused by barriers to translating intentions into actual change in clinical practice. ClinicalTrials.gov NCT02922101. Registered 26 September 2016.

32 citations

Journal ArticleDOI
TL;DR: In the context of ICUs participating in a national registry, applying a multifaceted activating performance feedback strategy did not lead to better patient outcomes than only receiving periodical registry reports.
Abstract: Objective: To assess the impact of applying a multifaceted activating performance feedback strategy on intensive care patient outcomes compared with passively receiving benchmark reports. Design: The Information Feedback on Quality Indicators study was a cluster randomized trial, running from February 2009 to May 2011. Setting: Thirty Dutch closed-format ICUs that participated in the national registry. Study duration per ICU was sixteen months. Patients: We analyzed data on 25,552 admissions. Admissions after coronary artery bypass graft surgery were excluded. Intervention: The intervention aimed to activate ICUs to undertake quality improvement initiatives by formalizing local responsibility for acting on performance feedback, and supporting them with increasing the impact of their improvement efforts. Therefore, intervention ICUs established a local, multidisciplinary quality improvement team. During one year, this team received two educational outreach visits, monthly reports to monitor performance over time, and extended, quarterly benchmark reports. Control ICUs only received four standard quarterly benchmark reports. Measurements and Results: The extent to which the intervention was implemented in daily practice varied considerably among intervention ICUs: the average monthly time investment per quality improvement team member was 4.1 hours (SD, 2.3; range, 0.6-8.1); the average number of monthly meetings per quality improvement team was 5.7 (SD, 4.5; range, 0-12). ICU length of stay did not significantly reduce after 1 year in intervention units compared with controls (hazard ratio, 1.02 [95% CI, 0.92-1.12]). Furthermore, the strategy had no statistically significant impact on any of the secondary measures (duration of mechanical ventilation, proportion of out-of-range glucose measurements, and all-cause hospital mortality).
Conclusions: In the context of ICUs participating in a national registry, applying a multifaceted activating performance feedback strategy did not lead to better patient outcomes than only receiving periodical registry reports.

29 citations

References
Journal ArticleDOI
TL;DR: A class of generalized estimating equations (GEEs) for the regression parameters is proposed, extending those used in quasi-likelihood methods; the solutions are consistent and asymptotically Gaussian even when the time dependence is misspecified, as is often expected.
Abstract: Longitudinal data sets are comprised of repeated observations of an outcome and a set of covariates for each of many subjects. One objective of statistical analysis is to describe the marginal expectation of the outcome variable as a function of the covariates while accounting for the correlation among the repeated observations for a given subject. This paper proposes a unifying approach to such analysis for a variety of discrete and continuous outcomes. A class of generalized estimating equations (GEEs) for the regression parameters is proposed. The equations are extensions of those used in quasi-likelihood (Wedderburn, 1974, Biometrika 61, 439-447) methods. The GEEs have solutions which are consistent and asymptotically Gaussian even when the time dependence is misspecified as we often expect. A consistent variance estimate is presented. We illustrate the use of the GEE approach with longitudinal data from a study of the effect of mothers' stress on children's morbidity.

7,080 citations


"Evaluating the effectiveness of a t..." refers methods in this paper

  • ...To account for potential correlation of outcomes within ICUs, we will use generalized estimation equations with exchangeable correlation [34-36]....


Journal ArticleDOI
24 Mar 2010-BMJ
TL;DR: This update of the CONSORT statement improves the wording and clarity of the previous checklist and incorporates recommendations related to topics that have only recently received recognition, such as selective outcome reporting bias.
Abstract: Overwhelming evidence shows the quality of reporting of randomised controlled trials (RCTs) is not optimal. Without transparent reporting, readers cannot judge the reliability and validity of trial findings nor extract information for systematic reviews. Recent methodological analyses indicate that inadequate reporting and design are associated with biased estimates of treatment effects. Such systematic error is seriously damaging to RCTs, which are considered the gold standard for evaluating interventions because of their ability to minimise or avoid bias. A group of scientists and editors developed the CONSORT (Consolidated Standards of Reporting Trials) statement to improve the quality of reporting of RCTs. It was first published in 1996 and updated in 2001. The statement consists of a checklist and flow diagram that authors can use for reporting an RCT. Many leading medical journals and major international editorial groups have endorsed the CONSORT statement. The statement facilitates critical appraisal and interpretation of RCTs. During the 2001 CONSORT revision, it became clear that explanation and elaboration of the principles underlying the CONSORT statement would help investigators and others to write or appraise trial reports. A CONSORT explanation and elaboration article was published in 2001 alongside the 2001 version of the CONSORT statement. After an expert meeting in January 2007, the CONSORT statement has been further revised and is published as the CONSORT 2010 Statement. This update improves the wording and clarity of the previous checklist and incorporates recommendations related to topics that have only recently received recognition, such as selective outcome reporting bias. This explanatory and elaboration document-intended to enhance the use, understanding, and dissemination of the CONSORT statement-has also been extensively revised. 
It presents the meaning and rationale for each new and updated checklist item, providing examples of good reporting and, where possible, references to relevant empirical studies. Several examples of flow diagrams are included. The CONSORT 2010 Statement, this revised explanatory and elaboration document, and the associated website (www.consort-statement.org) should be helpful resources to improve reporting of randomised trials.

5,957 citations

Journal ArticleDOI
TL;DR: This article reviews methods for assessing the quality of medical care, focusing almost exclusively on the evaluation of the medical care process at the level of physician-patient interaction.
Abstract: This paper is an attempt to describe and evaluate current methods for assessing the quality of medical care and to suggest some directions for further study. It is concerned with methods rather than findings, and with an evaluation of methodology in general, rather than a detailed critique of methods in specific studies. This is not an exhaustive review of the pertinent literature. Certain key studies, of course, have been included. Other papers have been selected only as illustrative examples. Those omitted are not, for that reason, less worthy of note. This paper deals almost exclusively with the evaluation of the medical care process at the level of physician-patient interaction. It excludes, therefore, processes primarily related to the effective delivery of medical care at the community level. Moreover, this paper is not concerned with the administrative aspects of quality control. Many of the studies reviewed here have arisen out of the urgent need to evaluate and control the quality of care in organized programs of medical care. Nevertheless, these studies will be discussed only in terms of their contribution to methods of assessment and not in terms of their broader social goals. The author has remained, by and large, in the familiar territory of care provided by physicians and has avoided incursions into other types of

5,020 citations

Journal ArticleDOI
TL;DR: The results indicated that feedback may be more effective when baseline performance is low, when the source is a supervisor or colleague, and when it is provided more than once; the review also assessed the role of context and the targeted clinical behaviour.
Abstract: Background Audit and feedback continues to be widely used as a strategy to improve professional practice. It appears logical that healthcare professionals would be prompted to modify their practice if given feedback that their clinical practice was inconsistent with that of their peers or accepted guidelines. Yet, audit and feedback has not been found to be consistently effective. Objectives To assess the effects of audit and feedback on the practice of healthcare professionals and patient outcomes. Search strategy We searched the Cochrane Effective Practice and Organisation of Care Group's register up to January 2001. This was supplemented with searches of MEDLINE and reference lists, which did not yield additional relevant studies. Selection criteria Randomised trials of audit and feedback (defined as any summary of clinical performance over a specified period of time) that reported objectively measured professional practice in a healthcare setting or healthcare outcomes. Data collection and analysis Two reviewers independently extracted data and assessed study quality. Quantitative (meta-regression), visual and qualitative analyses were undertaken. Main results We included 85 studies, 48 of which have been added to the previous version of this review. There were 52 comparisons of dichotomous outcomes from 47 trials with over 3500 health professionals that compared audit and feedback to no intervention. The adjusted RDs of non-compliance with desired practice varied from 0.09 (a 9% absolute increase in non-compliance) to 0.71 (a 71% decrease in non-compliance) (median = 0.07, inter-quartile range = 0.02 to 0.11). The one factor that appeared to predict the effectiveness of audit and feedback across studies was baseline non-compliance with recommended practice. Reviewer's conclusions Audit and feedback can be effective in improving professional practice. When it is effective, the effects are generally small to moderate. 
The absolute effects of audit and feedback are more likely to be larger when baseline adherence to recommended practice is low.

4,946 citations


"Evaluating the effectiveness of a t..." refers background in this paper

  • ...Although feedback is potentially effective in improving the quality of care [14-16], merely sending feedback reports is no guarantee that performance data are used as input for systematic quality improvement (QI)....


  • ...However, the number of studies comparing the effect of feedback alone with the effect of feedback combined with other strategies was limited and relatively few evaluations regarded the ICU domain [14,42]....


  • ...The effectiveness of feedback as a QI strategy has often been evaluated, as indicated by the large number of included studies in systematic reviews on this subject [14,15]....


Journal ArticleDOI
18 Mar 2004-BMJ
TL;DR: This paper provides updated and extended guidance, based on the 2010 version of the CONSORT statement and the 2008 CONSORT statement for the reporting of abstracts, on how to report the results of cluster randomised controlled trials.
Abstract: The Consolidated Standards of Reporting Trials (CONSORT) statement was developed to improve the reporting of randomised controlled trials. It was initially published in 1996 and focused on the reporting of parallel group randomised controlled trials. The statement was revised in 2001, with a further update in 2010. A separate CONSORT statement for the reporting of abstracts was published in 2008. In earlier papers we considered the implications of the 2001 version of the CONSORT statement for the reporting of cluster randomised trials. In this paper we provide updated and extended guidance, based on the 2010 version of the CONSORT statement and the 2008 CONSORT statement for the reporting of abstracts.

2,655 citations