Showing papers in "Implementation Science in 2017"


Journal ArticleDOI
TL;DR: This guide offers practical guidance for those who wish to apply the Theoretical Domains Framework to assess implementation problems and support intervention design, and provides a brief rationale for using a theoretical approach to investigate and address implementation problems.
Abstract: Implementing new practices requires changes in the behaviour of relevant actors, and this is facilitated by understanding of the determinants of current and desired behaviours. The Theoretical Domains Framework (TDF) was developed by a collaboration of behavioural scientists and implementation researchers who identified theories relevant to implementation and grouped constructs from these theories into domains. The collaboration aimed to provide a comprehensive, theory-informed approach to identify determinants of behaviour. The first version was published in 2005, and a subsequent version following a validation exercise was published in 2012. This guide offers practical guidance for those who wish to apply the TDF to assess implementation problems and support intervention design. It presents a brief rationale for using a theoretical approach to investigate and address implementation problems, summarises the TDF and its development, and describes how to apply the TDF to achieve implementation objectives. Examples from the implementation research literature are presented to illustrate relevant methods and practical considerations. Researchers from Canada, the UK and Australia attended a 3-day meeting in December 2012 to build an international collaboration among researchers and decision-makers interested in advancing the use of the TDF. The participants were experienced in using the TDF to assess implementation problems, design interventions, and/or understand change processes. This guide is an output of the meeting and also draws on the authors' collective experience. Examples from the implementation research literature judged by authors to be representative of specific applications of the TDF are included in this guide. We explain and illustrate methods, with a focus on qualitative approaches, for selecting and specifying target behaviours key to implementation, selecting the study design, deciding the sampling strategy, developing study materials, collecting and analysing data, and reporting findings of TDF-based studies. Areas for development include methods for triangulating data, e.g. from interviews, questionnaires and observation, and methods for designing interventions based on TDF-based problem analysis. We offer this guide to the implementation community to assist in the application of the TDF to achieve implementation objectives. Benefits of using the TDF include the provision of a theoretical basis for implementation studies, good coverage of potential reasons for slow diffusion of evidence into practice and a method for progressing from theory-based investigation to intervention.

1,522 citations


Journal ArticleDOI
TL;DR: Three new measures, the Acceptability of Intervention Measure (AIM), Intervention Appropriateness Measure (IAM), and Feasibility of Intervention Measure (FIM), were developed and psychometrically assessed and demonstrate promising psychometric properties.
Abstract: Implementation outcome measures are essential for monitoring and evaluating the success of implementation efforts. Yet, currently available measures lack conceptual clarity and have largely unknown reliability and validity. This study developed and psychometrically assessed three new measures: the Acceptability of Intervention Measure (AIM), Intervention Appropriateness Measure (IAM), and Feasibility of Intervention Measure (FIM). Thirty-six implementation scientists and 27 mental health professionals assigned 31 items to the constructs and rated their confidence in their assignments. The Wilcoxon one-sample signed rank test was used to assess substantive and discriminant content validity. Exploratory and confirmatory factor analysis (EFA and CFA) and Cronbach alphas were used to assess the validity of the conceptual model. Three hundred twenty-six mental health counselors read one of six randomly assigned vignettes depicting a therapist contemplating adopting an evidence-based practice (EBP). Participants used 15 items to rate the therapist’s perceptions of the acceptability, appropriateness, and feasibility of adopting the EBP. CFA and Cronbach alphas were used to refine the scales, assess structural validity, and assess reliability. Analysis of variance (ANOVA) was used to assess known-groups validity. Finally, half of the counselors were randomly assigned to receive the same vignette and the other half the opposite vignette; and all were asked to re-rate acceptability, appropriateness, and feasibility. Pearson correlation coefficients were used to assess test-retest reliability and linear regression to assess sensitivity to change. All but five items exhibited substantive and discriminant content validity. A trimmed CFA with five items per construct exhibited acceptable model fit (CFI = 0.98, RMSEA = 0.08) and high factor loadings (0.79 to 0.94). The alphas for 5-item scales were between 0.87 and 0.89. Scale refinement based on measure-specific CFAs and Cronbach alphas using vignette data produced 4-item scales (α’s from 0.85 to 0.91). A three-factor CFA exhibited acceptable fit (CFI = 0.96, RMSEA = 0.08) and high factor loadings (0.75 to 0.89), indicating structural validity. ANOVA showed significant main effects, indicating known-groups validity. Test-retest reliability coefficients ranged from 0.73 to 0.88. Regression analysis indicated each measure was sensitive to change in both directions. The AIM, IAM, and FIM demonstrate promising psychometric properties. Predictive validity assessment is planned.

736 citations
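
The psychometric workflow in this abstract combines internal-consistency estimates (Cronbach's alpha), factor analysis, test-retest correlations, and sensitivity analyses. As a rough illustration of two of those steps only, the sketch below computes Cronbach's alpha for a four-item scale and a test-retest Pearson correlation on synthetic ratings; it is not the authors' code, and all data and item names are hypothetical.

```python
# Minimal sketch (not the authors' code): Cronbach's alpha for a 4-item scale
# and a test-retest Pearson correlation, using synthetic 1-5 ratings.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 100 respondents x 4 acceptability items.
items = rng.integers(1, 6, size=(100, 4)).astype(float)

def cronbach_alpha(item_scores: np.ndarray) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = item_scores.shape[1]
    item_vars = item_scores.var(axis=0, ddof=1)
    total_var = item_scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Test-retest reliability: correlate scale scores from two administrations.
time1 = items.mean(axis=1)
time2 = time1 + rng.normal(0, 0.3, size=100)   # simulated second rating
retest_r = np.corrcoef(time1, time2)[0, 1]

print(f"Cronbach's alpha: {cronbach_alpha(items):.2f}")
print(f"Test-retest r:    {retest_r:.2f}")
```

The confirmatory factor analyses and known-groups ANOVA reported in the abstract would require additional tooling (e.g., a structural equation modelling package) beyond this sketch.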


Journal ArticleDOI
TL;DR: The CICI framework addresses and graphically presents context, implementation and setting in an integrated way and aims at simplifying and structuring complexity in order to advance the understanding of whether and how interventions work.
Abstract: The effectiveness of complex interventions, as well as their success in reaching relevant populations, is critically influenced by their implementation in a given context. Current conceptual frameworks often fail to address context and implementation in an integrated way and, where addressed, they tend to focus on organisational context and are mostly concerned with specific health fields. Our objective was to develop a framework to facilitate the structured and comprehensive conceptualisation and assessment of context and implementation of complex interventions. The Context and Implementation of Complex Interventions (CICI) framework was developed in an iterative manner and underwent extensive application. An initial framework based on a scoping review was tested in rapid assessments, revealing inconsistencies with respect to the underlying concepts. Thus, pragmatic utility concept analysis was undertaken to advance the concepts of context and implementation. Based on these findings, the framework was revised and applied in several systematic reviews, one health technology assessment (HTA) and one applicability assessment of very different complex interventions. Lessons learnt from these applications and from peer review were incorporated, resulting in the CICI framework. The CICI framework comprises three dimensions—context, implementation and setting—which interact with one another and with the intervention dimension. Context comprises seven domains (i.e., geographical, epidemiological, socio-cultural, socio-economic, ethical, legal, political); implementation consists of five domains (i.e., implementation theory, process, strategies, agents and outcomes); setting refers to the specific physical location in which the intervention is put into practice. The intervention and the way it is implemented in a given setting and context can occur on a micro, meso and macro level. Tools to operationalise the framework comprise a checklist, data extraction tools for qualitative and quantitative reviews and a consultation guide for applicability assessments. The CICI framework addresses and graphically presents context, implementation and setting in an integrated way. It aims at simplifying and structuring complexity in order to advance our understanding of whether and how interventions work. The framework can be applied in systematic reviews and HTA as well as primary research and facilitate communication among teams of researchers and with various stakeholders.

520 citations


Journal ArticleDOI
TL;DR: An approach to using the Consolidated Framework for Implementation Research (CFIR) to guide systematic research that supports rapid-cycle evaluation of the implementation of health care delivery interventions and produces actionable evaluation findings intended to improve implementation in a timely manner is presented.
Abstract: Much research does not address the practical needs of stakeholders responsible for introducing health care delivery interventions into organizations working to achieve better outcomes. In this article, we present an approach to using the Consolidated Framework for Implementation Research (CFIR) to guide systematic research that supports rapid-cycle evaluation of the implementation of health care delivery interventions and produces actionable evaluation findings intended to improve implementation in a timely manner. To present our approach, we describe a formative cross-case qualitative investigation of 21 primary care practices participating in the Comprehensive Primary Care (CPC) initiative, a multi-payer supported primary care practice transformation intervention led by the Centers for Medicare and Medicaid Services. Qualitative data include observational field notes and semi-structured interviews with primary care practice leadership, clinicians, and administrative and medical support staff. We use intervention-specific codes, and CFIR constructs to reduce and organize the data to support cross-case analysis of patterns of barriers and facilitators relating to different CPC components. Using the CFIR to guide data collection, coding, analysis, and reporting of findings supported a systematic, comprehensive, and timely understanding of barriers and facilitators to practice transformation. Our approach to using the CFIR produced actionable findings for improving implementation effectiveness during this initiative and for identifying improvements to implementation strategies for future practice transformation efforts. The CFIR is a useful tool for guiding rapid-cycle evaluation of the implementation of practice transformation initiatives. Using the approach described here, we systematically identified where adjustments and refinements to the intervention could be made in the second year of the 4-year intervention. We think the approach we describe has broad application and encourage others to use the CFIR, along with intervention-specific codes, to guide the efficient and rigorous analysis of rich qualitative data. NCT02318108

372 citations


Journal ArticleDOI
TL;DR: A comprehensive definition of sustainability was developed based on definitions already used in the literature to identify five key sustainability constructs, which can be used as the basis for future research on sustainability.
Abstract: Understanding sustainability is one of the significant implementation science challenges. One of the big challenges in researching sustainability is the lack of consistent definitions in the literature. Most implementation studies do not present a definition of sustainability, even when assessing sustainability. The aim of the current study was to systematically develop a comprehensive definition of sustainability based on definitions already used in the literature. We searched for knowledge syntheses of sustainability and abstracted sustainability definitions from the articles identified through any relevant systematic and scoping reviews. The constructs in the abstracted sustainability definitions were mapped to an existing definition. The comprehensive definition of sustainability was revised to include emerging constructs. We identified four knowledge syntheses of sustainability, which identified 209 original articles. Of the 209 articles, 24 (11.5%) included a definition of sustainability. These definitions were mapped to three constructs from an existing definition, and nine new constructs emerged. We reviewed all constructs and created a revised definition: (1) after a defined period of time, (2) a program, clinical intervention, and/or implementation strategies continue to be delivered and/or (3) individual behavior change (i.e., clinician, patient) is maintained; (4) the program and individual behavior change may evolve or adapt while (5) continuing to produce benefits for individuals/systems. All 24 definitions were remapped to the comprehensive definition (percent agreement among three coders was 94%). Of the 24 definitions, 17 described the continued delivery of a program (70.8%), 17 mentioned continued outcomes (70.8%), 13 mentioned time (54.2%), 8 addressed the individual maintenance of a behavior change (33.3%), and 6 described the evolution or adaptation (25.0%). We drew from over 200 studies to identify 24 existing definitions of sustainability. Based on these definitions, we identified five key sustainability constructs, which can be used as the basis for future research on sustainability. Our next step is to identify sustainability frameworks and develop a meta-framework using a concept mapping approach to consolidate the factors and considerations across sustainability frameworks.

288 citations
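
The 94% figure quoted in this abstract is a simple inter-coder agreement statistic for the remapping exercise. The authors do not describe their exact calculation, so the snippet below shows only one common way such a number can be computed, average pairwise percent agreement across three coders; the construct labels and codings are hypothetical.

```python
# Illustrative sketch only: average pairwise percent agreement among three coders
# assigning each sustainability definition to a construct. Labels are hypothetical.
from itertools import combinations

# Each row: the construct assigned to one definition by coders A, B, C.
codings = [
    ("delivery", "delivery", "delivery"),
    ("outcomes", "outcomes", "time"),
    ("behaviour", "behaviour", "behaviour"),
    # ... one row per definition
]

def percent_agreement(rows):
    """Average, over items, of the share of coder pairs that agree."""
    per_item = []
    for row in rows:
        pairs = list(combinations(row, 2))
        per_item.append(sum(a == b for a, b in pairs) / len(pairs))
    return 100 * sum(per_item) / len(per_item)

print(f"Percent agreement: {percent_agreement(codings):.0f}%")
```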


Journal ArticleDOI
TL;DR: A system for classifying implementation strategies that builds on Proctor and colleagues’ (2013) reporting guidelines, which recommend that authors not only name and define their implementation strategies but also specify who enacted the strategy and the level and determinants that were targeted, is offered.
Abstract: Strategies are central to the National Institutes of Health's definition of implementation research as "the study of strategies to integrate evidence-based interventions into specific settings." Multiple scholars have proposed lists of the strategies used in implementation research and practice, which they increasingly are classifying under the single term "implementation strategies." We contend that classifying all strategies under a single term leads to confusion, impedes synthesis across studies, and limits advancement of the full range of strategies of importance to implementation. To address this concern, we offer a system for classifying implementation strategies that builds on Proctor and colleagues' (2013) reporting guidelines, which recommend that authors not only name and define their implementation strategies but also specify who enacted the strategy (i.e., the actor) and the level and determinants that were targeted (i.e., the action targets). We build on Wandersman and colleagues' Interactive Systems Framework to distinguish strategies based on whether they are enacted by actors functioning as part of a Delivery, Support, or Synthesis and Translation System. We build on Damschroder and colleagues' Consolidated Framework for Implementation Research to distinguish the levels that strategies target (intervention, inner setting, outer setting, individual, and process). We then draw on numerous resources to identify determinants, which are conceptualized as modifiable factors that prevent or enable the adoption and implementation of evidence-based interventions. Identifying actors and targets resulted in five conceptually distinct classes of implementation strategies: dissemination, implementation process, integration, capacity-building, and scale-up. In our descriptions of each class, we identify the level of the Interactive Systems Framework at which the strategy is enacted (actors), the level and determinants targeted (action targets), and outcomes used to assess strategy effectiveness. We illustrate how each class would apply to efforts to improve colorectal cancer screening rates in Federally Qualified Health Centers. Structuring strategies into classes will aid reporting of implementation research findings, alignment of strategies with relevant theories, synthesis of findings across studies, and identification of potential gaps in current strategy listings. Organizing strategies into classes also will assist users in locating the strategies that best match their needs.

287 citations


Journal ArticleDOI
TL;DR: It is argued that while CBPR and IKT both have the potential to contribute evidence to implementation science and practices for collaborative research, clarity for the purpose of the research—social change or application—is a critical feature in the selection of an appropriate collaborative approach to build knowledge.
Abstract: Better use of research evidence (one form of “knowledge”) in health systems requires partnerships between researchers and those who contend with the real-world needs and constraints of health systems. Community-based participatory research (CBPR) and integrated knowledge translation (IKT) are research approaches that emphasize the importance of creating partnerships between researchers and the people for whom the research is ultimately meant to be of use (“knowledge users”). There exist poor understandings of the ways in which these approaches converge and diverge. Better understanding of the similarities and differences between CBPR and IKT will enable researchers to use these approaches appropriately and to leverage best practices and knowledge from each. The co-creation of knowledge conveys promise of significant social impacts, and further understandings of how to engage and involve knowledge users in research are needed. We examine the histories and traditions of CBPR and IKT, as well as their points of convergence and divergence. We critically evaluate the ways in which both have the potential to contribute to the development and integration of knowledge in health systems. As distinct research traditions, the underlying drivers and rationale for CBPR and IKT have similarities and differences across the areas of motivation, social location, and ethics; nevertheless, the practices of CBPR and IKT converge upon a common aim: the co-creation of knowledge that is the result of knowledge user and researcher expertise. We argue that while CBPR and IKT both have the potential to contribute evidence to implementation science and practices for collaborative research, clarity for the purpose of the research—social change or application—is a critical feature in the selection of an appropriate collaborative approach to build knowledge. CBPR and IKT bring distinct strengths to a common aim: to foster democratic processes in the co-creation of knowledge. As research approaches, they create opportunities to challenge assumptions about for whom, how, and what is defined as knowledge, and to develop and integrate research findings into health systems. When used appropriately, CBPR and IKT both have the potential to contribute to and advance implementation science about the conduct of collaborative health systems research.

274 citations


Journal ArticleDOI
TL;DR: The results suggest that the selection of implementation theories is often haphazard or driven by convenience or prior exposure, and that variation in approaches to selecting theory warns against prescriptive guidance for theory selection.
Abstract: Theories provide a synthesizing architecture for implementation science. The underuse, superficial use, and misuse of theories pose a substantial scientific challenge for implementation science and may relate to challenges in selecting from the many theories in the field. Implementation scientists may benefit from guidance for selecting a theory for a specific study or project. Understanding how implementation scientists select theories will help inform efforts to develop such guidance. Our objective was to identify which theories implementation scientists use, how they use theories, and the criteria used to select theories. We identified initial lists of uses and criteria for selecting implementation theories based on seminal articles and an iterative consensus process. We incorporated these lists into a self-administered survey for completion by self-identified implementation scientists. We recruited potential respondents at the 8th Annual Conference on the Science of Dissemination and Implementation in Health and via several international email lists. We used frequencies and percentages to report results. Two hundred twenty-three implementation scientists from 12 countries responded to the survey. They reported using more than 100 different theories spanning several disciplines. Respondents reported using theories primarily to identify implementation determinants, inform data collection, enhance conceptual clarity, and guide implementation planning. Of the 19 criteria presented in the survey, the criteria used by the largest proportion of respondents to select theory included analytic level (58%), logical consistency/plausibility (56%), empirical support (53%), and description of a change process (54%). The criteria used by the fewest respondents included fecundity (10%), uniqueness (12%), and falsifiability (15%). Implementation scientists use a large number of criteria to select theories, but there is little consensus on which are most important. Our results suggest that the selection of implementation theories is often haphazard or driven by convenience or prior exposure. Variation in approaches to selecting theory warns against prescriptive guidance for theory selection. Instead, implementation scientists may benefit from considering the criteria that we propose in this paper and using them to justify their theory selection. Future research should seek to refine the criteria for theory selection to promote more consistent and appropriate use of theory in implementation science.

262 citations


Journal ArticleDOI
TL;DR: It is argued that implementation of an EBI in a moderately different setting or with a different population can sometimes “borrow strength” from evidence of impact in a prior effectiveness trial.
Abstract: Implementing treatments and interventions with demonstrated effectiveness is critical for improving patient health outcomes at a reduced cost. When an evidence-based intervention (EBI) is implemented with fidelity in a setting that is very similar to the setting wherein it was previously found to be effective, it is reasonable to anticipate similar benefits of that EBI. However, one goal of implementation science is to expand the use of EBIs as broadly as is feasible and appropriate in order to foster the greatest public health impact. When implementing an EBI in a novel setting, or targeting novel populations, one must consider whether there is sufficient justification that the EBI would have similar benefits to those found in earlier trials. In this paper, we introduce a new concept for implementation called “scaling-out” when EBIs are adapted either to new populations or new delivery systems, or both. Using existing external validity theories and multilevel mediation modeling, we provide a logical framework for determining what new empirical evidence is required for an intervention to retain its evidence-based standard in this new context. The motivating questions are whether scale-out can reasonably be expected to produce population-level effectiveness as found in previous studies, and what additional empirical evaluations would be necessary to test for this short of an entirely new effectiveness trial. We present evaluation options for assessing whether scaling-out results in the ultimate health outcome of interest. In scaling to health or service delivery systems or population/community contexts that are different from the setting where the EBI was originally tested, there are situations where a shorter timeframe of translation is possible. We argue that implementation of an EBI in a moderately different setting or with a different population can sometimes “borrow strength” from evidence of impact in a prior effectiveness trial. The collection of additional empirical data is deemed necessary by the nature and degree of adaptations to the EBI and the context. Our argument in this paper is conceptual, and we propose formal empirical tests of mediational equivalence in a follow-up paper.

225 citations


Journal ArticleDOI
TL;DR: The Human Behaviour-Change Project will use Artificial Intelligence and Machine Learning to develop and evaluate a ‘Knowledge System’ that automatically extracts, synthesises and interprets findings from BCI evaluation reports to generate new insights about behaviour change and improve prediction of intervention effectiveness.
Abstract: The project is funded by a Wellcome Trust collaborative award [The Human Behaviour-Change Project: Building the science of behaviour change for complex intervention development’, 201,524/Z/16/Z]. During the preparation of the manuscript RW’s salary was funded by Cancer Research UK.

203 citations


Journal ArticleDOI
TL;DR: A systematic review was undertaken of studies indexed in MEDLINE/PubMed, PsycInfo, Web of Science, or Google Scholar that mentioned both the CFIR and the TDF, were written in English, were peer-reviewed, and reported either a protocol or results of an empirical study.
Abstract: Over 60 implementation frameworks exist. Using multiple frameworks may help researchers to address multiple study purposes, levels, and degrees of theoretical heritage and operationalizability; however, using multiple frameworks may result in unnecessary complexity and redundancy if doing so does not address study needs. The Consolidated Framework for Implementation Research (CFIR) and the Theoretical Domains Framework (TDF) are both well-operationalized, multi-level implementation determinant frameworks derived from theory. As such, the rationale for using the frameworks in combination (i.e., CFIR + TDF) is unclear. The objective of this systematic review was to elucidate the rationale for using CFIR + TDF by (1) describing studies that have used CFIR + TDF, (2) describing how they used CFIR + TDF, and (3) describing their stated rationale for using CFIR + TDF. We undertook a systematic review of MEDLINE/PubMed, PsycInfo, Web of Science, and Google Scholar to identify studies that mentioned both the CFIR and the TDF, were written in English, were peer-reviewed, and reported either a protocol or results of an empirical study. We then abstracted data into a matrix and analyzed it qualitatively, identifying salient themes. We identified five protocols and seven completed studies that used CFIR + TDF. CFIR + TDF was applied to studies in several countries, to a range of healthcare interventions, and at multiple intervention phases; used many designs, methods, and units of analysis; and assessed a variety of outcomes. Three studies indicated that using CFIR + TDF addressed multiple study purposes. Six studies indicated that using CFIR + TDF addressed multiple conceptual levels. Four studies did not explicitly state their rationale for using CFIR + TDF. Differences in the purposes that authors of the CFIR (e.g., comprehensive set of implementation determinants) and the TDF (e.g., intervention development) propose help to justify the use of CFIR + TDF. Given that the CFIR and the TDF are both multi-level frameworks, the rationale that using CFIR + TDF is needed to address multiple conceptual levels may reflect potentially misleading conventional wisdom. On the other hand, using CFIR + TDF may more fully define the multi-level nature of implementation. To avoid concerns about unnecessary complexity and redundancy, scholars who use CFIR + TDF and combinations of other frameworks should specify how the frameworks contribute to their study. PROSPERO CRD42015027615

Journal ArticleDOI
TL;DR: There is agreement across methods on four tasks that need to be completed when designing individual-level interventions: identifying barriers, selecting intervention components, using theory, and engaging end-users.
Abstract: Systematic reviews consistently indicate that interventions to change healthcare professional (HCP) behaviour are haphazardly designed and poorly specified. Clarity about methods for designing and specifying interventions is needed. The objective of this review was to identify published methods for designing interventions to change HCP behaviour. A search of MEDLINE, Embase, and PsycINFO was conducted from 1996 to April 2015. Using inclusion/exclusion criteria, a broad screen of abstracts by one rater was followed by a strict screen of full text for all potentially relevant papers by three raters. An inductive approach was first applied to the included studies to identify commonalities and differences between the descriptions of methods across the papers. Based on this process and knowledge of related literatures, we developed a data extraction framework that included, for example, the level of change (e.g. individual versus organization); context of development; a brief description of the method; and tasks included in the method (e.g. barrier identification, component selection, use of theory). A total of 3966 titles and abstracts and 64 full-text papers were screened to yield 15 papers included in the review, each outlining one design method. All of the papers reported methods developed within a specific context. Thirteen papers included barrier identification and 13 included linking barriers to intervention components, although these were not the same 13 papers. Thirteen papers targeted individual HCPs, with only one paper targeting change across individual, organization, and system levels. The use of theory and user engagement were included in 13/15 and 13/15 papers, respectively. There is agreement across methods on four tasks that need to be completed when designing individual-level interventions: identifying barriers, selecting intervention components, using theory, and engaging end-users. Individual methods also include further tasks. Examples of methods for designing organisation- and system-level interventions were limited. Further analysis of design tasks could facilitate the development of detailed guidelines for designing interventions.

Journal ArticleDOI
TL;DR: Factors such as clinicians’ attitudes towards scientific evidence and guidelines, the quality of inter-disciplinary relationships, and an organizational ethos of transparency and accountability need to be considered when exploring the readiness of a hospital to adopt CDSSs.
Abstract: Advanced Computerized Decision Support Systems (CDSSs) assist clinicians in their decision-making process, generating recommendations based on up-to-date scientific evidence. Although this technology has the potential to improve the quality of patient care, its mere provision does not guarantee uptake: even where CDSSs are available, clinicians often fail to adopt their recommendations. This study examines the barriers and facilitators to the uptake of an evidence-based CDSS as perceived by diverse health professionals in hospitals at different stages of CDSS adoption. This was a qualitative study conducted as part of a series of randomized controlled trials of CDSSs. The sample includes two hospitals using a CDSS and two hospitals that aim to adopt a CDSS in the future. We interviewed physicians, nurses, information technology staff, and members of the boards of directors (n = 30). We used a constant comparative approach to develop a framework for guiding implementation. We identified six clusters of experiences of, and attitudes towards, CDSSs, which we label as "positions." The six positions represent a gradient of acquisition of control over CDSSs (from low to high) and are characterized by different types of barriers to CDSS uptake. The most severe barriers (prevalent in the first positions) include clinicians' perception that the CDSSs may reduce their professional autonomy or may be used against them in the event of medical-legal controversies. Moving towards the last positions, these barriers are substituted by technical and usability problems related to the technology interface. When all barriers are overcome, CDSSs are perceived as a working tool at the service of their users, integrating clinicians' reasoning and fostering organizational learning. Barriers and facilitators to the use of CDSSs are dynamic and may exist prior to their introduction in clinical contexts; providing a static list of obstacles and facilitators, irrespective of the specific implementation phase and context, may not be sufficient or useful to facilitate uptake. Factors such as clinicians' attitudes towards scientific evidence and guidelines, the quality of inter-disciplinary relationships, and an organizational ethos of transparency and accountability need to be considered when exploring the readiness of a hospital to adopt CDSSs.

Journal ArticleDOI
David A. Chambers, Lisa Simpson, Felicia Hill-Briggs, Gila Neta, and 379 more authors (89 institutions)
TL;DR: A1 Introduction to the 8th Annual Conference on the Science of Dissemination and Implementation: Optimizing Personal and Population Health.
Abstract: A1 Introduction to the 8th Annual Conference on the Science of Dissemination and Implementation: Optimizing Personal and Population Health

Journal ArticleDOI
TL;DR: The findings suggest that clinical practice is in a constant flux of change, that each instance of unlearning and learning is merely a punctuation mark in this spectrum of change, and that change is a multi-directional process.
Abstract: Changing clinical practice is a difficult process, best illustrated by the time lag between evidence and use in practice and the extensive use of low-value care. Existing models mostly focus on the barriers to learning and implementing new knowledge. Changing clinical practice, however, includes not only the learning of new practices but also unlearning old and outmoded knowledge. There exists sparse literature regarding the unlearning that takes place at a physician level. Our research objective was to elucidate the experience of trying to abandon an outmoded clinical practice and its relation to learning a new one. We used a grounded theory-based qualitative approach to conduct our study. We conducted 30-min in-person interviews with 15 primary care physicians at the Cleveland VA Medical Center and its clinics. We used a semi-structured interview guide to standardize the interviews. Our two findings include (1) practice change disturbs the status quo equilibrium. Establishing a new equilibrium that incorporates the change may be a struggle; and (2) part of the struggle to establish a new equilibrium incorporating a practice change involves both the “evidence” itself and tensions between evidence and context. Our findings provide evidence-based support for many of the empirical unlearning models that have been adapted to healthcare. Our findings differ from these empirical models in that they refute the static and unidirectional nature of change that previous models imply. Rather, our findings suggest that clinical practice is in a constant flux of change; each instance of unlearning and learning is merely a punctuation mark in this spectrum of change. We suggest that physician unlearning models be modified to reflect the constantly changing nature of clinical practice and demonstrate that change is a multi-directional process.

Journal ArticleDOI
TL;DR: Behavior change interventions including education, training, and enablement in the context of collaborative team-based approaches are effective in changing the practice of primary healthcare professionals.
Abstract: There is a plethora of interventions and policies aimed at changing practice habits of primary healthcare professionals, but it is unclear which are the most appropriate, sustainable, and effective. We aimed to evaluate the evidence on behavior change interventions and policies directed at healthcare professionals working in primary healthcare centers. Study design: overview of reviews. Data source: MEDLINE (Ovid), Embase (Ovid), The Cochrane Library (Wiley), CINAHL (EbscoHost), and grey literature (January 2005 to July 2015). Study selection: two reviewers independently, and in duplicate, identified systematic reviews, overviews of reviews, scoping reviews, rapid reviews, and relevant health technology reports published in full-text in the English language. Data extraction and synthesis: two reviewers extracted data pertaining to the types of reviews, study designs, number of studies, demographics of the professionals enrolled, interventions, outcomes, and authors’ conclusions for the included studies. We evaluated the methodological quality of the included studies using the AMSTAR scale. For the comparative evaluation, we classified interventions according to the behavior change wheel (Michie et al.). Of 2771 citations retrieved, we included 138 reviews representing 3502 individual studies. The majority of systematic reviews (91%) investigated behavior and practice changes among family physicians. Interactive and multifaceted continuous medical education programs, training with audit and feedback, and clinical decision support systems were found to be beneficial in improving knowledge, optimizing screening rates and prescriptions, enhancing patient outcomes, and reducing adverse events. Collaborative team-based policies involving primarily family physicians, nurses, and pharmacists were found to be most effective. Available evidence suggested that environmental restructuring and modeling were effective in improving collaboration and adherence to treatment guidelines. Limited evidence suggested that nurse-led care approaches were as effective as care from general practitioners in terms of patient satisfaction in settings such as asthma, cardiovascular, and diabetes clinics, although this needs further evaluation. Evidence does not support the use of financial incentives to family physicians, especially for long-term behavior change. Behavior change interventions including education, training, and enablement in the context of collaborative team-based approaches are effective in changing the practice of primary healthcare professionals. Environmental restructuring approaches including nurse-led care and modeling need further evaluation. Financial incentives to family physicians do not influence long-term practice change.

Journal ArticleDOI
TL;DR: Very low quality evidence from controlled before-after studies suggests that care bundles may reduce the risk of negative outcomes when compared with usual care, whereas the better-quality evidence from six randomised trials is more uncertain.
Abstract: Care bundles are a set of three to five evidence-informed practices performed collectively and reliably to improve the quality of care. Care bundles are used widely across healthcare settings with the aim of preventing and managing different health conditions. This is the first systematic review designed to determine the effects of care bundles on patient outcomes and the behaviour of healthcare workers in relation to fidelity with care bundles. This systematic review is reported in line with the PRISMA statement for reporting systematic reviews and meta-analyses. A total of 5796 abstracts were retrieved through a systematic search for articles published between January 1, 2001, and February 4, 2017, in the Cochrane Central Register for Controlled Trials, MEDLINE, EMBASE, British Nursing Index, CINAHL, PsychInfo, British Library, Conference Proceeding Citation Index, and OpenGrey. Randomised trials (including cluster-randomised trials) and non-randomised studies (comprising controlled before-after studies, interrupted time series, cohort studies) of care bundles for any health condition and any healthcare setting were considered. Following the removal of duplicated studies, two reviewers independently screened 3134 records. Three authors performed data extraction independently. We compared the care bundles with usual care to evaluate the effects of care bundles on the risk of negative patient outcomes. Random-effects models were used to further explore the effects of subgroups. In total, 37 studies (6 randomised trials, 31 controlled before-after studies) were eligible for inclusion. The effect of care bundles on patient outcomes is uncertain. For randomised trial data, the pooled relative risk of negative effects between care bundle and control groups was 0.97 [95% CI 0.71 to 1.34; 2049 participants]. The relative risk of negative patient outcomes from controlled before-after studies favoured the care bundle treated groups (0.66 [95% CI 0.59 to 0.75; 119,178 participants]). However, using GRADE, we assessed the certainty of all of the evidence to be very low (downgraded for risk of bias, inconsistency, indirectness). Very low quality evidence from controlled before-after studies suggests that care bundles may reduce the risk of negative outcomes when compared with usual care. By contrast, the better quality evidence from six randomised trials is more uncertain. PROSPERO, CRD42016033175
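
The pooled relative risks quoted above come from a random-effects meta-analysis. As an illustration of how such a pooled estimate is commonly obtained (DerSimonian-Laird inverse-variance pooling of log relative risks), the sketch below runs the calculation on made-up study counts; it is not the review's analysis code.

```python
# Minimal sketch (not the review's code): DerSimonian-Laird random-effects pooling
# of study-level relative risks.
import numpy as np

# Hypothetical per-study counts: (events_bundle, n_bundle, events_control, n_control).
studies = [
    (12, 200, 20, 210),
    (30, 500, 38, 480),
    (8, 150, 15, 160),
]

log_rr, var = [], []
for eb, nb, ec, nc in studies:
    rr = (eb / nb) / (ec / nc)
    log_rr.append(np.log(rr))
    var.append(1/eb - 1/nb + 1/ec - 1/nc)        # variance of log RR
log_rr, var = np.array(log_rr), np.array(var)

w = 1 / var                                       # fixed-effect (inverse-variance) weights
q = np.sum(w * (log_rr - np.average(log_rr, weights=w))**2)
k = len(studies)
tau2 = max(0.0, (q - (k - 1)) / (w.sum() - (w**2).sum() / w.sum()))

w_re = 1 / (var + tau2)                           # random-effects weights
pooled = np.sum(w_re * log_rr) / w_re.sum()
se = np.sqrt(1 / w_re.sum())
lo, hi = np.exp(pooled - 1.96 * se), np.exp(pooled + 1.96 * se)

print(f"Pooled RR = {np.exp(pooled):.2f} (95% CI {lo:.2f} to {hi:.2f}), tau^2 = {tau2:.3f}")
```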

Journal ArticleDOI
TL;DR: Using the ERAS care system and applying the QUERI model and TDF allow for identification of strategies that can support diffusion and sustainment of innovation of Enhanced Recovery After Surgery across multiple sites within a health care system.
Abstract: Enhanced Recovery After Surgery (ERAS) programs have been shown to have a positive impact on outcome. The ERAS care system includes an evidence-based guideline, an implementation program, and an interactive audit system to support practice change. The purpose of this study is to describe the use of the Theoretic Domains Framework (TDF) in changing surgical care and application of the Quality Enhancement Research Initiative (QUERI) model to analyze end-to-end implementation of ERAS in colorectal surgery across multiple sites within a single health system. The ultimate intent of this work is to allow for the development of a model for spread, scale, and sustainability of ERAS in Alberta Health Services (AHS). ERAS for colorectal surgery was implemented at two sites and then spread to four additional sites. The ERAS Interactive Audit System (EIAS) was used to assess compliance with the guidelines, length of stay, readmissions, and complications. Data sources informing knowledge translation included surveys, focus groups, interviews, and other qualitative data sources such as minutes and status updates. The QUERI model and TDF were used to thematically analyze 189 documents with 2188 quotes meeting the inclusion criteria. Data sources were analyzed for barriers or enablers, organized into a framework that included individual to organization impact, and areas of focus for guideline implementation. Compliance with the evidence-based guidelines for ERAS in colorectal surgery at baseline was 40%. Post-implementation compliance, consistent with adoption of best practice, improved to 65%. Barriers and enablers were categorized as clinical practice (22%), individual provider (26%), organization (19%), external environment (7%), and patients (25%). In the Alberta context, 26% of barriers and enablers to ERAS implementation occurred at the site and unit levels, with a provider focus 26% of the time, a patient focus 26% of the time, and a system focus 22% of the time. Using the ERAS care system and applying the QUERI model and TDF allow for identification of strategies that can support diffusion and sustainment of innovation of Enhanced Recovery After Surgery across multiple sites within a health care system.

Journal ArticleDOI
TL;DR: The integrated approach to intervention development, combining theory-, evidence- and person-based approaches, increased the clarity, comprehensiveness and confidence of the theoretical modelling and enabled us to ground the authors' intervention in an in-depth understanding of the barriers and facilitators most relevant to this specific intervention and user population.
Abstract: This paper describes the intervention planning process for the Home and Online Management and Evaluation of Blood Pressure (HOME BP), a digital intervention to promote hypertension self-management. It illustrates how a Person-Based Approach can be integrated with theory- and evidence-based approaches. The Person-Based Approach to intervention development emphasises the use of qualitative research to ensure that the intervention is acceptable, persuasive, engaging and easy to implement. Our intervention planning process comprised two parallel, integrated work streams, which combined theory-, evidence- and person-based elements. The first work stream involved collating evidence from a mixed methods feasibility study, a systematic review and a synthesis of qualitative research. This evidence was analysed to identify likely barriers and facilitators to uptake and implementation as well as design features that should be incorporated in the HOME BP intervention. The second work stream used three complementary approaches to theoretical modelling: developing brief guiding principles for intervention design, causal modelling to map behaviour change techniques in the intervention onto the Behaviour Change Wheel and Normalisation Process Theory frameworks, and developing a logic model. The different elements of our integrated approach to intervention planning yielded important, complementary insights into how to design the intervention to maximise acceptability and ease of implementation by both patients and health professionals. From the primary and secondary evidence, we identified key barriers to overcome (such as patient and health professional concerns about side effects of escalating medication) and effective intervention ingredients (such as providing in-person support for making healthy behaviour changes). Our guiding principles highlighted unique design features that could address these issues (such as online reassurance and procedures for managing concerns). Causal modelling ensured that all relevant behavioural determinants had been addressed, and provided a complete description of the intervention. Our logic model linked the hypothesised mechanisms of action of our intervention to existing psychological theory. Our integrated approach to intervention development, combining theory-, evidence- and person-based approaches, increased the clarity, comprehensiveness and confidence of our theoretical modelling and enabled us to ground our intervention in an in-depth understanding of the barriers and facilitators most relevant to this specific intervention and user population.

Journal ArticleDOI
TL;DR: This work has identified a set of testable, theory-informed hypotheses from a broad range of behavioral and social science that suggest conditions for more effective A&F interventions.
Abstract: Audit and feedback (A&F) is a common strategy for helping health providers to implement evidence into practice. Despite being extensively studied, health care A&F interventions remain variably effective, with overall effect sizes that have not improved since 2003. Contributing to this stagnation is the fact that most health care A&F interventions have largely been designed without being informed by theoretical understanding from the behavioral and social sciences. To determine if the trend can be improved, the objective of this study was to develop a list of testable, theory-informed hypotheses about how to design more effective A&F interventions. Using purposive sampling, semi-structured 60–90-min telephone interviews were conducted with experts in theories related to A&F from a range of fields (e.g., cognitive, health and organizational psychology, medical decision-making, economics). Guided by detailed descriptions of A&F interventions from the health care literature, interviewees described how they would approach the problem of designing improved A&F interventions. Specific, theory-informed hypotheses about the conditions for effective design and delivery of A&F interventions were elicited from the interviews. The resulting hypotheses were assigned by three coders working independently into themes, and categories of themes, in an iterative process. We conducted 28 interviews and identified 313 theory-informed hypotheses, which were placed into 30 themes. The 30 themes included hypotheses related to the following five categories: A&F recipient (seven themes), content of the A&F (ten themes), process of delivery of the A&F (six themes), behavior that was the focus of the A&F (three themes), and other (four themes). We have identified a set of testable, theory-informed hypotheses from a broad range of behavioral and social science that suggest conditions for more effective A&F interventions. This work demonstrates the breadth of perspectives about A&F from non-healthcare-specific disciplines in a way that yields testable hypotheses for healthcare A&F interventions. These results will serve as the foundation for further work seeking to set research priorities among the A&F research community.

Journal ArticleDOI
TL;DR: This study built upon a systematic review of the literature and semi-structured stakeholder interviews that generated 47 criteria for pragmatic measures and aimed to further refine that set of criteria by identifying conceptually distinct categories of the pragmatic measure construct and providing quantitative ratings of the criteria’s clarity and importance.
Abstract: Advancing implementation research and practice requires valid and reliable measures of implementation determinants, mechanisms, processes, strategies, and outcomes. However, researchers and implementation stakeholders are unlikely to use measures if they are not also pragmatic. The purpose of this study was to establish a stakeholder-driven conceptualization of the domains that comprise the pragmatic measure construct. It built upon a systematic review of the literature and semi-structured stakeholder interviews that generated 47 criteria for pragmatic measures, and aimed to further refine that set of criteria by identifying conceptually distinct categories of the pragmatic measure construct and providing quantitative ratings of the criteria’s clarity and importance. Twenty-four stakeholders with expertise in implementation practice completed a concept mapping activity wherein they organized the initial list of 47 criteria into conceptually distinct categories and rated their clarity and importance. Multidimensional scaling, hierarchical cluster analysis, and descriptive statistics were used to analyze the data. The 47 criteria were meaningfully grouped into four distinct categories: (1) acceptable, (2) compatible, (3) easy, and (4) useful. Average ratings of clarity and importance at the category and individual criteria level will be presented. This study advances the field of implementation science and practice by providing clear and conceptually distinct domains of the pragmatic measure construct. Next steps will include a Delphi process to develop consensus on the most important criteria and the development of quantifiable pragmatic rating criteria that can be used to assess measures.
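
Concept mapping as described in this abstract typically aggregates participants' card sorts into a dissimilarity matrix, projects the items with multidimensional scaling, and then groups the resulting coordinates with hierarchical clustering. The sketch below illustrates that generic pipeline on simulated sorting data; it is not the study's analysis code, and the pile assignments are invented.

```python
# Illustrative sketch of a concept-mapping pipeline: card sorts -> dissimilarity
# matrix -> multidimensional scaling -> hierarchical clustering. Synthetic data only.
import numpy as np
from sklearn.manifold import MDS
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(1)
n_items, n_sorters = 47, 24           # 47 criteria sorted by 24 stakeholders

# Hypothetical sorts: for each stakeholder, the pile each criterion was placed in.
sorts = rng.integers(0, 5, size=(n_sorters, n_items))

# Dissimilarity: proportion of sorters who did NOT place a pair of items together.
together = np.zeros((n_items, n_items))
for s in sorts:
    together += (s[:, None] == s[None, :])
dissimilarity = 1 - together / n_sorters

# Two-dimensional MDS configuration of the 47 criteria.
coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(dissimilarity)

# Hierarchical (Ward) clustering of the MDS coordinates into four groups,
# mirroring the four categories reported in the abstract.
clusters = fcluster(linkage(coords, method="ward"), t=4, criterion="maxclust")
print(clusters)
```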

Journal ArticleDOI
TL;DR: Specific elements to enhance the development and reporting of CPGs include enhanced reporting of methodological aspects, the use of frameworks to enhance decision making processes, the inclusion of patient preferences and values, and the consideration of factors influencing applicability of recommendations.
Abstract: Up-to-date, high quality, evidence-based clinical practice guidelines (CPGs) that are applicable for primary healthcare are vital to optimize services for the population with chronic musculoskeletal pain (CMSP). The study aimed to systematically identify and appraise the available evidence-based CPGs for the management of CMSP in adults presenting in primary healthcare settings. A systematic review was conducted. Twelve guideline clearinghouses and six electronic databases were searched for eligible CPGs published between 2000 and May 2015. CPGs meeting the inclusion criteria were appraised by three reviewers using the Appraisal of Guidelines for Research and Evaluation (AGREE) II. Of the 1082 records identified, 34 were eligible, and 12 CPGs were included based on the inclusion and exclusion criteria. The methodological rigor of CPG development was highly variable, and the median domain score was 66%. The median score for stakeholder involvement was 64%. The lowest median score was obtained for the domain applicability (48%). There was inconsistent use of frameworks to aggregate the level of evidence and the strength of the recommendation in the included CPGs. The scope and content of the included CPGs focussed on opioid prescription. Numerous CPGs that are applicable for the primary healthcare of CMSP exist, varying in their scope and methodological quality. This study highlights specific elements to enhance the development and reporting of CPGs, which may play a role in the uptake of guidelines into clinical practice. These elements include enhanced reporting of methodological aspects, the use of frameworks to enhance decision-making processes, the inclusion of patient preferences and values, and the consideration of factors influencing applicability of recommendations. PROSPERO CRD42015022098.

Journal ArticleDOI
TL;DR: The effects of e-A&F interventions were found to be highly variable, and the interventions implicitly targeted only a fraction of known theoretical domains, even after omitting domains presumed not to be linked to e-A&F.
Abstract: Audit and feedback is a common intervention for supporting clinical behaviour change. Increasingly, health data are available in electronic format. Yet, little is known regarding whether and how electronic audit and feedback (e-A&F) improves quality of care in practice. The study aimed to assess the effectiveness of e-A&F interventions in a primary care and hospital context and to identify theoretical mechanisms of behaviour change underlying these interventions. In August 2016, we searched five electronic databases, including MEDLINE and EMBASE via Ovid, and the Cochrane Central Register of Controlled Trials for published randomised controlled trials. We included studies that evaluated e-A&F interventions, defined as a summary of clinical performance delivered through an interactive computer interface to healthcare providers. Data on feedback characteristics, underlying theoretical domains, effect size and risk of bias were extracted by two independent review authors, who determined the domains within the Theoretical Domains Framework (TDF). We performed a meta-analysis of e-A&F effectiveness, and a narrative analysis of the nature and patterns of TDF domains and potential links with the intervention effect. We included seven studies comprising 81,700 patients being cared for by 329 healthcare professionals/primary care facilities. Given the extremely high heterogeneity of the e-A&F interventions and five studies having a medium or high risk of bias, the average effect was deemed unreliable. Only two studies explicitly used theory to guide intervention design. The most frequent theoretical domains targeted by the e-A&F interventions included ‘knowledge’, ‘social influences’, ‘goals’ and ‘behaviour regulation’, with each intervention targeting a combination of at least three. None of the interventions addressed the domains ‘social/professional role and identity’ or ‘emotion’. Analyses identified the number of different domains coded in the control arm to have the biggest role in heterogeneity in e-A&F effect size. Given the high heterogeneity of identified studies, the effects of e-A&F were found to be highly variable. Additionally, e-A&F interventions tend to implicitly target only a fraction of known theoretical domains, even after omitting domains presumed not to be linked to e-A&F. Also, little evaluation of comparative effectiveness across trial arms was conducted. Future research should seek to further unpack the theoretical domains essential for effective e-A&F in order to better support strategic individual and team goals.

Journal ArticleDOI
TL;DR: LOCI has been developed to be a feasible and effective approach for organizations to create a positive climate and fertile context for EBP implementation and seeks to cultivate and sustain both effective general and implementation leadership as well as organizational strategies and support that will remain after the study has ended.
Abstract: Evidence-based practice (EBP) implementation represents a strategic change in organizations that requires effective leadership and alignment of leadership and organizational support across organizational levels. As such, there is a need to combine leadership development with organizational strategies that support an organizational climate conducive to EBP implementation. The leadership and organizational change for implementation (LOCI) intervention includes leadership training for workgroup leaders, ongoing implementation leadership coaching, 360° assessment, and strategic planning with top and middle management regarding how they can support workgroup leaders in developing a positive EBP implementation climate. This test of the LOCI intervention will take place in conjunction with the implementation of motivational interviewing (MI) in 60 substance use disorder treatment programs in California, USA. Participants will include agency executives, 60 program leaders, and approximately 360 treatment staff. LOCI will be tested using a multiple cohort, cluster randomized trial that randomizes workgroups (i.e., programs) within each agency to either LOCI or a webinar leadership training control condition in three consecutive cohorts. The LOCI intervention lasts 12 months, and the webinar control intervention takes place in months 1, 5, and 8 for each cohort. Web-based surveys of staff and supervisors will be used to collect data on leadership, implementation climate, provider attitudes, and citizenship. Audio recordings of counseling sessions will be coded for MI fidelity. The unit of analysis will be the workgroup, randomized by site within agency, with care taken that co-located workgroups are assigned to the same condition to avoid contamination. Hierarchical linear modeling (HLM) will be used to analyze the data to account for the nested data structure. LOCI has been developed to be a feasible and effective approach for organizations to create a positive climate and fertile context for EBP implementation. The approach seeks to cultivate and sustain both effective general and implementation leadership as well as organizational strategies and support that will remain after the study has ended. Development of a positive implementation climate for MI should result in more positive service provider attitudes and behaviors related to the use of MI and, ultimately, higher fidelity in the use of MI. This study is registered with Clinicaltrials.gov ( NCT03042832 ), 2 February 2017, retrospectively registered.
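As a rough illustration of the hierarchical linear modeling described above (staff-level outcomes nested within workgroups), a two-level model could be specified in Python with statsmodels as sketched below. This is not the study's analysis code; all variable names and the simulated data are hypothetical assumptions made for the example.

```python
# Minimal sketch of a two-level (hierarchical) linear model: staff-level
# outcomes nested within workgroups, with condition (LOCI vs. webinar
# control) and survey wave as fixed effects. Data are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = []
for workgroup in range(20):
    condition = workgroup % 2                  # 0 = control, 1 = LOCI (hypothetical coding)
    group_effect = rng.normal(0, 0.5)          # workgroup-level deviation
    for staff in range(6):
        for wave in range(3):
            y = 3.0 + 0.4 * condition * wave + group_effect + rng.normal(0, 1)
            rows.append({"climate": y, "condition": condition,
                         "wave": wave, "workgroup": workgroup})
df = pd.DataFrame(rows)

# Random intercept per workgroup accounts for the nesting of staff within programs.
model = smf.mixedlm("climate ~ condition * wave", data=df, groups=df["workgroup"])
result = model.fit()
print(result.summary())
```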

Journal ArticleDOI
TL;DR: Despite considerable evidence supporting various specific therapies for stroke care, uptake of these therapies is compromised by barriers across the organisational, patient, guideline intervention and health professional domains, and it is recommended that future interventions and health policy directions be informed by these findings.
Abstract: Adoption of contemporary evidence-based guidelines for acute stroke management is often delayed due to a range of key enablers and barriers. Recent reviews on such barriers focus mainly on specific acute stroke therapies or generalised stroke care guidelines. This review examined the overall barriers and enablers, as perceived by health professionals, which affect how evidence-based practice guidelines (stroke unit care, thrombolysis administration, aspirin usage and decompressive surgery) for acute stroke care are adopted in hospital settings. A systematic search of databases was conducted using MEDLINE, Cumulative Index to Nursing and Allied Health Literature (CINAHL), Embase, PsycINFO, the Cochrane Library and AMED (Allied and Complementary Medicine Database) from 1990 to 2016. The population of interest included health professionals working clinically or in roles responsible for acute stroke care. There were no restrictions to the study designs. A quality appraisal tool for qualitative studies by the Joanna Briggs Institute and another for quantitative studies by the Centre for Evidence-Based Management were used in the present study. A recent checklist to classify barriers and enablers to health professionals’ adherence to evidence-based practice was also used. Ten studies met the inclusion criteria out of a total of 9832 search results. The main barriers or enablers identified included poor organisational or institutional level support, health professionals’ limited skills or competence to use a particular therapy, low level of awareness, familiarity or confidence in the effectiveness of a particular evidence-based therapy, limited medical facilities to support evidence uptake, inadequate peer support among health professionals, the complex nature of some stroke care therapies or guidelines, and patient-level barriers. Despite considerable evidence supporting various specific therapies for stroke care, uptake of these therapies is compromised by barriers across the organisational, patient, guideline intervention and health professional domains. As a result, we recommend that future interventions and health policy directions be informed by these findings in order to optimise the uptake of best practice acute stroke care. Further studies from low- to middle-income countries are needed to understand the barriers and enablers in such settings. The review protocol was registered in the international prospective register of systematic reviews, PROSPERO 2015 (Registration Number: CRD42015023481).

Journal ArticleDOI
TL;DR: The ICAN QUIT in Pregnancy was an intervention to train health providers at Aboriginal Medical Services in how to implement culturally competent evidence-based practice including counselling and nicotine replacement therapy for pregnant patients who smoke.
Abstract: Indigenous smoking rates are up to 80% among pregnant women: prevalence among pregnant Australian Indigenous women was 45% in 2014, contributing significantly to the health gap for Indigenous Australians. We aimed to develop an implementation intervention to improve smoking cessation care (SCC) for pregnant Indigenous smokers, an outcome to be achieved by training health providers at Aboriginal Medical Services (AMS) in a culturally competent approach, developed collaboratively with AMS. The Behaviour Change Wheel (BCW), incorporating the COM-B model (capability, opportunity and motivation for behavioural interventions), provided a framework for the development of the Indigenous Counselling and Nicotine (ICAN) QUIT in Pregnancy implementation intervention at provider and patient levels. We identified evidence-practice gaps through (i) systematic literature reviews, (ii) a national survey of clinicians and (iii) a qualitative study of smoking and quitting with Aboriginal mothers. We followed the three stages recommended in Michie et al.’s “Behaviour Change Wheel” guide. Targets identified for health provider behaviour change included the following: capability (psychological capability, knowledge and skills), by training clinicians in pharmacotherapy to assist women to quit; motivation (optimism), by presenting evidence of effectiveness and positive testimonials from patients and clinicians; and opportunity (environmental context and resources), by promoting a whole-of-service approach and structuring consultations using a flipchart and prompts. Education and training were selected as the main intervention functions. For health providers, the delivery mode was webinar, to accommodate time and location constraints, bringing the training to the services; for patients, face-to-face consultations were supported by a booklet embedded with videos to improve patients’ capability, opportunity and motivation. The ICAN QUIT in Pregnancy intervention was developed to train health providers at Aboriginal Medical Services in how to implement culturally competent evidence-based practice, including counselling and nicotine replacement therapy, for pregnant patients who smoke. The BCW aided in scientifically and systematically informing this targeted implementation intervention based on the identified gaps in SCC by health providers. Multiple factors have an impact at systemic, provider, community and individual levels. This process was therefore important for defining the design and intervention components prior to conducting a pilot feasibility trial, then leading on to a full clinical trial.

Journal ArticleDOI
TL;DR: Few studies assessing strategies for scaling up EBPs in primary care settings were found, and it is uncertain whether any strategies were effective, as most studies focused more on patient/provider outcomes and less on scaling-up process outcomes.
Abstract: While an extensive array of existing evidence-based practices (EBPs) have the potential to improve patient outcomes, little is known about how to implement EBPs on a larger scale. Therefore, we sought to identify effective strategies for scaling up EBPs in primary care. We conducted a systematic review with the following inclusion criteria: (i) study design: randomized and non-randomized controlled trials, before-and-after (with/without control), and interrupted time series; (ii) participants: primary care-related units (e.g., clinical sites, patients); (iii) intervention: any strategy used to scale up an EBP; (iv) comparator: no restrictions; and (v) outcomes: no restrictions. We searched MEDLINE, Embase, PsycINFO, Web of Science, CINAHL, and the Cochrane Library from database inception to August 2016 and consulted clinical trial registries and gray literature. Two reviewers independently selected eligible studies, then extracted and analyzed data following the Cochrane methodology. We extracted components of scaling-up strategies and classified them into five categories: infrastructure, policy/regulation, financial, human resources-related, and patient involvement. We extracted scaling-up process outcomes, such as coverage, and provider/patient outcomes. We validated data extraction with study authors. We included 14 studies. All were published in 2003 or later and were primarily conducted in low-/middle-income countries (n = 11). Most were funded by governmental organizations (n = 8). The clinical area most represented was infectious diseases (HIV, tuberculosis, and malaria, n = 8), followed by newborn/child care (n = 4), depression (n = 1), and preventing seniors’ falls (n = 1). Study designs were mostly before-and-after (without control, n = 8). The most frequently targeted unit of scaling up was the clinical site (n = 11). The component of a scaling-up strategy most frequently mentioned was human resource-related (n = 12). All studies reported patient/provider outcomes. Three studies reported scaling-up coverage, but no study quantitatively reported achieving a coverage of 80% in combination with a favorable impact. We found few studies assessing strategies for scaling up EBPs in primary care settings. It is uncertain whether any strategies were effective, as most studies focused more on patient/provider outcomes and less on scaling-up process outcomes. A minimal consensus on the metrics of scaling up is needed for assessing the scaling up of EBPs in primary care. This review is registered as PROSPERO CRD42016041461.
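To make the coverage metric mentioned above concrete, the snippet below shows one simple way it could be computed: the proportion of targeted primary care units actually reached by the scale-up, checked against the 80% benchmark. The counts used are hypothetical and are not drawn from the included studies.

```python
def scaling_up_coverage(units_reached, units_targeted):
    """Coverage = proportion of targeted units reached by the scale-up effort."""
    return units_reached / units_targeted

# Hypothetical example: an EBP rolled out to 46 of 60 targeted clinical sites.
coverage = scaling_up_coverage(46, 60)
benchmark = 0.80
print(f"Coverage: {coverage:.0%} (benchmark of {benchmark:.0%} "
      f"{'met' if coverage >= benchmark else 'not met'})")
```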

Journal ArticleDOI
TL;DR: This study makes an important contribution to the limited experimental evidence regarding strategies to improve implementation of school nutrition policies and suggests that, with multi-strategic support, implementation of healthy canteen policies can be achieved in most schools.
Abstract: Internationally, governments have implemented school-based nutrition policies to restrict the availability of unhealthy foods from sale. The aim of the trial was to assess the effectiveness of a multi-strategic intervention to increase implementation of a state-wide healthy canteen policy. The impact of the intervention on the energy, total fat, and sodium of children’s canteen purchases and on schools’ canteen revenue was also assessed. Australian primary schools with a canteen were randomised to a 12–14-month multi-strategic intervention or to a no-intervention control group. The intervention sought to increase implementation of a state-wide healthy canteen policy which required schools to remove unhealthy items (classified as ‘red’ or ‘banned’) from regular sale and encouraged schools to ‘fill the menu’ with healthy items (classified as ‘green’). The intervention strategies included allocation of a support officer to assist with policy implementation, engagement of school principals and parent committees, consensus processes with canteen managers, training, provision of tools and resources, academic detailing, performance feedback, recognition and marketing initiatives. Data were collected at baseline (April to September 2013) and at completion of the implementation period (November 2014 to April 2015). Seventy schools participated in the trial. Relative to control, at follow-up, intervention schools were significantly more likely to have menus without ‘red’ or ‘banned’ items (RR = 21.11; 95% CI 3.30 to 147.28; p ≤ 0.01) and to have at least 50% of menu items classified as ‘green’ (RR = 3.06; 95% CI 1.64 to 5.68; p ≤ 0.01). At follow-up, student purchases from intervention school canteens were significantly lower in total fat (difference = −1.51 g; 95% CI −2.84 to −0.18; p = 0.028) compared to controls, but not in energy (difference = −132.32 kJ; 95% CI −280.99 to 16.34; p = 0.080) or sodium (difference = −46.81 mg; 95% CI −96.97 to 3.35; p = 0.067). Canteen revenue did not differ significantly between groups. Poor implementation of evidence-based school nutrition policies is a problem experienced by governments internationally, and one with significant implications for public health. The study makes an important contribution to the limited experimental evidence regarding strategies to improve implementation of school nutrition policies and suggests that, with multi-strategic support, implementation of healthy canteen policies can be achieved in most schools. Australian New Zealand Clinical Trials Registry ( ACTRN12613000311752 )
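As background on the relative risks reported above, the sketch below shows the standard calculation of a relative risk and its 95% confidence interval from a 2x2 table on the log scale. The counts used are hypothetical, chosen only to illustrate the calculation, and are not the trial's data.

```python
import math

def relative_risk(events_int, n_int, events_ctrl, n_ctrl):
    """Relative risk with a 95% CI computed on the log scale."""
    risk_int = events_int / n_int
    risk_ctrl = events_ctrl / n_ctrl
    rr = risk_int / risk_ctrl
    # Standard error of log(RR) for two independent proportions.
    se_log_rr = math.sqrt(1 / events_int - 1 / n_int + 1 / events_ctrl - 1 / n_ctrl)
    lower = math.exp(math.log(rr) - 1.96 * se_log_rr)
    upper = math.exp(math.log(rr) + 1.96 * se_log_rr)
    return rr, lower, upper

# Hypothetical counts: schools with menus free of 'red'/'banned' items,
# intervention arm vs. control arm.
rr, lo, hi = relative_risk(events_int=21, n_int=35, events_ctrl=1, n_ctrl=35)
print(f"RR = {rr:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```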

Journal ArticleDOI
TL;DR: The utility of organizational theories for implementation research is demonstrated by applying four well-known organizational theories to published descriptions of efforts to implement SafeCare, an evidence-based practice for preventing child abuse and neglect.
Abstract: Even under optimal internal organizational conditions, implementation can be undermined by changes in organizations’ external environments, such as fluctuations in funding, adjustments in contracting practices, new technology, new legislation, changes in clinical practice guidelines and recommendations, or other environmental shifts. Internal organizational conditions are increasingly reflected in implementation frameworks, but nuanced explanations of how organizations’ external environments influence implementation success are lacking in implementation research. Organizational theories offer implementation researchers a host of existing, highly relevant, and heretofore largely untapped explanations of the complex interaction between organizations and their environment. In this paper, we demonstrate the utility of organizational theories for implementation research. We applied four well-known organizational theories (institutional theory, transaction cost economics, contingency theories, and resource dependency theory) to published descriptions of efforts to implement SafeCare, an evidence-based practice for preventing child abuse and neglect. Transaction cost economics theory explained how frequent, uncertain processes for contracting for SafeCare may have generated inefficiencies and thus compromised implementation among private child welfare organizations. Institutional theory explained how child welfare systems may have been motivated to implement SafeCare because doing so aligned with expectations of key stakeholders within child welfare systems’ professional communities. Contingency theories explained how efforts such as interagency collaborative teams promoted SafeCare implementation by facilitating adaptation to child welfare agencies’ internal and external contexts. Resource dependency theory (RDT) explained how interagency relationships, supported by contracts, memoranda of understanding, and negotiations, facilitated SafeCare implementation by balancing autonomy and dependence on funding agencies and SafeCare developers. In addition to the retrospective application of organizational theories demonstrated above, we advocate for the proactive use of organizational theories to design implementation research. For example, implementation strategies should be selected to minimize transaction costs, promote and maintain congruence between organizations’ dynamic internal and external contexts over time, and simultaneously attend to organizations’ financial needs while preserving their autonomy. We describe implications of applying organizational theory in implementation research for implementation strategies, the evaluation of implementation efforts, measurement, research design, theory, and practice. We also offer guidance to implementation researchers for applying organizational theory.

Journal ArticleDOI
TL;DR: In this paper, a systematic review was conducted to evaluate the effectiveness of research implementation strategies for promoting evidence-informed policy and management decisions in healthcare, and the secondary aim was to describe factors perceived to be associated with effective strategies and the inter-relationship between these factors.
Abstract: It is widely acknowledged that health policy and management decisions rarely reflect research evidence. Therefore, it is important to determine how to improve evidence-informed decision-making. The primary aim of this systematic review was to evaluate the effectiveness of research implementation strategies for promoting evidence-informed policy and management decisions in healthcare. The secondary aim of the review was to describe factors perceived to be associated with effective strategies and the inter-relationship between these factors. An electronic search was developed to identify studies published between January 01, 2000, and February 02, 2016. This was supplemented by checking the reference lists of included articles and systematic reviews, and by hand-searching publication lists from prominent authors. Two reviewers independently screened studies for inclusion, assessed methodological quality, and extracted data. After duplicate removal, the search strategy identified 3830 titles. Following title and abstract screening, 96 full-text articles were reviewed, of which 19 studies (21 articles) met all inclusion criteria. Three studies were included in the narrative synthesis, which found that, for public health policy in developing nations, policy briefs including expert opinion might affect intended actions and that intentions can persist into actions. Workshops, ongoing technical assistance, and distribution of instructional digital materials may improve knowledge and skills around evidence-informed decision-making in US public health departments. Tailored, targeted messages were more effective in increasing public health policies and programs in Canadian public health departments compared to messages and a knowledge broker. Sixteen studies (18 articles) were included in the thematic synthesis, leading to a conceptualisation of inter-relating factors perceived to be associated with effective research implementation strategies. A unidirectional, hierarchical flow was described from (1) establishing an imperative for practice change, (2) building trust between implementation stakeholders and (3) developing a shared vision, to (4) actioning change mechanisms. This was underpinned by the (5) employment of effective communication strategies and (6) provision of resources to support change. Evidence is developing to support the use of research implementation strategies for promoting evidence-informed policy and management decisions in healthcare. The design of future implementation strategies should be based on the inter-relating factors perceived to be associated with effective strategies. This systematic review was registered with PROSPERO (record number: 42016032947).