Charles A. Rapp
Bio: Charles A. Rapp is an academic researcher from the University of Kansas. He has contributed to research in the topics of mental health and evidence-based practice, has an h-index of 39, and has co-authored 87 publications receiving 5,399 citations.
TL;DR: The effectiveness of supported employment appears to be generalizable across a broad range of client characteristics and community settings; the authors discuss barriers to implementation and strategies for overcoming them, based on successful experiences in several states.
Abstract: Supported employment for people with severe mental illness is an evidence-based practice, based on converging findings from eight randomized controlled trials and three quasi-experimental studies. The critical ingredients of supported employment have been well described, and a fidelity scale differentiates supported employment programs from other types of vocational services. The effectiveness of supported employment appears to be generalizable across a broad range of client characteristics and community settings. More research is needed on long-term outcomes and on cost-effectiveness. Access to supported employment programs remains a problem, despite their increasing use throughout the United States. The authors discuss barriers to implementation and strategies for overcoming them based on successful experiences in several states.
04 Sep 1997
TL;DR: This book explains the purpose, principles and research results of the Strengths Model, and discusses the role of engagement and relationship in the development of this Paradigm.
Abstract: Foreword Preface I. History, Critique and Useful Conceptions: Towards a Strengths Paradigm II. A Beginning Theory of Strengths III. The Purpose, Principles and Research Results of the Strengths Model IV. Engagement and Relationship: A New Partnership V. Strengths Assessment: Amplifying the Well Part of the Individual VI. Personal Planning: Creating the Achievement Agenda VII. Resource Acquisition: Putting Community Back into Community Mental Health VIII. Supportive Case Management Context: Creating the Conditions for Effectiveness IX. Strengths Model Epilogue: Commonly Asked Questions (Objections) and Managed Care References
TL;DR: Fidelity outcomes for five evidence-based practices that were implemented in routine public mental health settings in the National Implementing Evidence-Based Practices Project showed an increase in fidelity from baseline to 12 months, with scores leveling off between 12 and 24 months.
Abstract: Objective: This article presents fidelity outcomes for five evidence-based practices that were implemented in routine public mental health settings in the National Implementing Evidence-Based Practices Project. Methods: Over a two-year period 53 community mental health centers across eight states implemented one of five evidence-based practices: supported employment, assertive community treatment, integrated dual disorders treatment, family psychoeducation, and illness management and recovery. An intervention model of practice dissemination guided the implementation. Each site used both human resources (consultant-trainers) and material resources (toolkits) to aid practice implementation and to facilitate organizational changes. External assessors rated fidelity to the evidence-based practice model every six months from baseline to two years. Results: More than half of the sites (29 of 53, or 55%) showed high-fidelity implementation at the end of two years. Significant differences in fidelity emerged by evidence-based practice. Supported employment and assertive community treatment had higher fidelity scores at baseline and across time. Illness management and recovery and integrated dual disorders treatment had lower scores on average throughout. In general, evidence-based practices showed an increase in fidelity from baseline to 12 months, with scores leveling off between 12 and 24 months. Conclusions: Most mental health centers implemented these evidence-based practices with moderate to high fidelity. The critical time period for implementation was approximately 12 months, after which few gains were made, although sites sustained their attained levels of evidence-based practice fidelity for another year. (Psychiatric Services 58:1279–1284, 2007)
10 Feb 2006
TL;DR: This chapter discusses the purpose, principles, and research results of the Strengths Model.
Abstract: 1. History, Critique, and Useful Conceptions: Toward a Strengths Paradigm 2. A Beginning Theory of Strengths 3. The Purpose, Principles, and Research Results of the Strengths Model 4. Engagement and Relationship: A New Partnership 5. Strengths Assessment: Amplifying the Well Part of the Individual 6. Personal Planning: Creating the Achievement Agenda 7. Resource Acquisition: Putting Community Back into Community Mental Health 8. Supportive Case Management Context: Creating the Conditions for Effectiveness 9. Strengths Model Epilogue: Commonly Asked Questions (Objections) and Managed Care
TL;DR: Deming's theory of management, based on his famous 14 Points for Management, is described in Out of the Crisis, originally published in 1982; he explains the principles of management transformation and how to apply them.
Abstract: According to W. Edwards Deming, American companies require nothing less than a transformation of management style and of governmental relations with industry. In Out of the Crisis, originally published in 1982, Deming offers a theory of management based on his famous 14 Points for Management. Management's failure to plan for the future, he claims, brings about loss of market, which brings about loss of jobs. Management must be judged not only by the quarterly dividend, but by innovative plans to stay in business, protect investment, ensure future dividends, and provide more jobs through improved product and service. In simple, direct language, he explains the principles of management transformation and how to apply them.
TL;DR: Decision aids reduced the proportion of undecided participants and appeared to have a positive effect on patient-clinician communication, and those exposed to a decision aid were either equally or more satisfied with their decision, the decision-making process, and the preparation for decision making compared to usual care.
Abstract: Background Decision aids are intended to help people participate in decisions that involve weighing the benefits and harms of treatment options often with scientific uncertainty. Objectives To assess the effects of decision aids for people facing treatment or screening decisions. Search methods For this update, we searched from 2009 to June 2012 in MEDLINE; CENTRAL; EMBASE; PsycINFO; and grey literature. Cumulatively, we have searched each database since its start date including CINAHL (to September 2008). Selection criteria We included published randomized controlled trials of decision aids, which are interventions designed to support patients' decision making by making explicit the decision, providing information about treatment or screening options and their associated outcomes, compared to usual care and/or alternative interventions. We excluded studies of participants making hypothetical decisions. Data collection and analysis Two review authors independently screened citations for inclusion, extracted data, and assessed risk of bias. The primary outcomes, based on the International Patient Decision Aid Standards (IPDAS), were: A) 'choice made' attributes; B) 'decision-making process' attributes. Secondary outcomes were behavioral, health, and health-system effects. We pooled results using mean differences (MD) and relative risks (RR), applying a random-effects model. Main results This update includes 33 new studies for a total of 115 studies involving 34,444 participants. For risk of bias, selective outcome reporting and blinding of participants and personnel were mostly rated as unclear due to inadequate reporting. Based on 7 items, 8 of 115 studies had high risk of bias for 1 or 2 items each. 
Of 115 included studies, 88 (76.5%) used at least one of the IPDAS effectiveness criteria: A) 'choice made' attributes criteria: knowledge scores (76 studies); accurate risk perceptions (25 studies); and informed value-based choice (20 studies); and B) 'decision-making process' attributes criteria: feeling informed (34 studies) and feeling clear about values (29 studies). A) Criteria involving 'choice made' attributes: Compared to usual care, decision aids increased knowledge (MD 13.34 out of 100; 95% confidence interval (CI) 11.17 to 15.51; n = 42). When more detailed decision aids were compared to simple decision aids, the relative improvement in knowledge was significant (MD 5.52 out of 100; 95% CI 3.90 to 7.15; n = 19). Exposure to a decision aid with expressed probabilities resulted in a higher proportion of people with accurate risk perceptions (RR 1.82; 95% CI 1.52 to 2.16; n = 19). Exposure to a decision aid with explicit values clarification resulted in a higher proportion of patients choosing an option congruent with their values (RR 1.51; 95% CI 1.17 to 1.96; n = 13). B) Criteria involving 'decision-making process' attributes: Decision aids compared to usual care interventions resulted in: a) lower decisional conflict related to feeling uninformed (MD -7.26 out of 100; 95% CI -9.73 to -4.78; n = 22) and feeling unclear about personal values (MD -6.09; 95% CI -8.50 to -3.67; n = 18); b) reduced proportions of people who were passive in decision making (RR 0.66; 95% CI 0.53 to 0.81; n = 14); and c) reduced proportions of people who remained undecided post-intervention (RR 0.59; 95% CI 0.47 to 0.72; n = 18). Decision aids appeared to have a positive effect on patient-practitioner communication in all nine studies that measured this outcome.
For satisfaction with the decision (n = 20), decision-making process (n = 17), and/or preparation for decision making (n = 3), those exposed to a decision aid were either more satisfied, or there was no difference between the decision aid versus comparison interventions. No studies evaluated decision-making process attributes for helping patients to recognize that a decision needs to be made, or understanding that values affect the choice. C) Secondary outcomes: Exposure to decision aids compared to usual care reduced the number of people choosing major elective invasive surgery in favour of more conservative options (RR 0.79; 95% CI 0.68 to 0.93; n = 15). Exposure to decision aids compared to usual care reduced the number of people choosing to have prostate-specific antigen screening (RR 0.87; 95% CI 0.77 to 0.98; n = 9). When detailed rather than simple decision aids were used, fewer people chose menopausal hormone therapy (RR 0.73; 95% CI 0.55 to 0.98; n = 3). For other decisions, the effect on choices was variable. The effect of decision aids on length of consultation varied from 8 minutes shorter to 23 minutes longer (median 2.55 minutes longer), with 2 studies indicating statistically significantly longer consultations, 1 study shorter, and 6 studies reporting no difference in consultation length. Groups of patients receiving decision aids do not appear to differ from comparison groups in terms of anxiety (n = 30), general health outcomes (n = 11), and condition-specific health outcomes (n = 11). The effects of decision aids on other outcomes (adherence to the decision, costs/resource use) were inconclusive. Authors' conclusions: There is high-quality evidence that decision aids compared to usual care improve people's knowledge regarding options, and reduce their decisional conflict related to feeling uninformed and unclear about their personal values.
There is moderate-quality evidence that decision aids compared to usual care stimulate people to take a more active role in decision making, and improve accurate risk perceptions when probabilities are included in decision aids, compared to not being included. There is low-quality evidence that decision aids improve congruence between the chosen option and the patient's values. New for this updated review is further evidence indicating more informed, values-based choices, and improved patient-practitioner communication. There is a variable effect of decision aids on length of consultation. Consistent with findings from the previous review, decision aids have a variable effect on choices. They reduce the number of people choosing discretionary surgery and have no apparent adverse effects on health outcomes or satisfaction. The effects on adherence with the chosen option, cost-effectiveness, use with lower literacy populations, and level of detail needed in decision aids need further evaluation. Little is known about the degree of detail that decision aids need in order to have a positive effect on attributes of the choice made, or the decision-making process.
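The pooled mean differences (MD) reported above come from random-effects meta-analysis. As a minimal sketch of how such pooling works, the following implements the standard DerSimonian-Laird estimator on hypothetical study data (the numbers are illustrative, not taken from the review):

```python
import math

def pool_random_effects(estimates, std_errors):
    """Pool per-study mean differences with DerSimonian-Laird random-effects weights."""
    w = [1 / se**2 for se in std_errors]                      # inverse-variance weights
    fixed = sum(wi * est for wi, est in zip(w, estimates)) / sum(w)
    # Cochran's Q measures between-study heterogeneity around the fixed-effect mean.
    q = sum(wi * (est - fixed) ** 2 for wi, est in zip(w, estimates))
    df = len(estimates) - 1
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                             # between-study variance
    # Random-effects weights add tau^2 to each study's sampling variance.
    w_star = [1 / (se**2 + tau2) for se in std_errors]
    pooled = sum(wi * est for wi, est in zip(w_star, estimates)) / sum(w_star)
    se_pooled = math.sqrt(1 / sum(w_star))
    return pooled, (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled)

# Three hypothetical studies of a knowledge score (0-100 scale): MD and standard error.
md, ci = pool_random_effects([12.0, 15.0, 10.0], [2.0, 2.5, 3.0])
```

The pooled estimate always lies within the range of the individual study estimates, with a confidence interval narrower than any single study's.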
TL;DR: This paper reviews the principles and practice of purposeful sampling in implementation research, summarizes types and categories of purposefully sampling strategies and provides a set of recommendations for use of single strategy or multistage strategy designs, particularly for state implementation research.
Abstract: Purposeful sampling is widely used in qualitative research for the identification and selection of information-rich cases related to the phenomenon of interest. Although there are several different purposeful sampling strategies, criterion sampling appears to be used most commonly in implementation research. However, combining sampling strategies may be more appropriate to the aims of implementation research and more consistent with recent developments in quantitative methods. This paper reviews the principles and practice of purposeful sampling in implementation research, summarizes types and categories of purposeful sampling strategies and provides a set of recommendations for use of single strategy or multistage strategy designs, particularly for state implementation research.
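Criterion sampling, which the paper identifies as the most common purposeful sampling strategy in implementation research, can be sketched as a simple filter over cases. The site records and the fidelity threshold below are hypothetical:

```python
# Hypothetical case records for candidate study sites.
sites = [
    {"name": "Site A", "fidelity": 4.2, "years_running": 3},
    {"name": "Site B", "fidelity": 2.9, "years_running": 5},
    {"name": "Site C", "fidelity": 4.6, "years_running": 1},
]

# Criterion sampling: select every case meeting a predefined criterion,
# here high-fidelity implementation (score >= 4.0 on an assumed 1-5 scale).
selected = [s["name"] for s in sites if s["fidelity"] >= 4.0]
```

A multistage design, as the paper recommends, would apply a second strategy (for example, maximum variation on program maturity) to the subset that survives the criterion.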
TL;DR: A heuristic, working “taxonomy” of eight conceptually distinct implementation outcomes—acceptability, adoption, appropriateness, feasibility, fidelity, implementation cost, penetration, and sustainability—along with their nominal definitions is proposed.
Abstract: An unresolved issue in the field of implementation research is how to conceptualize and evaluate successful implementation. This paper advances the concept of “implementation outcomes” distinct from service system and clinical treatment outcomes. This paper proposes a heuristic, working “taxonomy” of eight conceptually distinct implementation outcomes—acceptability, adoption, appropriateness, feasibility, fidelity, implementation cost, penetration, and sustainability—along with their nominal definitions. We propose a two-pronged agenda for research on implementation outcomes. Conceptualizing and measuring implementation outcomes will advance understanding of implementation processes, enhance efficiency in implementation research, and pave the way for studies of the comparative effectiveness of implementation strategies.
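Because the taxonomy is a closed set of eight named outcomes, it maps naturally onto an enumeration, e.g. for tagging measures in a review database. The class and value names below are an illustrative encoding, not part of the paper:

```python
from enum import Enum

class ImplementationOutcome(Enum):
    """The eight implementation outcomes proposed in the taxonomy."""
    ACCEPTABILITY = "acceptability"
    ADOPTION = "adoption"
    APPROPRIATENESS = "appropriateness"
    FEASIBILITY = "feasibility"
    FIDELITY = "fidelity"
    IMPLEMENTATION_COST = "implementation cost"
    PENETRATION = "penetration"
    SUSTAINABILITY = "sustainability"
```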