
Showing papers in "American Journal of Evaluation in 2006"


Journal ArticleDOI
TL;DR: Although the general inductive approach is not as strong as some other analytic strategies for theory or model development, it does provide a simple, straightforward approach for deriving findings in the context of focused evaluation questions.
Abstract: A general inductive approach for analysis of qualitative evaluation data is described. The purposes for using an inductive approach are to (a) condense raw textual data into a brief, summary format; (b) establish clear links between the evaluation or research objectives and the summary findings derived from the raw data; and (c) develop a framework of the underlying structure of experiences or processes that are evident in the raw data. The general inductive approach provides an easily used and systematic set of procedures for analyzing qualitative data that can produce reliable and valid findings. Although the general inductive approach is not as strong as some other analytic strategies for theory or model development, it does provide a simple, straightforward approach for deriving findings in the context of focused evaluation questions. Many evaluators are likely to find using a general inductive approach less complicated than using other approaches to qualitative data analysis.

8,199 citations


Journal ArticleDOI
TL;DR: The authors extend prior work on measuring collaboration by exploring the reliability of the scale and developing a format for visually displaying the results obtained from using the scale.
Abstract: Collaboration is a prerequisite for the sustainability of interagency programs, particularly those programs initially created with the support of time-limited grant-funding sources. From the perspective of evaluators, however, assessing collaboration among grant partners is often difficult. It is also challenging to present collaboration data to stakeholders in a way that is meaningful. In this article, the authors introduce the Levels of Collaboration Scale, which was developed from existing models and instruments. The authors extend prior work on measuring collaboration by exploring the reliability of the scale and developing a format for visually displaying the results obtained from using the scale.

201 citations
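The reliability work described above lends itself to a standard internal-consistency check. Below is a minimal sketch using Cronbach's alpha on a hypothetical matrix of partner ratings; the item values and scale length are invented for illustration and are not drawn from the article:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) rating matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

# Hypothetical 0-5 collaboration ratings from six grant partners on four items.
ratings = np.array([
    [3, 4, 3, 4],
    [2, 3, 2, 3],
    [4, 4, 5, 4],
    [1, 2, 1, 2],
    [3, 3, 4, 3],
    [5, 4, 5, 5],
])
print(f"alpha = {cronbach_alpha(ratings):.2f}")  # ~0.7+ is conventionally acceptable
```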


Journal ArticleDOI
TL;DR: In this paper, the authors systematically examined 47 case examples of empowerment evaluation published from 1994 through June 2005 and found wide variation among practitioners in adherence to empowerment evaluation principles and weak emphasis on the attainment of empowered outcomes for program beneficiaries.
Abstract: Empowerment evaluation entered the evaluation lexicon in 1993. Since that time, it has attracted many adherents, as well as vocal detractors. A prominent issue in the debates on empowerment evaluation concerns the extent to which empowerment evaluation can be readily distinguished from other approaches to evaluation that share with it an emphasis on participatory and collaborative processes, capacity development, and evaluation use. A second issue concerns the extent to which empowerment evaluation actually leads to empowered outcomes for those who have participated in the evaluation process and those who are the intended beneficiaries of the social programs that were the objects of evaluation. The authors systematically examined 47 case examples of empowerment evaluation published from 1994 through June 2005. The results suggest wide variation among practitioners in adherence to empowerment evaluation principles and weak emphasis on the attainment of empowered outcomes for program beneficiaries. Implicat...

137 citations


Journal ArticleDOI
TL;DR: In this section, recent books applicable to the broad field of program evaluation are reviewed, and two or more reviews may be commissioned for books judged by the Editor and/or Book Review Editor to be especially noteworthy works in evaluation.
Abstract: In this section, recent books applicable to the broad field of program evaluation are reviewed. In most cases, a single book will be considered in a review, but in some instances, multiple books may be jointly reviewed to illuminate similarities and differences in intent, philosophy, and usefulness. In most cases, a single review will be commissioned for each book. Two or more reviews may be commissioned for books judged by the Editor and/or Book Review Editor to be especially noteworthy works in evaluation. Persons with suggestions of books to be reviewed, or those who wish to submit a review, should contact Lori A. Wingate at lori.wingate@wmich.edu.

107 citations


Journal ArticleDOI
TL;DR: In this article, the authors illustrate the practice and politics of responsive evaluation with case examples from two policy fields, arts education and mental health care, and show that process-oriented heuristics have been developed to deal with the unequal social relations and power in responsive evaluation.
Abstract: Responsive evaluation offers a perspective in which evaluation is reframed from the assessment of program interventions on the basis of policy makers' goals to an engagement with and among all stakeholders about the value and meaning of their practice. Responsive evaluators have to be extra sensitive to power relations given the deliberate attempts to acknowledge ambiguity and the plurality of interests and values and to foster genuine dialogue. The author illustrates the practice and politics of responsive evaluation with case examples from two policy fields, arts education and mental health care. In these evaluation studies, process-oriented heuristics have been developed to deal with the unequal social relations and power in responsive evaluation. As such, responsive evaluation offers an interesting example of the politics of evaluation. The emerging heuristics may be helpful to other evaluation approaches.

93 citations


Journal ArticleDOI
TL;DR: In this paper, the authors describe the challenges of launching a successful school program and evaluation, with lessons learned from three projects that focus on intimate partner violence, and emphasize the need for flexibility and cultural awareness during all stages of the process.
Abstract: The current emphasis on best practices for school-based health and mental health programs brings with it the demand for evaluation efforts in schools. This article describes the challenges of launching a successful school program and evaluation, with lessons learned from three projects that focus on intimate partner violence. The authors discuss issues related to constraints on the research design in schools, the recruitment of schools and participants within schools, program and evaluation implementation issues, the iterative implementation-evaluation cycle, and the dissemination of programs and study findings. The authors emphasize the need for flexibility and cultural awareness during all stages of the process.

83 citations


Journal ArticleDOI
TL;DR: In this article, the authors investigate the potential of using social network analysis to evaluate programs that aim at improving schools by fostering greater collaboration between teachers, and find that although the majority of teachers consider collecting social network data to be problematic but feasible, some teachers report concerns about privacy and the effect on their school's goals to foster community if the data are shared with their schools.
Abstract: This article describes results of a study investigating the potential of using social network analysis to evaluate programs that aim at improving schools by fostering greater collaboration between teachers. The goal of this method is to use data about teacher collaboration within schools to map the distribution of expertise and resources needed to enact reforms. Such maps are of great potential value to school leaders, who are responsible for instructional leadership in schools, but they also include information that could bring harm to individuals and school communities. In this article, the authors describe interview findings about concerns educators have with collecting and sharing social network data. A chief finding is that although the majority of teachers consider collecting social network data to be problematic but feasible, some teachers report concerns about privacy and the effect on their school's goals to foster community if the data are shared with their schools.

77 citations
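To make the mapping idea above concrete, here is a rough sketch of a teacher collaboration network analyzed with the networkx library; the names, ties, and the choice of degree centrality are illustrative assumptions, not the authors' method:

```python
import networkx as nx

# Hypothetical collaboration ties reported by teachers at one school.
ties = [
    ("Ana", "Ben"), ("Ana", "Cruz"), ("Ben", "Cruz"),
    ("Cruz", "Dee"), ("Dee", "Eli"), ("Ana", "Dee"),
]
G = nx.Graph(ties)

# Degree centrality as a crude proxy for where expertise and resources flow;
# teachers with high scores are candidate hubs for enacting reforms.
for teacher, score in sorted(nx.degree_centrality(G).items(), key=lambda kv: -kv[1]):
    print(f"{teacher}: {score:.2f}")
```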


Journal ArticleDOI
TL;DR: This article describes an interactive professional development unit that engages both novice and experienced evaluators in learning about the Essential Competencies for Program Evaluators (ECPE) and applying them to program evaluation contexts.
Abstract: This article describes an interactive professional development unit that engages both novice and experienced evaluators in (a) learning about the Essential Competencies for Program Evaluators (ECPE), (b) applying the competencies to program evaluation contexts, and (c) using the ECPE to reflect on their own professional practices. The article briefly summarizes current issues about program evaluator competencies and the components of effective professional development. It then describes the ECPE; the objectives, content, and process of the professional development session; and the ECPE Self-Assessment Instrument. Facilitators can adapt and use the unit in a variety of settings, including university courses and program evaluation conferences.

73 citations


Journal ArticleDOI
TL;DR: In this article, the authors review the literature and practice concerned with the evaluation of science, technology, and innovation (STI) policies and the way these relate to theories of the innovation process.
Abstract: This article reviews the literature and practice concerned with the evaluation of science, technology, and innovation (STI) policies and the way these relate to theories of the innovation process. Referring to the experience of the European Union (EU), the authors review the attempts to ensure that the STI policy theory is informed by advances in the authors' understanding of the innovation process. They argue, however, that the practice of policy evaluation lags behind advances in innovation theory. Despite the efforts to promote theory-led evaluations of STI policies based on new theories of the systemic nature of innovation, evaluation practice in the EU continues to favor the development of methods implicitly based on outdated linear views of the innovation process. This article examines the reasons why this is the case and suggests that STI policy evaluation should nevertheless be supported by the evolving theoretical understanding of the innovation process.

64 citations


Journal ArticleDOI
TL;DR: In this paper, a framework for building evaluation capacity is proposed based on four strategic methods for teaching evaluation: (a) using logic models for sound program planning, (b) providing one-on-one help, (c) facilitating small-team collaborative evaluations, and (d) conducting large-scale multisite evaluations.
Abstract: Developing evaluation capacity in organizations is a complex and multifaceted task. This article outlines a framework for building evaluation capacity. The framework is based on four strategic methods for teaching evaluation: (a) using logic models for sound program planning, (b) providing one-on-one help, (c) facilitating small-team collaborative evaluations, and (d) conducting large-scale multisite evaluations. The article also reports the results of using the framework successfully with Extension 4-H field educators. 4-H educators who were trained using this method significantly increased their evaluation knowledge and skill and reported feeling more positively about evaluation. In addition, the results show that the 4-H organization as a whole has developed a more positive evaluation culture. This framework for teaching evaluation has potential value for all organizations interested in developing evaluation capacity in regional educators as well as developing a positive culture of evaluation within the organization.

62 citations


Journal ArticleDOI
TL;DR: The authors describe the development and use of interactive online diaries in the evaluation of a foundation-sponsored program to improve the provision of preventive care in physicians’ offices.
Abstract: Interactive online diaries are a novel approach for evaluating project implementation and provide real-time communication between evaluation staff members and those implementing a program. Evaluati...

Journal ArticleDOI
TL;DR: In this paper, the authors explore the experience of doing practitioner evaluation, including its solitary or collaborative character, insider and outsider ascriptions and achievements, reflective moments regarding competence and capacity, and occasional glimmers of fascination with the work.
Abstract: Practitioner involvement in evaluation, research, development, and other forms of disciplined inquiry that are small scale, local, grounded, and carried out by professionals who directly deliver those services is embraced across a wide range of professions as essential to good professional practice. However, little is known about the character, homogeneity or diversity, outcomes, motives, and practice of this activity. This article explores practitioner evaluation in social work, with an eye toward plausible connections with professional work across the public sector. The authors first explore the experience of doing practitioner evaluation, including its solitary or collaborative character, insider and outsider ascriptions and achievements, reflective moments regarding competence and capacity, and occasional glimmers of fascination with the work. The authors then explore contextualizing practitioner evaluation within its practice, agency, and professional cultures. Their third focus is through the lens of shifting practice and evaluation borderlines. The authors conclude with some provisional discussion of the implications for good practitioner evaluation.

Journal ArticleDOI
TL;DR: This article describes the method, provides a case illustration, offers guidelines for practice, and discusses the method in the context of the evaluation literature on goals and goal setting.
Abstract: Clearly defined and measurable goals are commonly considered prerequisites for effective evaluation. Goal setting, however, presents a paradox to evaluators because it takes place at the interface of rationality and values. The objective of this article is to demonstrate a method for unlocking this paradox by making goal setting a process of evaluating goals, not simply defining them. Goals can be evaluated by asking program stakeholders why their goals are important to them. Systematic inquiry into goals also prepares the ground for setting consensual goals that express what stakeholders really care about. This article describes the method, provides a case illustration, offers guidelines for practice, and discusses the method in the context of the evaluation literature on goals and goal setting.

Journal ArticleDOI
TL;DR: This paper conducted a survey of university-based evaluation preparation programs to answer this question and to update findings from prior surveys of university preparation programs for evaluators, and presented the results and implications of the 2002 survey.
Abstract: Entry into professions, such as medicine, law, and the clergy, is typically controlled. In contrast, evaluation has multiple pathways leading into the field. For evaluators, a graduate degree from a university program is one of several ways to become an evaluator. What university programs exist to prepare evaluators and what is their nature? In 2002, we undertook a survey of university-based evaluation preparation programs to answer this question and to update findings from prior surveys of university preparation programs for evaluators. In this article, we briefly review the results of previous surveys, discuss the methods for collecting the current data, and present the results and implications of the 2002 survey.

Journal ArticleDOI
TL;DR: In this article, a review of recent evaluation reports and literature reveals three different meanings of effectiveness in use: increased understanding, accountability, and demonstrated causal linkages, and each meaning of effectiveness is examined in light of its implications for evaluation design.
Abstract: In the present climate of public accountability, there is increasing demand to show “what works” and what return is gained for the public from investments to improve communities. This increasing demand for accountability is being met with growing confidence in the field of philanthropy during the past 10 years that the impact or effectiveness of community initiatives can be measured. A review of recent evaluation reports and literature reveals three different meanings of effectiveness in use: increased understanding, accountability, and demonstrated causal linkages. Drawing on a general analysis of the concepts of accountability, effectiveness, and causality, each meaning of effectiveness is examined in light of its implications for evaluation design. The article closes with a guiding schema for organizations planning evaluations to help determine which kind of effectiveness is important in light of organizational values and aims.

Journal ArticleDOI
TL;DR: In this article, the authors present a stakeholder-driven method, the earliest anticipated timeline of impact, which is designed to assess stakeholder expectations for the earliest time frame in which social progra...
Abstract: The authors present a stakeholder-driven method, the earliest anticipated timeline of impact, which is designed to assess stakeholder expectations for the earliest time frame in which social progra...

Journal ArticleDOI
TL;DR: In this article, the authors provide a critical review of the quality of 12 recent large federal program evaluations, focusing on elements of the evaluation design and the inclusion of evaluation expertise.
Abstract: This article provides a critical review of the quality of 12 recent large federal program evaluations. The review focused on elements of the evaluation design, inclusion of evaluation expertise amo...

Journal ArticleDOI
TL;DR: This article presents the use of the CDC Framework for Program Evaluation in Public Health to create and teach practical evaluation methods to master of public health students, and includes the teaching approach, a semester-long syllabus, and course evaluation data.
Abstract: Human service fields, and more specifically public health, are increasingly requiring evaluations to prove the worth of funded programs. Many public health practitioners, however, lack the required background and skills to conduct useful, appropriate evaluations. In the late 1990s, the Centers for Disease Control and Prevention (CDC) created the Framework for Program Evaluation in Public Health to provide guidance and promote use of evaluation standards by public health professionals. The emphasis of the Framework is utilization-focused evaluation for program improvement or to assess program impact. This article presents the use of the CDC Framework for Program Evaluation in Public Health to create and teach practical evaluation methods to master of public health students. The article includes the teaching approach, semester-long syllabus for students, and course evaluation data; suggests supplementary materials; and discusses implementation issues.

Journal ArticleDOI
TL;DR: The author assesses the quality of the logic-modeling approach taken by one agency to illustrate how a flawed approach to logic modeling may lead to incorrect conclusions about programs and about the benefits of logic models.
Abstract: The Office of Management and Budget has recommended the termination of numerous federal programs, citing a lack of program results as the primary reason for this decision. In response to this recommendation, several federal agencies have turned to logic modeling to demonstrate that programs are on the path to results accountability. However, approaches to logic modeling in some agencies have not followed the strategies that evaluators recommend as necessary to lead to a high quality logic model. Models of poor quality are unlikely to contribute to improved program accountability or better program results. In this article, the author assesses the quality of the logic-modeling approach taken by one agency to illustrate how a flawed approach to logic modeling may lead to incorrect conclusions about programs and about the benefits of logic models. As a result of this analysis, the author questions the conditions under which capacity building should be considered and whether the field of evaluation needs to be...
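For readers unfamiliar with the artifact being critiqued above, a logic model is essentially an if-then chain from resources to results. A minimal, hypothetical representation might look like the following; the program and its entries are invented for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class LogicModel:
    """A bare-bones logic model: the if-then chain a quality review scrutinizes."""
    inputs: list = field(default_factory=list)      # resources invested
    activities: list = field(default_factory=list)  # what the program does
    outputs: list = field(default_factory=list)     # direct products
    outcomes: list = field(default_factory=list)    # short- and long-term results

model = LogicModel(
    inputs=["federal grant", "program staff"],
    activities=["train regional educators"],
    outputs=["40 workshops delivered"],
    outcomes=["educators apply evaluation skills in their programs"],
)
print(model)
```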

Journal ArticleDOI
TL;DR: With the emphasis on the use of evidence-based practices has come a need to measure the fidelity of replications to the operations and principles of original models.
Abstract: With the emphasis on the use of evidence-based practices has come a need to measure the fidelity of replications to the operations and principles of original models. Recent reviews have focused on ...
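Although the abstract above is truncated, the core of a fidelity measure can be sketched as a simple adherence index over a model's required components; the checklist below is hypothetical and not from the reviews discussed:

```python
# Hypothetical checklist: which components of an original model a replication implements.
original_components = ["staff training", "weekly sessions", "peer mentoring", "outcome tracking"]
replication = {
    "staff training": True,
    "weekly sessions": True,
    "peer mentoring": False,
    "outcome tracking": True,
}

# Unweighted adherence index: share of required components actually in place.
fidelity = sum(replication.get(c, False) for c in original_components) / len(original_components)
print(f"fidelity index: {fidelity:.0%}")  # 75%
```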

Journal ArticleDOI
TL;DR: In this paper, the authors examined to what extent evaluations carried out in a highly government-driven manner can nevertheless contribute to deliberative democracy, taking the Organisation for Economic Co-operation and Development's environmental performance reviews as an example of an expert-led evaluative process built on the ideals of representative democracy.
Abstract: Deliberative democracy has attracted increasing attention in political science and has been suggested as a normative ideal for evaluation. This article analyzes to what extent evaluations carried out in a highly government-driven manner can nevertheless contribute to deliberative democracy. This potential is examined by taking the Organisation for Economic Co-operation and Development's environmental performance reviews as an example of an expert-led evaluative process built on the ideals of representative democracy. The author argues that although they are not participatory, these reviews lay the groundwork for deliberative democracy by "empowering" weaker actors within governments and by improving the factual basis for political debate and decision making. This example suggests that to enhance deliberative democracy, the evaluation process need not be highly inclusive, dialogical, and deliberative but that a broader view is needed, encompassing the indirect impacts of evaluation on power relations and on the knowledge basis on which decision making relies.

Journal ArticleDOI
TL;DR: In this article, the authors describe how job analysis, a method commonly used in personnel research and organizational psychology, provides a systematic method for documenting program staffing and service delivery, using a set of tools from the job analysis literature.
Abstract: This article describes how job analysis, a method commonly used in personnel research and organizational psychology, provides a systematic method for documenting program staffing and service delive...

Journal ArticleDOI
TL;DR: In this paper, the authors report on an evaluation of a comprehensive school reform initiative in one elementary school intended to provide meaningful and sustainable structure, substance, and support to the school's critical need to improve students' standardized test scores.
Abstract: The authors report on an evaluation of a comprehensive school reform initiative in one elementary school intended to provide meaningful and sustainable structure, substance, and support to the school’s critical need to improve students’ standardized test scores. Serious implementation and communication problems during the year of the evaluation suggested that the reform itself was misdirected. Teachers’ accounts of their participation in the initiative suggested that some experienced it as a confusing set of external programs they were required to implement without adequate training or support. Sometimes, the reform programs actually conflicted with test preparation pressures and activities, and even with one another. The authors struggled as evaluators to find ways to connect to the dynamics of this complicated context, especially ways that could enact their evaluative commitments to responsiveness, the public good, and ethical practice. They were thwarted by apathy, withdrawal, and silence. This article...

Journal ArticleDOI
TL;DR: In this paper, community-based organizations (CBOs), typically small and underfunded with transient staff members, are told by funders to care for clients and verify program value, and the authors discuss how to assist CBOs with evaluations.
Abstract: Community-based organizations (CBOs), typically small and underfunded with transient staff members, are told by funders to care for clients and verify program value. To assist CBOs with evaluations...


Journal ArticleDOI
TL;DR: In this interview, an evaluator describes coming across appreciative inquiry (AI), becoming excited by its participatory, collaborative character, and applying it to an evaluation of a corporate training department.
Abstract: Christina A. Christie (Claremont Graduate University) interviews an evaluator about an evaluation that used appreciative inquiry (AI). The interviewee recounts coming across AI, being drawn to its participatory, collaborative character, and reading everything available on the approach. Asked for a brief explanation, the interviewee notes that AI has been in use since the mid-1980s as an approach to designing and implementing organizational change: rather than looking for what needs to change, which often leaves people feeling discouraged, proponents of AI have found that searching for what works leaves participants excited and energized (Cooperrider, Sorensen, Whitney). The evaluation discussed concerned CEDT, the internal training function of a national laboratory in Albuquerque, New Mexico, whose services and offerings included instructor-led classroom training and consulting.

Journal ArticleDOI
TL;DR: In this article, the authors respond to a case scenario in which the evaluator, Dr. Porto-Novo, was involved in evaluating previous projects of the International Development Agency (IDA) and had identified the successful project that was to be replicated.
Abstract: With great interest, we read and reread the case scenario “When Is an External Evaluator No Longer External?” In the literature, there is no dearth of definitions of internal and external evaluators. For example, Kendall-Tackett (2005) defined an external evaluator as “any individual not directly employed by the program under evaluation” and an internal evaluator as “any staff person directly involved in the program under evaluation, or in the agency in which the program is housed.” Worthen and Sanders (1987) thoroughly examined the advantages and disadvantages of external evaluators and identified an impartial and fresh perspective as the most important benefit of hiring external evaluators. However, as demonstrated in the scenario, distinguishing internal and external evaluation is not so clear cut. In the scenario, Dr. Porto-Novo was involved in evaluating previous projects of the International Development Agency (IDA) and had identified the successful project that was to be replicated. That mere fact makes it difficult, if not impossible, to judge whether Dr. Porto-Novo was an internal or external evaluator. Depending on how we view her employment status, her relationship with IDA, and the terms of the contract between Riga (the consulting firm for which she works) and IDA, Dr. Porto-Novo’s role could be both external and internal in evaluating IDA projects, a fact that raises a red flag for ethical issues.



Journal ArticleDOI
TL;DR: In this case scenario, a consortium of foundations has published a request for proposals for an “external” evaluator to evaluate a replication, at two new sites, of a program the foundations have been funding for many years.
Abstract: A request for proposals has been published by a consortium of foundations for an “external” evaluator to evaluate a replication at two new sites of a program they have been funding for many years. A proposal is received from Dr. Porto-Novo, who has been the external evaluator of the initial program for about 10 years. She has developed much of her reputation and that of her group, Riga, Inc., as well as the majority of Riga’s budget, from this work. High-quality proposals are also received from several other fully qualified evaluators. A team consisting of two staff members from the initial program, two foundation staff members, and an external evaluation expert has reviewed the proposals. The team has given several of the proposals high rankings, but it is split about the award. Staff members from the initial program want to award the contract to Riga, but the staff representatives of the foundations argue against this. They explain that they feel that Riga has such a close relationship with the agency that it can no longer be considered external. Serving as the external evaluation expert on the team, I am asked to help break the impasse. I can see advantages and disadvantages to both arguments.