
Showing papers in "American Journal of Evaluation in 2007"


Journal ArticleDOI
TL;DR: In this paper, the authors note that government, foundations, and others are increasingly asking community-based organizations for evaluation information, but that the field's knowledge of how to meet this demand may not be keeping pace.
Abstract: Increasingly, government, foundations, and others are asking community-based organizations for more evaluation information. Although the demand for this information may be increasing, the field kno...

197 citations


Journal ArticleDOI
TL;DR: The authors hope that REAM becomes a valuable addition to evaluators' toolkits and introduce readers to a family of techniques with which they may be unfamiliar, highlight their strengths and limitations, and suggest appropriate contexts for use.
Abstract: A central issue in the use of rapid evaluation and assessment methods (REAM) is achieving a balance between speed and trustworthiness. In this article, the authors review the key differences and common features of this family of methods and present a case example that illustrates how evaluators can use rapid evaluation techniques in their own work. In doing so, the authors hope to (a) introduce readers to a family of techniques with which they may be unfamiliar, (b) highlight their strengths and limitations, and (c) suggest appropriate contexts for use. Ultimately, the authors hope that REAM becomes a valuable addition to evaluators' toolkits.

168 citations


Journal ArticleDOI
TL;DR: Empowerment evaluation continues to crystallize central issues for evaluators and the field of evaluation, as discussed in a highly attended American Evaluation Association conference panel titled "Empowerment Ev...
Abstract: Empowerment evaluation continues to crystallize central issues for evaluators and the field of evaluation. A highly attended American Evaluation Association conference panel, titled “Empowerment Ev...

127 citations


Journal ArticleDOI
TL;DR: This research addresses the challenge of determining how best to disseminate evaluation findings to potential users in formats that facilitate use of the findings.
Abstract: Use of evaluation findings is a valued outcome for most evaluators. However, to optimize use, the findings need to be disseminated to potential users in formats that facilitate use of the informati...

111 citations


Journal ArticleDOI
TL;DR: In this article, the authors describe an approach to demystifying and assessing interpersonal collaboration and use their consultancy work with school improvement stakeholders to illustrate a multistage collaboration evaluation process.
Abstract: “Collaboration” is a ubiquitously championed concept and widely recognized across the public and private sectors as the foundation on which the capacity for addressing complex issues is predicated. For those invested in organizational improvement, high-quality collaboration has become no less than an imperative. However, evaluators and program stakeholders often struggle to assess the quality of collaborative dynamics and the merits of collaborative structures. In this article, the authors describe an approach to demystifying and assessing interpersonal collaboration and use their consultancy work with school improvement stakeholders to illustrate a multistage collaboration evaluation process. Evaluators in a wide range of organizational settings are encouraged to utilize collaboration theory and the evaluation strategies presented herein to cultivate stakeholder capacity to understand, examine, and capitalize on the power of collaboration.

102 citations


Journal ArticleDOI
TL;DR: In this article, the authors address the link between building evaluation capacity and enhancing evaluation use for learning and demonstrate the need for supportive organizational contexts, structures, and processes to facilitate an individual's ability to learn from evaluation.
Abstract: This action research study addresses the link between building evaluation capacity and enhancing evaluation use for learning. The author shares her experiences and reports on the evidence collected from a set of self-evaluation capacity-building interventions that she implemented. Using a mixed-method approach, the author first determined that the organization lacked the prerequisites to learn from evaluation. She then implemented self-evaluation capacity-building interventions, accompanied by a real-time study of how the participants changed attitudinally, cognitively, and behaviorally. Results indicate the potential of these interventions to facilitate an individual’s ability to learn from evaluation and underscore the need for supportive organizational contexts, structures, and processes if evaluation use for learning is to occur throughout the organization.

90 citations


Journal ArticleDOI
TL;DR: In this article, the state of practice of evaluability assessment (EA) as represented in the published literature from 1986 to 2006 was presented, and three EA studies were located, showing that EA was c...
Abstract: This article presents the state of practice of evaluability assessment (EA) as represented in the published literature from 1986 to 2006. Twenty-three EA studies were located, showing that EA was c...

75 citations


Journal ArticleDOI
TL;DR: This simulation study examines the reported influence of evaluation information on decision makers' potential actions in a set of scenarios derived from actual evaluation studies; results indicate that participants were influenced by all types of information.
Abstract: Using a set of scenarios derived from actual evaluation studies, this simulation study examines the reported influence of evaluation information on decision makers’ potential actions. Each scenario...

71 citations


Journal ArticleDOI
TL;DR: In this article, a mixed-methods evaluation model based on the qualitative method phenomenography is proposed to evaluate how learners think in multiple contexts, from skills training to employee development to higher education, and how their thinking may change over time.
Abstract: Increasing calls for accountability in education have promoted improvements in quantitative evaluation approaches that measure student performance; however, this has often been to the detriment of qualitative approaches, reducing the richness of educational evaluation as an enterprise. In this article the authors assert that it is not merely performance but also how learners think and how their thinking changes that we should be measuring in educational program evaluation. They describe a mixed-methods evaluation model based on the qualitative method phenomenography that can be used to evaluate how learners think in multiple contexts, from skills training to employee development to higher education, and how their thinking may change over time. They then describe two evaluation studies making use of this approach and provide suggestions for evaluators interested in using the phenomenographic model.

65 citations


Journal ArticleDOI
TL;DR: In this article, the authors describe, classify, and comparatively evaluate national models and mechanisms used to evaluate research and allocate research funding in 16 countries, in terms of their validity, credibility, utility, cost-effectiveness and ethicality.
Abstract: This research describes, classifies, and comparatively evaluates national models and mechanisms used to evaluate research and allocate research funding in 16 countries. Although these models and mechanisms vary widely in terms of how research is evaluated and financed, nearly all share the common characteristic of relating funding to some measure of past performance. Each of these 16 national models and mechanisms were rated by independent, blinded panels of professional researchers and evaluators in two countries on more than 25 quality indicators. The national models were then ranked using the panels' ratings, in terms of their validity, credibility, utility, cost-effectiveness, and ethicality. The highest ratings were received by nations using large-scale research assessment exercises. Bulk funding and indicator-driven models received substantially lower ratings. Implications for research evaluation practice and policy are considered and discussed.

59 citations


Journal ArticleDOI
TL;DR: In this paper, the authors make the axiological assumption that the social justice theory of ethics leads to an awareness of the need to be "sensitive" to the needs of others.
Abstract: Should the Russians be included in the evaluation and, if so, how can that be done? Based on the axiological assumption that the social justice theory of ethics leads to an awareness of the need to...

Journal ArticleDOI
TL;DR: The authors compared student achievement effect sizes and found that systems in which student performance in math and reading is rapidly assessed between 2 and 5 times per week are 4 times as effective as a 10% increase in per pupil expenditure.
Abstract: Comparisons of student achievement effect sizes suggest that systems in which student performance in math and reading is rapidly assessed between 2 and 5 times per week are 4 times as effective as a 10% increase in per pupil expenditure, 6 times as effective as voucher programs, 64 times as effective as charter schools, and 6 times as effective as increased accountability. Achievement gains per dollar from rapid assessment are even greater—193 times the gains that accrue from increasing preexisting patterns of educational expenditures, 2,424 times the gains from vouchers, 23,166 times the gains from charter schools, and 57 times the gains from increased accountability. Two sensitivity analyses suggest that the relative advantage for rapid assessment is not sensitive to the particular parameter estimates.

Journal ArticleDOI
TL;DR: The step-by-step process for implementation of Photolanguage is provided, the possibility of uses for Photolanguages as a tool for small-group evaluation is explored, and data is supplied showing its benefits as a method of evaluation.
Abstract: Program evaluators need methods that can be used in a variety of situations to help gather data. Photolanguage is one such technique that uses black-and-white photographs to elicit responses from individuals. It is particularly useful in situations where the respondents may give restricted or only minimal data. This article provides the step-by-step process for implementation of Photolanguage, explores the possibility of uses for Photolanguage as a tool for small-group evaluation, and supplies data showing its benefits as a method of evaluation.

Journal ArticleDOI
TL;DR: In this paper, the authors argue that evidence from effectiveness research should be graded on different design dimensions, accounting for conceptualization and execution aspects of a study, and that well-implemented, phased designs using multiple research methods carry the highest potential to yield the best grade of evidence on effects of complex, field interventions.
Abstract: This article argues, based on a literature review, that a simplistic distinction between strong and weak evidence hinged on the use of randomized controlled trials (RCTs), the federal "gold standard" for generating rigorous evidence on social programs and policies, is not tenable for evaluative studies of complex, field interventions such as those found in education. It introduces instead the concept of grades of evidence, illustrating how the choice of research designs, coupled with the rigor with which they can be executed under field conditions, progressively affects evidence quality. It argues that evidence from effectiveness research should be graded on different design dimensions, accounting for the conceptualization and execution aspects of a study. Well-implemented, phased designs using multiple research methods carry the highest potential to yield the best grade of evidence on the effects of complex, field interventions.

Journal ArticleDOI
TL;DR: In this article, empowerment evaluation can be viewed as an ideology that promotes a particular set of social and professional values, and judging the quality and utility of empowerment evaluation thus requires a critical appraisal of the implications of adopting those values.
Abstract: As with many forms of evaluation, empowerment evaluation can be viewed as an ideology that promotes a particular set of social and professional values. Judging the quality and utility of empowerment evaluation thus requires a critical appraisal of the implications of adopting those values.

Journal ArticleDOI
TL;DR: In this paper, a transformation of evaluation priorities is needed: evaluation frameworks should give more weight to alignment with the millennium development goals, impact measures of development programs should be aggregated to the country and global levels, accountability should be enhanced by sharper attributions of results according to the distinctive accountabilities of development partners, attribution of results to aid should be examined using methods appropriate to the situation, and the asymmetry of the development evaluation agenda should be remedied by a sharper focus on the impact of rich countries' policies on global poverty reduction.
Abstract: The millennium development goals have created new challenges for development evaluation. The main unit of account has shifted to the country level. Evaluation ownership must move from donor agencies to developing countries. The recognition that rich countries have development obligations is opening up evaluation frontiers beyond aid. A transformation of evaluation priorities is needed: (a) Evaluation frameworks should give more weight to alignment with the millennium development goals, (b) impact measures of development programs should be aggregated to the country and global levels, (c) accountability should be enhanced by sharper attributions of results according to the distinctive accountabilities of development partners, (d) attribution of results to aid should be examined using methods appropriate to the situation, and (e) the asymmetry of the development evaluation agenda should be remedied by a sharper focus on the impact of rich countries' policies on global poverty reduction.

Journal ArticleDOI
TL;DR: In this paper, the authors search relevant databases to identify 103 cost studies in education and then reduce the set to 31 using criteria focused on rigor in determining program effects and assessment of costs and found that cost studies provide evidence of the worth of educational spending at the macro and individual program levels, information that is not provided by other evaluation approaches.
Abstract: Cost studies are program evaluations that judge program worth by relating program costs to program benefits. There are three sets of strategies: cost-benefit, cost-effectiveness, and cost-utility analysis, although the last appears infrequently. The authors searched relevant databases to identify 103 cost studies in education and then reduced the set to 31 using criteria focused on rigor in determining program effects and assessment of costs. They found that cost studies provide evidence of the worth of educational spending at the macro and individual program levels, information that is not provided by other evaluation approaches; provide direction for program improvement that differs from recommendations based solely on effect sizes; and contribute to knowledge development by constructing and testing models that link spending to student learning.

Journal ArticleDOI
TL;DR: In this article, a multilevel growth model was applied to data describing students and classroom sites to demonstrate how the multi-level modeling framework can be used to analyze data obtained from a multisite program implementation.
Abstract: The evaluation of an intervention delivered across multiple treatment sites presents a unique opportunity for evaluators to gauge the manner and degree to which the “impact” of treatment varies across implementation conditions and different target populations. However, the availability of implementation data for each treatment site, while presenting the opportunity for more sophisticated impact assessment, also presents an analytic challenge. In the following, multilevel growth models were applied to data describing students and classroom sites to demonstrate how the multilevel modeling framework can be used to analyze data obtained from a multisite program implementation. Results of the investigation indicated that increased adherence to the program model was not associated with more positive recipient outcomes. Further examination of the null finding indicated that the highest and lowest rates of literacy growth observed in the study were concentrated in several low-implementing sites. Implications for ...


Journal ArticleDOI
TL;DR: Horizontal evaluation combines self-assessment and external evaluation by peers, as discussed by the authors in the context of Papa Andina, a regional network that works to reduce rural poverty in the Andean region by fostering innovation in p...
Abstract: Horizontal evaluation combines self-assessment and external evaluation by peers. Papa Andina, a regional network that works to reduce rural poverty in the Andean region by fostering innovation in p...

Journal ArticleDOI
TL;DR: Problem-based learning as discussed by the authors is an experiential learning approach that can integrate the need to balance self-study of theory and practice, along with the need for familiarizing students with the dynamic, interactive nature of program evaluation.
Abstract: As the evaluation profession has continued to grow and develop, there has been a corresponding concern about how to properly train future evaluation practitioners. Those who teach evaluation strive to develop training opportunities that create the appropriate balance of the practical, how-to knowledge of evaluation with the burgeoning theoretical knowledge that undergirds responsible evaluation practice. There is a recognized need to move beyond traditional teaching methods to ones that are more engaging and “hands on” to help students understand the interactive nature of program evaluation. Problem-based learning is an experiential learning approach that can integrate the need to balance self-study of theory and practice, along with the need to familiarize students with the dynamic, interactive nature of program evaluation. This article serves as an introduction to the problem-based learning approach and describes this instructional method applied to teaching a graduate-level course in evaluation procedures.

Journal ArticleDOI
TL;DR: This paper used propensity scores to identify subgroups of individuals most likely to experience a reduction in cash benefits because of sanctions in some of the programs that make up the National Evaluation of Welfare-to-Work Strategies.
Abstract: This article uses propensity scores to identify subgroups of individuals most likely to experience a reduction in cash benefits because of sanctions in some of the programs that make up the National Evaluation of Welfare-to-Work Strategies. It extends program evaluation methodology by using propensity scoring to identify the subgroups of sanctioned and nonsanctioned welfare recipients. Specifically, the propensity score is used to identify the sample subset most likely to experience program sanction. In this application, the propensity score helps deal with an omitted variable problem, that of not knowing what the sanction status is in the control group (because they were not subject to the policies being tested). Findings reveal that being high sanction risk induces greater work levels and therefore higher earnings, but it also results in receiving less cash assistance so that sanctioned recipients have roughly the same net incomes as nonsanctioned ones.


Journal ArticleDOI
TL;DR: This article uses the context of the university classroom and a writing sample to demonstrate how disciplined self-reflection can help students examine personal perspectives that surface during the research process and monitor bias.
Abstract: The necessities and benefits of reflexivity are now well laid out in the broader social science literature, and the American Evaluation Association's (2004) Guiding Principles for Evaluators identify reflective practices that evaluators are expected to carry out. This article uses the context of the university classroom and a writing sample to demonstrate how disciplined self-reflection can help students examine personal perspectives that surface during the research process and monitor bias. Failure to develop and maintain a reflective stance can result in a variety of ethical and practical dilemmas. Fortunately, written reflections and classroom discussions can help screen for potential dilemmas and point the evaluation in more appropriate directions. A preliminary list of readings and classroom activities is included to help faculty guide students in their exploration, monitoring, and constructive use of personal perspectives.


Journal ArticleDOI
TL;DR: In an attempt to build a worldwide evaluation community, English evaluation journals are best positioned to promote international dialogue and increasingly provide international exchanges that deepen the understanding of the evaluation community as discussed by the authors.
Abstract: In an attempt to build a worldwide evaluation community, English evaluation journals are best positioned to promote international dialogue and increasingly provide international exchanges that deep...

Journal ArticleDOI
TL;DR: In this article, several principles for youth-involved research and evaluation are outlined and applied to Dr. Luanda's case to explore how she might proceed and to consider how she could have avoided this ethical challenge.
Abstract: Including youth in the evaluation process can enhance the quality of the inquiry and be empowering for the participants, but it is not without challenges. In this article, several principles for youth-involved research and evaluation are outlined. These come from those who are pioneering this approach and from current research on the dilemmas of practice encountered by youth program leaders. These guidelines, along with the American Evaluation Association's Guiding Principles, are then applied to Dr. Luanda's case to explore how she might proceed and to consider how she might have avoided this ethical challenge.


Journal ArticleDOI
TL;DR: A rigorous, sequential approach to e-mail interview analysis based on three hierarchical language levels—lexical, semantic, and pragmatic—is proposed to salvage meanings that might otherwise be lost because of limited data availability.
Abstract: Despite the best of intentions, qualitative researchers can be faced, in some circumstances, with having to make meaning from thin, or less than optimal, data. Using a real study as context, the au...