
Showing papers in "American Journal of Evaluation in 1999"



Journal ArticleDOI
TL;DR: In this article, the authors examine the contribution that transformative theory can make toward meeting this challenge and propose the use of an inclusive model of evaluation that can address the tension between what is needed to accurately represent the experiences of marginalized groups and the traditional canons of research.
Abstract: Evaluators face a challenge in responding to a call for greater inclusiveness of marginalized groups. In this presentation, I examine the contribution that transformative theory can make toward meeting this challenge. Transformative scholars assume that knowledge is not neutral, but is influenced by human interests, and that all knowledge reflects the power and social relationships within society, and that an important purpose of knowledge construction is to help people improve society. I propose the use of an inclusive model of evaluation that can address the tension between what is needed to accurately represent the experiences of marginalized groups and the traditional canons of research.

155 citations


Journal ArticleDOI
TL;DR: The present paper considers the qualitative approach to evaluation design to be typified by a case study with a sample of just one, and argues that this qualitative approach would have exceptional strengths in external validity.
Abstract: Consider the qualitative approach to evaluation design (as opposed to measurement) to be typified by a case study with a sample of just one. Although there have certainly been elaborate and emphatic defenses of the qualitative approach to program evaluation, such defenses rarely attempt to qualify the approach explicitly and rigorously as a method of impact analysis. The present paper makes that attempt. The problem with seeking to advance a qualitative method of impact analysis is that impact is a matter of causation and a non-quantitative approach to design is apparently not well suited to the task of establishing causal relations. The root of the difficulty is located in the counterfactual definition of causality, which is our only broadly accepted formal definition of causality for social science. It is not, however, the only definition we use informally. Another definition, labeled “physical causality,” is widely used in practice and has recently been formalized. Physical causality can be applied to ...

80 citations


Journal ArticleDOI
TL;DR: A conceptual framework is presented as a tool for putting the program theory into operation and incorporates variables that reflect theoretical concepts and implementation issues addressed in program evaluation.
Abstract: The emphasis on understanding how a program works and what makes a program work led to the development of the theory-driven approach to program evaluation. Theory plays a major role in guiding a program design and evaluation. The theory defines the presenting problem and the target population for whom the program is designed, specifies the causal processes underlying the program effects, and identifies its expected outcomes as well as factors that affect treatment processes. In this article, a conceptual framework is presented as a tool for putting the program theory into operation. The framework incorporates variables that reflect theoretical concepts and implementation issues addressed in program evaluation. The framework organizes the variables into three categories: input, process, and output, and proposes direct and indirect relationships among them. Implications of the framework for program evaluation are discussed.

64 citations


Journal ArticleDOI
TL;DR: This publication evaluates the Comprehensive Child Development Program (CCDP), a case management and home visitation program designed to provide low-income families with educational, health, and social services.
Abstract: This publication evaluates the Comprehensive Child Development Program (CCDP), a case management and home visitation program designed to provide low-income families with educational, health, and social services. Keywords: Home Based, Home Visitation, Program Evaluation, Child Development, Healthy Development, Parent Education, Public Health Services, Case Management, Early Childhood Education, Early Intervention.

58 citations


Journal ArticleDOI
TL;DR: It is argued that the framework of inquiry modes and evaluation purposes provides a common lexicon for evaluators, which may help the field in moving beyond past divisions, and offers a useful approach to evaluation planning.
Abstract: Evaluation has been beset with serious divisions, including the paradigm wars and the seeming segmentation of evaluation practice into distinct evaluation theories and approaches. In this paper, we describe key aspects of an integrative framework that may help evaluators move beyond such divisions. We offer a new scheme for categorizing evaluation methods within four inquiry modes, which are “families” or clusters of methods: description, classification, causal analysis, and values inquiry. In addition, we briefly describe a set of alternative evaluation purposes. We argue that, together with a form of realist philosophy, the framework of inquiry modes and evaluation purposes (1) provides a common lexicon for evaluators, which may help the field in moving beyond past divisions, and (2) offers a useful approach to evaluation planning.

56 citations


Journal ArticleDOI
TL;DR: In this paper, the authors report experience conducting focus groups with deaf and hard of hearing people, from which they reflect on lessons learned about how we can more effectively listen to people in all focus groups, particularly those that don't intentionally include people with hearing loss.
Abstract: Focus groups are a common tool in evaluation. Here we report experience conducting focus groups with deaf and hard of hearing people, from which we reflect on lessons learned about how we can more effectively “listen” to people in all focus groups—particularly those that don’t intentionally include people with a hearing loss. We learned that: (1) focus groups with deaf and hard of hearing people can be highly productive on even the most sensitive issues and across disparate socioeconomic and ethnic differences if they share a common interest and mode of communication; (2) the physical environment of group communication may count more than we usually notice; (3) ensuring communication requires a very high level of vigilance by moderators and observers; (4) it is hard to “focus” focus groups in some cultures, so one must allow for that; (5) genuine communication may require more time and patience than one might expect; (6) confidentiality in a marginalized community may require special attention; (7) an exp...

43 citations



Journal ArticleDOI
TL;DR: In this paper, the authors argue that traditional frameworks, including traditional conceptions of race, hinder our ability to evaluate culturally and socially different worlds and realities, in this case, those created and transformed by people of color.
Abstract: This is an essay about the racialized conventional wisdoms and the academic traditions that impede the ability of social scientists to adequately explain and evaluate the colorization of the life worlds around them, particularly in terms of explanations involving power, privilege, and empowerment. I conclude that it is impossible to discuss adequately the more technical issues of technique and measurement, until we grasp the epistemological and biographical problematics of social sciences and their uses. Traditional frameworks, including traditional conceptions of race, hinder our ability to evaluate culturally and socially different worlds and realities, in this case, those created and transformed by people of color. It is especially important to find alternatives in a society and globe that are becoming increasingly colorized in terms of demographics, power and authority.

40 citations


Journal ArticleDOI
TL;DR: It is argued that the development and implementation of a certification process can succeed only if pursued patiently and incrementally, with adequate time for AEA to test the feasibility of each of the definitional and procedural processes that would need to undergird such a system.
Abstract: There are many clear-cut reasons why it would be desirable to have a certification process for evaluators, both to help improve the practice of evaluation and to further the field’s maturation into a full-fledged profession. But there are also many strong arguments against attempting to launch an evaluator certification system, at least at this time. The arguments for and against AEA attempting to set up an evaluator certification system are discussed, including two contextual changes that have led to a significant shift in my thinking on this topic over the last three decades. Also presented are four major challenges that would need to be overcome in any effort to develop a viable evaluator certification system. In light of these challenges, I argue that the development and implementation of a certification process can succeed only if pursued patiently and incrementally, with adequate time for AEA to test the feasibility of each of the definitional and procedural processes that would need to undergird such a system. Finally, an agenda of steps is suggested that AEA might take to determine if our field and our collective wisdom have matured sufficiently for us to succeed in such an important undertaking.

36 citations


Journal ArticleDOI
TL;DR: In 1997, the Board of Directors of the American Evaluation Association received a report from a Task Force that examined the potential of certifying evaluators; key topics include a clarification of relevant terms, a review of the status of evaluation as a profession, and a discussion of how other professions certify their practitioners.
Abstract: In 1997, the Board of Directors of the American Evaluation Association received a report from a Task Force that examined the potential of certifying evaluators. Key topics in the report are summarized here. These include: a clarification of relevant terms, a review of the status of evaluation as a profession, and a discussion of how other professions certify their practitioners. Future options are discussed. One option, that of maintaining the status quo (i.e., no action in regard to certification), was seen by the Task Force as having serious, long-term consequences for evaluation. A preliminary look is taken at the advantages and disadvantages of developing a process to certify evaluators.

Journal ArticleDOI
TL;DR: In this article, a framework for theory-based discrepancy evaluation is presented, which is used to determine the extent of discrepancy that exists between the expected theory, as constructed using the tools introduced here, and what is actually observed in the evaluation.
Abstract: Early proponents of theory-based evaluation have provided strong reasoning for this approach. Still, tools are needed for implementing it in practice. This paper helps fill the gap by providing strategies for constructing theories. Illustrations are given in the area of public health. The suggested techniques are designed to systematize and bring objectivity to the process of theory construction. Also introduced is a framework that illustrates different levels, and processes within each level, that should be considered when constructing program theories. The framework is valuable for theory-based discrepancy evaluation, the essence of which is to determine the extent of discrepancy that exists between the expected theory, as constructed using the tools introduced here, and what is actually observed in the evaluation.

Journal ArticleDOI
TL;DR: A cost-effectiveness analysis showed that this local school-based strategy for obtaining parental consent for program evaluation was more cost-effective than those in previous studies; however, more than 20% of the data collection costs involved obtaining active written consent.
Abstract: This study assesses the effectiveness of a strategy for obtaining active written parental consent for the outcome evaluation of an alcohol, tobacco, and other drug (ATOD) abuse prevention program. A local school-based strategy that was implemented in 16 middle schools in ten rural and suburban school districts is presented. Using a multiple case study approach and an adequacy of performance analysis, it was found that seven of the ten districts achieved a minimum consent rate goal set at 70% (rates ranged from 53% to 85%, with an average of 72%). Only two districts achieved a desired consent rate of 80%. Interviews with a key contact person in each school district provided profile information that distinguished districts that were successful in implementing an active parental consent strategy from those that were not successful. A cost-effectiveness analysis showed that this local school-based strategy for obtaining parental consent for program evaluation was more cost-effective than those in previous studies. However, more than 20% of the data collection costs involved obtaining active written consent. Methodological and practical implications are discussed.



Journal ArticleDOI
TL;DR: A voluntary system for credentialing evaluators is described in this paper, where the authors examine the urgent need for such a system in the field of evaluation, as well as various concerns regarding credentialing.
Abstract: A voluntary system for credentialing evaluators is described. I examine the urgent need for such a system in the field of evaluation, as well as various concerns regarding credentialing. The paper also includes an unabridged and adapted version of a table that was used in a debate on certification at the 1998 annual meeting of the American Evaluation Association in Chicago. The table is helpful in understanding both the pro and con sides of the credentialing issue.



Journal ArticleDOI
TL;DR: In this article, the authors present an argument supporting the position that AEA should not undertake a certification or similar process at the present time, even though they believe the public needs some type of protection from unscrupulous and incompetent evaluators.
Abstract: In this paper I present an argument supporting the position that AEA should not undertake a certification or similar process at the present time, even though I believe the public needs some type of protection from unscrupulous and incompetent evaluators. The problems a certification process would solve have not been substantiated; the public may not support such an endeavor; “evaluation” is not clearly defined, which means the knowledge and skills (K/S) unique to its practice and performance criteria are not agreed upon; and the process for putting a certification system into place is a very complex and costly undertaking (e.g., ensuring the accuracy and integrity of determining who does and does not have the requisite K/S and who is applying them in an effective manner). In addition, certification would require establishing some way to decertify incompetent practitioners, and being able to defend all of these actions when there is disagreement and even litigation. I believe that AEA is not likely to be able to do these things in an effective manner, in the near future.

Journal ArticleDOI
TL;DR: This framework and lessons learned may also, in this era of performance measurement and public accountability, be generalizable beyond HIV prevention to the comprehensive and strategic evaluation of other politically complex, publicly funded disease prevention and health promotion programs.
Abstract: The 21st century brings with it the 20th year of the human immunodeficiency virus (HIV) epidemic in the United States. HIV prevention programs have matured; however, evaluations of those programs have lagged behind. Nationwide, the need for such evaluation has never been greater. It is time to comprehensively assess the status of HIV prevention and control. We must build on previous studies to create a comprehensive, integrated national picture that includes evaluations at national, state, and local levels of the quality, costs, and short- and long-term effectiveness of various HIV prevention programs and policies. The Centers for Disease Control and Prevention (CDC) encourages a phased approach to implementing a comprehensive evaluation strategy. This paper, which describes the 1995–1997 evaluation framework and activities of the Program Evaluation Research Branch, National Center for HIV, Sexually Transmitted Disease (STD), and Tuberculosis (TB) Prevention, is offered as a platform on which future efforts in determining the most effective means to prevent HIV can be built. Lessons learned in developing this comprehensive evaluation framework have advanced HIV prevention. This framework and lessons learned may also, in this era of performance measurement and public accountability, be generalizable beyond HIV prevention to the comprehensive and strategic evaluation of other politically complex, publicly funded disease prevention and health promotion programs.


Journal ArticleDOI
TL;DR: In this article, the results of a survey on certification carried out by a recent AEA Task Force are presented.
Abstract: Professional certification is sometimes advocated as a means of assuring consumers that they are getting someone who is skilled and knowledgeable within their trade. Certification is also sometimes viewed as advantageous for enhancing professions’ prestige, promoting professionalism, improving academic programs, and helping to define a profession. Without the acceptance by an organization’s members, however, any efforts to implement a certification process are likely instead to be divisive and dysfunctional. This article presents the results of a survey carried out by a recent AEA Task Force on certification.

Journal ArticleDOI
TL;DR: In this response, Cousins and Earl argue that Smith categorically missed the boat (the central premise of their book) and that her critique is too ill-founded to merit a point-by-point reply.
Abstract: From time to time, articles are published in AJE that evoke comments from readers. In Response is reserved for this dialogue. Contributions should be to the point, concise, and easy for readers to track to targeted articles. Comments may be positive or negative, but if the latter, then keep them at least relatively nice! Personal attacks and offensive, degrading criticisms will not be published. Please keep the length of comments to the minimum essential. We've all seen it before. It happens more and more all the time. Someone sits down to review a book and in doing so becomes so enthralled, so energized, so consumed, perhaps even incensed by the content, that the project mushrooms into something that extends well beyond the normal journal space allotted. Behold the article-length critique! M.F. Smith apparently had such an experience reviewing our edited collection, Participatory Evaluation in Education: Studies in Evaluation Utilization and Organizational Learning (1995, Falmer). Readers of the American Journal of Evaluation are particularly fortunate when this sort of thing happens. The Journal provides for the section called Forum, a natural home not only for the article-length critique but for an invited response by the original authors (or editors as the case may be). Forum also promotes ongoing healthy deliberation and dialogue about important, often current, and almost invariably controversial issues facing evaluation practitioners, researchers, and theoreticians alike. Often such dialogue can be traced to a well-done, penetrating initial critique. And so, we were intrigued and excited when the editor of AJE apprised us of this developing scenario and invited us to consider responding to Smith publicly. With great anticipation we waited to receive the manuscript, eager to understand her reaction to the book and her challenges to us on a topic with which we—Cousins and Earl—share considerable passion. We were to be gravely disappointed.
In this response, we begin by demonstrating how Smith categorically missed the boat, the central premise of our book. We show how instead, she purchased passage on what promised to be a luxury cruise and then sailed off into a sea of disappointment on a vessel that was barely seaworthy. We're not going to take that cruise with M.F. Smith. We don't want to know about that voyage. Her critique is ill-founded and therefore undeserving of a point-by-point response. But there is much worthy of being said on this topic and so we …

Journal ArticleDOI
TL;DR: The combination of programmatic theory and structural equation modeling (SEM) can act as the basic intellectual machinery for designing and evaluating behavioral interventions, as illustrated by a case study of a randomized experiment to reduce sexual risk taking, the WINGS Project.
Abstract: The combination of programmatic theory and structural equation modeling (SEM) can act as the basic intellectual machinery for designing and evaluating behavioral interventions. As an example of the integration, we consider a case study of a randomized experiment to reduce sexual risk taking, the WINGS Project. Barriers to combining systematic use of SEM with programmatic theory for program improvement and assessing program effectiveness are also discussed.

Journal ArticleDOI
TL;DR: In this paper, the authors argue that credentialing and certification are processes that can help the field of evaluation establish a clearer identity as a profession, and they will help AEA further establish its presence as the premier organization in evaluation.
Abstract: Credentialing and certification are processes that can help the field of evaluation establish a clearer identity as a profession. They will help AEA further establish its presence as the premier organization in evaluation. Altschuld (this issue) proposes a sensible approach, focusing first on credentialing. I argue that we should proceed now.



Journal ArticleDOI
TL;DR: This description of the evaluation of Project TEAMS was submitted to The American Journal of Evaluation to prompt public critique; the evaluation used a program logic model to guide data collection and reporting activities and relied on multiple sources of data.
Abstract: In August 1997, a series of messages on EVALTALK, the American Evaluation Association's electronic discussion group, emphasized the need for meta-evaluation. To prompt public critique and discussion, this description of the evaluation of Project TEAMS was submitted to The American Journal of Evaluation. The evaluation used a program logic model to guide data collection and reporting activities and relied on multiple sources of data. Because of implementation problems in the program and the inability of the evaluation's methods to support strong inferences, the evaluation's conclusions about the program were mixed and tentatively stated. The public review by independent evaluators of an evaluation that is not exemplary, but may be typical in its strengths and weaknesses, is expected to demonstrate the utility of meta-evaluation to evaluators and evaluation clients and audiences.


Journal ArticleDOI
TL;DR: In a statewide conference on alternative methods for assessing students’ learning, responsive evaluation methods were incorporated into the structure of the conference and served as a pilot study that illustrates the possible utility of these techniques in evaluating conferences.
Abstract: In a statewide conference on alternative methods for assessing students' learning, we incorporated responsive evaluation methods into the structure of the conference. The application of these interactive evaluation techniques serves as a pilot study that illustrates the possible utility of these techniques in evaluating conferences. This paper provides a brief review of the literature surrounding responsive evaluation, a description of the responsive evaluation methods applied to this conference, and a discussion of the results and implications of this pilot study.