
Showing papers in "Educational Evaluation and Policy Analysis in 1980"


Journal ArticleDOI
TL;DR: In 1975, the National Institute of Education (NIE) requested proposals for the design of a study to assess the effectiveness of individualized instruction as it is currently used by compensatory programs in schools.
Abstract: In 1974, Congress directed the National Institute of Education to perform a general examination of compensatory education. Included in that legislation was the specific directive to conduct a detailed study of the effectiveness of materials and procedures for meeting the educational needs of individual children. As a result of that Congressional mandate, NIE requested "proposals for the design of a study to assess the effectiveness of individualized instruction as it is currently used by compensatory programs in schools" (NIE, 1975, p. 4). In that request for proposals, the NIE

199 citations


Journal ArticleDOI
TL;DR: The field of educational evaluation has developed dramatically over the past ten years as discussed by the authors, following a period of relative inactivity in the 1950s, educational evaluation efforts experienced a resurgence in the mid 1960s, influenced by articles by Cronbach, Scriven, Stake, and Stufflebeam.
Abstract: The field of educational evaluation has developed dramatically over the past ten years. Following a period of relative inactivity in the 1950s, educational evaluation efforts experienced a period of revitalization in the mid 1960s. This revitalization was influenced by articles by Cronbach (1963), Scriven (1967), Stake (1967), and Stufflebeam (1966). The field’s development was further stimulated by the evaluation requirements of the Great Society programs that were launched in 1965; by the nationwide accountability movement that began in the early 1970s; and, most importantly, by the mounting responsibilities and resources that society assigned to educators.

177 citations


Journal ArticleDOI
TL;DR: In this article, a longitudinal case study approach (10 years or more) is proposed as a promising direction for research on public policy implementation in the 1980s.
Abstract: What directions should public policy implementation research take in the 1980's? Certainly the embryonic and interdisciplinary field of implementation research will benefit from experimentation with and evaluation of numerous research approaches. Still largely unexplored are cross-program comparative case studies and statistical prediction models of implementation. There is also much to be learned from macrocase study analyses tracing implementation from policy formation through the measurement of a program's impact on intended recipients at a single point in time. We also are benefitting from an increased number of in-depth descriptions of individual subunits within the implementation scenario. One largely overlooked direction, however, appears to hold considerable promise in the 80's. We believe a longitudinal case study approach (10 years or more) merits serious consideration. An extended time line of 10 years or more

99 citations


Journal ArticleDOI
TL;DR: The history of automated instruction goes back at least to 1860, when Halcyon Skinner developed and patented a device to teach spelling, as mentioned in this paper; in 1915, Pressey developed a teaching machine that performed the essential tasks of later programmed devices.
Abstract: It is 25 years now since B. F. Skinner (1954) first called for a technological revolution in education. In his classic article, "The science of learning and the art of teaching," he criticized conventional teaching methods, and argued that mechanical devices were needed to make teaching more effective. Skinner also reported that programmed machines already developed by himself and his students at Harvard could present lessons in a series of small, simple steps, and could provide learners with immediate reinforcement after each successful step. In Skinner's view, such machines would transform education. They would insure student mastery of material to be learned, and they would reduce the amount of punishment, frustration, and boredom in schools. They would make teaching efficient and learning joyful. B. F. Skinner was not the first person to develop a machine to assist teachers (Silberman, 1962). The history of automated instruction goes back at least to 1860, when Halcyon Skinner developed and patented a device to teach spelling. In 1915, Pressey developed a teaching machine that performed all the essential tasks carried out by later programmed devices. Like B. F. Skinner's machines, Pressey's teaching device displayed material, required a response from the learner, and provided reinforcement. Nor was Skinner the first to foresee an educa-

65 citations



Journal ArticleDOI
TL;DR: In this paper, the authors distinguish between two aspects of value: merit and worth, and argue that worth can be determined only in relation to an actual context, and that worth must be assessed by a separate evaluation in each context and cannot be established without an intimate knowledge of local social, cultural, political, and value factors.
Abstract: In this paper, we distinguish between two aspects of value: merit and worth. Merit, we argue, is context-free, but worth can be determined only in relation to an actual context. If so, worth must be assessed by a separate evaluation in each context, and cannot be established without an intimate knowledge of local social, cultural, political, and value factors. Evaluation studies of worth thus require, we contend, field-oriented, qualitative, naturalistic methodologies rather than the more conventional, experimental, quantitative approaches that have characterized evaluation practice heretofore.

38 citations



Journal ArticleDOI
TL;DR: In this paper, the evaluations of Title IV-C projects in the State of Minnesota funded in FY 1975 and FY 1976 are the focus of this study; its purpose is to examine educational evaluation utilization by addressing the following questions: (1) What is the reported level of utilization of the 47 evaluations examined? (2) What factors are related to the reported level of utilization? and (3) How is the level of utilization related to the role of evaluation in the process of planned change?
Abstract: Evaluations were intended to be an integral part of the national effort to substantially upgrade our educational system under the provisions of the Elementary and Secondary Education Act of 1965 (ESEA), but there is some doubt whether they actually played that role. The evaluations of Title IV-C projects in the State of Minnesota funded in FY 1975 and FY 1976 are the focus of this study; its purpose is to examine educational evaluation utilization by addressing the following questions: (1) What is the reported level of utilization of the 47 evaluations examined? (2) What factors are related to the reported level of utilization? and (3) How is the level of utilization related to the role of evaluation in the process of planned change? Before describing the study, a review of the utilization literature will be provided as the structure in which the study is embedded.

30 citations


Journal ArticleDOI
TL;DR: In this article, the authors argue that values are necessarily involved in the social sciences, and that each evaluation position is demarked from others by the way it handles the value questions posed here; these judgments define the unique characteristics of one evaluator in contrast to another.
Abstract: To some people, the social sciences should be value free. And of all the methods in the social sciences that ought to be value free, evaluation should be most of all. After all, it is the final gatekeeper that clears applications of social science knowledge for practical use as policy. But those who have thought seriously about science realize scientists must make many choices. Not all such choices are automatically and completely determined by the logic of the steps of the "scientific method." They involve judgment, judgments such as what is important and what is not, what shall be studied, what shall be observed, what corrected or controlled for, what result is of practical significance and to whom? All these judgments involve the weighing of various factors and deciding what is best in the situation to attain some kind of worthy goal. It is worth noting that it is in the act of making these judgments that evaluators demonstrate their professionalism and their skill. These judgments define the unique characteristics of one evaluator in contrast to another; each evaluation position is demarked from others by the way it handles the value questions posed in this section. In one sense, it is these questions, augmented to form a completely descriptive set, which would come closest to uniquely defining the act of evaluation. But the point to be made here is that values are involved in the social sciences and, thus, we must ask not "whether" but "how" they are involved in evaluation.

29 citations


Journal ArticleDOI
TL;DR: In this paper, the authors argue that a sophisticated classification system must result from thought at a sophisticated scientific level, and not from the kind of committee deliberations from which taxonomies of behavior have been derived.
Abstract: Editor's Note: I am delighted that EEPA is publishing such an extremely relevant article in the field of educational evaluation, for as Professor Travers observed, it deals with one of the most basic problems in all of evaluation. However, evaluators have not yet recognized the problem as a problem. Present literature avoids discussion of two core problems in the area of evaluation: (1) What is a sound basis for classifying behavior? and (2) What is the nature of the value judgments through which such behaviors are judged to be worthy? In both of these areas, the evaluation specialist is virtually no more sophisticated than the ordinary citizen. This article demonstrates that a sophisticated classification system must result from thought at a sophisticated scientific level, and not from the kind of committee deliberations from which taxonomies of behavior have been derived.

26 citations


Journal ArticleDOI
TL;DR: Prior studies reviewed in this paper suggest that the amount of influence a professional person has depends on variables that affect client perceptions of expertness, such as the title and sex of the evaluator, the content of the evaluation message, and characteristics of the evaluation audience.
Abstract: Educators have become increasingly concerned with the utility of their evaluation activities and the effect that their conclusions and recommendations have on decision making (Davis & Salasin, 1975). This has led to a closer examination of decision-making processes in an evaluation context from a number of theoretical perspectives (Hawkins, Roffman, & Osbourne, 1978). Attribution theory (McGuire, 1969), for example, suggests that the amount of influence which a professional person has depends on variables that affect client perceptions of expertness. In a study applying this theory to an evaluation context, Braskamp, Brown, and Newman (1978) found that the title of the evaluator was related to evaluation audience ratings of evaluator objectivity. Evaluators referred to as "researchers" were rated higher on objectivity than those referred to as "evaluators," and they in turn were rated higher than those referred to as "content specialists." The sex of the evaluator has also been found to be significantly related to evaluator credibility (Newman, Brown, & Littman, 1979), with women evaluators rated lower by persons in a professional field differing from the content of the evaluation. Sex of the evaluator was also related to how the program being evaluated was rated. Thus, it appears that key evaluator characteristics are significantly related to how an evaluator's recommendations and conclusions are received. Communication theory (Fishbein & Ajzen, 1975) also provides a framework for examining the interaction of the content of the evaluation message and evaluation audience characteristics. Client position within an organization and degree of self-interest have been found to be related to acceptance of evaluation findings (Carter, 1971), and the amount of technical language and data in an evaluation report also affects evaluation audience reactions (Brown, Braskamp, & Newman, 1978; Thompson, Brown, & Furgason, 1979). Professional level and sex are other audience characteristics found to be related to client reactions (Newman, Brown, & Littman, 1979). An important audience characteristic which has not been studied is the effect of perceived need for evaluation. Strong (1978) posited that the amount of personal influence or power (P) of a professional is a function of the professional's perceived expertness as a resource per-

Journal ArticleDOI
TL;DR: In this paper, a comparative policy perspective, an organizational perspective, and an interactionist perspective were used to compare and analyze federal programs supporting educational change in the United States and Australia.
Abstract: I have been concerned with the issue of educational change and the problems of implementing educational innovations for some time now. Why I have consigned myself to this particular purgatory is often beyond me. The subject is an incredibly messy one which sooner or later touches on almost all aspects of schooling. The literature is voluminous and confusing. The educational reform euphoria of the 1960's and 1970's has now passed and nearly everyone in the community is aware that it is not as easy as it looked. Nevertheless, the concept of change will not go away; it is an ongoing process; it is what education is all about. Reduction in educational budgets will not make the problem of educational change go away; on the contrary, it makes the problem even more critical. Rather than regard the situation as a crisis, however, we can regard it as an opportunity. Certainly the management of change during a period of decline is a more challenging problem than the management of change during a period of growth. Having begun on this optimistic note, I now want to consider several different perspectives on the study of educational innovations. Because of the complexity of educational change, I have attempted to view the change process using three different perspectives in my own work. These include a comparative policy perspective, an organizational perspective, and an interactionist perspective. Within each perspective, I have focused primarily on the implementation stage of the change process. More specifically, using a comparative policy perspective, I have tried to compare and analyze federal programs supporting educational change in the United States and Australia. Then, using an organizational perspective, I have attempted to examine through survey research some of the contextual factors that may hinder or facilitate the implementation of change at the school level. I have also compared these with factors identified on the American scene. And finally, using an interactionist perspective and case studies, I have attempted to explore the different constructions of reality held by different relevant actors during the implementation of an innovation at the school level.

Journal ArticleDOI
TL;DR: There is a general agreement (Stufflebeam, Foley, Gephart, Guba, Hammond, Merriman, & Provus, 1971, p. 32) among theorists that evaluation is a process of providing information for decision-making, and that ultimately evaluation implies value judgments of worth as mentioned in this paper.
Abstract: Educational evaluation and policy analysis have been increasingly emphasized during the last two decades. An increasing emphasis on evaluation has been supported by an infusion of federal dollars, and the decision by various professional organizations to encourage their members to give more serious consideration to the assessment of educational endeavors. This support has had several noteworthy impacts. Emphasis on evaluation processes has stimulated reflection regarding which characteristics distinguish evaluation and policy analysis from each other and related processes. Consequently, as Scriven (1974) has noted, "as we look back on the very earliest attempts to grapple with or eliminate the distinction between evaluation and research . . ., we realize that we have come a long way toward understanding evaluation" (p. 4). Full consensus on these issues has certainly not emerged, but there does appear to be general agreement (Stufflebeam, Foley, Gephart, Guba, Hammond, Merriman, & Provus, 1971, p. 32) among theorists that evaluation is a process of providing information for decision-making, and that ultimately evaluation implies value judgments of worth. There also seems to be some recognition that evaluation and policy analysis, depending upon the models employed, can be very much the same things, although some evaluation models imply "preoccupation with existing programs" and some policy analysis models "usually compare existing and hypothetical alternative program solutions" (Wholey, Scanlon, Duffy, Fukumoto & Vogt, 1970, pp. 23-24). Glass (1967) has suggested that the importance of efforts to distinguish these various processes should not be minimized.

Journal ArticleDOI
TL;DR: In this article, the authors critique Glass and Smith's meta-analysis of research into the effect of class size on pupils' achievement, arguing that the conclusion that "other things equal, more is learned in smaller classes" is not supported by the evidence they report.
Abstract: The attempt to perform a "meta-analysis" of research into the effect of class size on pupils' achievement is to be welcomed. Such an approach which integrates previous findings must be useful in trying to discern the wood from the trees in 80 years of apparently contradictory results. Nevertheless, we should be careful that analysis of studies with such diverse circumstances does not produce biased or shaky results. Glass and Smith's conclusion that "other things equal, more is learned in smaller classes" (p. 15), while possibly true, is not supported by the evidence that they report. First, the model on which this conclusion is based makes several dubious assumptions and summarizes their data only approximately. Second, its implications are much less straightforward than reported. (1) Glass and Smith, in their analysis, assume that their measure ΔS-L (a simple standardized comparison of mean achievement in a small and a large class)
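For readers unfamiliar with the measure under debate, the expression below is a minimal sketch of a standardized mean-difference (effect-size) comparison of the kind the abstract describes; the notation and the choice of standard deviation are illustrative assumptions, not necessarily Glass and Smith's exact definition.

% Sketch of a standardized comparison of mean achievement in a smaller (S)
% and a larger (L) class. The divisor sigma is assumed here to be a pooled or
% within-group standard deviation; Glass and Smith's exact choice may differ.
\[
  \Delta_{S\text{-}L} \;=\; \frac{\bar{X}_{S} - \bar{X}_{L}}{\sigma}
\]
% where \bar{X}_{S} and \bar{X}_{L} denote mean achievement in the smaller and
% larger class, respectively; a positive value favors the smaller class.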

Journal ArticleDOI
TL;DR: In this paper, it is argued that the programs that are accredited for providing professional training have little demonstrated relationship with the proficiencies that they are supposed to develop, and the certification or licensing of professionals, whether based on examinations or the receipt of training in accredited programs, is also being questioned as a procedure for assuring that professionals are qualified in their fields.
Abstract: During the last decade, many traditional aspects of our educational system and its preparation of professionals have been challenged (Smith, 1975). To a large degree, it is argued that the programs that are accredited for providing professional training have little demonstrated relationship with the proficiencies that they are supposed to develop (Jacobs, 1976). Moreover, the certification or licensing of professionals, whether based on examinations or the receipt of training in accredited programs, is also being questioned as a procedure for assuring that professionals are qualified in their fields.

Journal ArticleDOI
TL;DR: In this article, the authors assess the last quarter century of research on selected dimensions of school administration and educational policy, attempting to determine the utility of past research and to appraise the prospects for analytic efforts in the future.
Abstract: This article assesses the last quarter century of research on selected dimensions of school administration and educational policy. In this undertaking, we attempt to determine the utility of past research and to appraise the prospects for analytic efforts in the future. Before embarking on this appraisal it is necessary to (1) define the research realm with which we are immediately concerned, and (2) specify the values by which we intend to judge success or failure. Then, we turn to a summary explanation and critical review of educational administration and policy-related research conducted between 1950 and 1975.

Journal ArticleDOI
TL;DR: The states have seemingly come of age in the governance of education, as discussed by the authors; since the mid-1960's, powerful coalitions of educational interest groups have fallen apart, active new groups have joined the competition for limited resources, the courts have entered the fray, and governors and legislators have increased their participation by building professional staffs, overseeing budgets, and taking stands on educational issues.
Abstract: The states have seemingly come of age in the governance of education. Since the mid-1960's, state governments have grappled with controversial problems of desegregation, student rights and unrest, school finance reform, aid to minority groups, fiscal crises and tax caps, declining enrollment and confidence, collective bargaining, and accountability and competency testing. During this time, powerful coalitions of educational interest groups have fallen apart; active new groups have joined the competition for limited resources; the courts have entered the fray, and governors and legislators have increased their participation: building professional staffs, overseeing budgets, taking stands on educational issues.

Journal ArticleDOI
TL;DR: The major parties in these debates are the various representatives of elementary and secondary school teachers, on the one hand, and the faculty members of schools of education, on the other, as discussed by the authors.
Abstract: Accreditation, the process by which an organization grants approval to an educational institution, is the central issue in several current debates among educators. The major parties in these debates are the various representatives of elementary and secondary school teachers, on the one hand, and the faculty members of schools of education, on the other. State education agencies also play an important part in these debates, but they have not been as much in the forefront as have the other two parties.

Journal ArticleDOI
TL;DR: The following paper by Professor Frederick Wirt challenges the "neoconservative" view that government social programs have not been effective as discussed by the authors, and produces a great deal of evidence that rebuts the two propositions of neoconservatives: "national policy efforts don't work and they are dangerous to other values in the society".
Abstract: Editor's Note: The following paper by Professor Frederick Wirt challenges the "neoconservative" view that government social programs have not been effective. He produces a great deal of evidence that rebuts the two propositions of neoconservatives: "national policy efforts don't work and they are dangerous to other values in the society." He focuses on education but touches on other social policy areas. Educational Evaluation and Policy Analysis is publishing this article with the hope that it will generate more papers on the important issues discussed by Wirt. The article is provocative and informative. We hope that our readers will respond either through data-based papers or in the Counterpoint section. Michael W. Kirst

Journal ArticleDOI
TL;DR: Stufflebeam, as discussed in this interview, was one of the earliest and most influential shapers of educational evaluation in the United States; he became involved in the field by accident while working in test development and research at the Ohio State University during the 1960's.
Abstract: EEPA: Dr. Stufflebeam, it is generally conceded that you were one of the earliest and most influential shapers of educational evaluation in the United States. How did you become involved in this field? Stufflebeam: I became involved by accident. I happened to be working in test development and research at the Ohio State University during the 1960's, and when the Elementary and Secondary Education Act emerged with its evaluation requirements, there was a desperate need for people to help school districts in responding to the evaluation requirement. I was asked by the State Department of Education in Michigan, for some reason unbeknownst to me, to address a state conference on evaluation, particularly with regard to the evaluation of Title I of the Elementary and Secondary Education Act (ESEA). About the same time, the Columbus, Ohio Public Schools came to our bureau and asked for assistance in responding to the ESEA evaluation requirement. And because of my response to those two requests, I became involved in the business of evaluating education and have never escaped from it since.

Journal ArticleDOI
TL;DR: In this article, the author writes out of concern over the development of our schools, pointing out that still-rising costs force cutbacks in materials and services and that the quality of education is equivocal at a time when average per-student costs have reached $1,436 in 1978, up from $938 in 1974.
Abstract: This paper was written out of concern over the development of our schools. Still rising costs force cutbacks in materials and services, and the quality of education is equivocal at a time when average per-student costs have reached $1,436 in 1978, up from $938 in 1974. At the same time, achievement test scores are still declining. The cry for more resources will hardly result in significant funding increases, and I doubt whether further resource in-

Journal ArticleDOI
TL;DR: In this paper, an empirical study of the relationship between the level of involvement of local decision makers in accreditation evaluations and their perceptions of the utility of those evaluation efforts was conducted.
Abstract: There is currently a general presumption among funding agencies, evaluation clients, and some evaluators themselves that evaluation is not as useful as it should be to local decision makers. In fact, very few empirical studies of the utility of evaluation have yet been conducted and the findings to date are mixed (Alkin, Daillak, & White, 1979; Patton, 1978). Further, it has been suggested that greater involvement of local groups in the evaluation process will increase the utility of evaluation to these individuals (cf. Wolf, 1979; National Institute of Education, Note 2). This paper reports on an empirical study of the relationships between the level of involvement of local decision makers in accreditation evaluations and their perceptions of the utility of those evaluation efforts. A study of local school board involvement in school accreditation evaluations was conducted to determine (a) how school boards are involved in such studies, (b) the relationships between the level of school board involvement and the utility of those evaluations to the

Journal ArticleDOI
TL;DR: In this paper, the authors focus on the problems related to using school district evaluations and explore some of the contextual characteristics of school districts that might impede or enhance the utilization of evaluations of programs such as compensatory education.
Abstract: This paper focuses on the problems related to using school district evaluations. The main thrust is on increasing the utilization of evaluations of programs such as compensatory education. The authors seek to explore some of the contextual characteristics of school districts that might impede or enhance utilization. Their approach is positive: exploring evaluator traits and actions that could improve the extent to which evaluations are used. This conversation, between one of the leading researchers on evaluation utilization and a highly respected evaluation practitioner, actually took place during a seminar at the UCLA Graduate School of Education.

Journal ArticleDOI
TL;DR: Teachers may consider themselves to be good teachers, students may praise them, and they may be reputed to be good teachers throughout the college, as discussed by the authors; but unless the achievements of their pupils are evaluated, they can provide no solid evidence that they are in fact good teachers.
Abstract: lege faculties who are not in favor of evaluating students, that is, giving tests and assigning grades. They may consider themselves to be good teachers; students may praise them; and they may be reputed to be good teachers throughout the college. But, unless the achievements of their pupils are evaluated, they can provide no solid evidence that they are in fact good teachers. They may instead be merely good entertainers, or skillful self-advertisers, or good friends of the students. As teachers, however, they may be quite ineffective. For the essential task of the teacher is to facilitate student learning. That is why schools and colleges and universities were established, and why taxes and tuition fees are collected to maintain them.

Journal ArticleDOI
TL;DR: In this article, the authors declare that the United States should provide financial assistance to local educational agencies serving areas with concentrations of children from low-income families to expand and improve their educational programs by various means which contribute particularly to meeting the special educational needs of educationally deprived children.
Abstract: In recognition of the special educational needs of children of low-income families and the impact that concentrations of low-income families have on the ability of local educational agencies to support adequate educational programs, the Congress hereby declares it to be the policy of the United States to provide financial assistance to local educational agencies serving areas with concentrations of children from low-income families to expand and improve their educational programs by various means which contribute particularly to meeting the special educational needs of educationally deprived children.


Journal ArticleDOI
TL;DR: In this paper, the authors look at the view of knowledge proposed by Polanyi (1962) and examine its implications for evaluative reporting, and raise the level of awareness of what is involved in gaining knowledge about programs and conveying those findings to relevant audiences.
Abstract: The findings of evaluative inquiry are potentially useful only if they are accessible and understandable to decision makers. In this essay we look at the view of knowledge proposed by Polanyi (1962) and examine its implications for evaluative reporting. The intention is to raise the level of awareness of what is involved in gaining knowledge about programs and conveying those findings to relevant audiences, rather than to add to the literature on how to write technical reports.

Journal ArticleDOI
TL;DR: This paper focuses on the evaluation issues and problems (nagging, persistent questions) that are inherent in the mandatory evaluation of P.L. 94-142.
Abstract: In the past 5 years the educational community has witnessed the passage of a major piece of legislation with vast and far-ranging evaluation implications. In 1975, several years of legislative development culminated in the President's signing of Public Law 94-142 (P.L. 94-142). This legislation, The Education for All Handicapped Children Act, mandates a free and appropriate education for all handicapped children between the ages of 3-18 (to be extended to 3-21 by 1981). Included in the Act are requirements for the identification, location, and unbiased assessment of unserved handicapped children, and for complete individualized educational programs for all handicapped children in the least restrictive setting. The individualization of education is to be accomplished by the requirement of a formal, written statement of goals and objectives, and by criteria and procedures for evaluation of the progress of each handicapped child. This legislation provides a unique opportunity for evaluators to obtain information valuable for the assessment and improvement of programs for the handicapped. The emphases in it clearly represent a growing concern on the part of legislators for such information. This paper focuses on the evaluation issues and problems (nagging, persistent questions) that are inherent in the mandatory evaluation of P.L. 94-142. Basic premises and purposes will be examined first, followed by a consideration of evaluation issues.

Journal ArticleDOI
TL;DR: In this article, the authors assert that achievement testing has not improved substantially over the past approximately 50 years, a finding they describe as disturbing and frustrating.
Abstract: pended, it is disturbing and frustrating to find that achievement testing has not improved substantially over the past approximately 50 years. However, that assertion is made here and it should come as no surprise; in fact, it has been made repeatedly and consistently by reviewers in recent years. For example, Thorndike (1971), in reviewing developments in achievement testing from 1950 to 1970, concluded that

Journal ArticleDOI
TL;DR: A survey of federally funded projects to improve evaluation methods is presented in this paper, reviewing the evaluation methods research currently being supported by federal agencies in the fields of health, criminal justice, and the military.
Abstract: This paper reports on a survey of federally funded projects to improve evaluation methods. This survey reviewed the type of evaluation methods research currently being supported by federal agencies in the fields of health, criminal justice, and the military. Such a review creates a picture of the type of work currently being pursued in these areas and provides a basis for considering the types of new evaluation methods likely to become available to evaluation practitioners in these fields in the near future. While evaluation in education has been dominated in past years by measurement/research/objectives-based approaches, there has recently been an interest in alternative perspectives. For example, naturalistic (cf. Guba, 1978; Stake, 1975) and legal approaches (cf. Levine, 1974; Wolf, 1974, 1975) have been receiving attention as options for educational evaluation. An additional purpose for reviewing efforts to improve evaluation methods in health, criminal justice, and the military, therefore, is to identify other new approaches that might have utility in educational evaluation. Hence, a search was conducted for federally funded projects devoted to the improvement of evaluation methods. A representative sample of the projects funded by various federal agencies in the three areas of interest was obtained through a computerized search of the Smithsonian Science Information Exchange (a data bank of federal- and state-funded projects), plus the examination of federal planning and project documents. Only evaluation methods research projects active during the 1978-1979 period were selected for study. The evaluation method projects were identified based on project abstracts and were selected according to the agencies' own definitions of evaluation methods research. Because of the large number of federally funded projects which are spread across our vast governmental bureaucracy, some relevant projects may have been overlooked. Therefore, the following analysis is based on what was designed to be a representative sample rather than an exhaustive compilation.