
Showing papers in "Evaluation Journal of Australasia" in 2003


Journal ArticleDOI
TL;DR: Patton as discussed by the authors suggested that if one had to choose between implementation information and outcomes information because of limited evaluation resources, there are many instances in which implementation information would be of greater value.
Abstract: ‘In Utilization-Focused Evaluation (Patton, 1978) I suggested that if one had to choose between implementation information and outcomes information because of limited evaluation resources, there are many instances in which implementation information would be of greater value. A decision maker can use implementation information to make sure that a policy is being put into operation according to design – or to test the feasibility of the policy. Unless one knows that a program is operating according to design, there may be little reason to expect it to produce the desired outcomes. Furthermore, until the program is implemented and a ‘treatment’ is believed to be in operation, there may be little reason even to bother evaluating outcomes. Where outcomes are evaluated without knowledge of implementation, the results seldom provide a direction for action because the decision maker lacks information about what produced the observed outcomes (or lack of outcomes). ... It is important to study and evaluate program implementation in order to understand how and why programs deviate from initial plans and expectations. Such deviations are quite common and natural ...’ (Patton, 1980, p. 69; 1990, p. 105; 2002, p. 161)

12,369 citations



Journal ArticleDOI
TL;DR: In this article, the authors note that discussions about decision-making, and about the kinds of knowledge that could and should be used for this purpose within the workplace, have become prominent in current organisational literature.
Abstract: Discussions about decision-making and the kinds of knowledge that could and should be used for this purpose within the workplace have become prominent in current organisational literature. These is...

34 citations


Journal ArticleDOI
TL;DR: In this paper, the author highlights the need to develop a new relationship between the Indigenous community and the evaluation profession, and to renew the focus on ethical evaluation practices in intercultural contexts.
Abstract: This keynote address to the Australasian Evaluation Society (AES) 2003 international conference examines the ethics involved in undertaking evaluations that concern Indigenous communities, the implications for evaluators, and the need to renew the focus on ethical evaluation practices in intercultural contexts. Intercultural refers to the ‘meeting of two distinct cultures through processes and interactions which retain the distinctive integrity and difference of both cultures and which may involve a blending of elements of both cultures but never the domination of one over another’. The author also highlights the need for the development of a new relationship between the Indigenous community and the evaluation profession.

34 citations


Journal ArticleDOI
Rick Cummings
TL;DR: Christina A Christie, as mentioned in this paper, developed an empirically derived comparative framework of the reported practices of evaluation theorists, assessed the real-world practices of people in the field within this framework, and concluded that evaluation practitioners do not align themselves with particular theories; rather, they use notions from a range of theories to assist them in conducting a particular evaluation study.
Abstract: This issue of New Directions is an unusual volume in that it is focussed entirely on one research study but includes comments on this study by a range of eminent evaluators. The study, conducted by Christina A Christie of the Claremont Graduate University, aimed ‘to develop an empirically derived comparative framework of the reported practices of evaluation theorists and to assess and describe within this framework the real-world practices of people in the field’ (p. 3). In other words, Christie wanted to take a systematic look at the differences between the practices of eight well-known evaluation theorists and the extent to which a sample of evaluation practitioners used these theories and similar practices in conducting their studies. The issue commences with Christie outlining her study and its findings. In brief, she found that the eight theorists varied considerably in their practices on the two general dimensions she identified: scope of stakeholder involvement and method proclivity (the extent to which the study is guided by a prescribed technical approach). The variability was greater on the former than the latter. Likewise, there was considerable variability among the 138 internal and external evaluation practitioners included in the study, although this group varied equally along both dimensions. Christie concludes from the study that the practices of the evaluation theorists, all highly trained and experienced, are more in line with their theoretical perspectives than is the case for the less experienced and trained evaluation practitioners. Additionally, the practices of internal and external evaluation practitioners are more like each other than they are like those of the evaluation theorists. A particularly interesting finding is that internal evaluators were less likely than external evaluators to involve stakeholders in their studies. Christie attributes this to the fact that internal evaluators are often stakeholders in the study, but it may also be that internal evaluators are generally less experienced and trained in evaluation, and hence may be less confident about involving stakeholders or less aware of the need to do so. Overall, Christie concludes that evaluation practitioners do not align themselves with particular theories; rather, they use notions from a range of theories to assist them in conducting a particular evaluation study. This may feel familiar to many readers – it certainly describes much of my work! The bulk of the issue comprises six short articles by eminent evaluators responding to Christie’s study, including two of the evaluation theorists included in her study, Ernest House and David Fetterman. The responses are quite mixed in purpose, content, style and conclusions. It is particularly interesting to observe how a group of eminent evaluators analyses a single research study, and many readers will find it very instructive in what to look for when assessing a research study. For example, all the eminent evaluators supported Christie’s efforts to conduct rigorous empirical research, although several authors were critical of some of her methods, such as the reliance on self-report data in the study. The key observation, however, is that all research has limitations, because choices must be made throughout a research study based on a number of factors.

Reviewed by: Rick Cummings, Teaching and Learning Centre, Murdoch University, Perth, Western Australia

23 citations


Journal ArticleDOI
TL;DR: In this article, the author discusses the European experience with evaluation, beginning with a few points to frame the talk.
Abstract: My talk today deals with the European experience with evaluation. A few points are necessary to frame what I am going to talk about.

19 citations


Journal ArticleDOI
TL;DR: As this paper argues, the historical and current status of Indigenous people needs careful consideration in designing evaluations of Indigenous-specific programs; in Australia, the evaluator operates in a context shaped by that history.
Abstract: The historical and current status of Indigenous people needs careful consideration in designing evaluations of Indigenous-specific programs. In Australia, the evaluator operates in a context arising...

13 citations


Journal ArticleDOI
TL;DR: In this paper, the authors review the state of play in outcome-oriented monitoring and evaluation in New Zealand in the light of recent Australian experience, given issues arising from a ‘big bang’ approach to evaluation...
Abstract: This paper reviews the state of play in outcome-oriented monitoring and evaluation in New Zealand in the light of recent Australian experience. Given issues arising from a ‘big bang’ approach to evaluation...

11 citations


Journal ArticleDOI
TL;DR: The year 2003 marked the 21st year of the series of National Evaluation Conferences (NECs) and international conferences, as well as the sixteenth year of the AES, as discussed by the authors.
Abstract: The year 2003 marks the 21st year of the series of National Evaluation Conferences (NECs) and international conferences, as well as the sixteenth year of the AES. This paper is not intended as a full history, nor as a dossier of the AES per se; that is for another time and place. Rather, it seeks both to reflect on whether there is a distinctive flavour of public sector program evaluation in the antipodes, and to give an overview of how evaluation developed in Australia, and to a lesser extent New Zealand. This will require the opening up and examination of the historical roots of program evaluation in Australia and of some aspects of current practice.

8 citations


Journal ArticleDOI
TL;DR: This paper examines the use of program logic against the background of the politicised environment of evaluations, arguing that the development of a program logic for the purpose of focusing an evaluation can be a highly politicised process, given that it requires sign-off by the 'authorising environment'.
Abstract: This paper examines the use of program logic against the background of the politicised environment of evaluations. Its central argument is that the development of a program logic for the purpose of focusing an evaluation can be a highly politicised process, given that it requires sign-off by the 'authorising environment'. We commence with a brief discussion of how politics surface within organisations because evaluation planning is typically conducted within these settings. A model of change management is then introduced to highlight how political forces both hostile to and supportive of the evaluation process can surface when evaluations are being planned. We next consider two scenarios, drawn from the evaluation of a program to improve the competence and confidence of professionals working with people at risk of self-harm and suicide. These scenarios are used to highlight a number of important points about the politics of focusing an evaluation. The paper concludes by identifying some of the dilemmas that evaluation practitioners may need to work through in focusing an evaluation in a highly politicised environment, as well as how these might be addressed using program logic.

8 citations


Journal ArticleDOI
TL;DR: The first task of the project was to develop a framework linking community needs, community capacity, wellbeing and CPV interventions, to guide the selection of social indicators, and then to compile a database from various sources, as discussed by the authors.
Abstract: Victoria University, in conjunction with Crime Prevention Victoria (CPV), received an ARC grant to investigate the relationships between crime prevention and community governance. The first task of the project was to develop a framework linking community needs, community capacity, wellbeing and CPV interventions, to guide the selection of social indicators, and then to compile a database of data from various sources. Among the difficulties inherent in developing social indicators are: selecting a framework to guide the development and analysis of the indicators; the difficulty of obtaining a reliable across-government comprehensive database that would be continuously updated; the different contexts, policy goals and programs that indicators could serve; the significance of different definitions and contexts; applying appropriate criteria to guide the selection of the indicators; and the diversity of views about how indicators should or could be used. The purpose of this paper is to describe how these issues are addressed in this project, the theoretical framework that guides the selection of data from the database, and how some of these difficulties are addressed.

Journal ArticleDOI
TL;DR: The number and scope of corporate collapses in recent times clearly illustrate that corporate accountability practices are failing to match the rhetoric of managers responsible for business probity, as this paper argues.
Abstract: The number and scope of corporate collapses in recent times clearly illustrate that corporate accountability practices are failing to match the rhetoric of managers responsible for business probity...

Journal ArticleDOI
TL;DR: A teenager approaching the magical twenty years might be expected to ask: ‘Where have I come from? Why am I different?’ With over 21 years of National and International Evaluation Conferences be...
Abstract: A teenager, approaching the magical twenty years, might be expected to ask: ‘Where have I come from? Why am I different?’ With over 21 years of National (and International) Evaluation Conferences be...

Journal ArticleDOI
TL;DR: In this paper, the authors measure the extent to which a participant's mental model of leadership changes as a result of education, training and experience, and propose that a training course needs to enhance the leader's mental model of leadership if it is to improve his or her leadership performance.
Abstract: Effective evaluation provides a vehicle for ensuring that organisations deliver the training that enables managers to grow and gain mastery in their various roles. However, evaluating the relevance of training for developing generic skills such as leadership presents an enduring difficulty for HR and training specialists. This is because it is relatively easy to obtain feedback on participants’ reactions to training, but more difficult to determine whether they have improved their understanding of leadership, or whether there is any subsequent benefit to the organisation as a whole. Trainers might typically evaluate their courses by eliciting participant responses to course content and processes, and perhaps by examining participant retention of specific skills and strategies. These approaches are examples of Level 1 and 2 evaluation (Kirkpatrick 1998). Measuring individual development (Level 3) and organisational change (Level 4) is also necessary to gain a more complete measure of the effectiveness of a course, but it is more difficult. A number of disparate, intervening variables not associated with the course may account for an increase in participants’ performance, and isolating these variables presents a challenge. This is particularly true when allowing for the impact of experience in developing leaders. The overall purpose of our research was to measure the extent to which a participant’s ‘mental model’ of leadership changes as a result of leadership development interventions such as education, training and experience. Argyris and Schön (1974) define a mental model as the individual’s personal ‘theory in use’ about the world that underpins their behaviour. Mental models may also be called ‘cognitive maps’, ‘schemas’ or ‘mental constructs’. The mental model determines the selection, interpretation, simplification and integration of information from the surrounding world. The individual relies on the mental model to make sense of the world and to formulate plans for influencing outcomes. Through experience, a manager develops a personal mental model of leadership, and it is this cognitive understanding that determines the observable leadership behaviour. We propose that a training course needs to enhance the leader’s mental model of leadership if it is to improve his or her leadership performance. The purpose of this paper is to describe an aspect of our research that entails an innovative method for quantifying Level 3 changes in individuals undergoing leadership development.

Eric Stevenson