Getting what you pay for: the challenge of measuring success in conservation
Citations
555 citations
Cites background from "Getting what you pay for: the chall..."
...knowledge required and the costs associated with many quantitative and longitudinal monitoring and evaluation protocols may hinder the ability of managers in many contexts to collect, analyze, and apply the results in a meaningful fashion (Jones 2012)....
[...]
51 citations
Cites background from "Getting what you pay for: the chall..."
...The interpretation of river restoration success can vary between stakeholders and sectors, particularly as they will have different targets and indicators of success (Howe & Milner-Gulland, 2012; Jones, 2012), and this can be problematic....
[...]
34 citations
Cites background from "Getting what you pay for: the chall..."
...Much has been learnt about the failure of ICDPs, but mistakes continue to be repeated, and we are certainly not the first to call for rigorous, systematic monitoring in conservation (e.g. Blom et al., 2010; Bottrill et al., 2011; Jones, 2012)....
[...]
...Jones (2012) suggests that for output measures to be more valuable for assessing project success, the linkages between outputs and outcomes, both in project proposals and reports, should be explicitly stated alongside the evidence upon which the assumption is based....
[...]
...In order to increase funding for conservation activities and to encourage donor confidence in conservation investments, there needs to be considerably more attention devoted to developing and applying robust and cost-effective approaches for evaluating success (Jones, 2012)....
[...]
...As project outcomes may not be achieved over the short timescale of the project, indices based on outputs will always be needed (Jones, 2012)....
[...]
24 citations
20 citations
Cites background from "Getting what you pay for: the chall..."
...Quantifying success in biological terms in conservation projects can be difficult, as the time frames involved are often longer than many projects, and they can be cost dependent, putting them out of reach of smaller projects [22-27]....
[...]
References
1,204 citations
"Getting what you pay for: the chall..." refers background in this paper
...It has been suggested that environmental policy, including biodiversity conservation, lags behind other policy fields such as criminal rehabilitation and health in terms of the quality of project evaluation (Ferraro & Pattanayak, 2006)....
[...]
240 citations
"Getting what you pay for: the chall..." refers background in this paper
...that has received millions of dollars of funding from global donors, found very little evidence that CFM delivers the claimed global environmental benefits such as biodiversity conservation, and almost no robust evidence concerning the delivery of local welfare benefits (Bowler et al., 2012)....
[...]
...Bowler et al. (2012) do not conclude that CFM is not effective at delivering these benefits, but that the evidence that will allow donors to be confident in the efficacy of their investment simply has not been collected....
[...]
182 citations
170 citations
"Getting what you pay for: the chall..." refers background in this paper
...Conservation projects have widely been criticized in the past for poor evaluation (Saterson et al., 2004; Brooks et al., 2006). In their paper, Howe & Milner-Gulland (2012) look at the question of what indices are appropriate for evaluating success in conservation. They follow others by distinguishing outputs (the amount of something delivered by a project, e.g. number of workshops held, papers published or posters distributed) and outcomes (the long-term consequences of the project, e.g. change in population size of a target species). Outcomes are what the project ultimately aims to deliver, but they can be very costly to measure.

A recent study of the costs of monitoring the presence or absence of a variety of species of conservation concern in the dry forests of Madagascar illustrates the challenge of monitoring outcomes directly. Sommerville, Milner-Gulland & Jones (2011) found that monitoring that could robustly detect change over time would be unrealistically costly for the vast majority of species, as it would cost more than the budget for the entire intervention.

The UK government launched its Darwin Initiative at the Rio summit in 1992. Since then, it has invested £88 million in biodiversity conservation projects in 154 countries (DEFRA, 2012). This fantastic programme provided Howe & Milner-Gulland (2012) with an unrivalled opportunity to investigate how much agreement there was in rankings of project success as evaluated using different indices (one based on reported outputs, and two based on subjective scoring of information about outcomes) and also which explanatory variables best predicted success as defined by the different indices. Their finding that the ranking of projects using the outputs-based indicator was well correlated with the ranking from the subjective outcomes measure is interesting and worthy of further exploration. However, as the authors themselves note, there is no quantitative, independent data on outcomes available against which to measure the success of the various indices.
Because outcomes are so difficult to measure directly, and may also not be achieved over the short timescale of a funded project, indices based on outputs will always be needed. Underlying this approach is an assumption that there is a mechanism linking delivery of the outputs with delivery of the outcomes. This is often not made explicit. If assumptions about the linkages between outputs and outcomes were more explicitly spelt out, both in project proposals and reports, alongside the evidence upon which each assumption is based, output measures would become more valuable for assessing project success. Howe & Milner-Gulland (2012) also investigated the internal consistency (how similarly different assessors would score an individual project using the same index) of two of the possible indices....
[...]
106 citations
"Getting what you pay for: the chall..." refers background in this paper
...Conservation projects have widely been criticized in the past for poor evaluation (Saterson et al., 2004; Brooks et al., 2006)....
[...]