Eliciting Expert Knowledge in Conservation Science
Tara G. Martin 1,2*, Mark A. Burgman 3, Fiona Fidler 3, Petra M. Kuhnert 4, Samantha Low-Choy 5,6, Marissa McBride 3, and Kerrie Mengersen 6

1 CSIRO Ecosystem Sciences, Ecoscience Precinct, GPO Box 2583, Brisbane, Queensland 4001, Australia; Tara.Martin@csiro.au
2 ARC Centre of Excellence for Environmental Decisions, University of Queensland, Queensland 4072, Australia
3 Australian Centre of Excellence for Risk Analysis, School of Botany, University of Melbourne, Parkville, Victoria 3010, Australia
4 CSIRO Mathematics, Informatics and Statistics, Private Bag 2, Glen Osmond, SA 5064, Australia
5 Cooperative Research Centre in National Plant Biosecurity, Canberra, Australia
6 Faculty of Science and Technology, Queensland University of Technology, GPO Box 2434, Brisbane, Queensland 4001, Australia

*To whom correspondence should be addressed: Tara.Martin@csiro.au; Ph +61 7 3833 5727
Word count: 7537
Running head: Elicitation of expert knowledge
Keywords: bias, decision making, expert judgement, expert opinion, elicitation, overconfidence,
Bayesian priors

Abstract
Expert knowledge is used widely in the science and practice of conservation because of the relative lack of data and the imminent nature of many conservation decisions. Expert knowledge is substantive information on a particular topic that is not widely known by others. An expert is someone who holds this knowledge and who should be deferred to in its interpretation. When experts use their knowledge to predict what may happen in a particular context, we refer to these predictions as expert judgements, since what will happen is not known for certain. In general, an expert-elicitation approach for use in conservation science consists of five steps: deciding how information will be used, determining what to elicit, designing the elicitation process, performing the elicitation, and translating the elicited information into quantitative statements that can be used in a model or directly in a decision. This last step is known as encoding. Some of the considerations in eliciting expert knowledge include determining how to work with multiple experts and how to combine multiple judgements, minimizing bias in the elicited information, and verifying the accuracy of expert information. We highlight structured elicitation techniques that, if adopted, will improve the accuracy and information content of expert judgement and ensure uncertainty is captured appropriately. Four criteria can be used to assess the comprehensiveness and effectiveness of an elicitation exercise: study design and context, elicitation design, elicitation method, and elicitation output. Just as the reliability of empirical data depends on the rigor with which it was acquired, so too does that of expert knowledge.

Introduction
The growing use of expert knowledge in conservation science is driven by the need to
characterize dynamic, complex systems, limited resources to collect new empirical data, and the
urgency of conservation decisions (Sutherland 2006; Kuhnert et al. 2010). The utility of expert
knowledge depends on the scientific rigor with which it is acquired and its accuracy. Just as
observational data and the methods used to collect them are subject to scrutiny, so too should expert
knowledge be scrutinized to ensure that uncertainty is quantified and bias in the elicited
information is minimized (O'Hagan et al. 2006).
In this review, we defined what expert knowledge is and who qualifies as an expert. We examined how expert knowledge is being used to inform conservation science and practice. We outlined an elicitation approach that consists of five steps: deciding how information will be used, determining what to elicit, designing the elicitation process, performing the elicitation, and encoding the elicited information for use in a model or to inform a decision directly. We focused on the elicitation of quantities such as population sizes, the likelihood of extinction of a population, or the prevalence of a species or pest. We discussed ways of minimizing bias, combining multiple judgements, dealing with uncertainty, and increasing the accuracy of elicited information. Finally, we outlined criteria for assessing how comprehensive and informative an elicitation exercise has been.
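To make the encoding step concrete, the sketch below fits a beta distribution to hypothetical elicited values for a proportion such as prevalence. The numbers, and the use of least-squares quantile matching, are illustrative assumptions, not the procedure of any particular study cited here.

```python
import numpy as np
from scipy import stats, optimize

# Hypothetical elicitation: an expert's best estimate of species
# prevalence (a proportion) with a subjective 90% credible interval.
lower, best, upper = 0.10, 0.25, 0.45  # illustrative values only

def quantile_mismatch(params):
    """Squared distance between a beta distribution's 5th, 50th, and
    95th percentiles and the expert's elicited values."""
    a, b = params
    fitted = stats.beta.ppf([0.05, 0.50, 0.95], a, b)
    return np.sum((fitted - np.array([lower, best, upper])) ** 2)

# Search for beta shape parameters whose quantiles best match the
# elicited values; this is the "encoding" of the judgement.
result = optimize.minimize(quantile_mismatch, x0=[2.0, 2.0],
                           bounds=[(0.01, None), (0.01, None)])
a, b = result.x
print(f"Encoded distribution: Beta({a:.2f}, {b:.2f})")
print("Fitted 5/50/95 percentiles:",
      stats.beta.ppf([0.05, 0.50, 0.95], a, b).round(3))
```

In practice, such a fit would be accompanied by feedback to the expert, showing the fitted distribution and asking whether it faithfully reflects their beliefs before it is used in a model.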
Definition of expert knowledge
Expert knowledge is substantive information on a particular topic that is not widely known by
others. An expert is generally considered someone who holds information about a given topic and
who should be deferred to in its interpretation (Barley & Kunda 2006). This knowledge may be the result of training, research, and skills, but could also be based on personal experience
(Burgman et al. 2011a). When experts use their knowledge to predict what may happen in a
particular context, we refer to these predictions as expert judgements, since what will happen is
not known for certain. Experts exist, are unequally distributed among the human population, and
are not created only through formal education or professional experience (Evans 2008). There are
different types of expertise: substantive, which reflects the expert’s knowledge of their domain;
normative, which is the expert’s ability to accurately and clearly communicate their judgements
in a particular format, such as probabilities; and adaptive, which describes the degree to which
experts are able to extrapolate or adapt to new circumstances (McBride & Burgman 2011).
These types of expertise may be unrelated, but they are all integral to the effective use of expert
information.
The quality of expert judgements is reflected in their calibration and informativeness (Cooke 1991; O'Hagan et al. 2006). Calibration indicates how closely a judgement corresponds to reality (e.g., the amount of agreement between an expert's judgement, expressed for example as probabilities, and what is actually observed) (O'Hagan et al. 2006). The informativeness of an expert's judgement is reflected in its precision and confidence (e.g., the uncertainty of an estimate). Calibration of judgements occurs through observation of the outcomes of predictions, through formal evaluation of an expert's knowledge (tests), or through use of knowledge in scenario analyses (Cooke 1991; Burgman et al. 2011b).
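As a minimal illustration of what calibration scoring can look like (the numbers below are invented, not data from any study cited here), the sketch checks how often an expert's 80% credible intervals contained the values eventually observed, and summarizes informativeness by relative interval width.

```python
import numpy as np

# Hypothetical test questions: each row is one expert's 80% credible
# interval for a quantity whose true value was later observed.
intervals = np.array([[10, 30], [5, 12], [40, 90], [2, 6], [15, 22]])
observed = np.array([25.0, 14.0, 70.0, 4.0, 30.0])

# Calibration: a well-calibrated expert's 80% intervals should contain
# the observed value about 80% of the time; a much lower hit rate
# indicates overconfidence (intervals drawn too narrow).
hits = (observed >= intervals[:, 0]) & (observed <= intervals[:, 1])
print(f"Hit rate: {hits.mean():.0%} (target for 80% intervals: 80%)")

# Informativeness: among equally calibrated experts, narrower intervals
# carry more information; relative width is one simple summary.
relative_width = (intervals[:, 1] - intervals[:, 0]) / observed
print(f"Mean relative interval width: {relative_width.mean():.2f}")
```

Formal scoring rules used in structured elicitation, such as Cooke's classical model, combine calibration and informativeness into a single performance-based weight for each expert.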
Value of expert knowledge
Data on many conservation problems are typically scarce; nevertheless, management decisions
must be made (Cook et al. 2009). Expert judgements can provide information about model
parameters and help characterize uncertainty in models, the intent of which often is to confront
