Eliciting Expert Knowledge in Conservation Science
Tara G. Martin 1,2*, Mark A. Burgman 3, Fiona Fidler 3, Petra M. Kuhnert 4, Samantha Low-Choy 5,6, Marissa McBride 3 and Kerrie Mengersen 6

1 CSIRO Ecosystem Sciences, Ecoscience Precinct, GPO Box 2583, Brisbane, Queensland 4001, Australia; Tara.martin@csiro.au
2 ARC Centre of Excellence for Environmental Decisions, University of Queensland, Queensland 4072, Australia
3 Australian Centre of Excellence for Risk Analysis, School of Botany, University of Melbourne, Parkville, Victoria 3010, Australia
4 CSIRO Mathematics, Informatics and Statistics, Private Bag 2, Glen Osmond, SA 5064, Australia
5 Cooperative Research Centre in National Plant Biosecurity, Canberra, Australia
6 Faculty of Science and Technology, Queensland University of Technology, GPO Box 2434, Brisbane, Queensland 4001, Australia

*To whom correspondence should be addressed: Tara.Martin@csiro.au; Ph +61 7 3833 5727
Word count: 7537
Running head: Elicitation of expert knowledge
Keywords: bias, decision making, expert judgement, expert opinion, elicitation, overconfidence,
Bayesian priors
Abstract
Expert knowledge is used widely in the science and practice of conservation because of the
relative lack of data and the imminent nature of many conservation decisions. Expert knowledge
is substantive information on a particular topic that is not widely known by others. An expert is
someone who holds this knowledge and who should be deferred to in its interpretation. When
experts use their knowledge to predict what may happen in a particular context, we refer to these
predictions as expert judgements, since what will happen is not known for certain. In general, an expert-elicitation approach for use in conservation science consists of five steps: deciding how information will be used, determining what to elicit, designing the elicitation process, performing the elicitation, and translating the elicited information into quantitative statements that can be used in a model or directly in a decision. This last step is known as encoding. Some of the
considerations in eliciting expert knowledge include determining how to work with multiple
experts and combine multiple judgements, minimizing bias in the elicited information, and
verifying the accuracy of expert information. We highlight structured elicitation techniques that, if adopted, will improve the accuracy and information content of expert judgements and ensure uncertainty is captured appropriately. Four criteria (study design and context, elicitation design, elicitation method, and elicitation output) can be used to assess the comprehensiveness and effectiveness of an elicitation exercise. Just as the reliability of empirical data depends on the rigor with which they were acquired, so too does the reliability of expert knowledge.
Introduction
The growing use of expert knowledge in conservation science is driven by the need to
characterize dynamic, complex systems, limited resources to collect new empirical data, and the
urgency of conservation decisions (Sutherland 2006; Kuhnert et al. 2010). The utility of expert
knowledge depends on the scientific rigor with which it is acquired and its accuracy. Just as
observational data and the methods used to collect it are subject to scrutiny, so too should expert
knowledge be scrutinized to ensure that uncertainty is quantified and bias in the elicited
information is minimized (O'Hagan et al. 2006).
In this review, we defined what expert knowledge is and who qualifies as an expert. We
examined how expert knowledge is being used to inform conservation science and practice. We
outlined an elicitation approach that consists of five steps: deciding how information will be used, determining what to elicit, designing the elicitation process, performing the elicitation, and encoding the elicited information for use in a model or to inform a decision directly. We focused on the elicitation of quantities such as population sizes, the likelihood of extinction of a population, and the prevalence of a species or pest. We discussed ways of minimizing bias, combining multiple judgements, dealing with uncertainty, and increasing the accuracy of elicited information. Finally, we outlined criteria for assessing how comprehensive and informative an
elicitation exercise has been.
Definition of expert knowledge
Expert knowledge is substantive information on a particular topic that is not widely known by
others. An expert is generally considered someone who holds information about a given topic and
who should be deferred to in its interpretation (Barley & Kunda 2006). This knowledge may be
the result of training, research, and skills, but could also be based on personal experience
(Burgman et al. 2011a). When experts use their knowledge to predict what may happen in a
particular context, we refer to these predictions as expert judgements, since what will happen is
not known for certain. Experts exist, are unequally distributed among the human population, and
are not created only through formal education or professional experience (Evans 2008). There are
different types of expertise: substantive, which reflects the expert’s knowledge of their domain;
normative, which is the expert’s ability to accurately and clearly communicate their judgements
in a particular format, such as probabilities; and adaptive, which describes the degree to which
experts are able to extrapolate or adapt to new circumstances (McBride & Burgman 2011).
These types of expertise may be unrelated, but they are all integral to the effective use of expert
information.
The quality of expert judgements is reflected in the calibration and informativeness of the
judgements (Cooke 1991; O'Hagan et al. 2006). Calibration indicates how closely a judgement corresponds to reality, for example, the degree of agreement between an expert's stated probabilities and what is actually observed (O'Hagan et al. 2006). The informativeness of an expert's judgement is reflected in its precision and confidence (e.g., the uncertainty of an estimate). Calibration of judgements occurs through observation of the
outcomes of predictions or through formal evaluation of an expert’s knowledge (tests) or use of
knowledge in scenario analyses (Cooke 1991; Burgman et al. 2011b).
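A simple calibration check of the kind described above can be sketched numerically: for a set of seed questions with known answers, count how often an expert's 80% credible intervals contain the true values. A well-calibrated expert should capture roughly 80% of the truths; a much lower hit rate signals overconfidence. All data below are invented for illustration.

```python
# Illustrative calibration check against seed questions with known answers.
truths = [120.0, 45.0, 300.0, 15.0, 80.0]           # realized quantities
intervals = [(100, 150), (20, 30), (250, 400),       # one expert's stated
             (10, 40), (90, 130)]                     # 80% credible intervals

# Calibration: fraction of true values falling inside the stated intervals.
hits = sum(low <= t <= high for t, (low, high) in zip(truths, intervals))
hit_rate = hits / len(truths)

# Informativeness can be crudely summarized by mean interval width:
# narrower intervals are more informative, if calibration holds.
mean_width = sum(high - low for low, high in intervals) / len(intervals)

print(f"hit rate: {hit_rate:.2f} (target ~0.80), mean width: {mean_width:.1f}")
```

Here the expert captures only 3 of 5 truths (a hit rate of 0.60 against a target of 0.80), a pattern consistent with overconfident, overly narrow intervals. Formal scoring schemes such as Cooke's classical model combine calibration and informativeness into a single weight per expert.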
Value of expert knowledge
Data on many conservation problems are typically scarce; nevertheless, management decisions
must be made (Cook et al. 2009). Expert judgements can provide information about model
parameters and help characterize uncertainty in models, the intent of which often is to confront