
Showing papers by "Simon French" published in 2021


Posted Content
TL;DR: Human reliability analysis (HRA) is used to assess the probability of operator errors, whether of omission or commission, and to examine the consequences of those errors by tracing their effects through a fault tree.
Abstract: Few systems operate completely independently of humans. Thus any study of system risk or reliability requires analysis of the potential for failure arising from human activities in operating and managing it. Human reliability analysis (HRA) grew up in the 1960s with the intention of modelling the likelihood and consequences of human error. Initially, it treated humans like any other component in the system: they could fail, and the consequences of their failure were examined by tracing the effects through a fault tree. Thus to conduct an HRA one had to assess the probability of various operator errors, be they errors of omission or commission. First-generation HRA methods may have brought some sophistication to these assessments, but in essence that is all they did. Over the years, methods have been developed that recognise, on the one hand, the human potential to recover from a failure and, on the other, the effects of stress and organisational culture on the likelihood of possible errors. But no method has yet been developed which incorporates all our understanding of individual, team and organisational behaviour into overall assessments of system risk or reliability.
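
As a rough illustration of the fault-tree treatment described above, the sketch below propagates assumed human-error probabilities through simple AND/OR gates to a top-event probability. The gate structure and every probability are invented for illustration; none are taken from the paper.

```python
# A minimal sketch of first-generation HRA inside a fault tree: human
# errors are treated like component failures, and their assumed
# probabilities propagate through AND/OR gates to a top-event
# probability. All values are illustrative assumptions.

def or_gate(probs):
    """P(at least one of several independent events occurs)."""
    p = 1.0
    for q in probs:
        p *= (1.0 - q)
    return 1.0 - p

def and_gate(probs):
    """P(all of several independent events occur)."""
    p = 1.0
    for q in probs:
        p *= q
    return p

# Illustrative basic-event probabilities (hypothetical).
p_pump_fails = 1e-3    # hardware failure
p_omission   = 5e-3    # operator omits a required step
p_commission = 1e-3    # operator performs a wrong action
p_alarm_fails = 1e-4   # backup alarm fails

# An operator error of either kind defeats the manual safeguard...
p_human_error = or_gate([p_omission, p_commission])
# ...and the top event occurs if the pump fails outright, or if the
# human safeguard and the alarm both fail.
p_top = or_gate([p_pump_fails, and_gate([p_human_error, p_alarm_fails])])
print(f"Top-event probability: {p_top:.2e}")
```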

26 citations


Journal ArticleDOI
TL;DR: The elicitation processes involved are discussed; the authors argue that the current literature has developed, if not in silos, then in pockets of activity that do not reflect the more joined-up processes that often take place in practice.
Abstract: At the outset of an analysis, there is a need to interact with the problem-owners to understand their perspectives on the issues. This understanding leads to the construction of one or more models ...

8 citations


Journal ArticleDOI
TL;DR: A Bayesian framework for structured expert judgement is outlined as an alternative to traditional non-Bayesian methods, illustrated in two contexts: health insurance and transmission risks for chronic wasting disease.
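
The TL;DR gives no detail of the paper's specific model, but the generic idea behind Bayesian aggregation of expert judgements can be sketched: each expert's estimate is treated as a noisy observation of the unknown quantity and combined with a prior by a conjugate normal update. The prior, estimates and variances below are all invented for illustration.

```python
# A minimal sketch of Bayesian aggregation of expert judgements (not
# the paper's model): expert point estimates are treated as noisy
# observations with known variances, and combined with a normal prior
# by sequential conjugate updates. All numbers are hypothetical.

# Prior belief about the quantity of interest.
prior_mean, prior_var = 0.0, 100.0

# (estimate, assessed variance) for each expert; the variance encodes
# how reliable the analyst judges that expert to be.
expert_estimates = [(4.2, 1.0), (5.1, 4.0), (3.8, 2.0)]

post_mean, post_var = prior_mean, prior_var
for est, var in expert_estimates:
    # Standard normal-normal update with known observation variance.
    k = post_var / (post_var + var)
    post_mean = post_mean + k * (est - post_mean)
    post_var = (1 - k) * post_var

print(f"Posterior: mean={post_mean:.2f}, sd={post_var ** 0.5:.2f}")
```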

8 citations


Journal ArticleDOI
TL;DR: The authors discuss the translation of ideas into equations and assumptions, assessing the potential for psychological and social factors to affect the construction of models, and outline the strengths and weaknesses of recent advances in structured, group-based model construction that may accommodate a variety of understandings about cause and effect.
Abstract: Notionally objective probabilistic risk models, built around ideas of cause and effect, are used to predict impacts and evaluate trade-offs. In this paper, we focus on the use of expert judgement to fill gaps left by insufficient data and understanding. Psychological and contextual phenomena such as anchoring, availability bias, confirmation bias and overconfidence are pervasive and have powerful effects on individual judgements. Research across a range of fields has found that groups have access to more diverse information and ways of thinking about problems, and routinely outperform credentialled individuals on judgement and prediction tasks. In structured group elicitation, individuals make initial independent judgements, opinions are respected, participants consider the judgements made by others, and they may have the opportunity to reconsider and revise their initial estimates. Estimates may be aggregated using behavioural, mathematical or combined approaches. In contrast, mathematical modellers have been slower to accept that the host of psychological frailties and contextual biases that afflict judgements about parameters and events may also influence model assumptions and structures. Few, if any, quantitative risk analyses embrace sources of uncertainty comprehensively. However, several recent innovations aim to anticipate behavioural and social biases in model construction and to mitigate their effects. In this paper, we outline approaches to eliciting and combining alternative ideas of cause and effect. We discuss the translation of ideas into equations and assumptions, assessing the potential for psychological and social factors to affect the construction of models. We outline the strengths and weaknesses of recent advances in structured, group-based model construction that may accommodate a variety of understandings about cause and effect.
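
As a rough sketch of the "mathematical approaches" to aggregation mentioned above, the example below forms a linear opinion pool: a weighted mixture of the experts' probability densities. The experts, weights and distributions are invented for illustration and are not drawn from the paper.

```python
# A minimal sketch of mathematical aggregation in structured
# elicitation: a linear opinion pool combines each expert's density for
# an uncertain quantity into a weighted mixture. All inputs are
# hypothetical.

import numpy as np

# Common grid on which each expert's subjective density is evaluated.
grid = np.linspace(0.0, 10.0, 501)

def normal_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

expert_pdfs = [
    normal_pdf(grid, mu=4.0, sigma=1.0),   # expert A
    normal_pdf(grid, mu=5.5, sigma=0.8),   # expert B
    normal_pdf(grid, mu=4.8, sigma=1.5),   # expert C
]
weights = np.array([0.5, 0.3, 0.2])        # e.g. performance-based weights

# Linear opinion pool: weighted average of the individual densities.
pooled = sum(w * f for w, f in zip(weights, expert_pdfs))

dx = grid[1] - grid[0]
print("Pooled mass ≈", pooled.sum() * dx)         # ≈ 1, as weights sum to 1
print("Pooled mean ≈", (grid * pooled).sum() * dx)
```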

5 citations


Book ChapterDOI
01 Jan 2021
TL;DR: In this paper, a Bayesian model for analysing and aggregating structured expert judgement (SEJ) data of the form used by Roger Cooke's Classical Model is developed; it creates predictions over a common dataset, thereby allowing direct comparison between approaches.
Abstract: A Bayesian model for analysing and aggregating structured expert judgement (SEJ) data of the form used by Cooke's Classical Model has been developed. The model has been built to create predictions over a common dataset, thereby allowing direct comparison between approaches. It deals with correlations between experts through clustering and also seeks to recalibrate judgements using the seed variables, in order to form an unbiased aggregated distribution over the target variables. Using the Delft database of SEJ studies, compiled by Roger Cooke, performance comparisons with the Classical Model demonstrate that this Bayesian approach provides similar median estimates but broader uncertainty bounds on the variables of interest. Cross-validation shows that these dynamics lead to the Bayesian model exhibiting higher statistical accuracy but lower information scores than the Classical Model. Comparisons of the combination scoring rule add further evidence to the robustness of the classical approach, yet demonstrate outperformance of the Bayesian model in select cases.
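
For readers unfamiliar with the "statistical accuracy" (calibration) score against which the Bayesian model is compared, the sketch below computes the Classical Model's calibration score for one hypothetical expert: realisations of seed variables are binned against the expert's 5%, 50% and 95% quantiles, and the empirical bin frequencies are tested against the expected (0.05, 0.45, 0.45, 0.05). All quantiles and realisations are invented for illustration.

```python
# A minimal sketch of the Classical Model's calibration score. An
# expert gives 5%, 50% and 95% quantiles for N seed variables; the
# realisations should fall in the four inter-quantile bins with
# probabilities (0.05, 0.45, 0.45, 0.05). All data are hypothetical.

import numpy as np
from scipy.stats import chi2

p_expected = np.array([0.05, 0.45, 0.45, 0.05])

# One expert's (5%, 50%, 95%) quantiles for 10 seed variables, and the
# later-observed true values.
quantiles = np.array([[2.0, 5.0, 9.0]] * 10)
realizations = np.array([1.5, 4.0, 6.0, 3.0, 8.5, 4.5, 5.5, 7.0, 2.5, 9.5])

# Count realisations per bin: below q05, q05-q50, q50-q95, above q95.
bins = np.zeros(4)
for q, x in zip(quantiles, realizations):
    bins[np.searchsorted(q, x)] += 1
s = bins / len(realizations)          # empirical bin frequencies

# Relative entropy of the empirical to the expected bin distribution;
# 2*N*D is asymptotically chi-squared with (4 - 1) degrees of freedom,
# and the calibration score is the resulting p-value.
mask = s > 0
D = np.sum(s[mask] * np.log(s[mask] / p_expected[mask]))
calibration = 1.0 - chi2.cdf(2 * len(realizations) * D, df=3)
print(f"Calibration score: {calibration:.3f}")
```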

3 citations


Book ChapterDOI
01 Jan 2021
Abstract: This chapter sets the background for when, and discusses the contexts in which, eliciting expert judgements is paramount. The way judgements are elicited and aggregated plays an essential part in distinguishing structured/formal elicitation protocols from informal ones. We emphasise the importance of properly reporting the steps and decisions taken during an elicitation, and draw a parallel to the reporting of experimental designs underpinning the data collection. Directions for future research are proposed, and the chapter ends with an outline of the book.

2 citations


Book ChapterDOI
01 Jan 2021
TL;DR: In this paper, a research project funded by the Dutch Government saw the Classical Model developed and embedded in expert judgement procedures, along with a Bayesian and a paired comparison method; these efforts were instrumental in moving structured expert judgement procedures into the toolbox of risk analysts, particularly within Europe.
Abstract: Roger Cooke and his colleagues at the Delft University of Technology laid the foundations of the Classical Model for aggregating expert judgement in the 1980s. During 1985–1989, a research project funded by the Dutch Government saw the Classical Model developed and embedded in expert judgement procedures along with a Bayesian and a paired comparison method. That project and a subsequent working group report from the European Safety and Reliability Research and Development Association were instrumental in moving structured expert judgement procedures into the toolbox of risk analysts, particularly within Europe. As the number of applications grew, the Classical Model and its associated procedures came to dominate in applications. This chapter reflects on this early work and notes that almost all the principles and practices that underpin today’s applications were established in those early years.

Book ChapterDOI
20 Feb 2021
TL;DR: In this paper, the authors discuss the design and development of a training course in structured expert judgement (SEJ), together with the theoretical framework that guides the design of such a course.
Abstract: The chapter discusses the design and development of a training course in structured expert judgement (SEJ). We begin by setting the course in the context of previous experiences in teaching SEJ to postgraduates, early career researchers and consultants. We motivate our content, discussing the theoretical framework that guides the design of such a course. We describe our experiences in presenting the course on two occasions. Detailed analysis of the different course components (the learners/participants, the content, the context and the method) was carried out through surveys given to participants. This helped identify the successful course characteristics, which were then summarised in a customised design template that can be used to guide the course's conceptual, structural and navigation design.