Journal ArticleDOI

Cognitive and Human Factors in Expert Decision Making: Six Fallacies and the Eight Sources of Bias

08 Jun 2020-Analytical Chemistry (American Chemical Society)-Vol. 92, Iss: 12, pp 7998-8004
TL;DR: Eight sources of bias are discussed and conceptualized, and the paper concludes with specific measures that can minimize these biases.
Abstract: Fallacies about the nature of biases have shadowed a proper cognitive understanding of biases and their sources, an understanding which in turn leads to ways of minimizing their impact. Six such fallacies are presented: it is an ethical issue, it only applies to "bad apples", experts are impartial and immune, technology eliminates bias, the bias blind spot, and the illusion of control. Then, eight sources of bias are discussed and conceptualized within three categories: (A) factors that relate to the specific case and analysis, which include the data, reference materials, and contextual information; (B) factors that relate to the specific person doing the analysis, which include past experience and base rates, organizational factors, education and training, and personal factors; and lastly, (C) cognitive architecture and human nature, which impacts all of us. These factors can impact what the data are (e.g., how data are sampled and collected, or what is considered as noise and therefore disregarded), the actual results (e.g., decisions on testing strategies, how analysis is conducted, and when to stop testing), and the conclusions (e.g., interpretation of the results). The paper concludes with specific measures that can minimize these biases.
Citations
Journal ArticleDOI
TL;DR: Examination of death certificates issued during a 10-year period in the State of Nevada for children under the age of six and an experiment with 133 forensic pathologists demonstrate how extraneous information can result in cognitive bias in forensic pathology decision-making.
Abstract: Forensic pathologists' decisions are critical in police investigations and court proceedings as they determine whether an unnatural death of a young child was an accident or homicide. Does cognitive bias affect forensic pathologists' decision-making? To address this question, we examined all death certificates issued during a 10-year period in the State of Nevada in the United States for children under the age of six. We also conducted an experiment with 133 forensic pathologists in which we tested whether knowledge of irrelevant non-medical information that should have no bearing on forensic pathologists' decisions influenced their manner of death determinations. The dataset of death certificates indicated that forensic pathologists were more likely to rule "homicide" rather than "accident" for deaths of Black children relative to White children. This may arise because the base-rate expectation creates an a priori cognitive bias to rule that Black children died as a result of homicide, which then perpetuates itself. Corroborating this explanation, the experimental data with the 133 forensic pathologists exhibited biased decisions when given identical medical information but different irrelevant non-medical information about the race of the child and who was the caregiver who brought them to the hospital. These findings together demonstrate how extraneous information can result in cognitive bias in forensic pathology decision-making.

50 citations

Journal ArticleDOI
06 Sep 2020
TL;DR: Four common flaws in error rate studies, which undermine the credibility and accuracy of the error rates reported in those studies, are presented, along with a corrected experimental design that quantifies error rates more accurately.
Abstract: Forensic science error rate studies have not given sufficient attention or weight to inconclusive evidence and inconclusive decisions. Inconclusive decisions can be correct decisions, but they can also be incorrect decisions. Errors can occur when inconclusive evidence is determined as an identification or exclusion, or conversely, when same- or different-source evidence is incorrectly determined as inconclusive. We present four common flaws in error rate studies: 1. Not including test items which are more prone to error; 2. Excluding inconclusive decisions from error rate calculations; 3. Counting inconclusive decisions as correct in error rate calculations; and 4. Examiners resorting to more inconclusive decisions during error rate studies than they do in casework. These flaws seriously undermine the credibility and accuracy of error rates reported in studies. To remedy these shortcomings, we present the problems and show the way forward by providing a corrected experimental design that quantifies error rates more accurately.

35 citations

Journal ArticleDOI
23 Sep 2020
TL;DR: This work forms the first of a two-part series discussing why the digital forensics discipline and its organisations should conduct peer review in their laboratories, what should be reviewed as part of this process, and how this should be undertaken.
Abstract: The importance of peer review in the field of digital forensics cannot be overstated, as it often forms the primary, and sometimes only, quality assurance process an organisation will apply to its practitioners' casework. Whilst there is clear value in the peer review process, it remains an area which is arguably undervalued and under-researched, where little academic and industrial commentary can be found describing best practice approaches. This work forms the first of a two-part series discussing why the digital forensics discipline and its organisations should conduct peer review in their laboratories, what should be reviewed as part of this process, and how this should be undertaken. Here in part one, a critical review of the need for peer review is offered along with a discussion of the limitations of existing peer review mechanisms. Finally, the 'Peer Review Hierarchy' is offered, outlining the seven levels of peer review available for reviewing practitioner findings.

16 citations


Cites background from "Cognitive and Human Factors in Expe..."

  • ...QC is reactive, and focused on verifying that quality requirements have been fulfilled (Doyle, 2019)....


  • ...It is also useful to distinguish between two different dimensions of quality management: Quality Assurance (QA) and Quality Control (QC). QA is a preventative enterprise, which is focused on ensuring that quality requirements will be fulfilled (Doyle, 2019)....


Journal ArticleDOI
TL;DR: In this review, fundamental considerations for any analysis are first presented as generic context, and the mechanics of sensory measurement are briefly explained with a focus on the key compound classes in dairy.

15 citations

References
Journal ArticleDOI

7,489 citations

Journal ArticleDOI
Abstract: Confirmation bias, as the term is typically used in the psychological literature, connotes the seeking or interpreting of evidence in ways that are partial to existing beliefs, expectations, or a h...

5,214 citations

Journal ArticleDOI
TL;DR: This paper found that social observers tend to perceive a "false consensus" with respect to the relative commonness of their own responses, and a related bias was found to exist in the observers' social inferences.

2,737 citations

Journal ArticleDOI
TL;DR: In this paper, the inverse relationship between perceived risk and perceived benefit was examined and it was shown that people rely on affect when judging the risk and benefit of specific hazards, such as nuclear power.
Abstract: This paper re-examines the commonly observed inverse relationship between perceived risk and perceived benefit. We propose that this relationship occurs because people rely on affect when judging the risk and benefit of specific hazards. Evidence supporting this proposal is obtained in two experimental studies. Study 1 investigated the inverse relationship between risk and benefit judgments under a time-pressure condition designed to limit the use of analytic thought and enhance the reliance on affect. As expected, the inverse relationship was strengthened when time pressure was introduced. Study 2 tested and confirmed the hypothesis that providing information designed to alter the favorability of one's overall affective evaluation of an item (say nuclear power) would systematically change the risk and benefit judgments for that item. Both studies suggest that people seem prone to using an ‘affect heuristic’ which improves judgmental efficiency by deriving both risk and benefit evaluations from a common source—affective reactions to the stimulus item. Copyright © 2000 John Wiley & Sons, Ltd.

2,525 citations

Trending Questions (1)
What are the Human Factors in Decision Making?

The paper discusses eight sources of bias in decision making, which include factors related to the specific case and analysis, factors related to the specific person doing the analysis, and factors related to cognitive architecture and human nature.