Book Chapter

Using Perceptual and Cognitive Explanations for Enhanced Human-Agent Team Performance

TL;DR: A perceptual-cognitive explanation (PeCoX) framework is proposed for the development of explanations that address both the perceptual and cognitive foundations of an agent's behavior, distinguishing between explanation generation, communication, and reception.
Abstract
Most explainable AI (XAI) research projects focus on well-delineated topics, such as the interpretability of machine learning outcomes, knowledge sharing in a multi-agent system, or human trust in an agent's performance. For the development of explanations in human-agent teams, a more integrative approach is needed. This paper proposes a perceptual-cognitive explanation (PeCoX) framework for the development of explanations that address both the perceptual and cognitive foundations of an agent's behavior, distinguishing between explanation generation, communication, and reception. It is a generic framework (i.e., the core is domain-agnostic and the perceptual layer is model-agnostic), and it is being developed and tested in the domains of transport, health care, and defense. The perceptual level entails the provision of an Intuitive Confidence Measure and the identification of the "foil" in a contrastive explanation. The cognitive level entails the selection of the beliefs, goals, and emotions for explanations. Ontology Design Patterns are being constructed for the reasoning and communication, whereas Interaction Design Patterns are being constructed for shaping the multimodal communication. First results show (1) positive effects on humans' understanding of the perceptual and cognitive foundation of an agent's behavior, and (2) the need to harmonize the explanations with the context and the human's information-processing capabilities.
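As an illustrative sketch only (not the paper's implementation), the two elements of the perceptual level can be mocked up in a few lines: an Intuitive Confidence Measure derived from a classifier's output distribution, and a contrastive explanation that names the "foil" (the runner-up alternative the agent did not choose). All function names, the margin-based confidence proxy, and the example probabilities below are hypothetical.

```python
# Hypothetical sketch of a PeCoX-style perceptual level: a simple
# confidence measure plus a contrastive ("fact vs. foil") explanation.
# Names, the margin heuristic, and thresholds are illustrative only.

def intuitive_confidence(class_probs: dict[str, float]) -> float:
    """Margin between the top two class probabilities, used here as a
    simple confidence proxy (0 = maximally uncertain, 1 = fully sure)."""
    top_two = sorted(class_probs.values(), reverse=True)[:2]
    return top_two[0] - (top_two[1] if len(top_two) > 1 else 0.0)

def contrastive_explanation(class_probs: dict[str, float]) -> str:
    """Explain the predicted class (fact) against the runner-up (foil)."""
    ranked = sorted(class_probs, key=class_probs.get, reverse=True)
    fact, foil = ranked[0], ranked[1]
    conf = intuitive_confidence(class_probs)
    return (f"I chose '{fact}' rather than '{foil}' "
            f"(confidence margin: {conf:.2f}).")

# Example: a perception module's class probabilities for one detection.
probs = {"pedestrian": 0.70, "cyclist": 0.25, "car": 0.05}
print(contrastive_explanation(probs))
# → I chose 'pedestrian' rather than 'cyclist' (confidence margin: 0.45).
```

In a full PeCoX-style system, the foil would be selected from the contrast case relevant to the user's question rather than always the runner-up, and the confidence measure would come from the underlying perception model.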


Citations
Journal Article

Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)

Amina Adadi, +1 more · 17 Sep 2018
TL;DR: This survey provides an entry point for interested researchers and practitioners to learn key aspects of the young and rapidly growing body of research related to XAI; it reviews the existing approaches on the topic, discusses surrounding trends, and presents major research trajectories.
Posted Content

Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI.

TL;DR: Previous efforts to define explainability in Machine Learning are summarized, a novel definition is established that covers prior conceptual propositions with a major focus on the audience for which explainability is sought, and a taxonomy of recent contributions related to the explainability of different Machine Learning models is proposed.
Proceedings Article

Explainable Agents and Robots: Results from a Systematic Literature Review

TL;DR: A Systematic Literature Review of eXplainable Artificial Intelligence (XAI), finding that almost all of the studied papers deal with robots/agents explaining their behaviors to human users, and that very few works address inter-robot (inter-agent) explainability.
Journal Article

A Survey of Contrastive and Counterfactual Explanation Generation Methods for Explainable Artificial Intelligence

TL;DR: In this article, a systematic literature review of contrastive and counterfactual explanations of artificial intelligence algorithms is presented, which provides readers with a thorough and reproducible analysis of the interdisciplinary research field under study.
References
Proceedings Article

"Why Should I Trust You?": Explaining the Predictions of Any Classifier

TL;DR: In this article, the authors propose LIME, a technique that explains the predictions of any classifier by learning an interpretable model locally around the prediction, together with a method for explaining models globally by presenting representative individual predictions and their explanations in a non-redundant way, framing the selection task as a submodular optimization problem.
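The core idea behind a LIME-style explanation can be sketched in a few lines: perturb the instance, weight the perturbations by proximity, and fit a weighted linear surrogate whose coefficients act as local feature importances. The sketch below is illustrative, not the authors' implementation; the function names, Gaussian perturbation scheme, and kernel width are assumptions.

```python
# Minimal sketch of a LIME-style local surrogate (illustrative only):
# sample perturbations around an instance, weight them by proximity,
# and fit a weighted linear model whose coefficients explain the
# black-box prediction locally.
import numpy as np

def lime_style_weights(predict_fn, x, n_samples=1000, scale=0.5, seed=0):
    rng = np.random.default_rng(seed)
    # Perturb the instance with Gaussian noise.
    Z = x + rng.normal(0.0, scale, size=(n_samples, x.size))
    y = np.array([predict_fn(z) for z in Z])
    # Proximity kernel: perturbations closer to x count more.
    d = np.linalg.norm(Z - x, axis=1)
    w = np.exp(-(d ** 2) / (2 * scale ** 2))
    # Weighted least squares: scale rows of [Z, 1] and y by sqrt(w).
    A = np.hstack([Z, np.ones((n_samples, 1))]) * np.sqrt(w)[:, None]
    b = y * np.sqrt(w)
    beta, *_ = np.linalg.lstsq(A, b, rcond=None)
    return beta[:-1]  # per-feature local importance (intercept dropped)

# Toy black box: only the first feature matters.
black_box = lambda z: 3.0 * z[0]
coefs = lime_style_weights(black_box, np.array([1.0, 2.0]))
# coefs ≈ [3.0, 0.0]: the surrogate recovers the locally relevant feature.
```

Real LIME additionally works on interpretable representations (e.g., super-pixels or word presence) and uses sparse regression, but the perturb-weight-fit loop above is the essence of the local surrogate idea.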
Journal Article

Explanation in artificial intelligence: Insights from the social sciences

TL;DR: This paper argues that the field of explainable artificial intelligence should build on existing research in the social sciences; it reviews relevant papers from philosophy, cognitive psychology/science, and social psychology that study explanation, and draws out some important findings.
Book

Handbook on Ontologies

Steffen Staab, +1 more
TL;DR: The Handbook on Ontologies provides a comprehensive overview of the current status and future prospects of the field of ontologies, considering ontology languages, ontology engineering methods, example ontologies, infrastructures and technologies for ontologies, and how to bring all of this into ontology-based infrastructures and applications that are among the best of their kind.
Journal Article

Explanation and Understanding

TL;DR: The study of explanation, while related to intuitive theories, concepts, and mental models, offers important new perspectives on high-level thought; people are particularly adept at using situational support to build explanations on the fly in real time.
Posted Content

Explanation in Artificial Intelligence: Insights from the Social Sciences

TL;DR: There has been a recent resurgence in the area of explainable artificial intelligence as researchers and practitioners seek to make their algorithms more understandable; looking at how humans explain to each other can serve as a useful starting point for explanation in artificial intelligence.