Open Access · Journal Article · DOI

Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)

Amina Adadi, +1 more
Published 17 Sep 2018 · Vol. 6, pp. 52138-52160
TLDR
This survey provides an entry point for interested researchers and practitioners to learn key aspects of the young and rapidly growing body of research related to XAI; it reviews the existing approaches to the topic, discusses the trends surrounding its sphere, and presents major research trajectories.
Abstract
At the dawn of the fourth industrial revolution, we are witnessing a fast and widespread adoption of artificial intelligence (AI) in our daily life, which contributes to accelerating the shift towards a more algorithmic society. However, even with such unprecedented advancements, a key impediment to the use of AI-based systems is that they often lack transparency. Indeed, the black-box nature of these systems allows powerful predictions, but their decisions cannot be directly explained. This issue has triggered a new debate on explainable AI (XAI), a research field that holds substantial promise for improving the trust and transparency of AI-based systems and that is recognized as the sine qua non for AI to continue making steady progress without disruption. This survey provides an entry point for interested researchers and practitioners to learn key aspects of the young and rapidly growing body of research related to XAI. Through the lens of the literature, we review the existing approaches to the topic, discuss the trends surrounding its sphere, and present major research trajectories.


Citations
Posted Content

Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI.

TL;DR: Previous efforts to define explainability in Machine Learning are summarized, a novel definition is established that covers prior conceptual propositions with a major focus on the audience for which explainability is sought, and a taxonomy of recent contributions related to the explainability of different Machine Learning models is proposed.
Journal Article · DOI

Machine Learning Interpretability: A Survey on Methods and Metrics

TL;DR: A review of the current state of the research field on machine learning interpretability while focusing on the societal impact and on the developed methods and metrics is provided.
Journal Article · DOI

Artificial Intelligence (AI) : Multidisciplinary perspectives on emerging challenges, opportunities, and agenda for research, practice and policy

TL;DR: This research offers significant and timely insight into AI technology and its impact on the future of industry and society in general, whilst recognising the societal and industrial influence on the pace and direction of AI development.
Journal Article · DOI

Explainable AI: A Review of Machine Learning Interpretability Methods

TL;DR: In this paper, a literature review and taxonomy of machine learning interpretability methods are presented, as well as links to their programming implementations, in the hope that this survey would serve as a reference point for both theorists and practitioners.
References
Posted Content

Auditing Black-Box Models Using Transparent Model Distillation With Side Information

TL;DR: Distill-and-compare, as discussed by the authors, is a model distillation and comparison approach to auditing black-box risk scoring models: a transparent student model is trained with distillation to mimic the black-box scores and is then compared with a second, un-distilled transparent model trained on ground-truth outcomes.
Book Chapter · DOI

Visualizing the Feature Importance for Black Box Models

TL;DR: In this paper, the authors introduce local feature importance as a local version of a recent model-agnostic global feature importance method and propose two visual tools: partial importance (PI) and individual conditional importance (ICI) plots, which visualize how changes in a feature affect model performance on average and for individual observations, respectively.
Posted Content

Price of Transparency in Strategic Machine Learning.

TL;DR: The impact of users' strategic intent on the design and performance of transparent ML algorithms is examined by quantifying the price of transparency in the context of strategic classification, modeling the problem as a nonzero-sum game between the users and the algorithm designer.
Posted Content

"I know it when I see it". Visualization and Intuitive Interpretability

TL;DR: It is shown that visualization both enables and impedes intuitive interpretability, as it presupposes two levels of technical pre-interpretation: dimensionality reduction and regularization. It is further argued that the use of positive concepts to emulate the distributed semantic structure of machine learning models introduces a significant human bias into the model.
Journal Article · DOI

Extracting State Models for Black-Box Software Components.

TL;DR: This work proposes a novel black-box approach to reverse engineering the state model of software components that generates state models with sufficient accuracy and completeness for components whose services either require no input data parameters or require parameters with a small set of values.