Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
Amina Adadi, Mohammed Berrada
TL;DR: This survey provides an entry point for interested researchers and practitioners to learn key aspects of the young and rapidly growing body of research related to XAI; it reviews the existing approaches to the topic, discusses surrounding trends, and presents major research trajectories.
Abstract:
At the dawn of the fourth industrial revolution, we are witnessing a fast and widespread adoption of artificial intelligence (AI) in our daily lives, which is accelerating the shift towards a more algorithmic society. Even with such unprecedented advances, however, a key impediment to the use of AI-based systems is that they often lack transparency: the black-box nature of these systems allows powerful predictions, but those predictions cannot be directly explained. This issue has triggered a new debate on explainable AI (XAI), a research field that holds substantial promise for improving the trust and transparency of AI-based systems and is recognized as the sine qua non for AI to continue making steady progress without disruption. This survey provides an entry point for interested researchers and practitioners to learn key aspects of the young and rapidly growing body of research related to XAI. Through the lens of the literature, we review the existing approaches to the topic, discuss trends surrounding its sphere, and present major research trajectories.
Citations
Journal Article
Machine Learning-Based Statistical Approach to Analyze Process Dependencies on Threshold Voltage in Recessed Gate AlGaN/GaN MIS-HEMTs
Tian-Li Wu, Sayeem Bin Kutub, +1 more
TL;DR: A machine learning (ML)-based statistical approach is used to model and analyze the impact of fabrication processes on the threshold voltage in recessed gate AlGaN/GaN metal-insulator-semiconductor high-electron-mobility transistors (MIS-HEMTs); the modeled values show close agreement with the measured threshold voltages.
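The summary above does not name the specific model family, so the following is an illustration only: a minimal Python sketch of the kind of regression such a study might run, with synthetic data standing in for measured devices and invented process-parameter features.

import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Hypothetical data: each row is a fabricated device, each column a process
# parameter (e.g., gate recess depth, dielectric thickness, anneal temperature).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
vth = 1.5 + 0.8 * X[:, 0] - 0.3 * X[:, 1] + rng.normal(scale=0.1, size=200)

X_train, X_test, y_train, y_test = train_test_split(X, vth, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X_train, y_train)
print("R^2 on held-out devices:", model.score(X_test, y_test))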
Report
Psychological Foundations of Explainability and Interpretability in Artificial Intelligence
TL;DR: The authors make the case that interpretability and explainability are distinct requirements for machine learning systems, and that humans differ from one another in systematic ways that affect the extent to which they prefer to make decisions based on detailed explanations versus less precise interpretations.
Journal Article
Deep Learning for Fault Diagnostics in Bearings, Insulators, PV Panels, Power Lines, and Electric Vehicle Applications—The State-of-the-Art Approaches
K. Mohana Sundaram, Azham Hussain, P. Sanjeevikumar, Jens Bo Holm-Nielsen, Vishnu Kumar Kaliappan, B. Kavya Santhoshi, +5 more
TL;DR: In this paper, the authors highlight the importance of DL and its application in several critical electrical segments: identification of bearing faults, hot spots on the surface of PV panels, insulator faults, inspection of power lines, and electric vehicle applications.
Journal Article
Analysis of Travel Mode Choice in Seoul Using an Interpretable Machine Learning Approach
TL;DR: This paper proposes an interpretable ML approach to improve the interpretability (i.e., the degree to which the cause of a decision can be understood) of ML models for travel mode choice modeling, and applies it to national household travel survey data from Seoul.
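The summary does not say which interpretability technique the authors use; as one common model-agnostic option, here is a minimal sketch of permutation feature importance on a synthetic mode-choice classifier (feature names are invented for illustration).

import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
features = ["travel_time", "cost", "transfers"]  # hypothetical predictors
X = rng.normal(size=(500, 3))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)    # synthetic choice: car vs. transit

clf = GradientBoostingClassifier(random_state=0).fit(X, y)
result = permutation_importance(clf, X, y, n_repeats=10, random_state=0)
for name, imp in zip(features, result.importances_mean):
    print(f"{name}: {imp:.3f}")  # drop in accuracy when the feature is shuffled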
Journal Article
Visual interpretability in 3D brain tumor segmentation network
TL;DR: In this paper, the authors analyze a 3D brain tumor segmentation network by extending a post-hoc interpretability technique over gradient-based approaches, and quantitatively evaluate the interpretability methodology for medical image segmentation tasks.
References
Proceedings Article
Distributed Representations of Words and Phrases and their Compositionality
TL;DR: This paper presents a simple method for finding phrases in text, shows that learning good vector representations for millions of phrases is possible, and describes a simple alternative to the hierarchical softmax called negative sampling.
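Both ideas are available in the gensim library; a minimal sketch (the toy corpus and parameter values are placeholders, not from the paper):

from gensim.models import Word2Vec
from gensim.models.phrases import Phrases, Phraser

sentences = [["new", "york", "is", "large"],
             ["she", "moved", "to", "new", "york"]]  # toy corpus

# Phrase detection: frequently co-occurring tokens are merged into single
# units such as "new_york" before training.
bigrams = Phraser(Phrases(sentences, min_count=1, threshold=0.1))
phrased = [bigrams[s] for s in sentences]

# Skip-gram with negative sampling (sg=1, negative=5), the alternative to
# the hierarchical softmax described in the paper.
model = Word2Vec(phrased, vector_size=50, window=2, min_count=1,
                 sg=1, negative=5, epochs=50)
print(model.wv.index_to_key)  # vocabulary, including any merged phrases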
Posted Content
Distilling the Knowledge in a Neural Network
TL;DR: This work shows that the acoustic model of a heavily used commercial system can be significantly improved by distilling the knowledge in an ensemble of models into a single model, and introduces a new type of ensemble composed of one or more full models and many specialist models that learn to distinguish fine-grained classes the full models confuse.
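The distillation objective itself is compact; a minimal PyTorch sketch following the temperature-softened cross-entropy described in the paper (the values of T and alpha here are placeholders, not the paper's):

import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    # Soft targets: match the teacher's temperature-softened distribution.
    # The T*T factor keeps gradient magnitudes comparable across temperatures.
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    F.softmax(teacher_logits / T, dim=1),
                    reduction="batchmean") * (T * T)
    # Hard targets: ordinary cross-entropy against the true labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard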
Book Chapter
Visualizing and Understanding Convolutional Networks
Matthew D. Zeiler, Rob Fergus
TL;DR: A novel visualization technique is introduced that gives insight into the function of intermediate feature layers and the operation of the classifier in large Convolutional Network models; used in a diagnostic role, it helps find model architectures that outperform Krizhevsky et al. on the ImageNet classification benchmark.
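Zeiler and Fergus project activations back to pixel space with a deconvnet; reimplementing that is lengthy, so as a simpler gradient-based stand-in, here is a sketch of a vanilla saliency map over a pretrained torchvision model (a random tensor stands in for a preprocessed image):

import torch
from torchvision import models

model = models.vgg16(weights="IMAGENET1K_V1").eval()
x = torch.rand(1, 3, 224, 224, requires_grad=True)  # stand-in for a real image

logits = model(x)
logits[0, logits.argmax()].backward()  # gradient of the top class score

# Per-pixel saliency: max absolute gradient across colour channels.
saliency = x.grad.abs().max(dim=1).values.squeeze(0)
print(saliency.shape)  # torch.Size([224, 224])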
Proceedings Article
"Why Should I Trust You?": Explaining the Predictions of Any Classifier
TL;DR: In this article, the authors propose LIME, a method to explain models by presenting representative individual predictions and their explanations in a non-redundant way, framing the task as a submodular optimization problem.
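A minimal usage sketch with the authors' lime package on tabular data (the dataset and classifier here are placeholders); explain_instance perturbs one example and fits a sparse local surrogate around it:

from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

data = load_iris()
clf = RandomForestClassifier(random_state=0).fit(data.data, data.target)

explainer = LimeTabularExplainer(data.data,
                                 feature_names=data.feature_names,
                                 class_names=list(data.target_names),
                                 mode="classification")
exp = explainer.explain_instance(data.data[0], clf.predict_proba, num_features=2)
print(exp.as_list())  # (feature condition, weight) pairs for this one prediction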
Journal Article
Mastering the game of Go without human knowledge
David Silver, Julian Schrittwieser, Karen Simonyan, Ioannis Antonoglou, Aja Huang, Arthur Guez, Thomas Hubert, Lucas Baker, Matthew Lai, Adrian Bolton, Yutian Chen, Timothy P. Lillicrap, Fan Hui, Laurent Sifre, George van den Driessche, Thore Graepel, Demis Hassabis
TL;DR: An algorithm based solely on reinforcement learning is introduced, without human data, guidance or domain knowledge beyond game rules, that achieves superhuman performance, winning 100–0 against the previously published, champion-defeating AlphaGo.
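The full self-play pipeline is far more than a snippet, but the paper's combined training objective, l = (z − v)² − πᵀ log p + c‖θ‖², is compact; a PyTorch sketch (tensor shapes are assumptions):

import torch
import torch.nn.functional as F

def alphago_zero_loss(value, z, policy_logits, mcts_pi):
    # value: (B,) value-head output; z: (B,) self-play game outcome in {-1, +1}.
    # policy_logits: (B, moves); mcts_pi: (B, moves) MCTS visit-count distribution.
    value_loss = F.mse_loss(value, z)                   # (z - v)^2
    log_p = F.log_softmax(policy_logits, dim=1)
    policy_loss = -(mcts_pi * log_p).sum(dim=1).mean()  # -pi^T log p
    # The c||theta||^2 term is handled by the optimizer's weight decay.
    return value_loss + policy_loss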