Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
Amina Adadi, Mohammed Berrada
TLDR
This survey provides an entry point for interested researchers and practitioners to learn key aspects of the young and rapidly growing body of research related to XAI: it reviews the existing approaches to the topic, discusses trends surrounding its sphere, and presents major research trajectories.
Abstract:
At the dawn of the fourth industrial revolution, we are witnessing a fast and widespread adoption of artificial intelligence (AI) in our daily lives, which contributes to accelerating the shift towards a more algorithmic society. However, even with such unprecedented advancements, a key impediment to the use of AI-based systems is that they often lack transparency. Indeed, the black-box nature of these systems allows for powerful predictions, but they cannot be directly explained. This issue has triggered a new debate on explainable AI (XAI), a research field that holds substantial promise for improving the trust in and transparency of AI-based systems, and that is recognized as the sine qua non for AI to continue making steady progress without disruption. This survey provides an entry point for interested researchers and practitioners to learn key aspects of the young and rapidly growing body of research related to XAI. Through the lens of the literature, we review the existing approaches to the topic, discuss the trends surrounding its sphere, and present major research trajectories.
Citations
Journal Article
Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI
Alejandro Barredo Arrieta, Natalia Díaz-Rodríguez, Javier Del Ser, Adrien Bennetot, Siham Tabik, Alberto Barbado, Salvador García, Sergio Gil-Lopez, Daniel Molina, Richard Benjamins, Raja Chatila, Francisco Herrera
TL;DR: This paper presents a taxonomy of recent contributions related to the explainability of different machine learning models; a second, dedicated taxonomy of methods aimed at explaining Deep Learning models is built and examined in detail.
Posted Content
Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI.
Alejandro Barredo Arrieta, Natalia Díaz-Rodríguez, Javier Del Ser, Adrien Bennetot, Siham Tabik, Alberto Barbado, Salvador García, Sergio Gil-Lopez, Daniel Molina, Richard Benjamins, Raja Chatila, Francisco Herrera
TL;DR: Previous efforts to define explainability in Machine Learning are summarized, a novel definition is established that covers prior conceptual propositions with a major focus on the audience for which explainability is sought, and a taxonomy of recent contributions related to the explainability of different Machine Learning models is proposed.
Journal Article
Machine Learning Interpretability: A Survey on Methods and Metrics
TL;DR: A review of the current state of the research field on machine learning interpretability while focusing on the societal impact and on the developed methods and metrics is provided.
Journal Article
Artificial Intelligence (AI): Multidisciplinary perspectives on emerging challenges, opportunities, and agenda for research, practice and policy
Yogesh K. Dwivedi, Laurie Hughes, Elvira Ismagilova, Gert Aarts, Crispin Coombs, Tom Crick, Yanqing Duan, Rohita Dwivedi, John S. Edwards, Aled Eirug, Vassilis Galanos, P. Vigneswara Ilavarasan, Marijn Janssen, Paul Jones, Arpan Kumar Kar, Hatice Kizgin, Bianca Kronemann, Banita Lal, Biagio Lucini, Rony Medaglia, Kenneth Le Meunier-FitzHugh, Leslie Caroline Le Meunier-FitzHugh, Santosh K. Misra, Emmanuel Mogaji, Sujeet Kumar Sharma, Jang Bahadur Singh, Vishnupriya Raghavan, Ramakrishnan Raman, Nripendra P. Rana, Spyridon Samothrakis, Jak Spencer, Kuttimani Tamilmani, Annie Tubadji, Paul Walton, Michael D. Williams
TL;DR: This research offers significant and timely insight into AI technology and its impact on the future of industry and society in general, whilst recognising the societal and industrial influences on the pace and direction of AI development.
Journal Article
Explainable AI: A Review of Machine Learning Interpretability Methods
TL;DR: This paper presents a literature review and taxonomy of machine learning interpretability methods, together with links to their programming implementations, in the hope that the survey can serve as a reference point for both theorists and practitioners.
References
Posted Content
Rule Extraction Algorithm for Deep Neural Networks: A Review
TL;DR: This paper thoroughly reviews various rule extraction algorithms under the classification scheme of decompositional, pedagogical, and eclectic approaches, and evaluates these algorithms based on the neural network structure with which each algorithm is intended to work.
Posted Content
Interpreting Deep Classifier by Visual Distillation of Dark Knowledge
TL;DR: This paper presents DarkSight, which visually summarizes the predictions of a classifier in a way inspired by the notion of dark knowledge, and yields a new dark-knowledge-based confidence measure by quantifying how unusual a given vector of predictions is.
Posted Content
Explain Yourself: A Natural Language Interface for Scrutable Autonomous Robots
Francisco J. Chiyah Garcia, David A. Robb, Xingkun Liu, Atanas Laskov, Pedro Patron, Helen Hastie
TL;DR: A natural language chat interface enables vehicle behaviour to be queried by the user; an interpretable model of autonomy is obtained by having an expert 'speak out loud' and provide explanations during a mission.
Journal Article
Artificial super intelligence: beyond rhetoric
TL;DR: Questions raised during a recent symposium on "Technological Displacement of White-collar Employment: Political and Social Implications" held at Cambridge University are seen as part of the wider debate on AI and existential risk, autonomous robots, Big Data and the Internet of Things.
Proceedings Article
The role of emotion in self-explanations by cognitive agents
TL;DR: It is argued that emotions simulated on the basis of cognitive appraisal theory enable agents to explain those emotions, using them as a heuristic to identify the beliefs and desires important to an explanation, and supporting the use of emotion words in the explanations themselves.