Open Access Proceedings Article
Anchors: High-Precision Model-Agnostic Explanations
Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin
Vol. 32, Iss. 1, pp. 1527-1535
TL;DR: This work introduces a novel model-agnostic system that explains the behavior of complex models with high-precision rules called anchors, representing local, "sufficient" conditions for predictions, and proposes an algorithm to efficiently compute these explanations for any black-box model with high-probability guarantees.

Abstract: We introduce a novel model-agnostic system that explains the behavior of complex models with high-precision rules called anchors, representing local, "sufficient" conditions for predictions. We propose an algorithm to efficiently compute these explanations for any black-box model with high-probability guarantees. We demonstrate the flexibility of anchors by explaining a myriad of different models for different domains and tasks. In a user study, we show that anchors enable users to predict how a model would behave on unseen instances with less effort and higher precision, as compared to existing linear explanations or no explanations.
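The abstract's core notion of "precision" can be illustrated with a toy sketch: an anchor is a set of feature conditions, and its precision is the fraction of perturbed inputs that keep the model's prediction when those conditions are held fixed. Everything below (the `black_box` classifier, the sampling range, the function names) is a hypothetical illustration, not the paper's actual algorithm, which adds a bandit-based search to certify precision with high-probability guarantees.

```python
import random

def black_box(x):
    # Hypothetical black-box classifier: predicts 1 iff features 0 and 2 are both positive.
    return int(x[0] > 0 and x[2] > 0)

def anchor_precision(instance, anchor_idxs, predict, n_samples=1000, seed=0):
    """Estimate the precision of a candidate anchor: the fraction of perturbed
    inputs that keep the model's prediction when the features in anchor_idxs
    are pinned to the instance's values."""
    rng = random.Random(seed)
    target = predict(instance)
    hits = 0
    for _ in range(n_samples):
        sample = [rng.uniform(-1, 1) for _ in instance]  # perturb every feature...
        for i in anchor_idxs:                            # ...then pin the anchor features
            sample[i] = instance[i]
        hits += predict(sample) == target
    return hits / n_samples

x = [0.5, -0.3, 0.8, 0.1]
print(anchor_precision(x, {0, 2}, black_box))  # high: {x0 > 0, x2 > 0} suffices locally
print(anchor_precision(x, {1}, black_box))     # low: pinning x1 alone does not fix the prediction
```

The paper's search problem is then to find a short anchor whose estimated precision exceeds a threshold, spending as few model queries as possible.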
Citations
Journal Article
Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI
Alejandro Barredo Arrieta, Natalia Díaz-Rodríguez, Javier Del Ser, Adrien Bennetot, Siham Tabik, Alberto Barbado, Salvador García, Sergio Gil-Lopez, Daniel Molina, Richard Benjamins, Raja Chatila, Francisco Herrera
TL;DR: In this paper, a taxonomy of recent contributions related to explainability of different machine learning models, including those aimed at explaining Deep Learning methods, is presented, and a second dedicated taxonomy is built and examined in detail.
Journal Article
A Survey of Methods for Explaining Black Box Models
Riccardo Guidotti, Anna Monreale, Salvatore Ruggieri, Franco Turini, Fosca Giannotti, Dino Pedreschi
TL;DR: In this paper, the authors provide a classification of the main problems addressed in the literature with respect to the notion of explanation and the type of black-box decision support system. Given a problem definition, a black-box type, and a desired explanation, this survey should help researchers find the proposals most useful for their own work.
Journal Article
From Local Explanations to Global Understanding with Explainable AI for Trees.
Scott M. Lundberg, Gabriel G. Erion, Hugh Chen, Alex J. DeGrave, Jordan M. Prutkin, Bala G. Nair, Ronit Katz, Jonathan Himmelfarb, Nisha Bansal, Su-In Lee
TL;DR: An explanation method for trees is presented that enables the computation of optimal local explanations for individual predictions, and the authors demonstrate their method on three medical datasets.
Journal Article
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
Amina Adadi, Mohammed Berrada
TL;DR: This survey provides an entry point for interested researchers and practitioners to learn key aspects of the young and rapidly growing body of research related to XAI; it reviews the existing approaches on the topic, discusses surrounding trends, and presents major research trajectories.
Posted Content
Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI.
Alejandro Barredo Arrieta, Natalia Díaz-Rodríguez, Javier Del Ser, Adrien Bennetot, Siham Tabik, Alberto Barbado, Salvador García, Sergio Gil-Lopez, Daniel Molina, Richard Benjamins, Raja Chatila, Francisco Herrera
TL;DR: Previous efforts to define explainability in Machine Learning are summarized, establishing a novel definition that covers prior conceptual propositions with a major focus on the audience for which explainability is sought, and a taxonomy of recent contributions related to the explainability of different Machine Learning models are proposed.
References
Book
Elements of information theory
Thomas M. Cover, Joy A. Thomas
TL;DR: The authors examine the role of entropy, inequality, and randomness in the design and construction of codes.
Proceedings Article
Going deeper with convolutions
Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, Andrew Rabinovich
TL;DR: This paper presents Inception, a deep convolutional neural network architecture that achieved a new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14).
Proceedings Article
GloVe: Global Vectors for Word Representation
Jeffrey Pennington, Richard Socher, Christopher D. Manning
TL;DR: A new global log-bilinear regression model that combines the advantages of the two major model families in the literature, global matrix factorization and local context window methods, and produces a vector space with meaningful substructure.
Proceedings Article
"Why Should I Trust You?": Explaining the Predictions of Any Classifier
Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin
TL;DR: In this article, the authors propose LIME, a method to explain models by presenting representative individual predictions and their explanations in a non-redundant way, framing the task as a submodular optimization problem.
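The local-surrogate idea summarized above can be made concrete with a minimal sketch: perturb the instance, weight the samples by proximity, and fit a weighted linear model whose coefficients serve as the explanation. The `black_box` function, kernel, and all parameters below are illustrative assumptions, not the actual LIME implementation, which additionally works over interpretable binary representations and uses submodular pick to select representative instances.

```python
import math
import random

def black_box(x):
    # Hypothetical black-box: a hard decision on the linear boundary x0 + 2*x1 = 0.
    return 1.0 if x[0] + 2 * x[1] > 0 else 0.0

def lime_weights(instance, predict, n_samples=300, width=1.0, lr=0.2, steps=500, seed=0):
    """Fit a local weighted linear surrogate around `instance`:
    sample perturbations, weight them by an exponential proximity kernel,
    and minimize weighted squared error by gradient descent."""
    rng = random.Random(seed)
    d = len(instance)
    X, y, w = [], [], []
    for _ in range(n_samples):
        z = [xi + rng.gauss(0, 0.5) for xi in instance]          # perturb the instance
        dist2 = sum((a - b) ** 2 for a, b in zip(z, instance))
        X.append(z)
        y.append(predict(z))
        w.append(math.exp(-dist2 / width ** 2))                  # proximity weight
    coef, bias = [0.0] * d, 0.0
    total = sum(w)
    for _ in range(steps):
        gb, gc = 0.0, [0.0] * d
        for zi, yi, wi in zip(X, y, w):
            err = bias + sum(c * v for c, v in zip(coef, zi)) - yi
            gb += wi * err
            for j in range(d):
                gc[j] += wi * err * zi[j]
        bias -= lr * gb / total
        for j in range(d):
            coef[j] -= lr * gc[j] / total
    return coef

coefs = lime_weights([0.2, 0.3], black_box)
# Locally, feature 1 should receive roughly twice the weight of feature 0,
# mirroring the boundary x0 + 2*x1 = 0 that the black box uses.
```

The surrogate's coefficients are the explanation: they describe which features push the prediction near this instance, even though the black box itself is never inspected.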
Journal Article
Multi-label classification: An overview
Grigorios Tsoumakas, Ioannis Katakis
TL;DR: The task of multi-label classification is introduced, the sparse related literature is organized into a structured presentation, and comparative experimental results for certain multi-label classification methods are reported.