Open Access Journal Article (DOI)

Definitions, methods, and applications in interpretable machine learning.

TLDR
The authors define interpretability in the context of machine learning and introduce the predictive, descriptive, relevant (PDR) framework for discussing interpretations, with three overarching desiderata for evaluation: predictive accuracy, descriptive accuracy, and relevancy, where relevancy is judged relative to a human audience.
Abstract
Machine-learning models have demonstrated great success in learning complex patterns that enable them to make predictions about unobserved data. In addition to using models for prediction, the ability to interpret what a model has learned is receiving an increasing amount of attention. However, this increased focus has led to considerable confusion about the notion of interpretability. In particular, it is unclear how the wide array of proposed interpretation methods are related and what common concepts can be used to evaluate them. We aim to address these concerns by defining interpretability in the context of machine learning and introducing the predictive, descriptive, relevant (PDR) framework for discussing interpretations. The PDR framework provides 3 overarching desiderata for evaluation: predictive accuracy, descriptive accuracy, and relevancy, with relevancy judged relative to a human audience. Moreover, to help manage the deluge of interpretation methods, we introduce a categorization of existing techniques into model-based and post hoc categories, with subgroups including sparsity, modularity, and simulatability. To demonstrate how practitioners can use the PDR framework to evaluate and understand interpretations, we provide numerous real-world examples. These examples highlight the often underappreciated role played by human audiences in discussions of interpretability. Finally, based on our framework, we discuss limitations of existing methods and directions for future work. We hope that this work will provide a common vocabulary that will make it easier for both practitioners and researchers to discuss and choose from the full range of interpretation methods.
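The abstract's distinction between model-based and post hoc interpretation can be made concrete: a sparse linear model is interpretable by reading its coefficients directly, while a post hoc method such as permutation importance probes any fitted model from the outside. The following is a minimal, self-contained sketch on synthetic data; all names and values are illustrative, and the "fitted" coefficients are assumed rather than estimated.

```python
import random

random.seed(0)

# Synthetic data: y depends strongly on x0, weakly on x1, not at all on x2.
n = 200
X = [[random.gauss(0, 1) for _ in range(3)] for _ in range(n)]
y = [3.0 * x[0] + 0.5 * x[1] + random.gauss(0, 0.1) for x in X]

# A transparent linear model (coefficients assumed known for this sketch):
# its weights can be read off directly -- a model-based interpretation.
coef = [3.0, 0.5, 0.0]

def predict(x):
    return sum(c * v for c, v in zip(coef, x))

def mse(X, y):
    return sum((predict(x) - t) ** 2 for x, t in zip(X, y)) / len(y)

def permutation_importance(X, y, j):
    """Post hoc: how much does the error grow when feature j is shuffled?"""
    col = [x[j] for x in X]
    random.shuffle(col)
    X_perm = [x[:j] + [v] + x[j + 1:] for x, v in zip(X, col)]
    return mse(X_perm, y) - mse(X, y)

importances = [permutation_importance(X, y, j) for j in range(3)]
print(importances)  # x0 dominates; x2 contributes exactly nothing
```

Because the model ignores x2, its permutation importance is exactly zero; the same post hoc probe would work unchanged on a black-box model whose coefficients cannot be read.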


Citations
Posted Content

Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI.

TL;DR: Previous efforts to define explainability in Machine Learning are summarized, a novel definition is established that covers prior conceptual propositions with a major focus on the audience for which explainability is sought, and a taxonomy of recent contributions related to the explainability of different Machine Learning models is proposed.
Journal ArticleDOI

Explainable AI: A Review of Machine Learning Interpretability Methods

TL;DR: In this paper, a literature review and taxonomy of machine learning interpretability methods are presented, along with links to their programming implementations, in the hope that the survey will serve as a reference point for both theorists and practitioners.
Journal ArticleDOI

Explainable Machine Learning for Scientific Insights and Discoveries

TL;DR: In this paper, the authors survey recent scientific works that incorporate machine learning and examine how explainable machine learning is used in combination with domain knowledge from the application areas.
Journal ArticleDOI

A Survey on the Explainability of Supervised Machine Learning

TL;DR: This survey paper provides essential definitions, an overview of the different principles and methodologies of explainable Supervised Machine Learning, and a state-of-the-art survey that reviews past and recent explainable SML approaches and classifies them according to the introduced definitions.
Journal ArticleDOI

Drug discovery with explainable artificial intelligence

TL;DR: This article reviews the most prominent algorithmic concepts of explainable artificial intelligence and forecasts future opportunities, potential applications, and several remaining challenges; the review focuses on the use of deep learning for drug discovery.
References
Journal ArticleDOI

Random Forests

TL;DR: Internal estimates monitor error, strength, and correlation; these are used to show the response to increasing the number of features used in the forest, and they also apply to regression.
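The "internal estimates" referred to here are the out-of-bag (OOB) error estimates that bootstrap aggregation makes possible: each bootstrap sample leaves out roughly a third of the data, and those held-out points score the model trained without them. The following is a minimal sketch of the OOB bookkeeping, with a trivial mean predictor standing in for a decision tree; data and names are illustrative.

```python
import random

random.seed(1)
y = [random.gauss(5.0, 1.0) for _ in range(100)]

n_models = 200
oob_preds = [[] for _ in y]  # predictions from models that did NOT see example i

for _ in range(n_models):
    idx = [random.randrange(len(y)) for _ in range(len(y))]  # bootstrap sample
    fit = sum(y[i] for i in idx) / len(idx)                  # "train": the sample mean
    in_bag = set(idx)
    for i in range(len(y)):
        if i not in in_bag:          # example i is out-of-bag for this model
            oob_preds[i].append(fit)

# Aggregate each example's OOB predictions, then score against the truth.
oob_error = sum(
    (sum(p) / len(p) - t) ** 2 for p, t in zip(oob_preds, y) if p
) / sum(1 for p in oob_preds if p)
print(round(oob_error, 3))
```

The resulting OOB error approximates the test error of the base predictor (here, close to the variance of y) without ever setting aside a separate validation split, which is the mechanism the TL;DR alludes to.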
Journal ArticleDOI

Regression Shrinkage and Selection via the Lasso

TL;DR: A new method for estimation in linear models called the lasso, which minimizes the residual sum of squares subject to the sum of the absolute values of the coefficients being less than a constant, is proposed.
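The constrained formulation described above has a closed-form solution in one special case that makes the lasso's behavior easy to see: under an orthonormal design, each least-squares coefficient is simply soft-thresholded toward zero. The following is a minimal sketch under that assumption; the coefficient values are illustrative.

```python
def soft_threshold(b, lam):
    """Lasso solution for one coefficient under an orthonormal design:
    shrink the least-squares estimate b toward zero by lam, truncating at 0."""
    if b > lam:
        return b - lam
    if b < -lam:
        return b + lam
    return 0.0

# Illustrative least-squares coefficients (assumed values) and penalty level:
ols = [2.5, -0.3, 0.8, -1.7]
lam = 1.0
lasso = [soft_threshold(b, lam) for b in ols]
print([round(b, 2) for b in lasso])  # [1.5, 0.0, 0.0, -0.7]
```

The truncation at zero is what gives the lasso its selection property: small coefficients are set exactly to zero rather than merely shrunk, yielding the sparse, interpretable models the PDR framework groups under model-based methods.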
Proceedings ArticleDOI

Histograms of oriented gradients for human detection

TL;DR: It is shown experimentally that grids of histograms of oriented gradient (HOG) descriptors significantly outperform existing feature sets for human detection, and the influence of each stage of the computation on performance is studied.
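The core computation behind a HOG descriptor is a magnitude-weighted histogram of gradient orientations over a small image cell. The following strips that step down to a minimal sketch; a real HOG pipeline adds a grid of cells, block normalization, and a sliding-window detector, and the function name here is hypothetical.

```python
import math

def hog_cell_histogram(cell, n_bins=9):
    """cell: 2-D list of grayscale values. Returns an unsigned-orientation
    (0..180 degree) histogram of gradient directions, weighted by magnitude."""
    h, w = len(cell), len(cell[0])
    hist = [0.0] * n_bins
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            gx = cell[i][j + 1] - cell[i][j - 1]   # central differences
            gy = cell[i + 1][j] - cell[i - 1][j]
            mag = math.hypot(gx, gy)
            ang = math.degrees(math.atan2(gy, gx)) % 180.0  # unsigned orientation
            hist[int(ang // (180.0 / n_bins)) % n_bins] += mag
    return hist

# A vertical edge: left half dark, right half bright -> purely horizontal
# gradients, so all histogram mass falls in the 0-degree bin.
cell = [[0.0] * 4 + [1.0] * 4 for _ in range(8)]
hist = hog_cell_histogram(cell)
print(hist.index(max(hist)))  # 0
```

Concatenating such histograms over a grid of cells, with local contrast normalization, yields the feature vector that the paper shows outperforming earlier feature sets for human detection.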
Book

ggplot2: Elegant Graphics for Data Analysis

TL;DR: This book describes ggplot2, a data visualization package for R that uses the insights from Leland Wilkinson's Grammar of Graphics to create a powerful and flexible system for creating data graphics.