Journal ArticleDOI

What makes classification trees comprehensible

TLDR
This paper systematically investigates how tree structure parameters (the number of leaves, branching factor, tree depth) and visualisation properties influence the tree comprehensibility and proposes two new comprehensibility metrics that consider the semantics of the tree in addition to the tree structure itself.
Highlights

In-depth survey for an empirical study of classification-tree comprehensibility.
Objective measurements identify the most influential parameter: the depth of leaves.
The number of leaves is a relevant comprehensibility measure only for complex trees.
Tree visualisation style and layout significantly influence comprehensibility.
Two new comprehensibility measures are proposed that consider both the semantics and the structure of the tree.

Abstract

Classification trees are attractive for practical applications because of their comprehensibility. However, the literature on the parameters that influence their comprehensibility and usability is scarce. This paper systematically investigates how tree-structure parameters (the number of leaves, the branching factor, the tree depth) and visualisation properties influence tree comprehensibility. In addition, we analyse the influence of the question depth (the depth of the deepest leaf required when answering a question about a classification tree), which turns out to be the most important parameter, even though it is usually overlooked. The analysis is based on empirical data obtained from a carefully designed survey with 98 questions answered by 69 respondents. The paper evaluates several tree-comprehensibility metrics and proposes two new metrics (the weighted sum of the depths of leaves and the weighted sum of the branching factors on the paths from the root to the leaves) that are supported by the survey results. The main advantage of the new metrics is that they consider the semantics of the tree in addition to the tree structure itself.
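The two proposed metrics can be sketched in code. The exact weighting scheme is defined in the paper; the sketch below assumes, for illustration only, that each leaf's weight is the fraction of cases it classifies, and the `Node` class is a hypothetical minimal tree representation.

```python
class Node:
    """Minimal classification-tree node: an empty children list marks a leaf."""
    def __init__(self, children=None, weight=0.0):
        self.children = children or []   # internal node if non-empty
        self.weight = weight             # leaf weight (assumed: fraction of cases)

def weighted_leaf_depth(node, depth=0):
    """Weighted sum of the depths of leaves."""
    if not node.children:
        return node.weight * depth
    return sum(weighted_leaf_depth(c, depth + 1) for c in node.children)

def weighted_path_branching(node, acc=0):
    """Weighted sum of the branching factors on the root-to-leaf paths."""
    if not node.children:
        return node.weight * acc
    b = len(node.children)  # branching factor at this node
    return sum(weighted_path_branching(c, acc + b) for c in node.children)

# Example tree: root with two children; the left child is a leaf (weight 0.4),
# the right child splits into three leaves (weight 0.2 each).
tree = Node([
    Node(weight=0.4),
    Node([Node(weight=0.2), Node(weight=0.2), Node(weight=0.2)]),
])

print(weighted_leaf_depth(tree))      # 0.4*1 + 3*(0.2*2) = 1.6
print(weighted_path_branching(tree))  # 0.4*2 + 3*(0.2*(2+3)) = 3.8
```

Because the weights come from the data that the tree classifies, both metrics reflect the semantics of the tree, not just its shape: rarely used deep branches contribute little.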


Citations
Journal ArticleDOI

Interpretability of machine learning‐based prediction models in healthcare

TL;DR: In this article, the authors give an overview of interpretability approaches and provide examples of practical interpretability of machine learning in different areas of healthcare, including prediction of health-related outcomes, optimizing treatments or improving the efficiency of screening for specific conditions.
Journal ArticleDOI

A historical perspective of explainable Artificial Intelligence

TL;DR: A historical perspective of explainability in AI is presented and criteria for explanations are proposed that are believed to play a crucial role in the development of human‐understandable explainable systems.
Book ChapterDOI

Perturbation-Based Explanations of Prediction Models

TL;DR: Practical issues and challenges in applying the explanation methodology in a business context are illustrated on a practical use case of B2B sales forecasting in a company and it is demonstrated how explanations can be used as a what-if analysis tool to answer relevant business questions.
Journal ArticleDOI

The Pragmatic Turn in Explainable Artificial Intelligence (XAI)

TL;DR: In this paper, the authors argue that the search for explainable models and interpretable decisions in AI must be reformulated in terms of the broader project of offering a pragmatic and naturalistic account of understanding in AI.
Journal ArticleDOI

Opening the Black Box: The Promise and Limitations of Explainable Machine Learning in Cardiology

TL;DR: In this paper, a review of explainable machine learning techniques for cardiology is presented, focusing on how the nature of explanations as approximations may omit important information about how black-box models work and why they make certain predictions.
References
Book

C4.5: Programs for Machine Learning

TL;DR: A complete guide to the C4.5 system as implemented in C for the UNIX environment, which starts from simple core learning methods and shows how they can be elaborated and extended to deal with typical problems such as missing data and overfitting.
Journal ArticleDOI

The WEKA data mining software: an update

TL;DR: This paper provides an introduction to the WEKA workbench, reviews the history of the project, and, in light of the recent 3.6 stable release, briefly discusses what has been added since the last stable version (Weka 3.4) released in 2003.
Journal ArticleDOI

Cognitive load during problem solving: Effects on learning

TL;DR: It is suggested that a major reason for the ineffectiveness of problem solving as a learning device is that the cognitive processes required by the two activities overlap insufficiently, and that conventional problem solving in the form of means-ends analysis requires a relatively large amount of cognitive processing capacity, which is consequently unavailable for schema acquisition.
Book

Data Mining and Knowledge Discovery Handbook

Oded Maimon, +1 more
TL;DR: This book first surveys, then provides comprehensive yet concise algorithmic descriptions of methods, including classic methods plus the extensions and novel methods developed recently.