Book Chapter

DeepRED – Rule Extraction from Deep Neural Networks

TLDR
A new decompositional algorithm, DeepRED, is introduced that is able to extract comprehensible rules from deep neural networks.
Abstract
Neural network classifiers are known to learn very accurate models. In the recent past, researchers have even been able to train neural networks with multiple hidden layers (deep neural networks) more effectively and efficiently. However, the major downside of neural networks is that it is not trivial to understand how they derive their classification decisions. To address this problem, there has been research on extracting more understandable rules from neural networks. However, most authors focus on networks with only a single hidden layer. The present paper introduces a new decompositional algorithm, DeepRED, that is able to extract rules from deep neural networks.
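
The paper itself details the algorithm; as a rough illustration of the decompositional idea, the sketch below describes a small network layer by layer with decision trees. Everything concrete in it (the toy XOR task, the two-hidden-layer MLP, and sklearn's CART trees standing in for the C4.5 learner that DeepRED builds on) is an assumption of this example, not taken from the paper.

```python
# A minimal sketch of decompositional rule extraction in the spirit of
# DeepRED. Assumptions of this sketch (not from the paper): a toy XOR
# task, a two-hidden-layer MLP, and sklearn's CART trees standing in for
# the C4.5 learner; the full algorithm also merges the per-layer rules
# into one rule set over the inputs, which is omitted here.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(500, 2))
y = ((X[:, 0] > 0) ^ (X[:, 1] > 0)).astype(int)  # XOR-style target

net = MLPClassifier(hidden_layer_sizes=(8, 8), max_iter=2000,
                    random_state=0).fit(X, y)

def hidden_activations(net, X):
    """Forward pass collecting each hidden layer's ReLU activations."""
    acts, a = [], X
    for W, b in zip(net.coefs_[:-1], net.intercepts_[:-1]):
        a = np.maximum(a @ W + b, 0.0)  # default MLP hidden activation
        acts.append(a)
    return acts

h1, h2 = hidden_activations(net, X)

# Step 1: explain the net's predictions in terms of the last hidden layer.
top = DecisionTreeClassifier(max_depth=3, random_state=0)
top.fit(h2, net.predict(X))
print(export_text(top, feature_names=[f"h2_{i}" for i in range(h2.shape[1])]))

# Step 2: take one split condition from that tree and explain it in terms
# of the layer below; DeepRED repeats this for every condition on every
# layer and then substitutes the per-layer rules together down to the inputs.
neuron = top.tree_.feature[0]        # neuron tested at the root split
threshold = top.tree_.threshold[0]   # its split threshold
cond = (h2[:, neuron] > threshold).astype(int)
mid = DecisionTreeClassifier(max_depth=3, random_state=0).fit(h1, cond)
print(export_text(mid, feature_names=[f"h1_{i}" for i in range(h1.shape[1])]))
```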

Citations
Posted Content

Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI.

TL;DR: Previous efforts to define explainability in Machine Learning are summarized, a novel definition is established that covers prior conceptual propositions with a major focus on the audience for which explainability is sought, and a taxonomy of recent contributions related to the explainability of different Machine Learning models is proposed.
Proceedings Article

Explaining Explanations: An Overview of Interpretability of Machine Learning

TL;DR: In an effort to create best practices and identify open challenges, the authors describe foundational concepts of explainability and show how they can be used to classify existing literature, and discuss why current approaches to explanatory methods, especially for deep neural networks, are insufficient.
Proceedings Article

Trends and Trajectories for Explainable, Accountable and Intelligible Systems: An HCI Research Agenda

TL;DR: This work investigates how HCI researchers can help to develop accountable systems by performing a literature analysis of 289 core papers on explanations and explainable systems, as well as 12,412 citing papers.
References
Journal Article

Gradient-based learning applied to document recognition

TL;DR: In this article, a graph transformer network (GTN) is proposed for handwritten character recognition; it can be used to synthesize a complex decision surface that classifies high-dimensional patterns such as handwritten characters.
Book

C4.5: Programs for Machine Learning

TL;DR: A complete guide to the C4.5 system as implemented in C for the UNIX environment; it starts from simple core learning methods and shows how they can be elaborated and extended to deal with typical problems such as missing data and overfitting.
Journal Article

Survey and critique of techniques for extracting rules from trained artificial neural networks

TL;DR: This survey focuses on mechanisms, procedures, and algorithms designed to insert knowledge into ANNs, extract rules from trained ANNs (rule extraction), and utilise ANNs to refine existing rule bases (rule refinement).
Proceedings Article

Extracting Tree-Structured Representations of Trained Networks

TL;DR: This work presents a novel algorithm, TREPAN, for extracting comprehensible, symbolic representations from trained neural networks, which is general in its applicability and scales well to large networks and problems with high-dimensional input spaces.
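
For contrast with DeepRED's decompositional approach, TREPAN is pedagogical: it treats the trained network purely as an oracle over the inputs. A minimal sketch of that idea follows; the toy task, net, and tree settings are assumptions of this example, and TREPAN's per-node adaptive sampling and m-of-n splits are omitted.

```python
# Pedagogical extraction in the spirit of TREPAN: label fresh samples with
# the trained net (the "oracle") and grow one tree over the raw inputs.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(300, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(int)
net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                    random_state=1).fit(X, y)

# Query the oracle on many fresh inputs so the tree mimics the net itself,
# not just the original training labels.
X_query = rng.uniform(-1, 1, size=(5000, 2))
tree = DecisionTreeClassifier(max_depth=4, random_state=1)
tree.fit(X_query, net.predict(X_query))
print(export_text(tree, feature_names=["x0", "x1"]))
```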
Journal Article

Extracting Refined Rules from Knowledge-Based Neural Networks

TL;DR: This article proposes and empirically evaluates a method for the final, and possibly most difficult, step of the refinement of existing knowledge and demonstrates that neural networks can be used to effectively refine symbolic knowledge.