Open Access Proceedings Article

GNNExplainer: Generating Explanations for Graph Neural Networks

TLDR
GNNExplainer, as described in this paper, identifies a compact subgraph structure and a small subset of node features that play a crucial role in a GNN's prediction, and it can generate consistent and concise explanations for an entire class of instances.
Abstract
Graph Neural Networks (GNNs) are a powerful tool for machine learning on graphs. GNNs combine node feature information with the graph structure by recursively passing neural messages along edges of the input graph. However, incorporating both graph structure and feature information leads to complex models, and explaining predictions made by GNNs remains unsolved. Here we propose GnnExplainer, the first general, model-agnostic approach for providing interpretable explanations for predictions of any GNN-based model on any graph-based machine learning task. Given an instance, GnnExplainer identifies a compact subgraph structure and a small subset of node features that play a crucial role in the GNN's prediction. Further, GnnExplainer can generate consistent and concise explanations for an entire class of instances. We formulate GnnExplainer as an optimization task that maximizes the mutual information between a GNN's prediction and the distribution of possible subgraph structures. Experiments on synthetic and real-world graphs show that our approach can identify important graph structures as well as node features, and outperforms alternative baseline approaches by up to 43.0% in explanation accuracy. GnnExplainer provides a variety of benefits, from the ability to visualize semantically relevant structures, to interpretability, to giving insights into errors of faulty GNNs.
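As a rough sketch of the optimization described above, the explanation can be found by learning a soft mask over edges with gradient descent. The snippet below (PyTorch) assumes a trained GNN with the hypothetical signature model(x, edge_index, edge_weight) that accepts per-edge weights; following the paper's formulation, maximizing mutual information is approximated by minimizing the cross-entropy of the masked prediction, with sparsity and entropy regularizers encouraging a compact, near-binary mask.

import torch
import torch.nn.functional as F

def explain_node(model, x, edge_index, node_idx, target_class,
                 epochs=100, lr=0.01, size_coef=0.005, ent_coef=1.0):
    # One learnable logit per edge; a sigmoid maps it into a soft mask in (0, 1).
    mask_logits = torch.randn(edge_index.size(1), requires_grad=True)
    optimizer = torch.optim.Adam([mask_logits], lr=lr)
    for _ in range(epochs):
        optimizer.zero_grad()
        mask = torch.sigmoid(mask_logits)
        # Hypothetical model signature: per-node logits under the masked graph.
        log_probs = F.log_softmax(model(x, edge_index, edge_weight=mask), dim=-1)
        pred_loss = -log_probs[node_idx, target_class]  # keep the original prediction likely
        size_loss = size_coef * mask.sum()              # prefer compact subgraphs
        mask_ent = -(mask * torch.log(mask + 1e-12)
                     + (1 - mask) * torch.log(1 - mask + 1e-12))
        (pred_loss + size_loss + ent_coef * mask_ent.mean()).backward()
        optimizer.step()
    return torch.sigmoid(mask_logits).detach()          # per-edge importance scores

Edges with high scores form the explanation subgraph; the same masking idea extends to node features, as the abstract notes.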



Citations
Journal ArticleDOI

Explaining Deep Neural Networks and Beyond: A Review of Methods and Applications

TL;DR: The authors provide a timely overview of explainable AI, with a focus on post-hoc explanations, explain its theoretical foundations, put interpretability algorithms to the test from both a theoretical and a comparative-evaluation perspective using extensive simulations, and demonstrate successful usage of XAI in a representative selection of application scenarios.
Journal ArticleDOI

Drug discovery with explainable artificial intelligence

TL;DR: This article reviews the most prominent algorithmic concepts of explainable artificial intelligence and forecasts future opportunities, potential applications, and remaining challenges; the review is limited to the use of deep learning in drug discovery.
Posted Content

Network Medicine Framework for Identifying Drug Repurposing Opportunities for COVID-19

TL;DR: Three network-based drug repurposing strategies, relying on network proximity, diffusion, and AI-based metrics, are deployed to rank all approved drugs by their likely efficacy for COVID-19 patients; aggregating all predictions yields 81 promising repurposing candidates.
Posted Content

Drug discovery with explainable artificial intelligence

TL;DR: This review summarizes the most prominent algorithmic concepts of explainable artificial intelligence and offers a forecast of future opportunities, potential applications, and remaining challenges.
References

Proceedings Article

Auto-Encoding Variational Bayes

TL;DR: Introduces a stochastic variational inference and learning algorithm that scales to large datasets and, under some mild differentiability conditions, works even in the intractable case.
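For context, the key algorithmic move behind this reference is the reparameterization trick, which keeps the sampling step differentiable so the stochastic objective can be trained with gradient descent. A minimal, illustrative PyTorch sketch of the Gaussian case (not the paper's code):

import torch

def reparameterize(mu: torch.Tensor, log_var: torch.Tensor) -> torch.Tensor:
    std = torch.exp(0.5 * log_var)   # sigma recovered from the log-variance
    eps = torch.randn_like(std)      # noise drawn from N(0, I)
    return mu + std * eps            # differentiable sample of the latent z

def kl_divergence(mu: torch.Tensor, log_var: torch.Tensor) -> torch.Tensor:
    # Closed-form KL(q(z|x) || N(0, I)) for the Gaussian case used in the paper.
    return -0.5 * torch.sum(1 + log_var - mu.pow(2) - log_var.exp())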
Posted Content

Semi-Supervised Classification with Graph Convolutional Networks

TL;DR: A scalable approach to semi-supervised learning on graph-structured data, based on an efficient variant of convolutional neural networks that operate directly on graphs, which outperforms related methods by a significant margin.
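The layer-wise propagation rule behind this reference, H' = sigma(D^-1/2 (A + I) D^-1/2 H W), can be sketched in a few lines. The dense implementation below is illustrative only; names are chosen here for clarity:

import torch

def gcn_layer(adj: torch.Tensor, h: torch.Tensor, weight: torch.Tensor) -> torch.Tensor:
    a_hat = adj + torch.eye(adj.size(0))                  # add self-loops: A + I
    d_inv_sqrt = torch.diag(a_hat.sum(dim=1).pow(-0.5))   # D^-1/2
    a_norm = d_inv_sqrt @ a_hat @ d_inv_sqrt              # symmetric normalization
    return torch.relu(a_norm @ h @ weight)                # propagate, transform, activate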
Book ChapterDOI

Visualizing and Understanding Convolutional Networks

TL;DR: A novel visualization technique that gives insight into the function of intermediate feature layers and the operation of the classifier in large convolutional network models; used in a diagnostic role, it finds model architectures that outperform Krizhevsky et al. on the ImageNet classification benchmark.
Proceedings ArticleDOI

"Why Should I Trust You?": Explaining the Predictions of Any Classifier

TL;DR: In this article, the authors propose LIME, a method that explains the predictions of any classifier by presenting representative individual predictions and their explanations in a non-redundant way, framing the selection task as a submodular optimization problem.