Open Access Proceedings Article

Graph Structure of Neural Networks

TLDR
In this paper, the authors develop a graph-based representation of neural networks called a relational graph, in which layers of neural network computation correspond to rounds of message exchange along the graph structure. They show that a "sweet spot" of relational graphs leads to neural networks with significantly improved predictive performance.
Abstract
Neural networks are often represented as graphs of connections between neurons. However, despite their wide use, there is currently little understanding of the relationship between the graph structure of a neural network and its predictive performance. Here we systematically investigate how the graph structure of neural networks affects their predictive performance. To this end, we develop a novel graph-based representation of neural networks called the relational graph, where layers of neural network computation correspond to rounds of message exchange along the graph structure. Using this representation we show that: (1) a "sweet spot" of relational graphs leads to neural networks with significantly improved predictive performance; (2) a neural network's performance is approximately a smooth function of the clustering coefficient and average path length of its relational graph; (3) our findings are consistent across many different tasks and datasets; (4) the sweet spot can be identified efficiently; (5) top-performing neural networks have graph structures surprisingly similar to those of real biological neural networks. Our work opens new directions for the design of neural architectures and the understanding of neural networks in general.
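The abstract characterizes each relational graph by two classic measures: the average clustering coefficient C and the average shortest path length L. A minimal, self-contained sketch of both measures is below; the ring-lattice graph is our own illustrative example, not one of the graphs studied in the paper.

```python
# Sketch of the two graph measures used to characterize relational graphs:
# average clustering coefficient (C) and average shortest path length (L).
# Standard library only; the ring lattice is an illustrative example graph.
from collections import deque
from itertools import combinations

def clustering_coefficient(adj):
    """Average over nodes of (edges among neighbors) / (possible such edges)."""
    total = 0.0
    for v, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            continue  # clustering is defined as 0 for degree < 2
        links = sum(1 for a, b in combinations(nbrs, 2) if b in adj[a])
        total += links / (k * (k - 1) / 2)
    return total / len(adj)

def average_path_length(adj):
    """Mean BFS distance over all ordered pairs of distinct, reachable nodes."""
    total, pairs = 0, 0
    for src in adj:
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    queue.append(w)
        total += sum(d for node, d in dist.items() if node != src)
        pairs += len(dist) - 1
    return total / pairs

# Ring lattice on 16 nodes: each node linked to its 2 nearest neighbors per side.
n = 16
adj = {v: {(v + d) % n for d in (-2, -1, 1, 2)} for v in range(n)}
print(clustering_coefficient(adj))  # 0.5 for this lattice
print(average_path_length(adj))     # 2.4 for this lattice
```

Sweeping a family of graphs (e.g. by rewiring edges) traces out the (C, L) plane in which the paper locates its "sweet spot" of well-performing relational graphs.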



Citations
Posted Content

Design Space for Graph Neural Networks

TL;DR: This work defines and systematically studies the architectural design space for GNNs, which consists of 315,000 different designs over 32 different predictive tasks, and offers a principled and scalable approach for transitioning from studying individual GNN designs for specific tasks to systematically studying the GNN design space and the task space.
Journal ArticleDOI

On Interpretability of Artificial Neural Networks: A Survey

TL;DR: In this article, a taxonomy for the interpretability of DNNs is proposed, along with applications of interpretability in medicine and future research directions, such as in relation to fuzzy logic and brain science.
Posted Content

On Interpretability of Artificial Neural Networks: A Survey

TL;DR: A simple but comprehensive taxonomy for interpretability is proposed; recent studies on the interpretability of neural networks are systematically reviewed, applications of interpretability in medicine are described, and future research directions are discussed, such as in relation to fuzzy logic and brain science.
Posted Content

Handling Missing Data with Graph Representation Learning

TL;DR: GRAPE is proposed, a graph-based framework for feature imputation as well as label prediction that yields 20% lower mean absolute error for imputation tasks and 10% lower for label prediction tasks, compared with existing state-of-the-art methods.
Posted Content

EdgeNets: Edge Varying Graph Neural Networks

TL;DR: A general framework that unifies state-of-the-art graph neural networks (GNNs) through the concept of an EdgeNet is put forth, and it is shown that GATs are GCNNs operating on a graph learned from the features, which opens the door to developing alternative attention mechanisms with improved discriminatory power.