Open Access · Posted Content

Very Deep Graph Neural Networks Via Noise Regularisation.

TLDR
In this article, the authors train a deep GNN with up to 100 message passing steps and achieve state-of-the-art results on two challenging molecular property prediction benchmarks, Open Catalyst 2020 IS2RE and QM9.
Abstract
Graph Neural Networks (GNNs) perform learned message passing over an input graph, but conventional wisdom says performing more than a handful of steps makes training difficult and does not yield improved performance. Here we show the contrary. We train a deep GNN with up to 100 message passing steps and achieve several state-of-the-art results on two challenging molecular property prediction benchmarks, Open Catalyst 2020 IS2RE and QM9. Our approach depends crucially on a novel but simple regularisation method, which we call "Noisy Nodes", in which we corrupt the input graph with noise and add an auxiliary node autoencoder loss if the task is graph property prediction. Our results show this regularisation method allows the model to monotonically improve in performance with increased message passing steps. Our work opens new opportunities for reaping the benefits of deep neural networks in the space of graph and other structured prediction problems.
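The abstract describes the idea but not the implementation, so the following is a minimal PyTorch-style sketch of a "Noisy Nodes" training step, assuming a GNN that returns both a graph-level prediction and per-node reconstructions; the noise scale, loss functions, and auxiliary-loss weight are illustrative assumptions, not the paper's settings.

```python
# Minimal sketch of the "Noisy Nodes" regulariser described above (PyTorch).
# Assumptions: `gnn` returns (graph_pred, node_recon); noise_std and aux_weight
# are illustrative values, not the paper's hyperparameters.
import torch
import torch.nn.functional as F

def noisy_nodes_loss(gnn, node_feats, edge_index, graph_target,
                     noise_std=0.02, aux_weight=0.1):
    # Corrupt the input graph's node features with Gaussian noise.
    noisy_feats = node_feats + noise_std * torch.randn_like(node_feats)

    # Run many message-passing steps; the model outputs a graph-level
    # prediction and a per-node reconstruction of the clean inputs.
    graph_pred, node_recon = gnn(noisy_feats, edge_index)

    primary = F.l1_loss(graph_pred, graph_target)   # graph property prediction
    auxiliary = F.mse_loss(node_recon, node_feats)  # node autoencoder / denoising
    return primary + aux_weight * auxiliary
```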


Citations
Posted Content

Large-scale graph representation learning with very deep GNNs and self-supervision

TL;DR: In this paper, a transductive node classifier powered by bootstrapping and a very deep (up to 50-layer) inductive graph regressor regularised by denoising objectives were used to achieve top-3 performance on both the MAG240M and PCQM4M benchmarks.
Posted Content

Simplifying approach to Node Classification in Graph Neural Networks.

TL;DR: In this paper, the authors decouple the node feature aggregation step from the depth of the graph neural network, empirically analyze how features aggregated at different hop distances contribute to prediction performance, and propose to use softmax as a regularizer and soft-selector of features aggregated from neighbors at different hop distances.
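As a rough illustration of the soft-selection mechanism summarised above, the sketch below mixes features pre-aggregated at different hop distances using softmax weights; the hop-wise pre-aggregation and the module interface are assumptions made for illustration, not the cited paper's exact design.

```python
# Hedged sketch: softmax as a soft-selector over features aggregated at
# different hop distances. How `hop_feats` is pre-computed is an assumption.
import torch
import torch.nn as nn

class HopSoftSelect(nn.Module):
    def __init__(self, num_hops, hidden_dim):
        super().__init__()
        self.hop_logits = nn.Parameter(torch.zeros(num_hops))
        self.lin = nn.Linear(hidden_dim, hidden_dim)

    def forward(self, hop_feats):
        # hop_feats: list of [num_nodes, hidden_dim] tensors, one per hop distance.
        weights = torch.softmax(self.hop_logits, dim=0)   # soft-selector over hops
        mixed = sum(w * f for w, f in zip(weights, hop_feats))
        return self.lin(mixed)
```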
Posted Content

Evaluating Deep Graph Neural Networks.

TL;DR: Deep Graph Multi-Layer Perceptron (DGMLP), as mentioned in this paper, is an approach that helps guide deep GNN designs and achieves state-of-the-art node classification performance on various datasets.
Posted Content

Crystal Diffusion Variational Autoencoder for Periodic Material Generation

TL;DR: In this paper, the authors propose a crystal diffusion variational autoencoder (CDVAE) that captures the physical inductive bias of material stability by learning from the data distribution of stable materials.
References
Posted Content

Effective Training Strategies for Deep Graph Neural Networks.

TL;DR: The proposed NodeNorm regularizes deep GCNs by discouraging feature-wise correlation of hidden embeddings and increasing model smoothness with respect to input node features, and thus effectively reduces overfitting, enabling deep GNNs to compete with and even outperform shallow ones.
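The exact NodeNorm formulation is not given in this summary, so the snippet below is only a hedged sketch of a per-node normalisation in that spirit: each node's hidden vector is rescaled by a power of its own feature-wise standard deviation, which limits how strongly features can co-vary as depth grows.

```python
# Hedged sketch of a per-node normalisation in the spirit of NodeNorm; the
# power `p` and epsilon are illustrative, and the cited paper's exact
# formulation may differ.
import torch

def node_norm(h: torch.Tensor, p: float = 2.0, eps: float = 1e-6) -> torch.Tensor:
    # h: [num_nodes, hidden_dim]; each node is normalised independently.
    sigma = h.std(dim=-1, keepdim=True)        # per-node std over feature dims
    return h / (sigma + eps) ** (1.0 / p)
```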
Posted Content

Bootstrapped Representation Learning on Graphs

TL;DR: Bootstrapped Graph Latents (BGRL), as discussed by the authors, is a self-supervised graph representation learning method based on a graph attentional encoder that achieves state-of-the-art results on several established benchmark datasets.
Posted Content

Learning Mesh-Based Simulation with Graph Networks

TL;DR: MeshGraphNets as mentioned in this paper is a framework for learning mesh-based simulations using graph neural networks, which can be trained to pass messages on a mesh graph and to adapt the mesh discretization during forward simulation.
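To make the "pass messages on a mesh graph" part concrete, here is a hedged sketch of one generic edge-then-node message-passing update of the kind MeshGraphNets-style models use; the MLP sizes, feature layout, and sum aggregation are assumptions, and the adaptive remeshing component is not shown.

```python
# Hedged sketch of one message-passing step on a mesh graph (edge update
# followed by node update). Shapes, MLPs, and aggregation are assumptions
# for illustration; adaptive remeshing is not shown.
import torch
import torch.nn as nn

class MeshMessagePassing(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.edge_mlp = nn.Sequential(nn.Linear(3 * dim, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.node_mlp = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, node_h, edge_h, senders, receivers):
        # Update each mesh edge from its endpoint nodes and its current feature.
        edge_in = torch.cat([node_h[senders], node_h[receivers], edge_h], dim=-1)
        edge_h = edge_h + self.edge_mlp(edge_in)          # residual edge update

        # Aggregate incoming edge messages at each node, then update nodes.
        agg = torch.zeros_like(node_h).index_add_(0, receivers, edge_h)
        node_h = node_h + self.node_mlp(torch.cat([node_h, agg], dim=-1))
        return node_h, edge_h
```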