Open Access · Posted Content

Very Deep Graph Neural Networks Via Noise Regularisation

TLDR
In this article, the authors train a deep GNN with up to 100 message passing steps and achieve state-of-the-art results on two challenging molecular property prediction benchmarks, Open Catalyst 2020 IS2RE and QM9.
Abstract
Graph Neural Networks (GNNs) perform learned message passing over an input graph, but conventional wisdom says that performing more than a handful of steps makes training difficult and does not yield improved performance. Here we show the contrary. We train a deep GNN with up to 100 message passing steps and achieve several state-of-the-art results on two challenging molecular property prediction benchmarks, Open Catalyst 2020 IS2RE and QM9. Our approach depends crucially on a novel but simple regularisation method, which we call "Noisy Nodes", in which we corrupt the input graph with noise and add an auxiliary node autoencoder loss if the task is graph property prediction. Our results show that this regularisation method allows the model to improve monotonically in performance as message passing steps increase. Our work opens new opportunities for reaping the benefits of deep neural networks in the space of graph and other structured prediction problems.
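As a rough illustration of the Noisy Nodes recipe the abstract describes, here is a minimal sketch in plain Python/NumPy. The GNN itself is a stub, and names such as `gnn_forward`, `sigma`, and `lambda_denoise` are illustrative assumptions, not the paper's API.

```python
import numpy as np

rng = np.random.default_rng(0)

def gnn_forward(positions, features):
    """Stand-in for a deep message-passing GNN: returns a graph-level
    prediction and per-node position outputs. Illustrative stub only."""
    graph_pred = features.sum() + positions.sum()   # dummy graph readout
    node_pred = positions.copy()                    # dummy per-node head
    return graph_pred, node_pred

def noisy_nodes_loss(positions, features, target,
                     sigma=0.02, lambda_denoise=0.1):
    # 1) Corrupt the input graph: add i.i.d. Gaussian noise to node positions.
    noise = sigma * rng.standard_normal(positions.shape)
    noisy_positions = positions + noise

    # 2) Run the GNN on the corrupted graph.
    graph_pred, node_pred = gnn_forward(noisy_positions, features)

    # 3) Primary graph-property loss, plus an auxiliary per-node
    #    denoising (autoencoder) loss that asks the model to recover
    #    the clean node positions.
    task_loss = (graph_pred - target) ** 2
    denoise_loss = np.mean((node_pred - positions) ** 2)
    return task_loss + lambda_denoise * denoise_loss

positions = rng.standard_normal((10, 3))   # 10 atoms in 3-D space
features = rng.standard_normal((10, 4))    # per-node input features
print(noisy_nodes_loss(positions, features, target=1.0))
```

The key design point is that the denoising target is the clean input, so every node receives a training signal even when the task supplies only a single graph-level label.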


Citations
Posted Content

Large-scale graph representation learning with very deep GNNs and self-supervision

TL;DR: In this paper, a transductive node classifier powered by bootstrapping and a very deep (up to 50-layer) inductive graph regressor regularised by denoising objectives were used to achieve top-3 performance on the MAG240M and PCQM4M benchmarks, respectively.
Posted Content

Simplifying approach to Node Classification in Graph Neural Networks

TL;DR: In this paper, the authors decouple the node feature aggregation step from the depth of the graph neural network, empirically analyze how features aggregated from neighbors at different hop distances contribute to prediction performance, and propose softmax as a regularizer and soft-selector of those aggregated features.
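A minimal sketch of the softmax soft-selection idea this summary describes, assuming features precomputed over 0..K hop neighbourhoods; `hop_features`, `soft_select`, and `alpha` are illustrative names, not the paper's.

```python
import numpy as np

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def hop_features(adj, x, num_hops):
    """Precompute features aggregated over 0..num_hops neighbourhoods
    via repeated row-normalised propagation."""
    deg = adj.sum(axis=1, keepdims=True)
    norm_adj = adj / np.maximum(deg, 1.0)
    feats, h = [x], x
    for _ in range(num_hops):
        h = norm_adj @ h
        feats.append(h)
    return feats                         # list of (N, D) arrays

def soft_select(feats, alpha):
    """Softmax over scores `alpha` acts as both a regulariser and a
    soft-selector over hop-wise aggregated features."""
    w = softmax(alpha)                   # one weight per hop distance
    return sum(wk * fk for wk, fk in zip(w, feats))

rng = np.random.default_rng(0)
adj = (rng.random((5, 5)) < 0.4).astype(float)
x = rng.standard_normal((5, 8))
feats = hop_features(adj, x, num_hops=3)
alpha = np.zeros(len(feats))             # would be learned in practice
print(soft_select(feats, alpha).shape)   # (5, 8)
```

Because the aggregation is precomputed, the depth of the learned network no longer dictates how many hops of neighbourhood information each node sees.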
Posted Content

Evaluating Deep Graph Neural Networks

TL;DR: Deep Graph Multi-Layer Perceptron (DGMLP) as mentioned in this paper is a powerful approach that helps guide deep GNN designs and achieves state-of-the-art node classification performance on various datasets.
Posted Content

Crystal Diffusion Variational Autoencoder for Periodic Material Generation

TL;DR: In this paper, the authors propose a crystal diffusion variational autoencoder (CDVAE) that captures the physical inductive bias of material stability by learning from the data distribution of stable materials.
References
Proceedings Article

Cormorant: Covariant Molecular Neural Networks

TL;DR: Cormorant as mentioned in this paper is a rotationally covariant neural network architecture for learning the behavior and properties of complex many-body physical systems, which can be applied to molecular systems with two goals: learning atomic potential energy surfaces for use in molecular dynamics simulations, and learning ground-state properties of molecules calculated by Density Functional Theory.
Journal Article

Unveiling the predictive power of static structure in glassy systems

TL;DR: This work uses graph neural networks to determine the long-time evolution of a glassy system solely from the initial particle positions, without any handcrafted features, and shows that the method outperforms current state-of-the-art methods, generalising over a wide range of temperatures, pressures, and densities.
Posted Content

Interaction Networks for Learning about Objects, Relations and Physics

TL;DR: The interaction network as mentioned in this paper is a model that can reason about how objects in complex systems interact, supporting dynamical predictions, as well as inferences about the abstract properties of the system.
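A minimal sketch of interaction-network-style message passing as this summary describes it: a relational function computes per-edge effects, which are summed at each receiver and fed to a per-object update. Both functions here are stubs standing in for learned MLPs, and all names are illustrative.

```python
import numpy as np

def relational_model(sender, receiver):
    """Per-edge "effect" of a sender object on a receiver (MLP stub)."""
    return np.tanh(sender - receiver)

def object_model(states, effects):
    """Per-object state update from aggregated incoming effects (MLP stub)."""
    return states + 0.1 * effects

def interaction_step(states, edges):
    """One step: compute pairwise effects, sum them at each receiver,
    then update every object's state."""
    agg = np.zeros_like(states)
    for s, r in edges:                    # edges as (sender, receiver) pairs
        agg[r] += relational_model(states[s], states[r])
    return object_model(states, agg)

rng = np.random.default_rng(0)
states = rng.standard_normal((4, 2))      # 4 objects, 2-D state each
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(interaction_step(states, edges))
```

Iterating this step rolls the system forward in time, which is what makes the architecture a natural fit for dynamical prediction.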
Posted Content

DeeperGCN: All You Need to Train Deeper GCNs

TL;DR: Extensive experiments on the Open Graph Benchmark show that DeeperGCN significantly boosts performance over the state of the art on large-scale graph learning tasks of node property prediction and graph property prediction.
Proceedings Article

Strategies for Pre-training Graph Neural Networks

TL;DR: A new strategy and self-supervised methods for pre-training Graph Neural Networks (GNNs) that avoid negative transfer and significantly improve generalization across downstream tasks, yielding up to 9.4% absolute improvement in ROC-AUC over non-pre-trained models and achieving state-of-the-art performance for molecular property prediction and protein function prediction.