Open Access · Posted Content

GraphSAINT: Graph Sampling Based Inductive Learning Method

TLDR
GraphSAINT is proposed, a graph-sampling-based inductive learning method that improves training efficiency in a fundamentally different way: the sampling process is decoupled from the forward and backward propagation of training, and GraphSAINT can be extended with other graph samplers and GCN variants.
Abstract
Graph Convolutional Networks (GCNs) are powerful models for learning representations of attributed graphs. To scale GCNs to large graphs, state-of-the-art methods use various layer sampling techniques to alleviate the "neighbor explosion" problem during minibatch training. We propose GraphSAINT, a graph-sampling-based inductive learning method that improves training efficiency and accuracy in a fundamentally different way. By changing perspective, GraphSAINT constructs minibatches by sampling the training graph, rather than the nodes or edges across GCN layers. In each iteration, a complete GCN is built from the properly sampled subgraph. Thus, we ensure a fixed number of well-connected nodes in all layers. We further propose a normalization technique to eliminate bias, and sampling algorithms for variance reduction. Importantly, we can decouple the sampling from the forward and backward propagation, and extend GraphSAINT with many architecture variants (e.g., graph attention, jumping connection). GraphSAINT demonstrates superior performance in both accuracy and training time on five large graphs, and achieves new state-of-the-art F1 scores for PPI (0.995) and Reddit (0.970).
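The core idea in the abstract, sampling a subgraph first and then building a complete GCN on it, can be illustrated with a minimal sketch. This is a hedged toy example, not the authors' implementation: it assumes a uniform node sampler (the paper also proposes edge and random-walk samplers and normalization terms, which are omitted here), and it represents the graph as a plain adjacency dict.

```python
import random

def sample_subgraph(adj, num_nodes, sample_size, rng=random.Random(0)):
    """Uniformly sample a node set and return the induced subgraph.

    adj: dict mapping node -> list of neighbour nodes.
    Returns (sampled_nodes, sub_adj), where sub_adj keeps only the
    edges whose endpoints both lie in the sample (induced subgraph).
    """
    nodes = rng.sample(range(num_nodes), sample_size)
    keep = set(nodes)
    sub_adj = {u: [v for v in adj.get(u, []) if v in keep] for u in nodes}
    return nodes, sub_adj

# Toy training graph: a 6-node cycle.
adj = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
nodes, sub_adj = sample_subgraph(adj, num_nodes=6, sample_size=4)
# In GraphSAINT, a full GCN would now run on `sub_adj` for this
# minibatch; the sampler is independent of the forward/backward pass,
# so sampling can proceed in parallel with training.
```

Because every layer of the per-minibatch GCN operates on the same small node set, the "neighbor explosion" across layers described in the abstract does not occur.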


Citations
Journal Article · DOI

Graph Convolutional Networks for Hyperspectral Image Classification

TL;DR: In this paper, a mini-batch graph convolutional network (called miniGCN) is proposed for hyperspectral image classification, which allows to train large-scale GCNs in a minibatch fashion.
Journal Article · DOI

A Metaverse: Taxonomy, Components, Applications, and Open Challenges

TL;DR: This paper divides the concepts and essential techniques for realizing the Metaverse into three components (hardware, software, and content) and three approaches, and uses them to analyze representative Metaverse examples (Ready Player One, Roblox, and Facebook research) across films, games, and academic studies.
Posted Content

SIGN: Scalable Inception Graph Neural Networks

TL;DR: This paper proposes a new, efficient and scalable graph deep learning architecture which sidesteps the need for graph sampling by using graph convolutional filters of different size that are amenable to efficient precomputation, allowing extremely fast training and inference.
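The precomputation idea summarized above can be sketched briefly. This is a hedged toy illustration of SIGN-style preprocessing, not the paper's code: it assumes powers of a row-normalised adjacency matrix as the "filters of different size," and uses dense list-of-lists matrices at toy scale.

```python
def precompute_sign_features(adj_norm, X, r):
    """Precompute the stack [X, AX, A^2 X, ..., A^r X].

    adj_norm: normalised adjacency matrix (list of lists).
    X: node feature matrix (list of lists).
    The stacked features can then be fed to a plain MLP, so neither
    graph sampling nor message passing is needed at training time.
    """
    def matmul(A, B):
        return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
                for row in A]

    feats = [X]
    cur = X
    for _ in range(r):
        cur = matmul(adj_norm, cur)  # one more hop of diffusion
        feats.append(cur)
    return feats

# Two-node graph whose adjacency is already row-normalised.
A = [[0.0, 1.0], [1.0, 0.0]]
X = [[1.0], [2.0]]
feats = precompute_sign_features(A, X, r=2)
# feats[0] is X; feats[1] swaps the two rows; feats[2] swaps them back.
```

Since the diffusion is computed once as preprocessing, each training step touches only fixed-size feature rows, which is what enables the fast training and inference claimed in the summary.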
Posted Content

Masked Label Prediction: Unified Message Passing Model for Semi-Supervised Classification

TL;DR: A novel Unified Message Passing Model (UniMP) is proposed that conceptually unifies feature propagation and label propagation; it is empirically powerful and obtains new state-of-the-art semi-supervised classification results on the Open Graph Benchmark.
Proceedings Article · DOI

How to Find Your Friendly Neighborhood: Graph Attention Design with Self-Supervision

Dongkwan Kim, +1 more
TL;DR: SuperGAT, a self-supervised graph attention network, is proposed as an improved attention model for noisy graphs; the resulting design recipe generalizes across 15 datasets, and models designed with it outperform the baselines.
References
Proceedings Article · DOI

Deep Residual Learning for Image Recognition

TL;DR: In this article, the authors proposed a residual learning framework to ease the training of networks that are substantially deeper than those used previously, which won 1st place on the ILSVRC 2015 classification task.
Journal Article · DOI

Long short-term memory

TL;DR: A novel, efficient, gradient based method called long short-term memory (LSTM) is introduced, which can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units.
Proceedings Article · DOI

Densely Connected Convolutional Networks

TL;DR: DenseNet as mentioned in this paper proposes to connect each layer to every other layer in a feed-forward fashion, which can alleviate the vanishing gradient problem, strengthen feature propagation, encourage feature reuse, and substantially reduce the number of parameters.
Posted Content

Adam: A Method for Stochastic Optimization

TL;DR: In this article, a method for first-order gradient-based optimization of stochastic objective functions is proposed, based on adaptive estimates of lower-order moments of the gradients.
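The moment-based update the summary describes can be written out for a single scalar parameter. This is a hedged textbook sketch of the Adam update, not the paper's reference code; the default hyperparameters match the commonly cited ones (lr=0.001, b1=0.9, b2=0.999).

```python
import math

def adam_step(theta, grad, m, v, t, lr=0.001, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update for a scalar parameter.

    m and v are running estimates of the first and second moments of
    the gradient; the bias correction divides out the (1 - b^t)
    shrinkage that zero-initialised moments introduce.
    """
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad * grad
    m_hat = m / (1 - b1 ** t)
    v_hat = v / (1 - b2 ** t)
    theta = theta - lr * m_hat / (math.sqrt(v_hat) + eps)
    return theta, m, v

# First step from theta = 0 with gradient 1: both bias-corrected
# moments equal 1, so the step size is approximately the learning rate.
theta, m, v = adam_step(0.0, 1.0, 0.0, 0.0, t=1)
```

The division by the square root of the second-moment estimate gives each parameter its own effective step size, which is why the method adapts per-coordinate learning rates without manual tuning.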