Open Access Proceedings Article
Inductive Representation Learning on Large Graphs
William L. Hamilton, Zhitao Ying, Jure Leskovec
Advances in Neural Information Processing Systems, Vol. 30, pp. 1024-1034
TL;DR: GraphSAGE is a general, inductive framework that leverages node feature information (e.g., text attributes) to efficiently generate node embeddings, learning an embedding function instead of training individual embeddings for each node.
Abstract:
Low-dimensional embeddings of nodes in large graphs have proved extremely useful in a variety of prediction tasks, from content recommendation to identifying protein functions. However, most existing approaches require that all nodes in the graph are present during training of the embeddings; these previous approaches are inherently transductive and do not naturally generalize to unseen nodes. Here we present GraphSAGE, a general, inductive framework that leverages node feature information (e.g., text attributes) to efficiently generate node embeddings. Instead of training individual embeddings for each node, we learn a function that generates embeddings by sampling and aggregating features from a node's local neighborhood. Our algorithm outperforms strong baselines on three inductive node-classification benchmarks: we classify the category of unseen nodes in evolving information graphs based on citation and Reddit post data, and we show that our algorithm generalizes to completely unseen graphs using a multi-graph dataset of protein-protein interactions.
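The "sampling and aggregating" idea from the abstract can be illustrated in a few lines. Below is a minimal sketch of one layer with a mean aggregator; the toy graph, weight matrices, and sample size are invented for this example and do not come from the paper's released code:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_neighbors(adj, node, k):
    # Uniformly sample k neighbors (with replacement), as GraphSAGE does
    # to keep the per-node cost fixed regardless of degree.
    neigh = adj[node]
    idx = rng.choice(len(neigh), size=k, replace=True)
    return [neigh[i] for i in idx]

def mean_aggregate(features, adj, node, W_self, W_neigh, k=5):
    # One layer with a mean aggregator:
    #   h_v = ReLU(W_self @ x_v + W_neigh @ mean(x_u for sampled neighbors u))
    # followed by L2 normalization of the output embedding.
    sampled = sample_neighbors(adj, node, k)
    neigh_mean = np.mean([features[u] for u in sampled], axis=0)
    h = W_self @ features[node] + W_neigh @ neigh_mean
    h = np.maximum(h, 0.0)                      # ReLU
    return h / (np.linalg.norm(h) + 1e-12)      # L2 normalize

# Hypothetical toy graph: 4 nodes, 3-dim input features, 2-dim embeddings.
adj = {0: [1, 2], 1: [0, 3], 2: [0], 3: [1]}
X = rng.normal(size=(4, 3))
W_self = rng.normal(size=(2, 3))
W_neigh = rng.normal(size=(2, 3))
h0 = mean_aggregate(X, adj, 0, W_self, W_neigh, k=3)
```

Because the model learns the aggregation weights rather than per-node embeddings, the same function can embed nodes that were never seen during training, which is what makes the approach inductive.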
Citations
Journal Article
EnGN: A High-Throughput and Energy-Efficient Accelerator for Large Graph Neural Networks
TL;DR: EnGN proposes a specialized accelerator architecture for the three key stages of GNN propagation, which are abstracted as common computing patterns shared by typical GNNs. It uses a graph tiling strategy to fit large graphs into EnGN and makes good use of the hierarchical on-chip buffers through adaptive computation reordering and tile scheduling.
Journal Article
Characterizing and Understanding GCNs on GPU
TL;DR: In this paper, the authors characterize GCN workloads at the inference stage, explore GCN models on an NVIDIA V100 GPU, and propose several useful guidelines for both software and hardware optimization for the efficient execution of GCNs on GPUs.
Posted Content
Label Efficient Semi-Supervised Learning via Graph Filtering
TL;DR: In this paper, a graph filtering framework is proposed to inject graph similarity into data features by taking them as signals on the graph and applying a low-pass graph filter to extract useful data representations for classification.
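A low-pass graph filter of the kind this summary describes can be sketched as repeated multiplication of the feature matrix by (I - L/2), where L is the symmetrically normalized graph Laplacian. This is an illustrative construction; the paper's exact filter design may differ:

```python
import numpy as np

def low_pass_filter(adj_matrix, X, k=2):
    # Treat node features X as signals on the graph and smooth them with
    # the filter (I - L/2)^k, where L = I - D^{-1/2} A D^{-1/2} is the
    # normalized Laplacian. This attenuates high-frequency (noisy)
    # components while keeping the smooth, class-consistent ones.
    A = np.asarray(adj_matrix, dtype=float)
    d = A.sum(axis=1)
    d_inv_sqrt = np.zeros_like(d)
    mask = d > 0
    d_inv_sqrt[mask] = d[mask] ** -0.5
    L = np.eye(len(A)) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]
    H = np.eye(len(A)) - 0.5 * L
    out = np.asarray(X, dtype=float)
    for _ in range(k):
        out = H @ out
    return out

# Hypothetical 3-node path graph with a spiky one-hot signal on node 0.
A = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]])
X = np.array([[1.0], [0.0], [0.0]])
Xs = low_pass_filter(A, X, k=2)
```

After filtering, the signal spreads over neighboring nodes and its variance shrinks, which is the "injecting graph similarity into data features" effect the summary refers to.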
Posted Content
Coloring graph neural networks for node disambiguation
TL;DR: This paper introduces a graph neural network called Colored Local Iterative Procedure (CLIP) that uses colors to disambiguate identical node attributes, and shows that this representation is a universal approximator of continuous functions on graphs with node attributes.
Posted Content
Simple and Deep Graph Convolutional Networks
TL;DR: This article proposes GCNII, an extension of the vanilla GCN model with two simple yet effective techniques, initial residual and identity mapping, and provides theoretical and empirical evidence that the two techniques effectively relieve the problem of over-smoothing.
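The two techniques named in the summary fit in one update per layer. The sketch below assumes the standard GCNII form H' = sigma(((1-alpha) P H + alpha H0)((1-beta) I + beta W)); the propagation matrix, inputs, and coefficients are toy values for illustration:

```python
import numpy as np

def gcnii_layer(P, H, H0, W, alpha=0.1, beta=0.5):
    # Initial residual: mix the propagated features (P @ H) with the
    # layer-0 representation H0, so deep layers never fully forget the input.
    support = (1 - alpha) * (P @ H) + alpha * H0
    # Identity mapping: mix the weight matrix with the identity, so each
    # layer stays close to an identity transform and gradients flow.
    out = support @ ((1 - beta) * np.eye(W.shape[0]) + beta * W)
    return np.maximum(out, 0.0)  # ReLU

rng = np.random.default_rng(0)
P = np.full((4, 4), 0.25)       # toy normalized propagation matrix
H0 = rng.normal(size=(4, 3))    # initial node representations
W = rng.normal(size=(3, 3))
H1 = gcnii_layer(P, H0, H0, W)
```

With alpha = beta = 0 the layer collapses to plain ReLU(P @ H), i.e., a vanilla GCN layer without a learned transform, which makes clear that the two techniques are additive modifications rather than a new architecture.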