Open Access · Proceedings Article
Inductive Representation Learning on Large Graphs
William L. Hamilton, Zhitao Ying, Jure Leskovec
Advances in Neural Information Processing Systems (NIPS 2017), Vol. 30, pp. 1024–1034
TL;DR: GraphSAGE is a general, inductive framework that leverages node feature information (e.g., text attributes) to efficiently generate node embeddings, instead of training an individual embedding for each node.
Abstract:
Low-dimensional embeddings of nodes in large graphs have proved extremely useful in a variety of prediction tasks, from content recommendation to identifying protein functions. However, most existing approaches require that all nodes in the graph are present during training of the embeddings; these previous approaches are inherently transductive and do not naturally generalize to unseen nodes. Here we present GraphSAGE, a general, inductive framework that leverages node feature information (e.g., text attributes) to efficiently generate node embeddings. Instead of training individual embeddings for each node, we learn a function that generates embeddings by sampling and aggregating features from a node's local neighborhood. Our algorithm outperforms strong baselines on three inductive node-classification benchmarks: we classify the category of unseen nodes in evolving information graphs based on citation and Reddit post data, and we show that our algorithm generalizes to completely unseen graphs using a multi-graph dataset of protein-protein interactions.
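The sample-and-aggregate idea described in the abstract can be sketched with a single layer using a mean aggregator. This is an illustrative sketch only: the toy graph, weight matrices, and function names below are assumptions for the example, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_neighbors(adj, node, k):
    """Sample up to k neighbors of `node` (with replacement when the
    neighborhood is smaller than k)."""
    nbrs = adj[node]
    if len(nbrs) == 0:
        return [node]  # fall back to a self-loop for isolated nodes
    return list(rng.choice(nbrs, size=k, replace=len(nbrs) < k))

def sage_mean_layer(adj, feats, W_self, W_nbr, k=5):
    """One GraphSAGE-style layer with a mean aggregator:
    h_v = ReLU(W_self @ x_v + W_nbr @ mean(x_u for sampled u in N(v))),
    followed by L2 normalization of each embedding."""
    out = []
    for v in range(len(adj)):
        nbr_mean = feats[sample_neighbors(adj, v, k)].mean(axis=0)
        h = W_self @ feats[v] + W_nbr @ nbr_mean
        out.append(np.maximum(h, 0.0))  # ReLU
    h = np.stack(out)
    return h / np.maximum(np.linalg.norm(h, axis=1, keepdims=True), 1e-12)

# Toy graph: 4 nodes, input features of dim 3, output embeddings of dim 2.
adj = {0: [1, 2], 1: [0], 2: [0, 3], 3: [2]}
feats = rng.normal(size=(4, 3))
W_self = rng.normal(size=(2, 3))
W_nbr = rng.normal(size=(2, 3))
emb = sage_mean_layer(adj, feats, W_self, W_nbr, k=3)
print(emb.shape)  # (4, 2)
```

Because the layer is a function of a node's features and sampled neighborhood rather than a per-node lookup table, it can embed nodes that were never seen during training, which is what makes the approach inductive.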
Citations
Posted Content
Measuring and Relieving the Over-smoothing Problem for Graph Neural Networks from the Topological View
TL;DR: Two methods to alleviate the over-smoothing issue of GNNs are proposed: MADReg, which adds a MADGap-based regularizer to the training objective, and AdaEdge, which optimizes the graph topology based on the model's predictions.
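As a rough illustration of the MADGap quantity this TL;DR refers to, a minimal sketch follows. It assumes the definition from that line of work: MAD is the mean average cosine distance between selected pairs of node representations, and MADGap is the gap between the MAD of remote (non-adjacent) pairs and of neighboring pairs; the toy data and helper names are illustrative.

```python
import numpy as np

def mad(reps, mask):
    """Mean Average Distance (MAD): average cosine distance over the
    node pairs selected by `mask` (mask[i, j] = 1 includes pair (i, j))."""
    normed = reps / np.maximum(np.linalg.norm(reps, axis=1, keepdims=True), 1e-12)
    dist = 1.0 - normed @ normed.T  # pairwise cosine distances
    per_node = (dist * mask).sum(axis=1) / np.maximum(mask.sum(axis=1), 1)
    return per_node[mask.sum(axis=1) > 0].mean()

rng = np.random.default_rng(0)
reps = rng.normal(size=(6, 4))       # toy node representations
adj = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)]:
    adj[i, j] = adj[j, i] = 1.0
remote = 1.0 - adj - np.eye(6)       # non-adjacent, non-self pairs
mad_gap = mad(reps, remote) - mad(reps, adj)
print(round(float(mad_gap), 4))
```

A small MADGap means neighboring and remote nodes are about equally similar, i.e. representations have over-smoothed; a regularizer can then push this gap up during training.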
Posted Content
TUDataset: A collection of benchmark datasets for learning with graphs
Christopher Morris, Nils M. Kriege, Franka Bause, Kristian Kersting, Petra Mutzel, Marion Neumann
TL;DR: The TUDataset for graph classification and regression is introduced, which consists of over 120 datasets of varying sizes from a wide range of applications and provides Python-based data loaders, kernel and graph neural network baseline implementations, and evaluation tools.
Proceedings Article (DOI)
Disentangling and Unifying Graph Convolutions for Skeleton-Based Action Recognition
TL;DR: A simple method to disentangle multi-scale graph convolutions and a unified spatial-temporal graph convolutional operator named G3D are presented; on this basis, a powerful feature extractor named MS-G3D is developed, and the resulting model outperforms previous state-of-the-art methods on three large-scale datasets.
Proceedings Article (DOI)
Self-supervised Graph Learning for Recommendation
TL;DR: This work explores self-supervised learning on the user-item graph to improve the accuracy and robustness of GCNs for recommendation, implementing it on the state-of-the-art model LightGCN; the approach can automatically mine hard negatives.
Posted Content
Deep Graph Contrastive Representation Learning
TL;DR: This paper proposes a novel framework for unsupervised graph representation learning that leverages a contrastive objective at the node level: it generates two graph views by corruption and learns node representations by maximizing the agreement of node representations across the two views.
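A minimal sketch of a node-level contrastive objective of the kind this TL;DR describes, using an InfoNCE-style loss: each node's embedding in one view is pulled toward the same node in the other view and pushed away from all other nodes. The corruption step and all names here are illustrative assumptions, not the paper's exact method.

```python
import numpy as np

def info_nce(z1, z2, tau=0.5):
    """Node-level contrastive loss between two views: node i in view 1
    should agree with node i in view 2 (positive pair) and disagree
    with every other node (negatives)."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = z1 @ z2.T / tau                          # (n, n) scaled cosine similarities
    logits = sim - sim.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.diag(log_prob).mean()               # -log softmax on the diagonal

rng = np.random.default_rng(0)
z = rng.normal(size=(32, 16))                      # embeddings from view 1
z_aligned = z + 0.1 * rng.normal(size=(32, 16))    # lightly corrupted second view
z_unrelated = rng.normal(size=(32, 16))            # unrelated "view" for contrast
print(info_nce(z, z_aligned) < info_nce(z, z_unrelated))  # aligned views score lower
```

In the actual training setup, the two views would come from corrupting one input graph (e.g. dropping edges or masking features) and encoding both with a shared GNN; here random perturbation stands in for that pipeline.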