Open Access Proceedings Article
Inductive Representation Learning on Large Graphs
William L. Hamilton, Zhitao Ying, Jure Leskovec
- Vol. 30, pp 1024-1034
TL;DR:
GraphSAGE is a general, inductive framework that leverages node feature information (e.g., text attributes) to efficiently generate node embeddings, instead of training an individual embedding for each node.
Abstract:
Low-dimensional embeddings of nodes in large graphs have proved extremely useful in a variety of prediction tasks, from content recommendation to identifying protein functions. However, most existing approaches require that all nodes in the graph are present during training of the embeddings; these previous approaches are inherently transductive and do not naturally generalize to unseen nodes. Here we present GraphSAGE, a general, inductive framework that leverages node feature information (e.g., text attributes) to efficiently generate node embeddings. Instead of training individual embeddings for each node, we learn a function that generates embeddings by sampling and aggregating features from a node's local neighborhood. Our algorithm outperforms strong baselines on three inductive node-classification benchmarks: we classify the category of unseen nodes in evolving information graphs based on citation and Reddit post data, and we show that our algorithm generalizes to completely unseen graphs using a multi-graph dataset of protein-protein interactions.
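The sampling-and-aggregation idea in the abstract can be sketched as a single forward layer: for each node, uniformly sample a fixed number of neighbors, average their features, and combine the result with the node's own features through learned weights. Below is a minimal NumPy sketch using the mean aggregator; the weight matrices, toy graph, and function names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_neighbors(adj, node, k, rng):
    """Uniformly sample k neighbors of `node` (with replacement).
    Falls back to the node itself if it has no neighbors."""
    neigh = adj[node]
    if not neigh:
        return [node]
    return list(rng.choice(neigh, size=k, replace=True))

def graphsage_mean_layer(adj, features, W_self, W_neigh, k, rng):
    """One GraphSAGE-style layer with a mean aggregator:
    h_v = normalize(ReLU(W_self @ x_v + W_neigh @ mean(x_u, u sampled from N(v))))."""
    out = []
    for v in range(len(adj)):
        sampled = sample_neighbors(adj, v, k, rng)
        agg = features[sampled].mean(axis=0)          # aggregate sampled neighbor features
        h = W_self @ features[v] + W_neigh @ agg      # combine self and neighborhood
        h = np.maximum(h, 0.0)                        # ReLU nonlinearity
        h = h / (np.linalg.norm(h) + 1e-8)            # L2 normalization of the embedding
        out.append(h)
    return np.stack(out)

# Toy graph: 4 nodes, 3-dim input features, 2-dim embeddings (all illustrative).
adj = {0: [1, 2], 1: [0], 2: [0, 3], 3: [2]}
X = rng.standard_normal((4, 3))
W_self = rng.standard_normal((2, 3))
W_neigh = rng.standard_normal((2, 3))
H = graphsage_mean_layer(adj, X, W_self, W_neigh, k=2, rng=rng)
print(H.shape)  # (4, 2)
```

Because the layer is a function of features and sampled neighborhoods rather than a per-node lookup table, it can be applied to nodes (or whole graphs) never seen during training, which is what makes the approach inductive.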
Citations
Journal Article
Sampling Methods for Efficient Training of Graph Convolutional Networks: A Survey
TL;DR: This paper presents a comprehensive survey of sampling methods for efficient training of GCNs, categorizing the methods by their sampling mechanisms and comparing them within each category.
Book Chapter
GraphSVX: Shapley Value Explanations for Graph Neural Networks
TL;DR: GraphSVX as mentioned in this paper is a post hoc local model-agnostic explanation method specifically designed for GNNs, which captures the "fair" contribution of each feature and node towards the explained prediction by constructing a surrogate model on a perturbed dataset.
Proceedings Article
DGCN: Diversified Recommendation with Graph Convolutional Networks
TL;DR: Zhang et al. as discussed by the authors proposed DGCN, which pushes diversification to the upstream candidate-generation stage by performing rebalanced neighbor discovery, category-boosted negative sampling, and adversarial learning on top of a graph convolutional network.
Posted Content
LaneRCNN: Distributed Representations for Graph-Centric Motion Forecasting.
TL;DR: LaneRCNN as mentioned in this paper learns a local lane graph representation per actor to encode its past motions and the local map topology, and further develops an interaction module which permits efficient message passing among local graph representations within a shared global lane graph.
Posted Content
Graph Convolution for Multimodal Information Extraction from Visually Rich Documents.
TL;DR: Wang et al. as mentioned in this paper introduced a graph convolution based model to combine textual and visual information presented in VRDs, which is trained to summarize the context of a text segment in the document, and further combined with text embeddings for entity extraction.