Open Access Proceedings Article
Inductive Representation Learning on Large Graphs
William L. Hamilton, Zhitao Ying, Jure Leskovec
Advances in Neural Information Processing Systems, Vol. 30, pp. 1024-1034
TL;DR: GraphSAGE is a general, inductive framework that leverages node feature information (e.g., text attributes) to efficiently generate node embeddings, instead of training an individual embedding for each node.

Abstract: Low-dimensional embeddings of nodes in large graphs have proved extremely useful in a variety of prediction tasks, from content recommendation to identifying protein functions. However, most existing approaches require that all nodes in the graph are present during training of the embeddings; these previous approaches are inherently transductive and do not naturally generalize to unseen nodes. Here we present GraphSAGE, a general, inductive framework that leverages node feature information (e.g., text attributes) to efficiently generate node embeddings. Instead of training individual embeddings for each node, we learn a function that generates embeddings by sampling and aggregating features from a node's local neighborhood. Our algorithm outperforms strong baselines on three inductive node-classification benchmarks: we classify the category of unseen nodes in evolving information graphs based on citation and Reddit post data, and we show that our algorithm generalizes to completely unseen graphs using a multi-graph dataset of protein-protein interactions.
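The sample-and-aggregate idea from the abstract can be illustrated in a few lines. The sketch below is a minimal mean-aggregator variant in NumPy; the function names, the uniform sampling with replacement, and the single-layer setup are assumptions for illustration, not the paper's reference implementation.

```python
import numpy as np

def sample_neighbors(adj, node, k, rng):
    """Uniformly sample k neighbors of `node` (with replacement)."""
    nbrs = adj[node]
    idx = rng.integers(0, len(nbrs), size=k)
    return [nbrs[i] for i in idx]

def graphsage_layer(adj, feats, W_self, W_nbr, k, rng):
    """One mean-aggregator layer:
    h_v = ReLU(W_self @ x_v + W_nbr @ mean(x_u for sampled neighbors u)),
    followed by L2 normalization of each embedding."""
    n = feats.shape[0]
    out = np.zeros((n, W_self.shape[0]))
    for v in range(n):
        nbr_feats = np.stack(
            [feats[u] for u in sample_neighbors(adj, v, k, rng)]
        )
        h = W_self @ feats[v] + W_nbr @ nbr_feats.mean(axis=0)
        out[v] = np.maximum(h, 0.0)  # ReLU
    norms = np.linalg.norm(out, axis=1, keepdims=True)
    return out / np.maximum(norms, 1e-12)
```

Because the layer is a function of node features and sampled neighborhoods rather than a per-node lookup table, it can be applied to nodes (or entire graphs) never seen during training, which is the inductive property the abstract emphasizes.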
Citations
Journal Article
Multi-Stream Attention-Aware Graph Convolution Network for Video Salient Object Detection
TL;DR: In this article, a multi-stream attention-aware graph convolutional network (GCN) is proposed for video salient object detection: a superpixel-level spatio-temporal graph is first constructed over multiple frame pairs by exploiting the motion cues implied in those frame pairs.
Proceedings Article
TP-GNN: A Graph Neural Network Framework for Tier Partitioning in Monolithic 3D ICs
TL;DR: TP-GNN, an unsupervised graph-learning-based tier partitioning framework, is proposed; it significantly improves the quality of results (QoR) of state-of-the-art 3D implementation flows.
Posted Content
Does Unsupervised Architecture Representation Learning Help Neural Architecture Search?
TL;DR: This paper shows that pre-training architecture representations using only neural architectures, without their accuracies as labels, significantly improves downstream architecture search efficiency, and visualizes how unsupervised architecture representation learning encourages neural architectures with similar connections and operators to cluster together.
Journal Article
GraphAIR: Graph representation learning with neighborhood aggregation and interaction
TL;DR: This paper theoretically proves that the coefficients of the neighborhood-interaction terms are relatively small in current models, which explains why GCNs barely outperform linear models, and presents a novel GraphAIR framework that models neighborhood interaction in addition to neighborhood aggregation.
Posted Content
GraphCL: Contrastive Self-Supervised Learning of Graph Representations
TL;DR: This work uses graph neural networks to produce two representations of the same node and leverages a contrastive learning loss to maximize agreement between them, demonstrating that this approach significantly outperforms the state of the art in unsupervised learning on a number of node-classification benchmarks.
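The "maximize agreement between two views" objective described in the GraphCL TL;DR can be sketched with a generic NT-Xent-style contrastive loss. The code below is an illustrative NumPy version under that assumption; the function name, temperature value, and exact negative-sampling scheme are not taken from GraphCL itself.

```python
import numpy as np

def contrastive_agreement_loss(z1, z2, temperature=0.5):
    """NT-Xent-style loss over two views of n nodes.

    z1, z2: (n, d) L2-normalized embeddings of the same n nodes under
    two augmentations. Each node's two views form the positive pair;
    all other rows serve as negatives."""
    n = z1.shape[0]
    z = np.concatenate([z1, z2], axis=0)        # (2n, d)
    sim = z @ z.T / temperature                 # scaled cosine similarity
    np.fill_diagonal(sim, -np.inf)              # exclude self-similarity
    # the positive for row i is its other view: row i+n (or i-n)
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(0, n)])
    log_denom = np.log(np.exp(sim).sum(axis=1))
    loss = -(sim[np.arange(2 * n), pos] - log_denom)
    return loss.mean()
```

Minimizing this loss pulls the two representations of each node together while pushing apart representations of different nodes, which is the "agreement" the TL;DR refers to.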