Open Access Proceedings Article
Inductive Representation Learning on Large Graphs
William L. Hamilton, Zhitao Ying, Jure Leskovec +2 more
Vol. 30, pp. 1024-1034
TLDR
GraphSAGE, as described in this paper, is a general inductive framework that leverages node feature information (e.g., text attributes) to efficiently generate node embeddings, instead of training an individual embedding for each node.

Abstract
Low-dimensional embeddings of nodes in large graphs have proved extremely useful in a variety of prediction tasks, from content recommendation to identifying protein functions. However, most existing approaches require that all nodes in the graph are present during training of the embeddings; these previous approaches are inherently transductive and do not naturally generalize to unseen nodes. Here we present GraphSAGE, a general, inductive framework that leverages node feature information (e.g., text attributes) to efficiently generate node embeddings. Instead of training individual embeddings for each node, we learn a function that generates embeddings by sampling and aggregating features from a node's local neighborhood. Our algorithm outperforms strong baselines on three inductive node-classification benchmarks: we classify the category of unseen nodes in evolving information graphs based on citation and Reddit post data, and we show that our algorithm generalizes to completely unseen graphs using a multi-graph dataset of protein-protein interactions.
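The sample-and-aggregate idea in the abstract can be illustrated with a minimal NumPy sketch of a single layer using a mean aggregator: each node uniformly samples a fixed number of neighbors, averages their features, combines the result with its own features through learned weight matrices, and L2-normalizes the output. This is only an illustrative sketch, not the authors' implementation; the function and parameter names (`graphsage_mean_layer`, `W_self`, `W_neigh`, `k`) are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_neighbors(adj, node, k):
    """Uniformly sample k neighbors of `node` with replacement."""
    return rng.choice(adj[node], size=k, replace=True)

def graphsage_mean_layer(adj, features, W_self, W_neigh, k=5):
    """One sample-and-aggregate step with a mean aggregator.

    adj      : dict mapping node id -> list of neighbor ids
    features : (num_nodes, d_in) input feature matrix
    W_self   : (d_in, d_out) weights applied to the node's own features
    W_neigh  : (d_in, d_out) weights applied to the aggregated neighborhood
    """
    num_nodes = features.shape[0]
    out = np.zeros((num_nodes, W_self.shape[1]))
    for node in range(num_nodes):
        sampled = sample_neighbors(adj, node, k)
        neigh_mean = features[sampled].mean(axis=0)     # aggregate step
        h = features[node] @ W_self + neigh_mean @ W_neigh
        out[node] = np.maximum(h, 0.0)                  # ReLU nonlinearity
    # L2-normalize each embedding so all outputs live on the unit sphere
    norms = np.linalg.norm(out, axis=1, keepdims=True)
    return out / np.clip(norms, 1e-12, None)
```

Because the layer is a function of node features rather than a per-node lookup table, it can be applied to nodes (or whole graphs) never seen during training, which is what makes the approach inductive rather than transductive.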
Citations
Journal Article
Learning on Attribute-Missing Graphs
TL;DR: This article makes a shared-latent-space assumption on graphs and develops a novel distribution-matching-based GNN, the structure-attribute transformer (SAT), for attribute-missing graphs, which outperforms other methods on both link prediction and node attribute completion tasks.
Proceedings Article
Inductive representation learning on temporal graphs
TL;DR: In this paper, the authors propose the temporal graph attention (TGAT) layer, which aggregates temporal-topological neighborhood features and learns time-feature interactions, so that embeddings can be inductively inferred for both new and observed nodes whenever the graph evolves.
Journal Article
Hierarchical Representation Learning in Graph Neural Networks With Node Decimation Pooling
TL;DR: This work proposes Node Decimation Pooling (NDP), a pooling operator for GNNs that generates coarser graphs while preserving the overall graph topology, and shows that many edges can be removed without significantly altering the graph structure.
Proceedings Article
BoostGCN: A Framework for Optimizing GCN Inference on FPGA
TL;DR: BoostGCN proposes a hardware-aware Partition-Centric Feature Aggregation (PCFA) scheme that leverages 3-D partitioning with the vertex-centric computing paradigm.
Proceedings Article
Enhanced Graph Learning for Collaborative Filtering via Mutual Information Maximization
TL;DR: This article proposes an enhanced graph learning network (EGLN) for collaborative filtering via mutual information maximization, which learns an improved graph structure for CF and uses a local-global consistency optimization function to capture the global properties of the adaptive graph learning process.