Open Access · Proceedings Article
Inductive Representation Learning on Large Graphs
William L. Hamilton, Zhitao Ying, Jure Leskovec
Advances in Neural Information Processing Systems, Vol. 30, pp. 1024-1034
TL;DR
GraphSAGE, as mentioned in this paper, is a general, inductive framework that leverages node feature information (e.g., text attributes) to efficiently generate node embeddings instead of training individual embeddings for each node.

Abstract:
Low-dimensional embeddings of nodes in large graphs have proved extremely useful in a variety of prediction tasks, from content recommendation to identifying protein functions. However, most existing approaches require that all nodes in the graph are present during training of the embeddings; these previous approaches are inherently transductive and do not naturally generalize to unseen nodes. Here we present GraphSAGE, a general, inductive framework that leverages node feature information (e.g., text attributes) to efficiently generate node embeddings. Instead of training individual embeddings for each node, we learn a function that generates embeddings by sampling and aggregating features from a node's local neighborhood. Our algorithm outperforms strong baselines on three inductive node-classification benchmarks: we classify the category of unseen nodes in evolving information graphs based on citation and Reddit post data, and we show that our algorithm generalizes to completely unseen graphs using a multi-graph dataset of protein-protein interactions.
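The core idea in the abstract (learning a function that embeds a node by sampling and aggregating its neighbors' features, rather than storing a trained embedding per node) can be sketched as a single layer with a mean aggregator. This is an illustrative sketch, not the paper's reference implementation; the function name, weight shapes, and sampling details here are assumptions.

```python
import numpy as np

def graphsage_mean_layer(H, adj, W, num_samples=5, rng=None):
    """One GraphSAGE-style layer with a mean aggregator (sketch).

    H:   (N, d) input node features/embeddings
    adj: dict mapping node id -> list of neighbor ids
    W:   (2d, d_out) weights applied to [self || mean(sampled neighbors)]
    """
    rng = rng or np.random.default_rng(0)
    N, d = H.shape
    out = np.zeros((N, W.shape[1]))
    for v in range(N):
        neigh = adj.get(v, [])
        if neigh:
            # sample a fixed-size neighborhood, then average its features
            k = min(num_samples, len(neigh))
            sampled = rng.choice(neigh, size=k, replace=False)
            h_neigh = H[sampled].mean(axis=0)
        else:
            h_neigh = np.zeros(d)
        z = np.concatenate([H[v], h_neigh]) @ W
        z = np.maximum(z, 0.0)                    # ReLU nonlinearity
        norm = np.linalg.norm(z)
        out[v] = z / norm if norm > 0 else z      # L2-normalize the output
    return out
```

Because the layer only needs a node's features and those of its sampled neighbors, it applies unchanged to nodes (or whole graphs) never seen during training, which is what makes the approach inductive.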
Citations
Journal Article · DOI
Graph representation learning for road type classification
TL;DR: In this article, a learning-based approach to graph representations of road networks is presented, employing state-of-the-art graph convolutional neural networks. The authors show that highly representative edge features can still be integrated into such networks by applying a line-graph transformation.
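The line-graph transformation mentioned above turns each edge of the original graph into a node, connecting two such nodes whenever the corresponding edges share an endpoint; this lets edge features be treated as node features. A minimal sketch (the function name and representation are assumptions, not the paper's code):

```python
def line_graph(edges):
    """Line-graph transformation (sketch): each original edge becomes a
    node; two nodes are adjacent iff the edges share an endpoint."""
    nodes = list(edges)
    adj = {e: [] for e in nodes}
    for i, (a, b) in enumerate(nodes):
        for j in range(i + 1, len(nodes)):
            c, d = nodes[j]
            if {a, b} & {c, d}:   # edges share at least one endpoint
                adj[(a, b)].append((c, d))
                adj[(c, d)].append((a, b))
    return adj
```

For a path 0-1-2-3, the edges (0,1) and (1,2) become adjacent line-graph nodes (they share node 1), while (0,1) and (2,3) do not.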
Book Chapter · DOI
TemporalGAT: Attention-Based Dynamic Graph Representation Learning
Ahmed Fathy, Kan Li +1 more
TL;DR: A deep attention model is proposed to learn low-dimensional feature representations that preserve the graph structure and features across a series of graph snapshots over time; it is competitive against various state-of-the-art methods.
Journal Article · DOI
Grain: Improving Data Efficiency of Graph Neural Networks via Diversified Influence Maximization
TL;DR: Grain, as mentioned in this paper, is an efficient framework that connects data selection in GNNs with social influence maximization by exploiting the common patterns of GNNs; it introduces a novel feature-propagation concept, a diversified influence-maximization objective with novel influence and diversity functions, and a greedy algorithm with an approximation guarantee.
Posted Content
Tripartite Heterogeneous Graph Propagation for Large-scale Social Recommendation
Kyung-Min Kim, Dong-Hyun Kwak, Hanock Kwak, Young-Jin Park, Sangkwon Sim, Jae-Han Cho, Min-Kyu Kim, Jihun Kwon, Nako Sung, Jung-Woo Ha +9 more
TL;DR: Heterogeneous Graph Propagation (HGP), as mentioned in this paper, uses a group-user-item tripartite graph as input to reduce the number of edges and the complexity of paths in a social graph.
Posted Content
Understanding Negative Sampling in Graph Representation Learning
TL;DR: In this paper, the role of negative sampling is analyzed from the perspective of both objective and risk, and it is shown that negative sampling is as important as positive sampling in determining both the optimization objective and the resulting variance.
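For context, negative sampling in graph representation learning draws non-linked node pairs to contrast against observed edges. A minimal sketch of the simplest (uniform) variant, which the paper takes as a baseline for its analysis; the function name and signature are assumptions, not the paper's code:

```python
import random

def sample_negatives(node, adj, num_nodes, k, rng=None):
    """Uniform negative sampling (sketch): draw k nodes that are neither
    `node` itself nor one of its neighbors."""
    rng = rng or random.Random(0)
    forbidden = set(adj.get(node, [])) | {node}
    negatives = []
    while len(negatives) < k:
        cand = rng.randrange(num_nodes)
        if cand not in forbidden:
            negatives.append(cand)
    return negatives
```

The paper's point is that this choice of negative distribution is not a detail: it shapes the optimization objective and the variance of the estimator just as much as the positive-sampling distribution does.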