Open Access Proceedings Article

Inductive Representation Learning on Large Graphs

TLDR
GraphSAGE is a general, inductive framework that leverages node feature information (e.g., text attributes) to efficiently generate node embeddings, learning an embedding function instead of training an individual embedding for each node.
Abstract
Low-dimensional embeddings of nodes in large graphs have proved extremely useful in a variety of prediction tasks, from content recommendation to identifying protein functions. However, most existing approaches require that all nodes in the graph are present during training of the embeddings; these previous approaches are inherently transductive and do not naturally generalize to unseen nodes. Here we present GraphSAGE, a general, inductive framework that leverages node feature information (e.g., text attributes) to efficiently generate node embeddings. Instead of training individual embeddings for each node, we learn a function that generates embeddings by sampling and aggregating features from a node's local neighborhood. Our algorithm outperforms strong baselines on three inductive node-classification benchmarks: we classify the category of unseen nodes in evolving information graphs based on citation and Reddit post data, and we show that our algorithm generalizes to completely unseen graphs using a multi-graph dataset of protein-protein interactions.
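The core idea of the abstract — learning a function that embeds a node by sampling and aggregating features from its local neighborhood — can be illustrated with a minimal sketch. This is not the authors' implementation; it assumes a single layer with a mean aggregator (one of the aggregators discussed in the paper), ReLU, and L2 normalization, and all names (`sage_layer`, `W_self`, `W_neigh`) are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def sage_layer(features, adj_list, W_self, W_neigh, num_samples=5):
    """One GraphSAGE-style layer: for each node, sample a fixed-size
    set of neighbors, mean-aggregate their features, combine the result
    with the node's own features, and apply a nonlinearity."""
    out = []
    for v, neighbors in enumerate(adj_list):
        if neighbors:
            k = min(num_samples, len(neighbors))
            sampled = rng.choice(neighbors, size=k, replace=False)
            h_neigh = features[sampled].mean(axis=0)
        else:
            h_neigh = np.zeros(features.shape[1])
        h = np.maximum(0.0, W_self @ features[v] + W_neigh @ h_neigh)  # ReLU
        out.append(h / (np.linalg.norm(h) + 1e-8))  # L2-normalize the embedding
    return np.stack(out)

# Toy graph: 4 nodes with 3-dimensional input features.
features = rng.normal(size=(4, 3))
adj_list = [[1, 2], [0, 3], [0], [1]]   # neighbor lists per node
W_self = rng.normal(size=(2, 3))        # learned in practice; random here
W_neigh = rng.normal(size=(2, 3))
embeddings = sage_layer(features, adj_list, W_self, W_neigh)
print(embeddings.shape)  # (4, 2)
```

Because the layer is a function of features and sampled neighborhoods rather than a per-node lookup table, it applies unchanged to nodes (or whole graphs) never seen during training — which is what makes the approach inductive rather than transductive.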


Citations
Proceedings Article (DOI)

Session-based Social Recommendation via Dynamic Graph Attention Networks

TL;DR: The authors propose a recommender system for online communities based on a dynamic graph-attention neural network, which dynamically infers influencers from users' current interests.
Journal Article (DOI)

Graph Learning: A Survey

TL;DR: This paper provides a comprehensive overview of the state of the art in graph learning, reviewing four categories of existing methods: graph signal processing, matrix factorization, random walk, and deep learning.
Proceedings Article (DOI)

Fi-GNN: Modeling Feature Interactions via Graph Neural Networks for CTR Prediction

TL;DR: Fi-GNN represents multi-field features intuitively in a graph structure, where each node corresponds to a feature field and different fields interact through edges.
Posted Content

Learning to Drop: Robust Graph Neural Network via Topological Denoising

TL;DR: This paper proposes PTDNet, a parameterized topological denoising network that improves the robustness and generalization of GNNs by learning to drop task-irrelevant edges; it can be used as a key component in GNN models to improve their performance on various tasks.
Journal Article (DOI)

Knowledge Graph Completion: A Review

TL;DR: This review introduces different knowledge graph completion (KGC) techniques, including their advantages, disadvantages, and applicable fields, and discusses the main challenges facing KGC.