Open Access Proceedings Article

Inductive Representation Learning on Large Graphs

TLDR
GraphSAGE is a general, inductive framework that leverages node feature information (e.g., text attributes) to efficiently generate node embeddings: instead of training an individual embedding for each node, it learns a function that samples and aggregates features from a node's local neighborhood.
Abstract
Low-dimensional embeddings of nodes in large graphs have proved extremely useful in a variety of prediction tasks, from content recommendation to identifying protein functions. However, most existing approaches require that all nodes in the graph are present during training of the embeddings; these previous approaches are inherently transductive and do not naturally generalize to unseen nodes. Here we present GraphSAGE, a general, inductive framework that leverages node feature information (e.g., text attributes) to efficiently generate node embeddings. Instead of training individual embeddings for each node, we learn a function that generates embeddings by sampling and aggregating features from a node's local neighborhood. Our algorithm outperforms strong baselines on three inductive node-classification benchmarks: we classify the category of unseen nodes in evolving information graphs based on citation and Reddit post data, and we show that our algorithm generalizes to completely unseen graphs using a multi-graph dataset of protein-protein interactions.
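To make the sample-and-aggregate idea concrete, below is a minimal sketch of a single GraphSAGE-style layer with a mean aggregator in plain NumPy. The function and variable names (sample_neighbors, graphsage_mean_layer, W_self, W_neigh) are illustrative assumptions, not the authors' reference implementation, which supports additional aggregators (LSTM, pooling) and stacks multiple layers.

```python
# Minimal sketch of one GraphSAGE-style layer with a mean aggregator
# (illustrative assumption, not the authors' reference implementation).
import numpy as np

rng = np.random.default_rng(0)

def sample_neighbors(adj_list, node, num_samples):
    """Uniformly sample a fixed-size neighborhood (with replacement if needed)."""
    neighbors = adj_list[node]
    if len(neighbors) == 0:
        return [node]  # fall back to a self-loop for isolated nodes
    replace = len(neighbors) < num_samples
    return rng.choice(neighbors, size=num_samples, replace=replace).tolist()

def graphsage_mean_layer(features, adj_list, W_self, W_neigh, num_samples=10):
    """h_v = ReLU(W_self @ x_v + W_neigh @ mean(x_u for sampled neighbors u))."""
    out = np.empty((features.shape[0], W_self.shape[0]))
    for v in range(features.shape[0]):
        neigh = sample_neighbors(adj_list, v, num_samples)
        neigh_mean = features[neigh].mean(axis=0)
        h = W_self @ features[v] + W_neigh @ neigh_mean
        out[v] = np.maximum(h, 0.0)  # ReLU
    # L2-normalize the embeddings, as described in the paper
    norms = np.linalg.norm(out, axis=1, keepdims=True) + 1e-12
    return out / norms

# Toy usage: 4 nodes, 3-dimensional input features, 2-dimensional embeddings.
features = rng.normal(size=(4, 3))
adj_list = {0: [1, 2], 1: [0, 3], 2: [0], 3: [1]}
W_self = rng.normal(size=(2, 3))
W_neigh = rng.normal(size=(2, 3))
embeddings = graphsage_mean_layer(features, adj_list, W_self, W_neigh, num_samples=2)
print(embeddings.shape)  # (4, 2)
```

Because the layer only looks up features of sampled neighbors, it can produce embeddings for nodes that were never seen during training, which is what makes the approach inductive rather than transductive.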



Citations
Journal Article

Hierarchical graph attention networks for semi-supervised node classification

TL;DR: A hierarchical graph attention network (HGAT) for semi-supervised node classification that uses a hierarchical mechanism to learn node features; it captures global structure information by enlarging the receptive field and enables effective transfer of node features.
Proceedings Article

Lane-Attention: Predicting Vehicles’ Moving Trajectories by Learning Their Attention Over Lanes

TL;DR: In this paper, a graph neural network (GNN) combined with attention mechanisms and LSTM networks predicts a vehicle's future trajectory; the model learns the relation between a driver's intention and the vehicle's changing position relative to road infrastructure and uses it to guide the prediction.
Journal Article

struc2gauss: Structural role preserving network embedding via Gaussian embedding

TL;DR: A new network embedding (NE) framework, struc2gauss, learns node representations in the space of Gaussian distributions and performs embedding based on global structural information; it outperforms other methods on structure-based clustering and classification tasks and additionally captures the uncertainty of node representations.
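To make the idea of embedding nodes as distributions concrete, the sketch below represents each node as a diagonal Gaussian (a mean vector plus a per-dimension variance) and compares two nodes with the closed-form KL divergence between Gaussians. The names (GaussianEmbedding, kl_divergence) are illustrative assumptions; this is only the representation such methods build on, not the struc2gauss training procedure, which fits these parameters to global structural similarity.

```python
# Illustrative sketch: nodes as diagonal Gaussians compared via KL divergence.
# Assumed names; not the struc2gauss training procedure itself.
import numpy as np

class GaussianEmbedding:
    """Each node i has a mean vector mu[i] and a diagonal variance sigma2[i]."""
    def __init__(self, num_nodes, dim, rng=None):
        rng = rng or np.random.default_rng(0)
        self.mu = rng.normal(scale=0.1, size=(num_nodes, dim))
        self.sigma2 = np.ones((num_nodes, dim))  # start from unit variance

    def kl_divergence(self, i, j):
        """KL(N_i || N_j) for diagonal Gaussians, in closed form."""
        mu_i, mu_j = self.mu[i], self.mu[j]
        s_i, s_j = self.sigma2[i], self.sigma2[j]
        return 0.5 * np.sum(
            np.log(s_j / s_i) + (s_i + (mu_i - mu_j) ** 2) / s_j - 1.0
        )

emb = GaussianEmbedding(num_nodes=5, dim=4)
print(emb.kl_divergence(0, 1))  # low divergence suggests similar structural roles
```

Representing a node as a distribution rather than a point lets the variance act as a measure of uncertainty about that node's structural role, which is the extra information the TL;DR above refers to.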
Posted Content

Basket Recommendation with Multi-Intent Translation Graph Neural Network

TL;DR: A new framework, the Multi-Intent Translation Graph Neural Network (MITGNN), models T intents as tail entities translated from the corresponding basket embedding via T relation vectors and propagates the multiple intents across a basket graph defined by the authors.
Proceedings Article

Medical Entity Disambiguation Using Graph Neural Networks

TL;DR: Zhang et al. introduce ED-GNN, built on three representative graph neural networks (GraphSAGE, R-GCN, and MAGNN), for medical entity disambiguation.