Open Access · Proceedings Article

Inductive Representation Learning on Large Graphs

TL;DR
GraphSAGE is a general, inductive framework that leverages node feature information (e.g., text attributes) to efficiently generate node embeddings: instead of training an individual embedding for each node, it learns a function that generates embeddings by sampling and aggregating features from a node's local neighborhood.
Abstract
Low-dimensional embeddings of nodes in large graphs have proved extremely useful in a variety of prediction tasks, from content recommendation to identifying protein functions. However, most existing approaches require that all nodes in the graph are present during training of the embeddings; these previous approaches are inherently transductive and do not naturally generalize to unseen nodes. Here we present GraphSAGE, a general, inductive framework that leverages node feature information (e.g., text attributes) to efficiently generate node embeddings. Instead of training individual embeddings for each node, we learn a function that generates embeddings by sampling and aggregating features from a node's local neighborhood. Our algorithm outperforms strong baselines on three inductive node-classification benchmarks: we classify the category of unseen nodes in evolving information graphs based on citation and Reddit post data, and we show that our algorithm generalizes to completely unseen graphs using a multi-graph dataset of protein-protein interactions.
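The sample-and-aggregate idea described in the abstract can be sketched in a few lines of NumPy. This is an illustrative toy only (one layer, a mean aggregator, untrained random weights), not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_neighbors(adj, node, k, rng):
    """Uniformly sample (with replacement) k neighbors of `node`."""
    return rng.choice(adj[node], size=k, replace=True)

def sage_mean_layer(features, adj, W_self, W_neigh, k, rng):
    """One GraphSAGE-style layer with a mean aggregator:
    h_v = ReLU(W_self @ x_v + W_neigh @ mean(x_u for sampled u in N(v)))."""
    out = []
    for v in range(len(adj)):
        sampled = sample_neighbors(adj, v, k, rng)
        agg = features[sampled].mean(axis=0)          # aggregate neighbor features
        h = W_self @ features[v] + W_neigh @ agg      # combine self and neighborhood
        out.append(np.maximum(h, 0.0))                # ReLU
    H = np.stack(out)
    # L2-normalize each embedding
    return H / (np.linalg.norm(H, axis=1, keepdims=True) + 1e-12)

# Toy graph: 4 nodes given as adjacency lists, 3-dim input features
adj = [[1, 2], [0, 2, 3], [0, 1], [1]]
X = rng.normal(size=(4, 3))
W_self = rng.normal(size=(2, 3))
W_neigh = rng.normal(size=(2, 3))
H = sage_mean_layer(X, adj, W_self, W_neigh, k=2, rng=rng)
print(H.shape)  # (4, 2)
```

Because the layer is a function of features and neighborhoods rather than a per-node lookup table, it can be applied to nodes (or whole graphs) never seen during training, which is what makes the approach inductive.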



Citations
Posted Content

Network Representation Learning: Consolidation and Renewed Bearing

TL;DR: This systematic experimental survey benchmarks several popular network representation learning methods on two key tasks, link prediction and node classification, and finds that MNMF, a community-preserving embedding method, is the most competitive method for link prediction.
Posted Content

Revisiting Graph Convolutional Network on Semi-Supervised Node Classification from an Optimization Perspective.

TL;DR: A universal theoretical framework for GCN is established from an optimization perspective, and a novel convolutional kernel named GCN+ is derived that has fewer parameters while inherently relieving over-smoothing.
Posted Content

Understanding Image Retrieval Re-Ranking: A Graph Neural Network Perspective.

TL;DR: This paper revisits re-ranking and demonstrates that it can be reformulated as a high-parallelism Graph Neural Network (GNN) function: the first phase amounts to building the k-nearest-neighbor graph, while the second phase can be viewed as spreading messages within that graph.
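The two phases named in the summary can be sketched with a toy example (illustrative names and parameters, not the paper's code): phase one builds a binary k-NN graph from cosine similarity, phase two spreads the initial retrieval scores along its edges:

```python
import numpy as np

def knn_graph(feats, k):
    """Phase 1: binary k-nearest-neighbor graph from cosine similarity."""
    f = feats / (np.linalg.norm(feats, axis=1, keepdims=True) + 1e-12)
    sim = f @ f.T
    np.fill_diagonal(sim, -np.inf)         # a node is not its own neighbor
    idx = np.argsort(-sim, axis=1)[:, :k]  # top-k most similar nodes per row
    A = np.zeros_like(sim)
    A[np.arange(len(feats))[:, None], idx] = 1.0
    return A

def propagate(A, scores, alpha=0.5, steps=2):
    """Phase 2: spread retrieval scores along the graph edges."""
    P = A / A.sum(axis=1, keepdims=True)   # row-normalized transition matrix
    for _ in range(steps):
        scores = (1 - alpha) * scores + alpha * (P @ scores)
    return scores

rng = np.random.default_rng(0)
feats = rng.normal(size=(6, 4))   # toy gallery features
scores = rng.random(6)            # initial query-to-gallery similarities
A = knn_graph(feats, k=2)
refined = propagate(A, scores)
print(A.sum(axis=1))  # each row has exactly k = 2 neighbors
```

Each propagation step is a convex combination of a node's own score and its neighbors' average, so refined scores stay within the range of the initial ones while agreeing more with the local neighborhood.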
Journal Article (DOI)

Robust graph convolutional networks with directional graph adversarial training

TL;DR: A graph-specific adversarial training (AT) method, Directional Graph Adversarial Training (DGAT), incorporates the graph structure into the adversarial process, automatically identifies the impact of perturbations from neighbor nodes, and introduces an adversarial regularizer to defend against worst-case perturbations.
Posted Content

Constant Curvature Graph Convolutional Networks

TL;DR: The authors propose mathematically grounded generalizations of graph convolutional networks (GCNs) to (products of) constant-curvature spaces by introducing a unified formalism that can interpolate smoothly between all geometries of constant curvature.