Open Access Proceedings Article

Inductive Representation Learning on Large Graphs

TLDR
GraphSAGE is a general, inductive framework that leverages node feature information (e.g., text attributes) to efficiently generate node embeddings, rather than training an individual embedding for each node.
Abstract
Low-dimensional embeddings of nodes in large graphs have proved extremely useful in a variety of prediction tasks, from content recommendation to identifying protein functions. However, most existing approaches require that all nodes in the graph are present during training of the embeddings; these previous approaches are inherently transductive and do not naturally generalize to unseen nodes. Here we present GraphSAGE, a general, inductive framework that leverages node feature information (e.g., text attributes) to efficiently generate node embeddings. Instead of training individual embeddings for each node, we learn a function that generates embeddings by sampling and aggregating features from a node's local neighborhood. Our algorithm outperforms strong baselines on three inductive node-classification benchmarks: we classify the category of unseen nodes in evolving information graphs based on citation and Reddit post data, and we show that our algorithm generalizes to completely unseen graphs using a multi-graph dataset of protein-protein interactions.
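The abstract's core idea, generating an embedding by sampling a node's neighbors and aggregating their features, can be illustrated with a minimal sketch. The snippet below assumes a mean aggregator and uses illustrative function names, weight shapes, and a uniform sampling scheme; it is not the authors' reference implementation.

```python
# A minimal sketch of the "sample and aggregate" idea described in the abstract,
# using a mean aggregator. Names, shapes, and the sampling scheme are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def sample_neighbors(adj_list, node, num_samples):
    """Uniformly sample a fixed-size neighbor set (with replacement if needed)."""
    neighbors = adj_list[node]
    if len(neighbors) == 0:
        return [node]  # fall back to a self-loop for isolated nodes
    replace = len(neighbors) < num_samples
    return rng.choice(neighbors, size=num_samples, replace=replace).tolist()

def graphsage_mean_layer(features, adj_list, W_self, W_neigh, num_samples=10):
    """One sample-and-aggregate layer: combine each node's own features with the
    mean of a sampled neighborhood, apply a nonlinearity, and normalize."""
    out = np.empty((features.shape[0], W_self.shape[1]))
    for v in range(features.shape[0]):
        sampled = sample_neighbors(adj_list, v, num_samples)
        neigh_mean = features[sampled].mean(axis=0)
        h = features[v] @ W_self + neigh_mean @ W_neigh
        h = np.maximum(h, 0.0)                      # ReLU
        out[v] = h / (np.linalg.norm(h) + 1e-12)    # unit-normalize the embedding
    return out

# Tiny usage example on a 4-node toy graph with 3-dimensional node features.
features = rng.normal(size=(4, 3))
adj_list = {0: [1, 2], 1: [0, 3], 2: [0], 3: [1]}
W_self = rng.normal(size=(3, 8))
W_neigh = rng.normal(size=(3, 8))
embeddings = graphsage_mean_layer(features, adj_list, W_self, W_neigh, num_samples=2)
print(embeddings.shape)  # (4, 8)
```

Because the aggregation function is shared across nodes, the same learned weights can produce embeddings for nodes (or entire graphs) unseen during training, which is what makes the approach inductive rather than transductive.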



Citations
Posted Content

Data Augmentation for Graph Neural Networks

TL;DR: This work shows that neural edge predictors can effectively encode class-homophilic structure to promote intra-class edges and demote inter-class edges in a given graph structure, and introduces the GAug graph data augmentation framework, which leverages these insights to improve performance on GNN-based node classification via edge prediction.
Proceedings ArticleDOI

Signed Graph Convolutional Networks

TL;DR: Proposes a dedicated and principled approach that utilizes balance theory to correctly aggregate and propagate information across the layers of a signed GCN model, and empirically compares the proposed signed GCNs against state-of-the-art baselines for learning node representations in signed networks.
Proceedings ArticleDOI

Disentangled Graph Collaborative Filtering

TL;DR: A new model, Disentangled Graph Collaborative Filtering (DGCF), is devised to disentangle user intents and yield disentangled representations by modeling a distribution over intents for each user-item interaction and by iteratively refining the intent-aware interaction graphs and representations.
Posted Content

GPT-GNN: Generative Pre-Training of Graph Neural Networks

TL;DR: The GPT-GNN framework initializes GNNs by generative pre-training: it introduces a self-supervised attributed graph generation task to pre-train a GNN so that it can capture the structural and semantic properties of the graph.
Posted Content

Machine Learning on Graphs: A Model and Comprehensive Taxonomy

TL;DR: A comprehensive taxonomy of representation learning methods for graph-structured data is proposed, aiming to unify several disparate bodies of work, provide a solid foundation for understanding the intuition behind these methods, and enable future research in the area.