Open Access Proceedings Article

Inductive Representation Learning on Large Graphs

TLDR
GraphSAGE is a general, inductive framework that leverages node feature information (e.g., text attributes) to efficiently generate node embeddings; rather than training an individual embedding for each node, it learns a function that generates embeddings by sampling and aggregating features from a node's local neighborhood.
Abstract
Low-dimensional embeddings of nodes in large graphs have proved extremely useful in a variety of prediction tasks, from content recommendation to identifying protein functions. However, most existing approaches require that all nodes in the graph are present during training of the embeddings; these previous approaches are inherently transductive and do not naturally generalize to unseen nodes. Here we present GraphSAGE, a general, inductive framework that leverages node feature information (e.g., text attributes) to efficiently generate node embeddings. Instead of training individual embeddings for each node, we learn a function that generates embeddings by sampling and aggregating features from a node's local neighborhood. Our algorithm outperforms strong baselines on three inductive node-classification benchmarks: we classify the category of unseen nodes in evolving information graphs based on citation and Reddit post data, and we show that our algorithm generalizes to completely unseen graphs using a multi-graph dataset of protein-protein interactions.
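To make the sample-and-aggregate idea from the abstract concrete, here is a minimal, illustrative sketch in Python/NumPy. The toy graph, the mean aggregator, the sampling budget, and all names (sample_neighbors, graphsage_layer, W_self, W_neigh) are assumptions made for illustration, not the authors' reference implementation.

```python
# Illustrative sketch of GraphSAGE-style "sample and aggregate" (mean aggregator).
# Toy graph and randomly initialized weights; not the authors' implementation.
import numpy as np

rng = np.random.default_rng(0)

# Toy graph: 5 nodes as an adjacency list, each node with a 4-dim feature vector.
adj = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 4], 3: [1], 4: [2]}
features = rng.normal(size=(5, 4))

# One layer's weights: one matrix for the node's own features, one for the
# aggregated neighbor features (randomly initialized here; learned in practice).
W_self = rng.normal(size=(4, 8))
W_neigh = rng.normal(size=(4, 8))

def sample_neighbors(node, k):
    """Uniformly sample up to k distinct neighbors of `node`."""
    neigh = adj[node]
    return rng.choice(neigh, size=min(k, len(neigh)), replace=False)

def graphsage_layer(node, k=2):
    """One sample-and-aggregate step: mean of sampled neighbor features,
    combined with the node's own features, then ReLU and normalization."""
    sampled = sample_neighbors(node, k)
    neigh_mean = features[sampled].mean(axis=0)
    h = features[node] @ W_self + neigh_mean @ W_neigh
    h = np.maximum(h, 0.0)                      # ReLU non-linearity
    return h / (np.linalg.norm(h) + 1e-8)       # normalize the embedding

# Because the embedding is a function of (sampled) neighborhood features rather
# than a per-node lookup table, it can be applied to nodes unseen during training.
for v in adj:
    print(v, np.round(graphsage_layer(v), 3))
```

Stacking several such layers lets each node draw information from a larger neighborhood, while the sampling budget keeps the per-node cost bounded on large graphs.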



Citations
Posted Content

Approximation Ratios of Graph Neural Networks for Combinatorial Problems

TL;DR: This paper is the first to elucidate the approximation ratios of GNNs for combinatorial problems; it proves that adding coloring or weak-coloring to each node's features improves these approximation ratios, indicating that preprocessing and feature engineering can theoretically strengthen model capabilities.
Posted Content

Graph Meta Learning via Local Subgraphs

TL;DR: G-Meta, a novel meta-learning algorithm for graphs that uses local subgraphs to transfer subgraph-specific information and learn transferable knowledge faster via meta gradients, outperforms existing methods by up to 16.3%.
Posted Content

PU-GCN: Point Cloud Upsampling using Graph Convolutional Networks

TL;DR: A novel model called NodeShuffle is proposed, which uses a Graph Convolutional Network (GCN) to better encode local point information from point neighborhoods, thereby improving on state-of-the-art point cloud upsampling methods.
Posted Content

A Unified View on Graph Neural Networks as Graph Signal Denoising

TL;DR: It is established mathematically that the aggregation processes in a group of representative GNN models including GCN, GAT, PPNP, and APPNP can be regarded as solving a graph denoising problem with a smoothness assumption.
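For context, the graph signal denoising problem referred to here is, in its standard form, a least-squares fit to a noisy signal plus a Laplacian smoothness penalty; the notation below (noisy signal S, recovered signal F, graph Laplacian L, weight c) is assumed for illustration rather than copied from the cited paper.

```latex
\[
\mathbf{F}^{\star} \;=\; \operatorname*{arg\,min}_{\mathbf{F}}
\;\lVert \mathbf{F} - \mathbf{S} \rVert_F^{2}
\;+\; c \,\operatorname{tr}\!\left(\mathbf{F}^{\top} \mathbf{L}\,\mathbf{F}\right)
\]
```

The smoothness term penalizes large differences between connected nodes, which is the sense in which the aggregation steps of these GNNs act as denoisers.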
Posted Content

An Attention-based Graph Neural Network for Heterogeneous Structural Learning

TL;DR: A novel Heterogeneous Graph Structural Attention Neural Network (HetSANN) is proposed to directly encode the structural information of a heterogeneous information network (HIN) without meta-paths, achieving more informative representations.