Open Access · Proceedings Article

Inductive Representation Learning on Large Graphs

TLDR
GraphSAGE is a general, inductive framework that leverages node feature information (e.g., text attributes) to efficiently generate node embeddings; instead of training an individual embedding for each node, it learns a function that generates embeddings by sampling and aggregating features from a node's local neighborhood.
Abstract
Low-dimensional embeddings of nodes in large graphs have proved extremely useful in a variety of prediction tasks, from content recommendation to identifying protein functions. However, most existing approaches require that all nodes in the graph are present during training of the embeddings; these previous approaches are inherently transductive and do not naturally generalize to unseen nodes. Here we present GraphSAGE, a general, inductive framework that leverages node feature information (e.g., text attributes) to efficiently generate node embeddings. Instead of training individual embeddings for each node, we learn a function that generates embeddings by sampling and aggregating features from a node's local neighborhood. Our algorithm outperforms strong baselines on three inductive node-classification benchmarks: we classify the category of unseen nodes in evolving information graphs based on citation and Reddit post data, and we show that our algorithm generalizes to completely unseen graphs using a multi-graph dataset of protein-protein interactions.
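The core idea in the abstract — sample a fixed-size neighborhood, then aggregate its features — can be sketched in plain Python. This is an illustrative mean-aggregator sketch, not the paper's implementation: the function names and dict-based graph representation are assumptions, and the full model additionally applies a learned weight matrix and nonlinearity to the concatenated vector.

```python
import random

def sample_neighbors(adj, node, k):
    """Uniformly sample up to k neighbors of `node` from adjacency dict `adj`."""
    neigh = adj[node]
    if len(neigh) <= k:
        return list(neigh)
    return random.sample(neigh, k)

def mean_aggregate(features, adj, node, k=5):
    """One sample-and-aggregate step: average the sampled neighbors'
    feature vectors and concatenate with the node's own features."""
    sampled = sample_neighbors(adj, node, k)
    dim = len(features[node])
    mean = [0.0] * dim
    for n in sampled:
        for i, v in enumerate(features[n]):
            mean[i] += v / len(sampled)
    # Concatenation; a trained model would next multiply by a weight
    # matrix and apply a nonlinearity to produce the new embedding.
    return features[node] + mean
```

Because the aggregation is a function of features rather than a per-node lookup table, it applies unchanged to nodes (or whole graphs) unseen at training time — the inductive property the abstract emphasizes.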



Citations
Posted Content

Graph Backdoor.

TL;DR: The effectiveness of GTA is demonstrated: for instance, on pre-trained, off-the-shelf GNNs, GTA attains over a 99.2% attack success rate with less than a 0.3% accuracy drop.
Posted Content

SimGNN: A Neural Network Approach to Fast Graph Similarity Computation

TL;DR: SimGNN combines a learnable embedding function that maps every graph into a vector, providing a global summary of the graph, with a pairwise node comparison method that supplements the graph-level embeddings with fine-grained node-level information.
Posted Content

Batch Virtual Adversarial Training for Graph Convolutional Networks.

TL;DR: Two algorithms, sample-based and optimization-based BVAT, are proposed to promote the smoothness of models on graph-structured data, either by finding virtual adversarial perturbations for a subset of nodes far apart from each other, or by generating perturbations for all nodes through an optimization process.
Proceedings ArticleDOI

Reinforced Negative Sampling over Knowledge Graph for Recommendation

TL;DR: A new negative sampling model, Knowledge Graph Policy Network (KGPolicy), explores high-quality negative samples through designed exploration operations: it navigates from the target positive interaction, adaptively receives knowledge-aware negative signals, and ultimately yields a potential negative item for training the recommender.
Proceedings ArticleDOI

Collaborative Motion Prediction via Neural Motion Message Passing

TL;DR: This work proposes neural motion message passing (NMMP) to explicitly model and learn representations for directed interactions between actors, and designs motion prediction systems for two settings: a pedestrian-only setting and a joint pedestrian-and-vehicle setting.