Open Access Proceedings Article

Inductive Representation Learning on Large Graphs

TLDR
GraphSAGE is a general, inductive framework that leverages node feature information (e.g., text attributes) to efficiently generate node embeddings: instead of training an individual embedding for each node, it learns a function that produces embeddings by sampling and aggregating features from a node's local neighborhood.
Abstract
Low-dimensional embeddings of nodes in large graphs have proved extremely useful in a variety of prediction tasks, from content recommendation to identifying protein functions. However, most existing approaches require that all nodes in the graph are present during training of the embeddings; these previous approaches are inherently transductive and do not naturally generalize to unseen nodes. Here we present GraphSAGE, a general, inductive framework that leverages node feature information (e.g., text attributes) to efficiently generate node embeddings. Instead of training individual embeddings for each node, we learn a function that generates embeddings by sampling and aggregating features from a node's local neighborhood. Our algorithm outperforms strong baselines on three inductive node-classification benchmarks: we classify the category of unseen nodes in evolving information graphs based on citation and Reddit post data, and we show that our algorithm generalizes to completely unseen graphs using a multi-graph dataset of protein-protein interactions.
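
The sampling-and-aggregation step described in the abstract is compact enough to sketch in code. The following is a minimal, hypothetical numpy rendering of one GraphSAGE layer with the mean aggregator; the function name graphsage_layer, the weight matrices W_self and W_neigh, and the toy graph are illustrative placeholders, not the authors' released implementation.

import numpy as np

def graphsage_layer(features, neighbors, W_self, W_neigh, sample_size=5, seed=0):
    """One GraphSAGE step with the mean aggregator: each node's new embedding
    combines its own features with the mean of a fixed-size uniform sample of
    its neighbors' features."""
    rng = np.random.default_rng(seed)
    out = []
    for v in range(len(features)):
        # Fixed-size sampling (with replacement) keeps the per-node cost
        # independent of node degree, which is what lets the method scale.
        sampled = rng.choice(neighbors[v], size=sample_size, replace=True)
        neigh_mean = features[sampled].mean(axis=0)
        # Separate self/neighbor weights are equivalent to the paper's
        # "concatenate self and neighborhood vectors, then multiply by W".
        h = features[v] @ W_self + neigh_mean @ W_neigh
        out.append(np.maximum(h, 0.0))                  # ReLU nonlinearity
    h_new = np.asarray(out)
    return h_new / np.linalg.norm(h_new, axis=1, keepdims=True)  # L2-normalize

# Toy usage: embed 4 nodes with 8-dim features into 16 dims.
rng = np.random.default_rng(1)
X = rng.normal(size=(4, 8))
adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}
Z = graphsage_layer(X, adj, rng.normal(size=(8, 16)), rng.normal(size=(8, 16)))

Because the layer only ever touches a fixed-size sample of each node's neighborhood, its cost does not grow with node degree, and the learned function can embed nodes that were never seen during training.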



Citations
Proceedings Article

Indirect Adversarial Attacks via Poisoning Neighbors for Graph Convolutional Networks

TL;DR: The authors demonstrate that a node classifier can be deceived with high confidence by poisoning just a single node, even one two or more hops away from the target; the attack can serve as a benchmark for future defenses aiming to build graph convolutional networks with adversarial robustness.
Proceedings Article

Should Graph Convolution Trust Neighbors? A Simple Causal Inference Method

TL;DR: This work investigates whether a GCN should trust the local structure of a testing node when predicting its label, analyzing the working mechanism of GCN with a causal graph and estimating the causal effect of a node's local structure on the prediction.
Proceedings Article

Adversarial Label-Flipping Attack and Defense for Graph Neural Networks

TL;DR: In this paper, the authors propose an effective attack model, LafAK, based on an approximated closed form of GNNs and a continuous surrogate of the non-differentiable objective, efficiently generating attacks via gradient-based optimizers.
Posted Content

Simple and Effective Graph Autoencoders with One-Hop Linear Models

TL;DR: It is shown that GCN encoders are unnecessarily complex for many applications, and the authors propose replacing them with significantly simpler and more interpretable linear models with respect to the graph's direct-neighborhood (one-hop) adjacency matrix, involving fewer operations, fewer parameters, and no activation function.
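
The one-hop linear encoder this summary describes is simple enough to sketch. Below is a hedged numpy sketch of the general idea: a linear encoder over the symmetrically normalized one-hop adjacency matrix paired with the usual inner-product decoder. The function and variable names are our own, and the normalization choice is an assumption borrowed from standard GCN practice, not a claim about the paper's exact formulation.

import numpy as np

def linear_graph_autoencoder(A, X, W):
    """Sketch of a one-hop linear graph autoencoder:
    Z = A_norm @ X @ W, then A_hat = sigmoid(Z @ Z.T)."""
    # Symmetrically normalized adjacency with self-loops (assumed, as in GCNs).
    A_tilde = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_tilde.sum(axis=1))
    A_norm = A_tilde * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    Z = A_norm @ X @ W                          # linear, one-hop encoder: no activation
    A_hat = 1.0 / (1.0 + np.exp(-(Z @ Z.T)))    # inner-product edge decoder
    return Z, A_hat

The encoder is a single matrix product over the one-hop neighborhood, which is where the claimed savings in operations and parameters come from.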
Posted Content

Identity-aware Graph Neural Networks

TL;DR: The identity-aware graph neural network (ID-GNN) extends existing GNN architectures by inductively considering nodes' identities during message passing: to embed a given node, a different set of parameters is applied to the center node of the ego network than to the surrounding nodes.
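
To make the "different parameters for the center node" idea concrete, here is a rough, hypothetical sketch of one identity-aware message-passing step. ID-GNN's actual architecture is richer (it operates on K-hop ego networks), and the names id_gnn_layer, W_id, and W_other are illustrative assumptions rather than the paper's code.

import numpy as np

def id_gnn_layer(features, neighbors, center, W_id, W_other):
    """One simplified identity-aware step: messages originating from the ego
    network's center node pass through W_id, while messages from every other
    node pass through W_other (assumed simplification of ID-GNN's
    heterogeneous message passing)."""
    out = np.zeros((features.shape[0], W_id.shape[1]))
    for v in range(features.shape[0]):
        for u in neighbors[v]:
            W = W_id if u == center else W_other
            out[v] += features[u] @ W
        out[v] /= max(len(neighbors[v]), 1)     # mean aggregation
    return np.maximum(out, 0.0)                 # ReLU nonlinearity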