Open Access Proceedings Article
Inductive Representation Learning on Large Graphs
William L. Hamilton, Zhitao Ying, Jure Leskovec
Advances in Neural Information Processing Systems, Vol. 30, pp. 1024-1034
TL;DR: GraphSAGE is a general, inductive framework that leverages node feature information (e.g., text attributes) to efficiently generate node embeddings instead of training individual embeddings for each node.
Abstract:
Low-dimensional embeddings of nodes in large graphs have proved extremely useful in a variety of prediction tasks, from content recommendation to identifying protein functions. However, most existing approaches require that all nodes in the graph are present during training of the embeddings; these previous approaches are inherently transductive and do not naturally generalize to unseen nodes. Here we present GraphSAGE, a general, inductive framework that leverages node feature information (e.g., text attributes) to efficiently generate node embeddings. Instead of training individual embeddings for each node, we learn a function that generates embeddings by sampling and aggregating features from a node's local neighborhood. Our algorithm outperforms strong baselines on three inductive node-classification benchmarks: we classify the category of unseen nodes in evolving information graphs based on citation and Reddit post data, and we show that our algorithm generalizes to completely unseen graphs using a multi-graph dataset of protein-protein interactions.
Citations
Posted Content
An Evaluation of Edge TPU Accelerators for Convolutional Neural Networks.
TL;DR: In this paper, the major micro-architectural details of edge TPUs are discussed, and three classes of Edge TPU accelerators are extensively evaluated across different computing ecosystems that are either currently deployed in Google products or are in the product pipeline.
Proceedings ArticleDOI
Heterogeneous Dynamic Graph Attention Network
TL;DR: This paper proposes a heterogeneous dynamic graph attention network (HDGAN), which uses the attention mechanism to account for both the heterogeneity and the dynamics of the network, so as to better learn network embeddings.
Posted Content
Utilizing Edge Features in Graph Neural Networks via Variational Information Maximization
TL;DR: This work proposes the information maximizing graph neural networks (IGNN), which maximize the mutual information between edge states and transform parameters via a variational approach, and achieve state-of-the-art performance on multiple tasks, including quantum chemistry regression on the QM9 dataset, generalization from QM9 to larger molecular graphs, and prediction of molecular bioactivities relevant for drug discovery.
Posted Content
Nonlinear Higher-Order Label Spreading
TL;DR: This work proves convergence of the nonlinear higher-order label spreading algorithm to the global solution of an interpretable semi-supervised loss function, and demonstrates the efficiency and efficacy of the approach on a variety of point cloud and network datasets.
Posted Content
Understanding Graph Neural Networks from Graph Signal Denoising Perspectives
TL;DR: This paper provides a theoretical framework for understanding GNNs, specifically spectral graph convolutional networks and graph attention networks, from a graph signal denoising perspective, and shows that GNNs are implicitly solving graph signal denoising problems.