Open Access Proceedings Article
Inductive Representation Learning on Large Graphs
William L. Hamilton, Zhitao Ying, Jure Leskovec
Advances in Neural Information Processing Systems (NIPS 2017), Vol. 30, pp. 1024-1034
TL;DR
GraphSAGE is a general, inductive framework that leverages node feature information (e.g., text attributes) to efficiently generate node embeddings instead of training an individual embedding for each node.
Abstract
Low-dimensional embeddings of nodes in large graphs have proved extremely useful in a variety of prediction tasks, from content recommendation to identifying protein functions. However, most existing approaches require that all nodes in the graph are present during training of the embeddings; these previous approaches are inherently transductive and do not naturally generalize to unseen nodes. Here we present GraphSAGE, a general, inductive framework that leverages node feature information (e.g., text attributes) to efficiently generate node embeddings. Instead of training individual embeddings for each node, we learn a function that generates embeddings by sampling and aggregating features from a node's local neighborhood. Our algorithm outperforms strong baselines on three inductive node-classification benchmarks: we classify the category of unseen nodes in evolving information graphs based on citation and Reddit post data, and we show that our algorithm generalizes to completely unseen graphs using a multi-graph dataset of protein-protein interactions.
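The core idea in the abstract, generating a node's embedding by sampling a fixed number of neighbors and aggregating their features rather than storing a per-node embedding table, can be sketched as follows. This is a minimal NumPy illustration of one layer with a mean aggregator; the function names, toy graph, and weight shapes are illustrative assumptions, not the paper's reference implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_neighbors(adj, node, k, rng):
    """Uniformly sample (with replacement) k neighbors of `node` from its adjacency list."""
    neigh = adj[node]
    return [neigh[i] for i in rng.integers(0, len(neigh), size=k)]

def graphsage_mean_layer(features, adj, W_self, W_neigh, k, rng):
    """One GraphSAGE-style layer with a mean aggregator:
    h_v = ReLU(W_self @ x_v + W_neigh @ mean(x_u for sampled neighbors u)),
    followed by l2 normalization of each row."""
    out = []
    for v in range(len(adj)):
        neigh_feats = np.stack([features[u] for u in sample_neighbors(adj, v, k, rng)])
        agg = neigh_feats.mean(axis=0)                    # aggregate sampled neighborhood
        h = W_self @ features[v] + W_neigh @ agg          # combine self and neighbor info
        out.append(np.maximum(h, 0.0))                    # ReLU nonlinearity
    H = np.stack(out)
    return H / (np.linalg.norm(H, axis=1, keepdims=True) + 1e-12)

# Toy graph: 4 nodes given as adjacency lists, 3-dim input features, 2-dim output.
adj = [[1, 2], [0, 2, 3], [0, 1], [1]]
X = rng.normal(size=(4, 3))
W_self = rng.normal(size=(2, 3))
W_neigh = rng.normal(size=(2, 3))
H = graphsage_mean_layer(X, adj, W_self, W_neigh, k=2, rng=rng)
print(H.shape)  # (4, 2)
```

Because the layer is a function of features and weights only, it applies unchanged to nodes (or whole graphs) never seen during training, which is the inductive property the abstract emphasizes.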
Citations
Posted Content
Learning Two-View Correspondences and Geometry Using Order-Aware Network
Jiahui Zhang, Dawei Sun, Zixin Luo, Anbang Yao, Lei Zhou, Tianwei Shen, Yurong Chen, Long Quan, Hongen Liao
TL;DR: In this article, the authors propose an Order-Aware network, which infers the probabilities of correspondences being inliers and regresses the relative pose encoded by the essential matrix.
Journal Article
A Deep Generative Model for Graph Layout
Oh-Hyun Kwon, Kwan-Liu Ma
TL;DR: This paper designs an encoder-decoder architecture to learn a model from a collection of example layouts, where the encoder represents training examples in a latent space and the decoder produces layouts from the latent space.
Proceedings Article
AttPool: Towards Hierarchical Feature Representation in Graph Convolutional Networks via Attention Mechanism
TL;DR: AttPool, a novel attention-based graph pooling module, is proposed; it adaptively selects the nodes that are significant for graph representation and generates hierarchical features by aggregating the attention-weighted information in those nodes.
Posted Content
MeshGAN: Non-linear 3D Morphable Models of Faces.
TL;DR: This paper proposes the first intrinsic GAN architecture operating directly on 3D meshes (named MeshGAN), and presents results showing that MeshGAN can generate high-fidelity 3D faces with rich identities and expressions.
Posted Content
Revisiting "Over-smoothing" in Deep GCNs.
TL;DR: This work interprets a standard GCN architecture as the layerwise integration of a multi-layer perceptron (MLP) and graph regularization, and concludes that before training the final representation of a deep GCN does over-smooth; during training, however, the network learns to counteract over-smoothing.