Open Access Proceedings Article
Inductive Representation Learning on Large Graphs
William L. Hamilton, Zhitao Ying, Jure Leskovec +2 more
Advances in Neural Information Processing Systems, Vol. 30, pp. 1024-1034
TL;DR: GraphSAGE is a general, inductive framework that leverages node feature information (e.g., text attributes) to efficiently generate node embeddings, instead of training an individual embedding for each node.

Abstract:
Low-dimensional embeddings of nodes in large graphs have proved extremely useful in a variety of prediction tasks, from content recommendation to identifying protein functions. However, most existing approaches require that all nodes in the graph are present during training of the embeddings; these previous approaches are inherently transductive and do not naturally generalize to unseen nodes. Here we present GraphSAGE, a general, inductive framework that leverages node feature information (e.g., text attributes) to efficiently generate node embeddings. Instead of training individual embeddings for each node, we learn a function that generates embeddings by sampling and aggregating features from a node's local neighborhood. Our algorithm outperforms strong baselines on three inductive node-classification benchmarks: we classify the category of unseen nodes in evolving information graphs based on citation and Reddit post data, and we show that our algorithm generalizes to completely unseen graphs using a multi-graph dataset of protein-protein interactions.
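The sample-and-aggregate step described in the abstract can be sketched as follows. This is a minimal, untrained illustration of a single GraphSAGE layer with a mean aggregator; the toy graph, weight matrices, and fixed sample size `k` are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_neighbors(adj, node, k):
    """Uniformly sample (with replacement) k neighbors of `node`."""
    return rng.choice(adj[node], size=k, replace=True)

def graphsage_mean_layer(features, adj, W_self, W_neigh, k=5):
    """One GraphSAGE layer with a mean aggregator:
    h_v = ReLU(W_self @ x_v + W_neigh @ mean(x_u for sampled neighbors u)),
    followed by L2 normalization of each embedding."""
    out = []
    for v in range(len(features)):
        sampled = sample_neighbors(adj, v, k)
        neigh_mean = features[sampled].mean(axis=0)   # aggregate sampled neighbors
        h = W_self @ features[v] + W_neigh @ neigh_mean
        out.append(np.maximum(h, 0.0))                # ReLU nonlinearity
    out = np.stack(out)
    return out / (np.linalg.norm(out, axis=1, keepdims=True) + 1e-12)

# Toy example: 4 nodes in a cycle, 3-dimensional input features,
# embedding dimension 2 (all weights random and untrained).
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
X = rng.normal(size=(4, 3))
W_self = rng.normal(size=(2, 3))
W_neigh = rng.normal(size=(2, 3))
Z = graphsage_mean_layer(X, adj, W_self, W_neigh, k=2)
print(Z.shape)  # one embedding per node: (4, 2)
```

Because the layer is a function of node features and sampled neighborhoods rather than a per-node lookup table, it can embed nodes that were never seen during training, which is the inductive property the paper emphasizes. Deeper models stack such layers so each node aggregates information from a multi-hop neighborhood.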
Citations
Posted Content
Uncertainty-Matching Graph Neural Networks to Defend Against Poisoning Attacks
TL;DR: This work proposes UM-GNN, which builds a surrogate predictor that does not directly access the graph structure but instead systematically extracts reliable knowledge from a standard GNN through a novel uncertainty-matching strategy. This makes UM-GNN immune to evasion attacks by design, and it achieves significantly improved robustness against poisoning attacks.
Posted Content
A Unifying Generative Model for Graph Learning Algorithms: Label Propagation, Graph Convolutions, and Combinations
Junteng Jia, Austin R. Benson +1 more
TL;DR: In this paper, the authors develop a Markov random field model for the data-generation process of node attributes, based on correlations of attributes on and between vertices. A new algorithm derived from this generative model, called Linear Graph Convolution, performs extremely well in practice on empirical data, and the authors provide theoretical justification for why this is the case.
Proceedings Article DOI
Beyond Localized Graph Neural Networks: An Attributed Motif Regularization Framework
TL;DR: InfoMotif defines attributed structural roles of nodes based on their occurrence in different network motifs, independent of network proximity, and achieves architecture independence by regularizing the node representations of arbitrary GNNs via mutual information maximization.
Posted Content
Neural Graph Embedding Methods for Natural Language Processing.
TL;DR: GCNs are utilized for the Document Timestamping problem and for learning word embeddings using the dependency context of a word instead of its sequential context; two limitations of existing GCN models are also addressed.
Posted Content
Semi-supervised Anomaly Detection on Attributed Graphs
TL;DR: The proposed method can effectively propagate label information from a small number of labeled nodes to unlabeled ones by taking into account the nodes' attributes, the graph structure, and class imbalance.