Open Access · Posted Content

Deep Graph Library: A Graph-Centric, Highly-Performant Package for Graph Neural Networks

TL;DR
DGL distills the computational patterns of GNNs into a few generalized sparse tensor operations suitable for extensive parallelization and allows users to easily port and leverage the existing components across multiple deep learning frameworks.
Abstract
Advancing research in the emerging field of deep graph learning requires new tools to support tensor computation over graphs. In this paper, we present the design principles and implementation of Deep Graph Library (DGL). DGL distills the computational patterns of GNNs into a few generalized sparse tensor operations suitable for extensive parallelization. By advocating the graph as the central programming abstraction, DGL can perform optimizations transparently. By cautiously adopting a framework-neutral design, DGL allows users to easily port and leverage existing components across multiple deep learning frameworks. Our evaluation shows that DGL significantly outperforms other popular GNN-oriented frameworks in both speed and memory consumption over a variety of benchmarks, and has little overhead for small-scale workloads.
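To make the abstraction concrete, below is a minimal sketch of DGL's message-passing API: update_all pairs a built-in message function with a built-in reduce function, and DGL lowers the pair to the kind of fused generalized sparse operation the abstract describes. The toy graph, feature sizes, and field names ('h', 'm', 'h_sum') are illustrative only.

```python
import torch
import dgl
import dgl.function as fn

# A toy directed graph: 4 nodes, edges 0->1, 1->2, 2->3, 3->0.
g = dgl.graph((torch.tensor([0, 1, 2, 3]), torch.tensor([1, 2, 3, 0])))
g.ndata['h'] = torch.randn(4, 8)  # random node features

# One round of message passing: copy each source node's feature as the
# message, then sum incoming messages at every destination node. DGL fuses
# this (message, reduce) pair into a single generalized SpMM kernel.
g.update_all(fn.copy_u('h', 'm'), fn.sum('m', 'h_sum'))
print(g.ndata['h_sum'].shape)  # torch.Size([4, 8])
```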


Citations
Proceedings Article

GCC: Graph Contrastive Coding for Graph Neural Network Pre-Training

TL;DR: Graph Contrastive Coding (GCC), a self-supervised pre-training framework for graph neural networks, is designed to capture universal network topological properties across multiple networks, leveraging contrastive learning so that GNNs learn intrinsic and transferable structural representations.
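GCC-style pre-training rests on instance-discrimination contrastive learning. The sketch below shows the generic InfoNCE objective such frameworks optimize, assuming two augmented views of the same batch of subgraphs have already been encoded; GCC itself adds MoCo-style momentum-queue negatives, and the dimensions here are made up.

```python
import torch
import torch.nn.functional as F

def info_nce(query, keys, temperature=0.07):
    """Generic InfoNCE loss: the positive key for row i of `query` is row i
    of `keys`; every other row in the batch serves as a negative."""
    query = F.normalize(query, dim=1)
    keys = F.normalize(keys, dim=1)
    logits = query @ keys.t() / temperature          # (B, B) similarities
    labels = torch.arange(query.size(0), device=query.device)
    return F.cross_entropy(logits, labels)

q = torch.randn(32, 64)  # encoder output for one augmented subgraph view
k = torch.randn(32, 64)  # encoder output for the other view
loss = info_nce(q, k)
```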
Journal Article

Improving Graph Neural Network Expressivity via Subgraph Isomorphism Counting

TL;DR: Graph Substructure Networks (GSN), a topologically-aware message-passing scheme based on substructure encoding, is proposed; it retains attractive properties of standard GNNs, such as locality and linear network complexity, while being able to disambiguate even hard instances of graph isomorphism.
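The core GSN idea is to precompute subgraph-isomorphism counts and inject them as node features before running an otherwise standard GNN. A minimal sketch under a simplifying assumption: triangles stand in for the general substructures GSN supports, with networkx doing the counting, and the base features are made up.

```python
import networkx as nx
import numpy as np

G = nx.karate_club_graph()                       # toy graph
tri = nx.triangles(G)                            # node -> incident triangle count
x = np.random.randn(G.number_of_nodes(), 16)     # base node features (made up)

# Append each node's structural count as an extra feature column; the
# augmented features then feed any standard message-passing GNN.
counts = np.array([tri[v] for v in G.nodes()], dtype=np.float32)[:, None]
x_aug = np.concatenate([x, counts], axis=1)      # shape (n_nodes, 17)
```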
Posted Content

Machine Learning on Graphs: A Model and Comprehensive Taxonomy

TL;DR: A comprehensive taxonomy of representation learning methods for graph-structured data is proposed, aiming to unify several disparate bodies of work, provide a solid foundation for understanding the intuition behind these methods, and enable future research in the area.
References
Proceedings Article

Attention Is All You Need

TL;DR: This paper proposes the Transformer, a simple network architecture based solely on attention mechanisms, dispensing with recurrence and convolutions entirely; it achieves state-of-the-art performance on English-to-French translation.
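The paper's central primitive is scaled dot-product attention, Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V. A minimal sketch; batch and dimension sizes are arbitrary.

```python
import torch
import torch.nn.functional as F

def scaled_dot_product_attention(q, k, v):
    """softmax(Q K^T / sqrt(d_k)) V: every output position is a weighted
    average of the values, with no recurrence or convolution involved."""
    d_k = q.size(-1)
    scores = q @ k.transpose(-2, -1) / d_k ** 0.5
    return F.softmax(scores, dim=-1) @ v

q = torch.randn(2, 5, 64)   # (batch, query positions, d_k)
k = torch.randn(2, 7, 64)   # (batch, key positions, d_k)
v = torch.randn(2, 7, 64)   # (batch, key positions, d_v)
out = scaled_dot_product_attention(q, k, v)   # shape (2, 5, 64)
```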
Posted Content

Semi-Supervised Classification with Graph Convolutional Networks

TL;DR: A scalable approach for semi-supervised learning on graph-structured data, based on an efficient variant of convolutional neural networks that operate directly on graphs; it outperforms related methods by a significant margin.
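The efficient variant in question is the renormalized propagation rule H' = σ(D̂^{-1/2} Â D̂^{-1/2} H W) with Â = A + I. A dense toy implementation for illustration only; the paper's scalable version uses sparse matrix products.

```python
import torch

def gcn_layer(adj, h, weight):
    """One GCN propagation step: ReLU(D^-1/2 (A + I) D^-1/2 H W)."""
    a_hat = adj + torch.eye(adj.size(0))             # add self-loops
    d_inv_sqrt = a_hat.sum(dim=1).pow(-0.5)          # diagonal of D_hat^{-1/2}
    norm = d_inv_sqrt[:, None] * a_hat * d_inv_sqrt[None, :]
    return torch.relu(norm @ h @ weight)

adj = torch.tensor([[0., 1., 0.],                    # toy 3-node path graph
                    [1., 0., 1.],
                    [0., 1., 0.]])
h = torch.randn(3, 8)                                # input node features
w = torch.randn(8, 4)                                # layer weights
out = gcn_layer(adj, h, w)                           # shape (3, 4)
```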
Proceedings Article

TensorFlow: a system for large-scale machine learning

TL;DR: TensorFlow is a machine learning system that operates at large scale and in heterogeneous environments, using dataflow graphs to represent computation, shared state, and the operations that mutate that state.
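As a sketch of the dataflow-graph idea: tf.function traces a Python function into a graph that the runtime can optimize and place across heterogeneous devices. (The paper describes TF1's explicit graph-construction API; this is the closest modern equivalent and is used here for brevity.)

```python
import tensorflow as tf

@tf.function  # trace the Python function into a reusable dataflow graph
def affine(x, w, b):
    return tf.linalg.matmul(x, w) + b

x = tf.random.normal([4, 3])
w = tf.Variable(tf.random.normal([3, 2]))  # shared, mutable state
b = tf.Variable(tf.zeros([2]))
y = affine(x, w, b)  # executes the compiled graph; shape (4, 2)
```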
Proceedings Article

DeepWalk: online learning of social representations

TL;DR: DeepWalk learns latent representations from local information obtained via truncated random walks, treating walks as the equivalent of sentences; the learned representations encode social relations in a continuous vector space that is easily exploited by statistical models.
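A minimal sketch of DeepWalk's first stage: generating truncated random walks that are then fed to a skip-gram model exactly as sentences would be. The walk hyperparameters are illustrative, and the skip-gram step (e.g. gensim's Word2Vec) is left out.

```python
import random
import networkx as nx

def truncated_random_walks(G, num_walks=10, walk_length=40, seed=0):
    """Generate DeepWalk-style 'sentences': each walk is a list of node
    ids (as strings) ready for skip-gram training."""
    rng = random.Random(seed)
    walks = []
    for _ in range(num_walks):
        for start in G.nodes():
            walk = [start]
            while len(walk) < walk_length:
                nbrs = list(G.neighbors(walk[-1]))
                if not nbrs:
                    break               # dead end: truncate the walk early
                walk.append(rng.choice(nbrs))
            walks.append([str(v) for v in walk])
    return walks

walks = truncated_random_walks(nx.karate_club_graph())
```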