Chao Ma
Researcher at Amazon.com
Publications - 7
Citations - 1348
Chao Ma is an academic researcher at Amazon.com. The author has contributed to research on topics including deep learning and scalability, has an h-index of 5, and has co-authored 7 publications receiving 544 citations.
Papers
Posted Content
Deep Graph Library: A Graph-Centric, Highly-Performant Package for Graph Neural Networks
Minjie Wang, Da Zheng, Zihao Ye, Quan Gan, Mufei Li, Xiang Song, Jinjing Zhou, Chao Ma, Lingfan Yu, Yu Gai, Tianjun Xiao, Tong He, George Karypis, Jinyang Li, Zheng Zhang
TL;DR: DGL distills the computational patterns of GNNs into a few generalized sparse tensor operations suitable for extensive parallelization, and lets users easily port and leverage existing components across multiple deep learning frameworks.
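The sparse-operation pattern the abstract describes is visible in DGL's message-passing API. Below is a minimal sketch, assuming DGL with the PyTorch backend; the toy graph, feature names, and dimensions are illustrative, not taken from the paper.

import dgl
import dgl.function as fn
import torch

# A toy directed graph with 4 nodes and 4 edges.
src = torch.tensor([0, 1, 2, 3])
dst = torch.tensor([1, 2, 3, 0])
g = dgl.graph((src, dst))

# Attach a 16-dimensional feature to every node.
g.ndata['h'] = torch.randn(g.num_nodes(), 16)

# One round of mean aggregation: copy each source node's feature onto its
# out-edges as message 'm', then average incoming messages at each
# destination. DGL recognizes this built-in pattern and executes it as a
# single fused sparse kernel (a generalized SpMM) instead of materializing
# per-edge messages.
g.update_all(fn.copy_u('h', 'm'), fn.mean('m', 'h_agg'))
print(g.ndata['h_agg'].shape)  # torch.Size([4, 16])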
Posted Content
Deep Graph Library: Towards Efficient and Scalable Deep Learning on Graphs
Minjie Wang, Lingfan Yu, Da Zheng, Quan Gan, Yu Gai, Zihao Ye, Mufei Li, Jinjing Zhou, Qi Huang, Chao Ma, Ziyue Huang, Qipeng Guo, Hao Zhang, Haibin Lin, Junbo Zhao, Jinyang Li, Alexander J. Smola, Zheng Zhang
TL;DR: Deep Graph Library (DGL) enables arbitrary message-handling and mutation operators and flexible propagation rules, and is framework-agnostic, leveraging the high-performance tensor and autograd operations and other feature-extraction modules already available in existing frameworks.
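The "arbitrary message handling" in the abstract refers to DGL's user-defined message and reduce functions, which replace the built-in patterns above when a model needs custom per-edge logic. A minimal sketch, again assuming the PyTorch backend, with illustrative feature names:

import dgl
import torch

g = dgl.graph((torch.tensor([0, 1, 2]), torch.tensor([1, 2, 0])))
g.ndata['h'] = torch.randn(g.num_nodes(), 8)
g.edata['w'] = torch.rand(g.num_edges(), 1)

def message(edges):
    # Each message is the source node's feature scaled by an edge weight.
    return {'m': edges.src['h'] * edges.data['w']}

def reduce(nodes):
    # Mailbox shape: (nodes_in_bucket, num_incoming_edges, feat_dim);
    # sum over the incoming-edge axis.
    return {'h_new': nodes.mailbox['m'].sum(dim=1)}

g.update_all(message, reduce)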
Proceedings ArticleDOI
DGL-KE: Training Knowledge Graph Embeddings at Scale
Da Zheng, Xiang Song, Chao Ma, Zeyuan Tan, Zihao Ye, Jin Dong, Hao Xiong, Zheng Zhang, George Karypis
TL;DR: DGL-KE introduces novel optimizations that accelerate training on knowledge graphs with millions of nodes and billions of edges. It uses multi-processing, multi-GPU, and distributed parallelism to increase data locality, reduce communication overhead, overlap computation with memory access, and achieve high operation efficiency.
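For context, one of the embedding models DGL-KE trains is TransE. The following plain-PyTorch sketch shows only the TransE objective itself, not DGL-KE's optimized or distributed implementation; the dimensions, margin, and variable names are arbitrary.

import torch

num_entities, num_relations, dim = 1000, 50, 200
ent = torch.nn.Embedding(num_entities, dim)
rel = torch.nn.Embedding(num_relations, dim)

def transe_score(h, r, t):
    # TransE: a true triple (h, r, t) should satisfy h + r ~ t,
    # so the score is the negative L2 distance.
    return -torch.norm(ent(h) + rel(r) - ent(t), p=2, dim=-1)

# Margin-based ranking loss against a corrupted (negative) tail entity.
h, r, t = torch.tensor([0]), torch.tensor([3]), torch.tensor([42])
t_neg = torch.randint(0, num_entities, (1,))
loss = torch.relu(1.0 - transe_score(h, r, t) + transe_score(h, r, t_neg)).mean()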
Posted Content
DistDGL: Distributed Graph Neural Network Training for Billion-Scale Graphs
Da Zheng, Chao Ma, Minjie Wang, Jinjing Zhou, Qidong Su, Xiang Song, Quan Gan, Zheng Zhang, George Karypis
TL;DR: The results show that DistDGL achieves linear speedup without compromising model accuracy and requires only 13 seconds to complete a training epoch for a graph with 100 million nodes and 3 billion edges on a cluster with 16 machines.
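DistDGL's scaling depends on partitioning the graph offline so that most neighbor accesses stay machine-local. A minimal sketch of that step, assuming DGL's dgl.distributed.partition_graph API (argument names may vary slightly across DGL releases); the toy graph stands in for a billion-scale one:

import dgl

# Toy stand-in: 10k nodes, 100k edges.
g = dgl.rand_graph(10000, 100000)

# METIS partitioning minimizes edge cuts across the 4 parts, so most
# neighbors of a node land in the same partition as the node itself.
dgl.distributed.partition_graph(
    g, graph_name='toy', num_parts=4,
    out_path='parts/', part_method='metis')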
Proceedings ArticleDOI
DistDGL: Distributed Graph Neural Network Training for Billion-Scale Graphs
Da Zheng, Chao Ma, Minjie Wang, Jinjing Zhou, Qidong Su, Xiang Song, Quan Gan, Zheng Zhang, George Karypis
TL;DR: DistDGL is a system for training GNNs in a mini-batch fashion on a cluster of machines, built on top of the Deep Graph Library (DGL), a popular GNN development framework.
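The mini-batch training loop in a trainer process looks roughly like the sketch below. It assumes DGL's distributed API (dgl.distributed.initialize, DistGraph, node_split, DistNodeDataLoader); exact class names differ across DGL releases, and the launch script, ip_config.txt file, and the model's forward pass are omitted.

import dgl
import torch

# Connect to the graph servers and the trainer process group.
dgl.distributed.initialize('ip_config.txt')
torch.distributed.init_process_group(backend='gloo')

# The partitioned graph produced by the partitioning step above.
g = dgl.distributed.DistGraph('toy')

# Each trainer gets a disjoint slice of the training nodes.
train_nid = dgl.distributed.node_split(g.ndata['train_mask'])

sampler = dgl.dataloading.NeighborSampler([10, 25])  # fanouts per GNN layer
loader = dgl.dataloading.DistNodeDataLoader(
    g, train_nid, sampler, batch_size=1024, shuffle=True)

for input_nodes, seeds, blocks in loader:
    # Slicing a distributed tensor pulls remote features over the network.
    batch_feats = g.ndata['feat'][input_nodes]
    # ... forward/backward over the sampled blocks with a DDP-wrapped model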