
HyoukJoong Lee

Researcher at Google

Publications - 30
Citations - 3328

HyoukJoong Lee is an academic researcher from Google. The author has contributed to research in the areas of compilers and domain-specific languages, has an h-index of 23, and has co-authored 30 publications receiving 2539 citations. Previous affiliations of HyoukJoong Lee include Stanford University.

Papers
Posted Content

GPipe: Efficient Training of Giant Neural Networks using Pipeline Parallelism

TL;DR: Introduces GPipe, a pipeline parallelism library that scales any network expressible as a sequence of layers by pipelining different sub-sequences of layers on separate accelerators, yielding almost linear speedup when a model is partitioned across multiple accelerators.
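The scheduling idea behind that speedup claim is easy to see in miniature. Below is a minimal, hypothetical Python sketch (invented names, not GPipe's actual API) of how micro-batches flow through pipeline stages:

```python
# Hypothetical sketch of a GPipe-style forward-pass schedule; names and
# structure are invented for illustration, not taken from GPipe itself.

def pipeline_schedule(num_stages, num_microbatches):
    """Yield (tick, stage, microbatch) triples: at each clock tick, every
    stage that has a micro-batch available works on it concurrently."""
    for tick in range(num_stages + num_microbatches - 1):
        for stage in range(num_stages):
            mb = tick - stage
            if 0 <= mb < num_microbatches:
                yield tick, stage, mb

if __name__ == "__main__":
    S, M = 4, 8  # stages (accelerators) and micro-batches
    ticks = 1 + max(t for t, _, _ in pipeline_schedule(S, M))
    # Pipelining finishes in S + M - 1 ticks instead of S * M sequential
    # stage executions, approaching linear speedup in S as M grows.
    print(f"pipelined: {ticks} ticks vs sequential: {S * M}")  # 11 vs 32
```

Splitting the batch into more micro-batches shrinks the pipeline "bubble" at the start and end of the schedule, which is why the speedup is almost, but not exactly, linear.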
Posted Content

GShard: Scaling Giant Models with Conditional Computation and Automatic Sharding

TL;DR: GShard scales a multilingual neural machine translation Transformer with Sparsely-Gated Mixture-of-Experts layers beyond 600 billion parameters using automatic sharding, and demonstrates that such a giant model can be trained efficiently on 2048 TPU v3 accelerators in 4 days, achieving far superior quality for translation from 100 languages to English compared to the prior art.
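Conditional computation is what lets the parameter count grow far faster than the per-token compute: each token only activates a few experts. A minimal, hypothetical Python sketch of top-2 gated Mixture-of-Experts routing (all names invented for illustration; this is not GShard's API or its sharding annotations):

```python
# Hypothetical sketch of Sparsely-Gated Mixture-of-Experts routing; all
# names are invented for illustration and this is not the GShard API.
import math
import random

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def moe_layer(token, experts, gate_weights, k=2):
    """Send `token` (a list of floats) to its top-k experts only, then
    return the gate-weighted mix of their outputs."""
    logits = [sum(w * x for w, x in zip(row, token)) for row in gate_weights]
    probs = softmax(logits)
    top_k = sorted(range(len(experts)), key=probs.__getitem__)[-k:]
    norm = sum(probs[i] for i in top_k)
    out = [0.0] * len(token)
    for i in top_k:  # only k experts run per token; the rest are skipped
        for d, v in enumerate(experts[i](token)):
            out[d] += (probs[i] / norm) * v
    return out

if __name__ == "__main__":
    random.seed(0)
    dim, n_experts = 4, 8
    # Toy experts: each just scales the token by a different factor.
    experts = [(lambda s: (lambda t: [s * x for x in t]))(i + 1)
               for i in range(n_experts)]
    gates = [[random.gauss(0, 1) for _ in range(dim)] for _ in range(n_experts)]
    print(moe_layer([0.1, -0.2, 0.3, 0.4], experts, gates))
```

Because only k of the experts execute per token, adding more experts grows the model's capacity while the compute per token stays roughly constant; automatic sharding then places the experts across accelerators.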
Posted Content

Lingvo: a Modular and Scalable Framework for Sequence-to-Sequence Modeling

TL;DR: This document outlines the underlying design of Lingvo and serves as an introduction to the various pieces of the framework, while also offering examples of advanced features that showcase its capabilities.
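A central element of Lingvo's design is that every layer and model is built from a hierarchical, serializable configuration object, which keeps experiments reproducible. As a loose illustration of that pattern only (a hypothetical toy, not Lingvo's actual Params class):

```python
# Hypothetical toy version of a params-style configuration pattern like the
# one the Lingvo paper describes; not Lingvo's actual classes or methods.

class Params:
    """A tiny key-value configuration object that rejects undefined keys."""
    def __init__(self):
        self._values = {}

    def define(self, name, default, doc):
        self._values[name] = default

    def set(self, **kwargs):
        for name, value in kwargs.items():
            if name not in self._values:
                raise KeyError(f"undefined param: {name}")
            self._values[name] = value
        return self

    def get(self, name):
        return self._values[name]

class AttentionLayer:
    @classmethod
    def params(cls):
        p = Params()
        p.define("num_heads", 8, "Number of attention heads.")
        p.define("model_dim", 512, "Model dimension.")
        return p

    def __init__(self, p):
        self.num_heads = p.get("num_heads")
        self.head_dim = p.get("model_dim") // p.get("num_heads")

# An experiment is then just an edit to a checked configuration:
layer = AttentionLayer(AttentionLayer.params().set(num_heads=16))
print(layer.head_dim)  # 32
```

Rejecting undefined keys at configuration time catches typos before a long training run starts, which is one reason this style suits large sequence-to-sequence experiments.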
Proceedings Article

OptiML: An Implicitly Parallel Domain-Specific Language for Machine Learning

TL;DR: OptiML is an implicitly parallel, expressive, and high-performance alternative to MATLAB and C++; benchmarks show that it outperforms explicitly parallelized MATLAB code in nearly all cases.
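OptiML itself is a Scala-embedded DSL, so the following is purely a conceptual analogue in Python (a hypothetical sketch, not OptiML code): "implicitly parallel" means the user writes an ordinary map over side-effect-free functions, and the runtime, not the programmer, decides how to spread the work across cores.

```python
# Hypothetical Python analogue of implicit parallelism; OptiML programs are
# written in a Scala-embedded DSL, and this sketch only mirrors the idea.
from concurrent.futures import ProcessPoolExecutor

def implicitly_parallel_map(fn, xs):
    """Apply a pure function to every element. Because fn has no side
    effects, the runtime is free to execute the calls in any order or
    in parallel; the caller never writes threading code."""
    with ProcessPoolExecutor() as pool:
        return list(pool.map(fn, xs))

def square(x):
    return x * x

if __name__ == "__main__":
    print(implicitly_parallel_map(square, range(8)))
```

The contrast with "explicitly parallelized MATLAB code" in the TL;DR is exactly this: the parallelism lives in the language implementation rather than in user-written loop annotations.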