
Yujun Lin

Researcher at Massachusetts Institute of Technology

Publications: 31
Citations: 3869

Yujun Lin is an academic researcher at the Massachusetts Institute of Technology. The author has contributed to research on deep learning and hardware acceleration, has an h-index of 15, and has co-authored 29 publications receiving 2,232 citations. Previous affiliations of Yujun Lin include Yale University and Tsinghua University.

Papers
Posted Content

Deep Gradient Compression: Reducing the Communication Bandwidth for Distributed Training

TL;DR: Deep Gradient Compression (DGC) employs momentum correction, local gradient clipping, momentum factor masking, and warm-up training to preserve accuracy during compression, achieving gradient compression ratios of 270x to 600x without losing accuracy.
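The core mechanism behind DGC is top-k gradient sparsification: each worker transmits only the largest-magnitude gradient entries and accumulates the rest locally. The sketch below illustrates momentum correction (accumulating velocity rather than raw gradients) and momentum factor masking; the function name and API are illustrative, not taken from the paper's code.

```python
import numpy as np

def dgc_step(grad, velocity, residual, sparsity=0.99, momentum=0.9):
    """One worker-side step of top-k gradient sparsification with
    momentum correction (illustrative sketch of the DGC idea)."""
    velocity = momentum * velocity + grad       # momentum correction:
    residual = residual + velocity              # accumulate velocity, not raw grads
    k = max(1, int(residual.size * (1 - sparsity)))
    idx = np.argpartition(np.abs(residual), -k)[-k:]  # top-k by magnitude
    sparse_update = np.zeros_like(residual)
    sparse_update[idx] = residual[idx]          # the only values communicated
    residual[idx] = 0.0                         # clear what was sent
    velocity[idx] = 0.0                         # momentum factor masking
    return sparse_update, velocity, residual

rng = np.random.default_rng(0)
g = rng.standard_normal(1000)
v = np.zeros(1000)
r = np.zeros(1000)
update, v, r = dgc_step(g, v, r, sparsity=0.99)
print(np.count_nonzero(update))  # only ~1% of entries are transmitted
```

At 99% sparsity each step communicates 1% of the gradient, i.e. a 100x compression ratio before the entropy coding and other tricks the paper adds on top.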
Proceedings ArticleDOI

HAQ: Hardware-Aware Automated Quantization With Mixed Precision

TL;DR: Introduces the Hardware-Aware Automated Quantization (HAQ) framework, which leverages reinforcement learning to automatically determine the quantization policy and takes the hardware accelerator's feedback into the design loop.
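A "policy" in the HAQ sense is a per-layer bitwidth assignment; the RL agent searches over such assignments, scoring each with latency and energy feedback measured on the target accelerator. The sketch below shows only what applying one hypothetical mixed-precision policy looks like, using plain symmetric uniform quantization; it is not HAQ's actual implementation.

```python
import numpy as np

def quantize_uniform(w, bits):
    """Symmetric uniform quantize-dequantize of a weight tensor to
    `bits` bits (illustrative helper, not HAQ's actual code)."""
    levels = 2 ** (bits - 1) - 1         # e.g. 127 for 8 bits
    scale = np.abs(w).max() / levels
    return np.round(w / scale) * scale

# Hypothetical two-layer network and mixed-precision policy.
layers = {"conv1": np.random.default_rng(1).standard_normal((16, 3)),
          "fc":    np.random.default_rng(2).standard_normal((10, 16))}
policy = {"conv1": 8, "fc": 4}           # bits per layer

quantized = {name: quantize_uniform(w, policy[name])
             for name, w in layers.items()}
for name, w in layers.items():
    err = np.abs(w - quantized[name]).max()
    print(f"{name}: {policy[name]}-bit, max abs error {err:.4f}")
```

Lower bitwidths shrink the model and speed up inference on bitwidth-aware hardware, at the cost of larger quantization error; HAQ's contribution is automating that per-layer trade-off instead of hand-tuning it.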
Book ChapterDOI

Searching Efficient 3D Architectures with Sparse Point-Voxel Convolution

TL;DR: The authors propose Sparse Point-Voxel Convolution (SPVConv), a lightweight 3D module that equips vanilla Sparse Convolution with a high-resolution point-based branch.
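The point-to-voxel step that feeds the sparse-convolution branch can be sketched as a scatter-mean: points falling into the same voxel have their features averaged. This minimal sketch (names assumed, not from the paper's code) shows only that voxelization step; the real SPVConv module additionally runs sparse convolutions on the voxel grid and a high-resolution point-wise branch on the raw points, then fuses the two.

```python
import numpy as np

def voxelize_mean(points, feats, voxel_size=0.2):
    """Average the features of all points that fall into the same voxel
    (the point-to-voxel step of a point-voxel design; illustrative)."""
    coords = np.floor(points / voxel_size).astype(np.int64)  # voxel index per point
    uniq, inverse = np.unique(coords, axis=0, return_inverse=True)
    voxel_feats = np.zeros((len(uniq), feats.shape[1]))
    np.add.at(voxel_feats, inverse, feats)                   # scatter-add
    counts = np.bincount(inverse, minlength=len(uniq))
    voxel_feats /= counts[:, None]                           # mean per voxel
    return uniq, voxel_feats, inverse

pts = np.array([[0.05, 0.10, 0.00],
                [0.15, 0.05, 0.10],
                [0.90, 0.90, 0.90]])
f = np.array([[1.0], [3.0], [5.0]])
coords, vf, inv = voxelize_mean(pts, f, voxel_size=0.2)
print(len(coords), vf.ravel())  # the first two points share a voxel
```

The `inverse` index also supports the reverse voxel-to-point mapping, which is how voxel-branch features are broadcast back to the points for fusion.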