Hao Yu

Researcher at University of Southern California

Publications: 52
Citations: 1565

Hao Yu is an academic researcher from the University of Southern California. The author has contributed to research in topics: Convex optimization & MIMO. The author has an h-index of 14 and has co-authored 52 publications receiving 1112 citations. Previous affiliations of Hao Yu include the Hong Kong University of Science and Technology & Huawei.

Papers
Journal ArticleDOI

Parallel Restarted SGD with Faster Convergence and Less Communication: Demystifying Why Model Averaging Works for Deep Learning

TL;DR: Provides a thorough and rigorous theoretical study of why model averaging can work as well as parallel mini-batch SGD with significantly less communication overhead.
Proceedings Article

On the Linear Speedup Analysis of Communication Efficient Momentum SGD for Distributed Non-Convex Optimization.

TL;DR: In this article, a communication-efficient distributed momentum SGD method is investigated and shown to possess a linear speedup property. Previously, it remained unclear whether any distributed momentum SGD variant enjoys the same linear speedup as distributed SGD while also reducing communication complexity.
Posted Content

Parallel Restarted SGD with Faster Convergence and Less Communication: Demystifying Why Model Averaging Works for Deep Learning

TL;DR: In this paper, the authors provide a thorough and rigorous theoretical study of why model averaging can work as well as parallel mini-batch SGD with significantly less communication overhead, and they show that model averaging can still achieve a good training-time speedup as long as the averaging interval is carefully controlled.
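The scheme analyzed in these two papers (often called local SGD or parallel restarted SGD) can be sketched as follows: each worker runs several local SGD steps independently, and communication happens only when the models are averaged. This is a minimal illustration on a toy quadratic loss; the worker count `W`, averaging interval `I`, step size, and loss are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Sketch of parallel restarted SGD (local SGD with periodic model averaging).
# Each of W workers runs I local SGD steps on its own stochastic gradients;
# only the averaging step requires communication.
# Illustrative loss per worker: f(x) = 0.5 * ||x - x_star||^2.

rng = np.random.default_rng(0)
W, I, rounds, lr, dim = 4, 8, 50, 0.1, 5
x_star = np.ones(dim)                        # shared minimizer (illustrative)
models = [np.zeros(dim) for _ in range(W)]   # one model copy per worker

for _ in range(rounds):
    for w in range(W):
        for _ in range(I):                   # I local steps, no communication
            noise = 0.1 * rng.standard_normal(dim)
            grad = (models[w] - x_star) + noise   # stochastic gradient
            models[w] = models[w] - lr * grad
    avg = sum(models) / W                    # one communication round
    models = [avg.copy() for _ in range(W)]

print(np.linalg.norm(models[0] - x_star))    # distance to the minimizer
```

With `I = 1` this reduces to parallel mini-batch SGD (averaging after every step); larger `I` trades communication for staleness between the local models, which is exactly the trade-off the paper's analysis controls.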
Posted Content

On the Linear Speedup Analysis of Communication Efficient Momentum SGD for Distributed Non-Convex Optimization.

TL;DR: This paper considers a distributed communication efficient momentum SGD method and proves its linear speedup property, filling the gap in the study of distributed SGD variants with reduced communication.
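A momentum variant of the above can be sketched by having each worker keep a heavy-ball momentum buffer locally and averaging both the model and the buffer at each communication round. This is a hedged illustration on a toy quadratic, not the paper's exact algorithm; all constants (`W`, `I`, `lr`, `beta`) are assumptions.

```python
import numpy as np

# Sketch of communication-efficient distributed momentum SGD: each worker
# runs I local heavy-ball updates, then workers average both the model and
# the momentum buffer (one communication round). Quadratic loss is
# illustrative: f(x) = 0.5 * ||x - x_star||^2.

rng = np.random.default_rng(1)
W, I, rounds, lr, beta, dim = 4, 8, 50, 0.05, 0.9, 5
x_star = np.ones(dim)
models = [np.zeros(dim) for _ in range(W)]
bufs = [np.zeros(dim) for _ in range(W)]     # momentum buffers

for _ in range(rounds):
    for w in range(W):
        for _ in range(I):
            grad = (models[w] - x_star) + 0.1 * rng.standard_normal(dim)
            bufs[w] = beta * bufs[w] + grad   # heavy-ball momentum update
            models[w] = models[w] - lr * bufs[w]
    avg_x = sum(models) / W                  # average models ...
    avg_v = sum(bufs) / W                    # ... and momentum buffers
    models = [avg_x.copy() for _ in range(W)]
    bufs = [avg_v.copy() for _ in range(W)]

print(np.linalg.norm(models[0] - x_star))
```

Averaging the momentum buffers along with the models keeps the workers' update directions consistent across rounds; dropping that step is a common source of divergence in naive implementations.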
Proceedings Article

Online Convex Optimization with Stochastic Constraints

TL;DR: In this paper, the authors consider online convex optimization with stochastic constraints, which generalizes Zinkevich's OCO over a known simple fixed set, and propose a new algorithm that achieves sublinear expected regret and constraint violations.
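The general shape of such an algorithm can be sketched with a virtual queue that tracks accumulated constraint violation and penalizes the primal gradient step (a drift-plus-penalty-style scheme). This is a simplified illustration, not the paper's exact algorithm; the loss, constraint, step size `eta`, and queue update are all assumptions for the sketch.

```python
import numpy as np

# Hedged sketch of OCO with a stochastic constraint via a virtual queue.
# Illustrative loss f_t(x) = 0.5 * ||x - c||^2 and stochastic constraint
# g_t(x) = a_t . x - b <= 0, where a_t is observed with random noise.
# The queue Q grows with violations and weights the constraint gradient
# in the primal step, pushing iterates back toward feasibility.

rng = np.random.default_rng(2)
T, dim, eta = 2000, 3, 0.05
x = np.zeros(dim)
Q = 0.0                                  # virtual queue for the constraint
b = 1.0
c = np.ones(dim)                         # fixed loss target (illustrative)

for t in range(T):
    a_t = np.ones(dim) + 0.1 * rng.standard_normal(dim)  # stochastic constraint
    grad = (x - c) + Q * a_t             # loss grad + queue-weighted penalty
    x = x - eta * grad                   # primal gradient step
    viol = a_t @ x - b                   # realized constraint value
    Q = max(0.0, Q + eta * viol)         # virtual queue update

print(x.sum(), Q)                        # sum(x) should settle near b = 1
```

Here the unconstrained minimizer of the loss (`x = c`) violates the expected constraint, so the queue drives the iterates toward the boundary of the feasible region; the regret/violation trade-off is governed by how the queue weight scales with the horizon.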