
Wei Wang

Researcher at Hong Kong University of Science and Technology

Publications: 78
Citations: 2687

Wei Wang is an academic researcher at the Hong Kong University of Science and Technology. The author has contributed to research on topics including Scheduling (computing) and Cache, has an h-index of 23, and has co-authored 78 publications receiving 1703 citations. Previous affiliations of Wei Wang include the University of Toronto and the Association for Computing Machinery.

Papers
Proceedings Article

CMFL: Mitigating Communication Overhead for Federated Learning

TL;DR: Communication-Mitigated Federated Learning (CMFL) provides clients with feedback on the global tendency of model updating and substantially reduces communication overhead while still guaranteeing learning convergence.
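
The relevance test at CMFL's core fits in a few lines. Below is a minimal sketch, assuming model updates arrive as NumPy arrays; the function name is_relevant and the 0.8 threshold are illustrative placeholders, not values taken from the paper.

import numpy as np

def is_relevant(local_update, global_update, threshold=0.8):
    # CMFL-style check (sketch): a client uploads its update only if
    # enough of its components agree in sign with the announced global
    # update tendency. The threshold is a placeholder, not the paper's value.
    agreement = np.mean(np.sign(local_update) == np.sign(global_update))
    return agreement >= threshold

Clients that fail the check simply skip the upload for that round, which is where the communication savings come from.
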
Proceedings Article

BatchCrypt: Efficient homomorphic encryption for cross-silo federated learning

TL;DR: BatchCrypt, a system solution for cross-silo FL, substantially reduces the encryption and communication overhead caused by HE through new quantization and encoding schemes along with a novel gradient clipping technique.
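
The batching idea can be sketched in a few lines of Python. This is an illustrative, simplified encoding, not BatchCrypt's actual scheme: gradients are clipped, quantized to fixed-width integers, and packed into wide lanes of one large integer, so a single homomorphic encryption (e.g., with Paillier) covers a whole batch. All names, bit widths, and the lane layout here are assumptions; the paper's real encoding handles signs and overflow more carefully.

import numpy as np

def quantize_and_batch(grads, clip=1.0, bits=16, lane=64):
    # Clip, then map each gradient to a non-negative fixed-width integer.
    g = np.clip(grads, -clip, clip)
    scale = (2 ** (bits - 1) - 1) / clip
    q = np.round(g * scale).astype(np.int64) + (1 << (bits - 1))
    # Pack the quantized values into wide lanes of one big integer;
    # the spare bits per lane absorb carries when ciphertexts are added.
    packed = 0
    for v in q:
        packed = (packed << lane) | int(v)
    return packed  # encrypt this one value with an additively homomorphic scheme
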
Journal Article

Multi-Resource Fair Allocation in Heterogeneous Cloud Computing Systems

TL;DR: Large-scale simulations driven by Google cluster traces show that DRFH (Dominant Resource Fairness generalized to heterogeneous servers) significantly outperforms the traditional slot-based scheduler, leading to much higher resource utilization and substantially shorter job completion times.
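
A toy version of the allocation rule helps make the TL;DR concrete. The sketch below collapses the heterogeneous cluster into one pooled capacity vector, which DRFH's actual per-server placement does not do; all names and numbers are illustrative.

def dominant_share(alloc, capacity):
    # Largest fraction of any single resource the user holds across
    # the pooled cluster capacity.
    return max(a / c for a, c in zip(alloc, capacity))

def pick_next_user(allocations, demands, remaining, capacity):
    # One progressive-filling step: among users whose next task still
    # fits in the remaining capacity, serve the lowest dominant share.
    fits = [u for u, d in demands.items()
            if all(x <= r for x, r in zip(d, remaining))]
    return min(fits,
               key=lambda u: dominant_share(allocations[u], capacity),
               default=None)

# Example: a pool of 10 CPUs and 40 GB of memory. User A holds a
# dominant share of 0.2, user B holds 0.4, so A is served next.
alloc = {"A": (2, 4), "B": (1, 16)}
demand = {"A": (1, 2), "B": (1, 8)}
print(pick_next_user(alloc, demand, remaining=(7, 20), capacity=(10, 40)))
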
Posted Content

Dominant Resource Fairness in Cloud Computing Systems with Heterogeneous Servers

TL;DR: Large-scale simulations driven by Google cluster traces show that DRFH significantly outperforms the traditional slot-based scheduler, leading to much higher resource utilization and substantially shorter job completion times.
Proceedings Article

MArk: Exploiting Cloud Services for Cost-Effective, SLO-Aware Machine Learning Inference Serving

TL;DR: This paper tackles the dual challenge of SLO compliance and cost effectiveness with MArk (Model Ark), a general-purpose inference serving system built on Amazon Web Services (AWS), and evaluates MArk using several state-of-the-art ML models trained in popular frameworks including TensorFlow, MXNet, and Keras.
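
The SLO/cost trade-off MArk navigates can be caricatured with a one-function capacity planner. This is only a toy sketch under assumed names and numbers; the actual system also exploits AWS features such as spot and burstable instances and request batching, which this omits.

import math

def plan_instances(predicted_rps, per_instance_rps, headroom=0.2):
    # Provision for the predicted request rate plus headroom so that
    # queuing delay stays within the latency SLO; scale in when the
    # prediction drops. The headroom value is a placeholder.
    return math.ceil(predicted_rps * (1 + headroom) / per_instance_rps)
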