scispace - formally typeset

Yuqing Du

Researcher at University of Hong Kong

Publications - 27
Citations - 999

Yuqing Du is an academic researcher from University of Hong Kong. The author has contributed to research in topics: Edge device & Enhanced Data Rates for GSM Evolution. The author has an h-index of 9 and has co-authored 20 publications receiving 529 citations.

Papers
Journal ArticleDOI

Toward an Intelligent Edge: Wireless Communication Meets Machine Learning

TL;DR: In this article, the authors advocate a new set of design guidelines for wireless communication in edge learning, collectively called learning-driven communication, and provide examples to demonstrate the effectiveness of these design guidelines.
Journal ArticleDOI

One-Bit Over-the-Air Aggregation for Communication-Efficient Federated Edge Learning: Design and Convergence Analysis

TL;DR: A comprehensive analysis of the effects of wireless channel hostilities on the convergence rate of the proposed FEEL scheme is provided, showing that the hostilities slow down the convergence of the learning process by introducing a scaling factor and a bias term into the gradient norm.
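The one-bit scheme above builds on sign-based gradient exchange with a majority-vote style aggregation. As a rough illustration (not the paper's actual over-the-air design), the following numpy sketch shows devices transmitting only gradient signs, the server superposing them with optional channel noise, and taking the sign of the sum as the update direction; all names and the noise model are illustrative assumptions.

```python
import numpy as np

def one_bit_aggregate(gradients, noise_std=0.0, rng=None):
    """Sketch of one-bit gradient aggregation: each device sends only
    the sign of its local gradient; the server takes a majority vote.
    An optional Gaussian term crudely mimics channel noise before the vote."""
    rng = np.random.default_rng(0) if rng is None else rng
    signs = np.sign(gradients)        # shape (num_devices, dim), entries in {-1, 0, +1}
    superposed = signs.sum(axis=0)    # superposition of all transmitted signs
    if noise_std > 0:
        superposed = superposed + rng.normal(0.0, noise_std, superposed.shape)
    return np.sign(superposed)        # majority-vote update direction

# Toy usage: three devices whose local gradients mostly agree in direction.
grads = np.array([[0.9, -0.2,  0.4],
                  [1.1, -0.3, -0.1],
                  [0.8, -0.1,  0.2]])
print(one_bit_aggregate(grads))  # -> [ 1. -1.  1.]
```

With noise_std > 0, the added noise can flip coordinates where the vote margin is small, which is one intuitive way to see how channel impairments bias and slow the learning process.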
Journal ArticleDOI

Energy-Efficient Resource Management for Federated Edge Learning with CPU-GPU Heterogeneous Computing

TL;DR: In this paper, the authors proposed a joint computation-and-communication resource management (C2RM) framework for federated edge learning (FEEL) to minimize the sum energy consumption of devices.
Journal ArticleDOI

High-Dimensional Stochastic Gradient Quantization for Communication-Efficient Edge Learning

TL;DR: A novel framework of hierarchical gradient quantization that is proved to guarantee model convergence by analyzing the convergence rate as a function of quantization bits, and that substantially reduces the communication overhead compared with the state-of-the-art signSGD scheme.
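The hierarchical idea is to decompose a high-dimensional gradient into per-block norms and unit directions and quantize each separately. The sketch below is a deliberately crude stand-in: it splits the gradient into blocks and, per block, keeps the exact norm while replacing the direction with a sign-vector "codeword" (the actual scheme quantizes directions with Grassmannian codebooks); the function name and block count are illustrative assumptions.

```python
import numpy as np

def block_quantize(grad, num_blocks=4):
    """Toy sketch of hierarchical (norm + direction) gradient quantization:
    split the gradient into blocks and, per block, transmit a scalar norm
    plus a coarse unit direction. Here the normalized sign vector stands in
    for a proper direction codebook."""
    out = []
    for b in np.array_split(grad, num_blocks):
        direction = np.sign(b)
        d_norm = np.linalg.norm(direction)
        if d_norm == 0:
            out.append(np.zeros_like(b))
            continue
        unit = direction / d_norm             # coarse unit-direction codeword
        out.append(np.linalg.norm(b) * unit)  # rescale by the true block norm
    return np.concatenate(out)

g = np.array([0.5, -0.2, 0.1, 0.3, -0.7, 0.4, -0.1, 0.2])
q = block_quantize(g, num_blocks=2)
# q preserves each block's Euclidean norm while coarsely encoding its direction
```

Even this toy version shows why the decomposition saves bits: each block costs one scalar (the norm) plus a low-resolution direction index, rather than a full-precision value per coordinate.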
Proceedings ArticleDOI

High-Dimensional Stochastic Gradient Quantization for Communication-Efficient Edge Learning

TL;DR: This work proposes a novel gradient compression scheme and shows that similar learning performance can be achieved with substantially lower communication overhead compared to the one-bit scalar quantization used in the state-of-the-art design, namely signSGD.