
Yufeng Zhan

Researcher at Beijing Institute of Technology

Publications - 39
Citations - 816

Yufeng Zhan is an academic researcher at Beijing Institute of Technology. The author has contributed to research on topics including computer science and cloud computing, has an h-index of 4, and has co-authored 12 publications receiving 187 citations. Previous affiliations of Yufeng Zhan include Hong Kong Polytechnic University.

Papers
Journal ArticleDOI

A Learning-Based Incentive Mechanism for Federated Learning

TL;DR: Studies incentive mechanisms for federated learning that motivate edge nodes to contribute to model training, and designs a deep reinforcement learning (DRL) based incentive mechanism to determine the optimal pricing strategy for the parameter server and the optimal training strategies for the edge nodes.
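The interaction described above (a parameter server posts a price, edge nodes respond with a level of training effort, and a learning agent searches for the reward-maximizing price) can be sketched in miniature. This is a hypothetical illustration, not the paper's implementation: the utility functions, price grid, and cost constant are all assumed, and tabular Q-learning stands in for the paper's deep RL agent.

```python
import random

PRICES = [1.0, 2.0, 3.0, 4.0]   # discrete pricing actions (assumed)
COST = 1.5                       # edge node's unit training cost (assumed)

def node_effort(price):
    """Edge node's best response: train only when the price covers its cost."""
    return max(0.0, price - COST)  # simple linear best response (assumed)

def server_reward(price):
    """Server utility: value of the contributed training minus the payment."""
    effort = node_effort(price)
    return 3.0 * effort - price * effort

def train(episodes=2000, eps=0.1, lr=0.1, seed=0):
    """Epsilon-greedy tabular Q-learning over the one-step pricing problem."""
    random.seed(seed)
    q = [0.0] * len(PRICES)  # one Q-value per candidate price
    for _ in range(episodes):
        if random.random() < eps:
            a = random.randrange(len(PRICES))          # explore
        else:
            a = max(range(len(PRICES)), key=lambda i: q[i])  # exploit
        q[a] += lr * (server_reward(PRICES[a]) - q[a])
    return PRICES[max(range(len(PRICES)), key=lambda i: q[i])]
```

With these assumed utilities the learned price settles at the value that balances payment against contributed effort; the same loop structure carries over when the tabular Q-values are replaced by a neural network and the one-shot game by a sequential environment.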
Journal ArticleDOI

A Deep Reinforcement Learning Based Offloading Game in Edge Computing

TL;DR: This article designs a decentralized algorithm for computation offloading so that users can independently make their offloading decisions, and addresses the challenge that users may refuse to expose information about their network bandwidth and preferences.
Journal ArticleDOI

A Survey of Incentive Mechanism Design for Federated Learning

TL;DR: This survey presents a taxonomy of existing incentive mechanisms for federated learning and discusses future directions for incentivizing clients in federated learning.
Proceedings ArticleDOI

Experience-Driven Computational Resource Allocation of Federated Learning by Deep Reinforcement Learning

TL;DR: This paper designs an experience-driven algorithm based on deep reinforcement learning (DRL) that converges to a near-optimal solution without knowledge of network quality and outperforms the state-of-the-art by up to 40%.
Journal ArticleDOI

Adaptive Federated Learning on Non-IID Data with Resource Constraint

TL;DR: A deep reinforcement learning based approach is proposed to adaptively control the training of local models and the phase of global aggregation simultaneously, improving model accuracy by up to 30% compared to state-of-the-art approaches.