Kezhi Wang
Researcher at Northumbria University
Publications - 213
Citations - 8175
Kezhi Wang is an academic researcher at Northumbria University. He has contributed to research on topics including computer science and mobile edge computing. He has an h-index of 28 and has co-authored 175 publications receiving 3,469 citations. His previous affiliations include Beijing Normal University and Central South University.
Papers
Proceedings ArticleDOI
Analysis and Optimization of RIS-aided Massive MIMO Systems with Statistical CSI
TL;DR: In this article, an uplink reconfigurable intelligent surface (RIS)-aided massive multiple-input multiple-output (MIMO) system with statistical channel state information (CSI) is considered.
Proceedings ArticleDOI
Secure Transmission for Intelligent Reflecting Surface Assisted Communication with Deep Learning
TL;DR: The proposed deep neural network, built from fully connected layers, predicts the optimal IRS reflection vector and the optimal number of IRS reflection elements with low complexity, and achieves higher secure energy efficiency than conventional methods.
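As a rough illustration of the idea of mapping channel features to IRS reflection phases with a fully connected network, consider the sketch below. The layer sizes, the sigmoid phase mapping, and all variable names are illustrative assumptions, not the paper's architecture:

```python
import numpy as np

def irs_reflection_mlp(csi, weights, biases):
    """Forward pass of a small fully connected network mapping channel
    state features to IRS reflection phases in (0, 2*pi).
    Layer sizes and the output mapping are illustrative assumptions."""
    h = np.asarray(csi, dtype=float)
    for W, b in zip(weights[:-1], biases[:-1]):
        h = np.maximum(0.0, h @ W + b)        # hidden layers: ReLU
    logits = h @ weights[-1] + biases[-1]     # output layer: linear
    # Squash unbounded outputs into valid phase values via a scaled sigmoid
    return 2.0 * np.pi / (1.0 + np.exp(-logits))

rng = np.random.default_rng(0)
sizes = [8, 16, 4]  # 8 CSI features -> 16 hidden units -> 4 IRS elements
Ws = [rng.standard_normal((m, n)) * 0.1 for m, n in zip(sizes, sizes[1:])]
bs = [np.zeros(n) for n in sizes[1:]]
phases = irs_reflection_mlp(rng.standard_normal(8), Ws, bs)
```

In practice such a network would be trained on (CSI, optimal-reflection) pairs obtained from an offline optimizer; the sketch only shows the low-complexity inference step.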
Journal ArticleDOI
Performance Analysis for Channel-Weighted Federated Learning in OMA Wireless Networks
TL;DR: A channel-weighted aggregation scheme for federated learning (CWA-FL), in which the parameter server (PS) aggregates device gradients according to the channel conditions of the devices, thereby avoiding the synchronization issue among devices faced by over-the-air FL.
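The core aggregation step can be sketched as weighting each device's gradient by its channel gain before averaging at the parameter server. The linear-in-gain weighting rule and all names below are illustrative assumptions, not the paper's exact scheme:

```python
import numpy as np

def channel_weighted_aggregate(gradients, channel_gains):
    """Aggregate per-device gradients, weighting each device by its
    channel gain (normalized to sum to 1). The linear-in-gain weighting
    is an illustrative assumption, not the paper's exact rule."""
    gains = np.asarray(channel_gains, dtype=float)
    weights = gains / gains.sum()
    grads = np.stack(gradients)                  # shape: (num_devices, dim)
    return np.tensordot(weights, grads, axes=1)  # weighted average

# Example: three devices with different channel conditions
g = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([1.0, 1.0])]
agg = channel_weighted_aggregate(g, channel_gains=[0.5, 0.25, 0.25])
```

Devices with stronger channels contribute more to the aggregated gradient, which is what distinguishes this scheme from plain uniform averaging.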
Posted Content
Reconfigurable Intelligent Surfaces for 6G Systems: Principles, Applications, and Research Directions
Cunhua Pan,Hong Ren,Kezhi Wang,Jonas Florentin Kolb,Maged Elkashlan,Ming Chen,Marco Di Renzo,Yang Hao,Jiangzhou Wang,A. Lee Swindlehurst,Xiaohu You,Lajos Hanzo +11 more
TL;DR: In this paper, the authors aim to answer four fundamental questions: 1) Why do we need RISs? 2) What is an RIS? 3) What are RIS's applications? 4) What are the relevant challenges and future research directions?
Journal ArticleDOI
Multi-UAV Trajectory Design and Power Control Based on Deep Reinforcement Learning
TL;DR: In this paper, the authors propose a deep reinforcement learning (DRL)-based solution, namely soft actor-critic (SAC), by modeling the problem as a Markov decision process (MDP). They carefully design a reward function that combines sparse with non-sparse rewards to achieve a balance between exploitation and exploration.
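The idea of combining a sparse reward with a non-sparse (dense) shaping term can be sketched as below. The specific functional form, the goal bonus, and the `alpha` coefficient are illustrative assumptions, not the paper's reward design:

```python
def combined_reward(reached_goal, distance_to_goal, alpha=0.1):
    """Combine a sparse terminal reward with a dense shaping term.
    The form and coefficients are illustrative assumptions."""
    sparse = 10.0 if reached_goal else 0.0  # sparse: paid only on success
    dense = -alpha * distance_to_goal       # dense: guides exploration
    return sparse + dense

r_far = combined_reward(False, distance_to_goal=50.0)
r_goal = combined_reward(True, distance_to_goal=0.0)
```

The dense term gives the agent a gradient to follow long before it ever reaches the goal, while the sparse bonus keeps the optimum aligned with actually completing the task.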