Chenglei Wu
Researcher at Tsinghua University
Publications - 22
Citations - 498
Chenglei Wu is an academic researcher at Tsinghua University. He has contributed to research on topics including reinforcement learning and video quality. He has an h-index of 7 and has co-authored 22 publications receiving 259 citations. His previous affiliations include the Chinese Ministry of Education.
Papers
Proceedings ArticleDOI
A Dataset for Exploring User Behaviors in VR Spherical Video Streaming
TL;DR: This paper presents a head-tracking dataset composed of 48 users watching 18 spherical videos from 5 categories, and shows that people share certain common patterns in VR spherical video streaming which differ from those in conventional video streaming.
Proceedings ArticleDOI
Towards Faster and Better Federated Learning: A Feature Fusion Approach
TL;DR: Experiments show that the federated learning algorithm with feature fusion mechanism outperforms baselines in both accuracy and generalization ability while reducing the number of communication rounds by more than 60%.
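The feature-fusion idea can be sketched as follows. This is an illustrative toy, assuming a standard FedAvg aggregation step plus a simple weighted blend of global and local feature vectors; the function names (`fed_avg`, `fuse_features`), the scalar blend weight `alpha`, and the weighted-sum fusion form are assumptions for illustration, not the paper's exact mechanism.

```python
# Illustrative sketch: FedAvg aggregation plus a simple feature-fusion step.
# The weighted-sum fusion and all parameter names are assumptions, not the
# paper's exact design.

def fed_avg(client_weights, client_sizes):
    """Standard FedAvg: size-weighted average of client model parameters."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(dim)
    ]

def fuse_features(global_feat, local_feat, alpha=0.5):
    """Blend global (aggregated) and local feature vectors on each client."""
    return [alpha * g + (1 - alpha) * l for g, l in zip(global_feat, local_feat)]

# Toy example: two clients with 3-dimensional "models" of sizes 10 and 30.
w = fed_avg([[1.0, 2.0, 3.0], [3.0, 4.0, 5.0]], [10, 30])
f = fuse_features([1.0, 0.0], [0.0, 1.0], alpha=0.25)
```

Fusing aggregated global features back into each client's local model is one plausible way such a mechanism could reduce the number of communication rounds needed to converge.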
Proceedings ArticleDOI
Comyco: Quality-Aware Adaptive Video Streaming via Imitation Learning
TL;DR: This paper proposes Comyco, a video quality-aware ABR approach that substantially improves on existing learning-based methods by tackling their low sample efficiency and lack of awareness of video quality information.
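The imitation-learning idea behind a quality-aware ABR policy can be sketched as follows: rather than learning by trial and error, the policy is trained to mimic an "expert" that picks the bitrate maximizing a QoE score. The bitrate ladder, the QoE weights, and the helper names below are illustrative assumptions, not Comyco's actual design.

```python
# Hedged sketch of expert-label generation for imitation-learning ABR.
# All constants and weights here are hypothetical.
import math

BITRATES_KBPS = [300, 750, 1850, 2850]  # hypothetical bitrate ladder

def qoe(bitrate, rebuffer_s, last_bitrate,
        w_quality=1.0, w_rebuf=4.0, w_smooth=0.5):
    """Toy QoE: reward quality, penalize rebuffering and bitrate switches."""
    return (w_quality * bitrate / 1000.0
            - w_rebuf * rebuffer_s
            - w_smooth * abs(bitrate - last_bitrate) / 1000.0)

def expert_action(throughput_kbps, buffer_s, last_bitrate):
    """Expert label: the bitrate with the best predicted QoE in this state."""
    best, best_score = None, -math.inf
    for br in BITRATES_KBPS:
        download_s = 4.0 * br / throughput_kbps       # 4-second chunk
        rebuffer_s = max(0.0, download_s - buffer_s)  # stall if too slow
        score = qoe(br, rebuffer_s, last_bitrate)
        if score > best_score:
            best, best_score = br, score
    return best

# With good throughput and a full buffer, the expert picks a high bitrate.
a = expert_action(throughput_kbps=3000, buffer_s=10.0, last_bitrate=750)
```

A learner would then minimize a supervised loss (e.g. cross-entropy) between its policy output and these expert labels over sampled states, which is typically far more sample-efficient than pure reinforcement learning.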
Posted Content
Federated Learning with Additional Mechanisms on Clients to Reduce Communication Costs.
TL;DR: The proposed FedMMD adopts a two-stream model with an MMD (Maximum Mean Discrepancy) constraint, in place of the single model trained on devices in vanilla FedAvg, achieving higher accuracy at lower communication cost.
Journal ArticleDOI
A Spherical Convolution Approach for Learning Long Term Viewport Prediction in 360 Immersive Video
TL;DR: A recurrent neural network (RNN) is adopted to extract a user's personal preference for 360 video content from minutes of embedded viewing history, and this semantic preference is utilized as spatial attention to help the network find the "interested" regions in future video frames.
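The high-level mechanism can be sketched as two pieces: a recurrent state that summarizes the viewing history, and a softmax "spatial attention" over candidate regions of the sphere. The tiny scalar tanh RNN, the three-region grid, and all weights below are toy stand-ins, not the paper's architecture (which also uses spherical convolution).

```python
# Toy sketch: recurrent preference state + softmax spatial attention.
# The scalar RNN and region scores are illustrative assumptions only.
import math

def rnn_step(h, x, w_h=0.5, w_x=0.5):
    """One step of a scalar tanh RNN over viewing-history features."""
    return math.tanh(w_h * h + w_x * x)

def spatial_attention(region_scores):
    """Numerically stable softmax over per-region scores."""
    m = max(region_scores)
    exps = [math.exp(s - m) for s in region_scores]
    z = sum(exps)
    return [e / z for e in exps]

# Accumulate a preference state from a short viewing history, then attend.
h = 0.0
for x in [0.2, 0.4, 0.8]:  # hypothetical per-frame viewport features
    h = rnn_step(h, x)
att = spatial_attention([h, 0.0, -h])  # 3 toy regions on the sphere
```

The attention weights sum to 1 and concentrate on regions whose scores align with the accumulated preference state, which is the sense in which the preference acts as spatial attention for long-term viewport prediction.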