Yulong Gu
Researcher at Tsinghua University
Publications - 20
Citations - 450
Yulong Gu is an academic researcher from Tsinghua University. The author has contributed to research on topics including recommender systems and collaborative filtering, has an h-index of 8, and has co-authored 20 publications receiving 186 citations. Previous affiliations of Yulong Gu include Alibaba Group.
Papers
Proceedings ArticleDOI
Neural Interactive Collaborative Filtering
TL;DR: The key insight is that satisfied recommendations triggered by an exploratory recommendation can be viewed as an exploration bonus (delayed reward) for its contribution to improving the quality of the user profile.
Proceedings ArticleDOI
Semi-supervised user profiling with heterogeneous graph attention networks
Weijian Chen,Yulong Gu,Zhaochun Ren,Xiangnan He,Hongtao Xie,Tong Guo,Dawei Yin,Yongdong Zhang +7 more
TL;DR: The authors' heterogeneous graph attention networks (HGAT) method learns a representation for each entity by accounting for the graph structure and exploits an attention mechanism to discriminate the importance of each neighboring entity.
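The attention-based neighbor aggregation described above can be illustrated with a minimal single-head sketch. This is a generic graph-attention scoring step in the style popularized by GAT, not the paper's actual HGAT model; the weight matrix `W`, attention vector `a`, and dimensions are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy single-head graph attention: each neighbor's importance is scored
# from the pair (transformed center embedding, transformed neighbor
# embedding), and neighbors are mixed by the resulting weights.
d = 8
W = rng.normal(size=(d, d))      # shared linear transform (assumed)
a = rng.normal(size=(2 * d,))    # attention scoring vector (assumed)

def attend(h_center, h_neighbors):
    z_c = h_center @ W                              # (d,)
    z_n = h_neighbors @ W                           # (n_neighbors, d)
    # Score each (center, neighbor) pair, LeakyReLU as in standard GAT
    pairs = np.concatenate([np.tile(z_c, (len(z_n), 1)), z_n], axis=1)
    logits = pairs @ a
    logits = np.where(logits > 0, logits, 0.2 * logits)
    # Softmax over neighbors, then weighted mixture of neighbor vectors
    alpha = np.exp(logits - logits.max())
    alpha /= alpha.sum()
    return alpha @ z_n                              # (d,) aggregated vector

out = attend(rng.normal(size=d), rng.normal(size=(5, d)))
print(out.shape)  # (8,)
```

The discriminative weighting comes from the softmax over neighbor scores: uninformative neighbors receive near-zero weight in the mixture.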
Proceedings ArticleDOI
Hierarchical User Profiling for E-commerce Recommender Systems
TL;DR: This paper proposes HUP, a Hierarchical User Profiling framework built on Pyramid Recurrent Neural Networks equipped with Behavior-LSTM, to formulate users' hierarchical real-time interests at multiple scales; experiments demonstrate significant performance gains of HUP over state-of-the-art methods on hierarchical user profiling and recommendation problems.
Proceedings ArticleDOI
Deep Multifaceted Transformers for Multi-objective Ranking in Large-Scale E-commerce Recommender Systems
TL;DR: Deep Multifaceted Transformers (DMT) is proposed, a novel framework that models users' multiple types of behavior sequences simultaneously with multiple Transformers and uses Multi-gate Mixture-of-Experts to optimize multiple objectives.
Proceedings ArticleDOI
Neural Interactive Collaborative Filtering
TL;DR: In this paper, the exploration policy is encoded in the weights of multi-channel stacked self-attention neural networks and trained with efficient Q-learning by maximizing users' overall satisfaction in the recommender systems.
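The Q-learning training mentioned above rests on the standard temporal-difference update, which can be shown in tabular form. This is the generic textbook rule, not the paper's self-attention network or its reward design; states, actions, and hyperparameters are illustrative assumptions.

```python
# Toy tabular Q-learning update:
#   Q[s][a] <- Q[s][a] + lr * (reward + gamma * max_a' Q[s'][a'] - Q[s][a])
# where reward would reflect user satisfaction with a recommendation.
n_states, n_actions = 4, 3
Q = [[0.0] * n_actions for _ in range(n_states)]
lr, gamma = 0.5, 0.9

def q_update(s, a, reward, s_next):
    target = reward + gamma * max(Q[s_next])
    Q[s][a] += lr * (target - Q[s][a])

q_update(0, 1, 1.0, 2)    # a satisfying recommendation yields reward 1
print(round(Q[0][1], 2))  # 0.5
```

In the deep variant, the table is replaced by a network that maps an interaction history to action values, but the same target drives the updates.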