
Wayne Xin Zhao

Researcher at Renmin University of China

Publications: 232
Citations: 8299

Wayne Xin Zhao is an academic researcher at Renmin University of China. He has contributed to research topics including computer science and recommender systems. The author has an h-index of 27 and has co-authored 144 publications receiving 4217 citations. Previous affiliations of Wayne Xin Zhao include Peking University.

Papers
Book Chapter

Comparing Twitter and Traditional Media Using Topic Models

TL;DR: This paper empirically compares the content of Twitter with a traditional news medium, the New York Times, using unsupervised topic modeling, and reports findings useful for downstream IR or DM applications.
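
As a rough illustration of the unsupervised topic modeling idea summarized above, the sketch below fits a standard LDA model (scikit-learn) to two tiny, made-up corpora and compares their aggregate topic mixes. The paper itself proposes a Twitter-specific LDA variant; the toy documents, topic count, and comparison here are illustrative assumptions only.

```python
# Minimal sketch of unsupervised topic modeling with standard LDA (scikit-learn).
# The paper introduces a Twitter-specific LDA variant; this generic example only
# illustrates fitting topics over two corpora and comparing their topic mixes.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Hypothetical toy corpora standing in for tweets and news articles.
tweets = ["big game tonight go team", "new phone battery dies fast", "coffee then gym"]
news = ["the government announced a new economic policy today",
        "scientists report progress on a vaccine trial",
        "markets fell after the central bank raised rates"]

vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(tweets + news)

lda = LatentDirichletAllocation(n_components=4, random_state=0)
doc_topics = lda.fit_transform(X)          # per-document topic proportions

# Compare the aggregate topic distribution of each medium.
tweet_dist = doc_topics[: len(tweets)].mean(axis=0)
news_dist = doc_topics[len(tweets):].mean(axis=0)
print("Twitter topic mix:", tweet_dist.round(2))
print("News topic mix:  ", news_dist.round(2))
```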
Journal Article

Heterogeneous Information Network Embedding for Recommendation

TL;DR: A novel heterogeneous network embedding based approach for HIN-based recommendation, called HERec, is proposed; experiments show the capability of HERec in the cold-start setting and reveal that the transformed embedding information from HINs can improve recommendation performance.
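
The sketch below is a minimal, assumption-laden illustration of the meta-path guided embedding step behind HIN-based recommendation in the HERec spirit: random walks over a toy user-item graph are fed to a skip-gram model (gensim Word2Vec), and the learned embeddings score user-item pairs with a plain dot product. The toy graph, walk scheme, and dot-product scorer are stand-ins; the actual HERec model further transforms and fuses the embeddings into an extended matrix-factorization predictor.

```python
# Minimal sketch of meta-path guided embedding for HIN-based recommendation
# (HERec-style). The tiny graph and the gensim Word2Vec skip-gram model are
# assumptions for illustration only.
import random
import numpy as np
from gensim.models import Word2Vec

# Hypothetical tiny HIN: user -> items they interacted with.
user_items = {"u1": ["i1", "i2"], "u2": ["i2", "i3"], "u3": ["i1", "i3"]}
item_users = {}
for u, items in user_items.items():
    for i in items:
        item_users.setdefault(i, []).append(u)

def upu_walk(start_user, length=8):
    """Random walk following the meta-path User-Item-User."""
    walk, u = [start_user], start_user
    for _ in range(length):
        i = random.choice(user_items[u])      # U -> I
        u = random.choice(item_users[i])      # I -> U
        walk += [i, u]
    return walk

walks = [upu_walk(u) for u in user_items for _ in range(20)]
emb = Word2Vec(walks, vector_size=16, window=2, min_count=1, sg=1, epochs=30).wv

# Score a user-item pair with a simple dot product of the learned embeddings.
score = float(np.dot(emb["u1"], emb["i3"]))
print("u1-i3 affinity:", round(score, 3))
```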
Proceedings Article

Leveraging Meta-path based Context for Top-N Recommendation with A Neural Co-Attention Model

TL;DR: A novel deep neural network with a co-attention mechanism is proposed to leverage rich meta-path based context for top-N recommendation; it performs well in the cold-start scenario and offers potentially good interpretability of the recommendation results.
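
As a hedged sketch of the co-attention idea in the summary above, the snippet below lets a (user, item) pair attend over a handful of meta-path context vectors and then uses the aggregated context to gate the user and item representations. All shapes, the gating form, and the final scoring layer are illustrative assumptions rather than the authors' exact architecture.

```python
# Minimal sketch of a co-attention step between a user-item pair and its
# meta-path based context. Shapes and the two-step attention are assumptions.
import torch
import torch.nn.functional as F

d = 16                                   # embedding size (assumed)
user = torch.randn(d)                    # user embedding
item = torch.randn(d)                    # item embedding
paths = torch.randn(5, d)                # 5 meta-path based context vectors

# Step 1: attend over meta-path contexts, conditioned on the (user, item) pair.
query = user + item
alpha = F.softmax(paths @ query, dim=0)  # attention weights over meta-paths
context = alpha @ paths                  # aggregated meta-path context

# Step 2: use the aggregated context to re-weight the user and item
# representations (the "co-" direction of the attention).
gate_u = torch.sigmoid(user * context)
gate_i = torch.sigmoid(item * context)
user_c, item_c = user * gate_u, item * gate_i

# Final interaction score from the concatenated representations.
score = torch.dot(torch.cat([user_c, item_c, context]),
                  torch.randn(3 * d))    # stand-in for a learned output layer
print(float(score))
```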
Proceedings Article

Improving Sequential Recommendation with Knowledge-Enhanced Memory Networks

TL;DR: This paper proposes a novel knowledge-enhanced sequential recommender that integrates RNN-based networks with a Key-Value Memory Network (KV-MN) and incorporates knowledge base information to enhance the semantic representation of the KV-MN.
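
The following is a minimal sketch of the key-value memory read such a knowledge-enhanced sequential recommender performs: a GRU summarizes the interaction sequence, and its hidden state queries attribute keys to read knowledge-base values. The dimensions, the single-layer GRU, and the random toy tensors are assumptions for illustration, not the paper's full model.

```python
# Minimal sketch of enriching a sequential recommender's RNN state with a
# key-value memory of knowledge-base attributes. Sizes and tensors are toy
# assumptions for illustration.
import torch
import torch.nn.functional as F

d, n_attrs = 32, 6
item_seq = torch.randn(1, 5, d)                  # embeddings of 5 clicked items

# Sequential preference from an RNN over the interaction sequence.
gru = torch.nn.GRU(d, d, batch_first=True)
_, h = gru(item_seq)                             # h: (1, 1, d)
h = h.squeeze(0)                                 # (1, d) sequential user state

# Key-value memory: keys are shared attribute (relation) embeddings, values
# are the current item's attribute values drawn from a knowledge base.
keys = torch.randn(n_attrs, d)                   # e.g. genre, director, ...
values = torch.randn(n_attrs, d)                 # KB value embeddings

attn = F.softmax(h @ keys.t(), dim=-1)           # (1, n_attrs) read weights
kb_read = attn @ values                          # (1, d) attribute preference

# Final user representation combines sequential and knowledge-enhanced parts.
user_repr = torch.cat([h, kb_read], dim=-1)      # (1, 2d)
candidate = torch.randn(1, 2 * d)                # candidate item, same space
print(float((user_repr * candidate).sum()))
```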
Proceedings Article

Multi-Turn Response Selection for Chatbots with Deep Attention Matching Network

TL;DR: This paper investigates matching a response with its multi-turn context using dependency information based entirely on attention; inspired by the Transformer from machine translation, it extends the attention mechanism in two ways and jointly introduces both kinds of attention in one uniform neural network.
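
Below is a small, assumption-based sketch of the two kinds of attention the summary refers to: Transformer-style self-attention within an utterance or response, and cross-attention between the response and a context utterance, combined into a word-by-word matching matrix. The toy shapes and the final pooling are placeholders; the actual network stacks such matrices across turns and layers and aggregates them with a CNN.

```python
# Minimal sketch of self-attention and cross-attention for response matching.
# Toy shapes and the final pooling are assumptions, not the paper's full model.
import torch
import torch.nn.functional as F

def attend(query, key, value):
    """Scaled dot-product attention (Transformer-style attentive module)."""
    scores = query @ key.t() / key.size(-1) ** 0.5
    return F.softmax(scores, dim=-1) @ value

d = 16
utterance = torch.randn(7, d)    # word representations of one context turn
response = torch.randn(5, d)     # word representations of a candidate response

# Self-attention: each sequence attends to itself to capture dependencies.
utt_self = attend(utterance, utterance, utterance)
resp_self = attend(response, response, response)

# Cross-attention: the response attends to the utterance.
resp_cross = attend(response, utterance, utterance)

# A simple word-by-word matching matrix between attended representations;
# the full model aggregates many such matrices across turns and layers.
match = resp_self @ utt_self.t()             # (5, 7) matching degrees
score = torch.sigmoid(match.max(dim=1).values.mean())
print(float(score))
```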