
Xian-Ling Mao

Researcher at Beijing Institute of Technology

Publications - 100
Citations - 1375

Xian-Ling Mao is an academic researcher at Beijing Institute of Technology. His research covers topics including computer science and hash functions. He has an h-index of 11 and has co-authored 83 publications receiving 521 citations. His previous affiliations include Microsoft.

Papers
Proceedings ArticleDOI

Global Context Enhanced Graph Neural Networks for Session-based Recommendation

TL;DR: A novel approach, GCE-GNN, exploits item transitions over all sessions in a more subtle manner to better infer the user preference of the current session, and consistently outperforms state-of-the-art methods.
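
The TL;DR only names the technique, so here is a minimal PyTorch sketch of what one round of message passing over a session graph can look like for session-based recommendation. It is not the authors' GCE-GNN implementation: it uses a single session-level graph (no global graph over all sessions), a mean readout instead of the paper's attention mechanism, and the class name SessionGraphLayer, the embedding size, and the recommend helper are illustrative assumptions.

```python
# Minimal sketch of session-graph message passing for session-based recommendation.
# NOT the authors' GCE-GNN; names and hyper-parameters are illustrative.
import torch
import torch.nn as nn


class SessionGraphLayer(nn.Module):
    """One round of neighbor aggregation over a session's item-transition graph."""

    def __init__(self, num_items: int, dim: int = 64):
        super().__init__()
        self.item_emb = nn.Embedding(num_items, dim)
        self.w_self = nn.Linear(dim, dim, bias=False)
        self.w_neigh = nn.Linear(dim, dim, bias=False)

    def forward(self, session: torch.LongTensor) -> torch.Tensor:
        # session: (L,) item ids of the current session, in click order.
        emb = self.item_emb(session)                      # (L, dim)
        L = session.size(0)
        # Adjacency built from consecutive transitions i -> i+1 within the session.
        adj = torch.zeros(L, L)
        adj[torch.arange(L - 1), torch.arange(1, L)] = 1.0
        adj = adj + adj.t()                               # treat edges as undirected here
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
        neigh = (adj @ emb) / deg                         # mean of neighbor embeddings
        return torch.relu(self.w_self(emb) + self.w_neigh(neigh))

    def recommend(self, session: torch.LongTensor, k: int = 5) -> torch.LongTensor:
        # Session representation: mean of updated item embeddings
        # (the paper uses a more elaborate attention-based readout).
        h = self.forward(session).mean(dim=0)             # (dim,)
        scores = self.item_emb.weight @ h                 # (num_items,)
        return scores.topk(k).indices


# Example: a toy catalogue of 1000 items and one click session.
model = SessionGraphLayer(num_items=1000)
clicks = torch.tensor([3, 17, 42, 17, 99])
print(model.recommend(clicks))
```

In GCE-GNN itself, this session-level view is further combined with item embeddings aggregated from a global transition graph built over all sessions, which is what the TL;DR refers to.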
Proceedings ArticleDOI

InfoXLM: An Information-Theoretic Framework for Cross-Lingual Language Model Pre-Training

TL;DR: An information-theoretic framework is presented that formulates cross-lingual language model pre-training as maximizing mutual information between multilingual multi-granularity texts, and a new pre-training task based on contrastive learning is proposed.
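
As a concrete illustration of the contrastive pre-training task mentioned above, the snippet below computes a symmetric InfoNCE-style loss over a batch of parallel sentence pairs, treating each translation pair as a positive and the other in-batch sentences as negatives. The encoder is omitted and the temperature value is an arbitrary choice; this is a sketch of the general objective, not InfoXLM's exact cross-lingual contrast task.

```python
# Sketch of a cross-lingual contrastive (InfoNCE-style) objective over parallel pairs.
import torch
import torch.nn.functional as F


def cross_lingual_contrastive_loss(src_emb: torch.Tensor,
                                   tgt_emb: torch.Tensor,
                                   temperature: float = 0.05) -> torch.Tensor:
    """src_emb[i] and tgt_emb[i] encode the same sentence in two languages."""
    src = F.normalize(src_emb, dim=-1)
    tgt = F.normalize(tgt_emb, dim=-1)
    logits = src @ tgt.t() / temperature          # (B, B) cosine similarities
    labels = torch.arange(src.size(0))            # positives sit on the diagonal
    # Symmetric loss: align source->target and target->source.
    return 0.5 * (F.cross_entropy(logits, labels) +
                  F.cross_entropy(logits.t(), labels))


# Example with random "sentence embeddings" standing in for encoder outputs.
src = torch.randn(8, 768)   # e.g. English sentences
tgt = torch.randn(8, 768)   # their translations
print(cross_lingual_contrastive_loss(src, tgt))
```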
Proceedings ArticleDOI

Global Context Enhanced Graph Neural Networks for Session-based Recommendation

TL;DR: The authors propose Global Context Enhanced Graph Neural Networks (GCE-GNN) to exploit item transitions over all sessions in a more subtle manner for better inferring the user preference of the current session.
Journal ArticleDOI

Cross-Lingual Natural Language Generation via Pre-Training

TL;DR: Experimental results on question generation and abstractive summarization show that the model outperforms the machine-translation-based pipeline methods for zero-shot cross-lingual generation and improves NLG performance of low-resource languages by leveraging rich-resource language data.
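
The recipe implied by this TL;DR (fine-tune a multilingual pre-trained seq2seq model on rich-resource-language generation data, then run it directly on documents in other languages) can be sketched as follows. The checkpoint google/mt5-small, the toy data, and the single optimizer step are illustrative stand-ins, not the model or training setup used in the paper.

```python
# Hedged sketch of zero-shot cross-lingual abstractive summarization:
# fine-tune on English (document, summary) pairs, then infer on another language.
# The checkpoint and data below are placeholders for illustration only.
import torch
from transformers import AutoTokenizer, MT5ForConditionalGeneration

tok = AutoTokenizer.from_pretrained("google/mt5-small")
model = MT5ForConditionalGeneration.from_pretrained("google/mt5-small")

# --- Supervised fine-tuning step on rich-resource (English) pairs ---
en_docs = ["The committee met on Monday to discuss the new budget proposal ..."]
en_sums = ["Committee discusses new budget."]
inputs = tok(en_docs, return_tensors="pt", padding=True, truncation=True)
labels = tok(en_sums, return_tensors="pt", padding=True, truncation=True).input_ids
loss = model(**inputs, labels=labels).loss
loss.backward()  # one illustrative gradient step; a real run loops over a dataset
torch.optim.AdamW(model.parameters(), lr=1e-4).step()

# --- Zero-shot inference on a document in another language ---
zh_doc = ["委员会周一开会讨论了新的预算提案……"]
gen = model.generate(**tok(zh_doc, return_tensors="pt"), max_new_tokens=32)
print(tok.batch_decode(gen, skip_special_tokens=True))
```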
Posted Content

InfoXLM: An Information-Theoretic Framework for Cross-Lingual Language Model Pre-Training

TL;DR: The authors formulate cross-lingual language model pre-training as maximizing mutual information between multilingual multi-granularity texts and propose a contrastive learning approach to improve the cross-lingual transferability of pre-trained models.
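
For readers who would rather experiment with a released model than re-implement the objective, a checkpoint appears to be published on the Hugging Face Hub under the identifier microsoft/infoxlm-base (an XLM-R-style encoder). That identifier, and the use of the first token's hidden state as a sentence embedding, are assumptions of this usage sketch rather than claims from the paper.

```python
# Usage sketch: extract sentence embeddings from an assumed Hub checkpoint
# "microsoft/infoxlm-base" and compare two parallel sentences across languages.
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("microsoft/infoxlm-base")
model = AutoModel.from_pretrained("microsoft/infoxlm-base")

sentences = ["The cat sat on the mat.", "Le chat est assis sur le tapis."]
batch = tok(sentences, return_tensors="pt", padding=True, truncation=True)
with torch.no_grad():
    hidden = model(**batch).last_hidden_state        # (2, seq_len, hidden)
emb = hidden[:, 0]                                   # first-token state as sentence embedding
sim = torch.nn.functional.cosine_similarity(emb[0], emb[1], dim=0)
print(f"cross-lingual similarity: {sim.item():.3f}")
```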