Qun Liu
Researcher at Huawei
Publications - 497
Citations - 11928
Qun Liu is an academic researcher from Huawei. The author has contributed to research in topics: Machine translation & Computer science. The author has an h-index of 46, co-authored 383 publications receiving 8714 citations. Previous affiliations of Qun Liu include Chinese Academy of Sciences & Peking University.
Papers
Proceedings ArticleDOI
ERNIE: Enhanced Language Representation with Informative Entities
TL;DR: This paper utilizes both large-scale textual corpora and KGs to train an enhanced language representation model (ERNIE) which can take full advantage of lexical, syntactic, and knowledge information simultaneously, and is comparable with the state-of-the-art model BERT on other common NLP tasks.
Proceedings ArticleDOI
TinyBERT: Distilling BERT for Natural Language Understanding
TL;DR: TinyBERT, as discussed by the authors, is trained with a two-stage learning framework that performs Transformer distillation at both the pre-training and the task-specific learning stage, capturing both the general-domain and the task-specific knowledge in BERT.
Posted Content
TinyBERT: Distilling BERT for Natural Language Understanding
TL;DR: A novel Transformer distillation method that is specially designed for knowledge distillation (KD) of the Transformer-based models is proposed and, by leveraging this new KD method, the plenty of knowledge encoded in a large “teacher” BERT can be effectively transferred to a small “student” TinyBERT.
Proceedings ArticleDOI
HHMM-based Chinese Lexical Analyzer ICTCLAS
TL;DR: This document presents the results from Inst.
Proceedings ArticleDOI
Findings of the 2017 Conference on Machine Translation (WMT17)
Ondřej Bojar,Rajen Chatterjee,Christian Federmann,Yvette Graham,Barry Haddow,Shujian Huang,Matthias Huck,Philipp Koehn,Qun Liu,Varvara Logacheva,Christof Monz,Matteo Negri,Matt Post,Raphael Rubino,Lucia Specia,Marco Turchi +15 more
TL;DR: The results of the WMT17 shared tasks, which included three machine translation (MT) tasks (news, biomedical, and multimodal), two evaluation tasks (metrics and run-time estimation of MT quality), an automatic post-editing task, a neural MT training task, and a bandit learning task are presented.