Mu Li
Researcher at Microsoft
Publications - 96
Citations - 4087
Mu Li is an academic researcher from Microsoft. The author has contributed to research in topics: Machine translation & Phrase. The author has an h-index of 31 and has co-authored 91 publications receiving 3752 citations. Previous affiliations of Mu Li include Harbin Institute of Technology.
Papers
Posted Content
Achieving Human Parity on Automatic Chinese to English News Translation
Hany Hassan, Anthony Aue, Chang Chen, Vishal Chowdhary, Jonathan H. Clark, Christian Federmann, Xuedong Huang, Marcin Junczys-Dowmunt, William Lewis, Mu Li, Shujie Liu, Tie-Yan Liu, Renqian Luo, Arul Menezes, Tao Qin, Frank Seide, Xu Tan, Fei Tian, Lijun Wu, Shuangzhi Wu, Yingce Xia, Dongdong Zhang, Zhirui Zhang, Ming Zhou, +23 more
TL;DR: It is found that Microsoft's latest neural machine translation system has reached a new state of the art, and that its translation quality is at human parity when compared to professional human translations.
Journal Article
Chinese Word Segmentation and Named Entity Recognition: A Pragmatic Approach
TL;DR: A pragmatic mathematical framework is proposed in which segmenting known words and detecting unknown words of different types are performed simultaneously in a unified way; it is implemented in an adaptive Chinese word segmenter, called MSRSeg, which is described in detail.
Proceedings Article
An Improved Chinese Word Segmentation System with Conditional Random Field
Hai Zhao, Changning Huang, Mu Li, +2 more
TL;DR: It is found that the use of a 6-tag set, the tone feature of Chinese characters, and assistant segmenters trained on other corpora further improve Chinese word segmentation performance.
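The 6-tag set mentioned above extends the common 4-tag (B/M/E/S) character-labeling scheme for Chinese word segmentation with tags for the second and third characters of a word. As a minimal sketch (independent of the paper's CRF model and features), the labeling of a pre-segmented sentence under a {B, B2, B3, M, E, S} scheme might look like this:

```python
def six_tag_labels(words):
    """Label each character of a pre-segmented sentence with one of
    six tags: S (single-char word), B (word start), B2 (2nd char),
    B3 (3rd char), M (later middle char), E (word end).
    A sketch of the tagging scheme only, not the paper's CRF system."""
    tags = []
    for word in words:
        n = len(word)
        if n == 1:
            tags.append("S")           # single-character word
            continue
        for i in range(n):
            if i == 0:
                tags.append("B")       # first character of the word
            elif i == n - 1:
                tags.append("E")       # last character of the word
            elif i == 1:
                tags.append("B2")      # second character
            elif i == 2:
                tags.append("B3")      # third character
            else:
                tags.append("M")       # remaining middle characters
    return tags

# e.g. the segmentation ["中国", "人"] yields ["B", "E", "S"]
print(six_tag_labels(["中国", "人"]))
```

A sequence labeler such as a CRF is then trained to predict these per-character tags, from which the word boundaries are recovered.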
Proceedings Article
Hierarchical Recurrent Neural Network for Document Modeling
TL;DR: A novel hierarchical recurrent neural network language model (HRNNLM) for document modeling is proposed, which integrates sentence history information into the word-level RNN so that the word sequence is predicted with cross-sentence contextual information.
Book Chapter
Learning Entity Representation for Entity Disambiguation
TL;DR: A novel disambiguation model based on neural networks is proposed, which learns distributed entity representations to measure similarity without manually designed features, achieving good performance on two datasets.
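Once entity representations are learned, disambiguation reduces to picking the candidate entity whose vector is most similar to the mention's context vector. A minimal sketch, assuming cosine similarity and hypothetical pre-computed embeddings (the vectors and entity names below are made up for illustration):

```python
import numpy as np

def cosine_similarity(u, v):
    """Cosine similarity between two embedding vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def disambiguate(context_vec, candidates):
    """Return the candidate entity whose (hypothetical learned)
    embedding is most similar to the mention's context vector."""
    return max(candidates, key=lambda name: cosine_similarity(context_vec, candidates[name]))

# Toy example: a mention of "Jordan" in a sports context.
context_vec = np.array([0.95, 0.05])
candidates = {
    "Michael Jordan (basketball)": np.array([0.9, 0.1]),
    "Michael I. Jordan (scientist)": np.array([0.1, 0.9]),
}
best = disambiguate(context_vec, candidates)
```

The point of learning the representations jointly is that this similarity works without any hand-crafted features describing the mention or the entity.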