
Haipeng Sun

Researcher at Harbin Institute of Technology

Publications: 23
Citations: 226

Haipeng Sun is an academic researcher at Harbin Institute of Technology. The author has contributed to research in topics: Machine translation & Computer science. The author has an h-index of 6 and has co-authored 17 publications receiving 124 citations.

Papers
Proceedings ArticleDOI

Unsupervised Bilingual Word Embedding Agreement for Unsupervised Neural Machine Translation.

TL;DR: Two methods that train UNMT with unsupervised bilingual word embedding (UBWE) agreement are proposed; empirical results show that the proposed methods significantly outperform conventional UNMT.
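
As a rough illustration of the agreement idea (a minimal sketch, not the authors' code), the loss below, assuming PyTorch, keeps the NMT embedding table close to fixed pretrained UBWE vectors during training; the function name and the weighting hyper-parameter are illustrative assumptions:

import torch
import torch.nn.functional as F

def ubwe_agreement_loss(nmt_embedding: torch.nn.Embedding,
                        ubwe_vectors: torch.Tensor) -> torch.Tensor:
    """Mean cosine distance between the NMT embedding table and fixed UBWE vectors."""
    cos = F.cosine_similarity(nmt_embedding.weight, ubwe_vectors, dim=-1)
    return (1.0 - cos).mean()

# Hypothetical usage inside a UNMT training step:
vocab_size, dim = 1000, 64
emb = torch.nn.Embedding(vocab_size, dim)
ubwe = torch.randn(vocab_size, dim)   # stands in for pretrained UBWE vectors
agreement_weight = 0.1                # illustrative hyper-parameter
loss = agreement_weight * ubwe_agreement_loss(emb, ubwe)
loss.backward()                       # combined with the usual UNMT losses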
Journal ArticleDOI

Unsupervised Neural Machine Translation With Cross-Lingual Language Representation Agreement

TL;DR: A novel UNMT structure with cross-lingual language representation agreement is proposed to capture the interaction between UBWE/CMLM and UNMT during training; experiments demonstrate that the proposed models improve significantly over the corresponding state-of-the-art UNMT baselines.
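
To make the representation-agreement notion concrete (a sketch under assumptions, not the paper's implementation), one plausible sentence-level variant in PyTorch penalizes the distance between mean-pooled representations from the UNMT encoder and a frozen cross-lingual masked language model (CMLM); all shapes and names are illustrative:

import torch
import torch.nn.functional as F

def representation_agreement_loss(unmt_states: torch.Tensor,
                                  cmlm_states: torch.Tensor,
                                  mask: torch.Tensor) -> torch.Tensor:
    """MSE between mean-pooled sentence representations of the two encoders."""
    denom = mask.sum(dim=1, keepdim=True).clamp(min=1)
    unmt_sent = (unmt_states * mask.unsqueeze(-1)).sum(dim=1) / denom
    cmlm_sent = (cmlm_states * mask.unsqueeze(-1)).sum(dim=1) / denom
    return F.mse_loss(unmt_sent, cmlm_sent)

# Hypothetical shapes: (batch, seq_len, hidden)
unmt_h = torch.randn(4, 10, 32, requires_grad=True)
cmlm_h = torch.randn(4, 10, 32)   # from a frozen cross-lingual masked LM
pad_mask = torch.ones(4, 10)      # 1 = real token, 0 = padding
loss = representation_agreement_loss(unmt_h, cmlm_h, pad_mask)
loss.backward()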
Proceedings ArticleDOI

Knowledge Distillation for Multilingual Unsupervised Neural Machine Translation

TL;DR: In this article, a simple method is introduced to translate between thirteen languages using a single encoder and a single decoder, making use of multilingual data to improve UNMT for all language pairs.
Proceedings ArticleDOI

NICT’s Unsupervised Neural and Statistical Machine Translation Systems for the WMT19 News Translation Task

TL;DR: NICT’s participation in the WMT19 unsupervised news translation task is presented; using only the data provided by the organizers (“constrained”), the system ranked first for the German-to-Czech translation task according to both BLEU-cased and human evaluation.
Posted Content

Knowledge Distillation for Multilingual Unsupervised Neural Machine Translation

TL;DR: A simple method to translate between thirteen languages using a single encoder and a single decoder is introduced, making use of multilingual data to improve UNMT for all language pairs and proposes two knowledge distillation methods to further enhance multilingual UNMT performance.
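
A minimal PyTorch sketch of word-level knowledge distillation in this spirit (not the paper's code; the temperature, shapes, and names are illustrative assumptions): the multilingual UNMT student is trained to match a stronger teacher's output distribution over the target vocabulary.

import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      temperature: float = 1.0) -> torch.Tensor:
    """KL divergence between teacher and student target-vocabulary distributions."""
    t = temperature
    student_logp = F.log_softmax(student_logits / t, dim=-1)
    teacher_p = F.softmax(teacher_logits / t, dim=-1)
    return F.kl_div(student_logp, teacher_p, reduction="batchmean") * (t * t)

# Hypothetical shapes: (batch * seq_len, vocab)
student_logits = torch.randn(8, 1000, requires_grad=True)
teacher_logits = torch.randn(8, 1000)   # the teacher runs without gradients
loss = distillation_loss(student_logits, teacher_logits, temperature=2.0)
loss.backward()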