Shuo Ren
Researcher at Beihang University
Publications - 28
Citations - 1201
Shuo Ren is an academic researcher at Beihang University whose work spans machine translation and computer science. The author has an h-index of 13 and has co-authored 23 publications receiving 492 citations.
Papers
Posted Content
GraphCodeBERT: Pre-training Code Representations with Data Flow
Daya Guo, Shuo Ren, Shuai Lu, Zhangyin Feng, Duyu Tang, Shujie Liu, Long Zhou, Nan Duan, Alexey Svyatkovskiy, Shengyu Fu, Michele Tufano, Shao Kun Deng, Colin B. Clement, Dawn Drain, Neel Sundaresan, Jian Yin, Daxin Jiang, Ming Zhou +17 more
TL;DR: Results show that code structure and the newly introduced pre-training tasks improve GraphCodeBERT, which achieves state-of-the-art performance on the four downstream tasks; the model is also shown to prefer structure-level attention over token-level attention in the code search task.
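The data-flow structure mentioned in the summary above can be illustrated with a minimal sketch: for straight-line code, link each variable use to the line of its most recent assignment. This is a toy approximation of the "where does this value come from" relation, not GraphCodeBERT's actual extractor; the function name and use of Python's `ast` module are illustrative choices.

```python
import ast

def data_flow_edges(source: str):
    """Toy data-flow extraction for straight-line Python: connect each
    variable use to the line of its most recent assignment."""
    names = [n for n in ast.walk(ast.parse(source)) if isinstance(n, ast.Name)]
    # Process uses before definitions on the same line, so that in
    # `a = b + a` the right-hand `a` links to the previous assignment.
    names.sort(key=lambda n: (n.lineno, isinstance(n.ctx, ast.Store)))
    last_def, edges = {}, []
    for n in names:
        if isinstance(n.ctx, ast.Load) and n.id in last_def:
            edges.append((n.id, n.lineno, last_def[n.id]))  # (var, use, def)
        elif isinstance(n.ctx, ast.Store):
            last_def[n.id] = n.lineno
    return edges

edges = data_flow_edges("a = 1\nb = a\na = b + a")
# Each tuple reads: variable, line where it is used, line where it was defined.
```

GraphCodeBERT feeds edges of this kind into pre-training tasks (edge prediction and node alignment) alongside the token sequence, which is what lets the model attend over structure rather than surface tokens alone.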
Posted Content
CodeBLEU: a Method for Automatic Evaluation of Code Synthesis
Shuo Ren, Daya Guo, Shuai Lu, Long Zhou, Shujie Liu, Duyu Tang, Neel Sundaresan, Ming Zhou, Ambrosio Blanco, Shuai Ma +9 more
TL;DR: This work introduces a new automatic evaluation metric, dubbed CodeBLEU, which retains BLEU's strength in n-gram matching while further injecting code syntax via abstract syntax trees (AST) and code semantics via data flow; CodeBLEU achieves better correlation with programmer-assigned scores than both BLEU and accuracy.
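CodeBLEU combines its four components linearly: standard BLEU, a keyword-weighted n-gram match, an AST subtree match, and a data-flow match. The sketch below shows only that final combination with the equal default weights; the component scores passed in are assumed inputs for illustration, since computing them requires the full hypothesis/reference comparison described in the paper.

```python
def code_bleu(bleu, weighted_bleu, ast_match, df_match,
              weights=(0.25, 0.25, 0.25, 0.25)):
    """Linear combination of CodeBLEU's four component scores,
    each assumed to lie in [0, 1]."""
    a, b, c, d = weights
    return a * bleu + b * weighted_bleu + c * ast_match + d * df_match

score = code_bleu(0.6, 0.65, 0.8, 0.7)  # → 0.6875
```

Because the AST and data-flow components credit structurally correct code even when surface tokens differ, the combined score can reward a synthesized program that BLEU alone would penalize.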
Posted Content
Style Transfer as Unsupervised Machine Translation
TL;DR: This paper takes advantage of style-preference information and word-embedding similarity to produce pseudo-parallel data within a statistical machine translation (SMT) framework, and introduces a style classifier to guarantee the accuracy of style transfer and penalize bad candidates in the generated pseudo data.
Posted Content
WavLM: Large-Scale Self-Supervised Pre-Training for Full Stack Speech Processing
Sanyuan Chen, Chengyi Wang, Zhengyang Chen, Yu Wu, Shujie Liu, Zhuo Chen, Jinyu Li, Naoyuki Kanda, Takuya Yoshioka, Xiong Xiao, Jian Wu, Long Zhou, Shuo Ren, Yanmin Qian, Yao Qian, Michael Zeng, Furu Wei +16 more
TL;DR: WavLM is a pre-trained model designed to solve full-stack downstream speech tasks; it achieves state-of-the-art performance on the SUPERB speech recognition task.
Proceedings ArticleDOI
Knowledge-Based Semantic Embedding for Machine Translation
TL;DR: This paper builds a semantic space to connect the source and target languages and applies it to the sequence-to-sequence framework, proposing a Knowledge-Based Semantic Embedding (KBSE) method for machine translation.