
Dong Yu

Researcher at Tencent

Publications - 389
Citations - 45733

Dong Yu is an academic researcher from Tencent. The author has contributed to research in topics: Artificial neural network & Word error rate. The author has an h-index of 72 and has co-authored 339 publications receiving 39098 citations. Previous affiliations of Dong Yu include Peking University & Microsoft.

Papers
Proceedings Article

Semantic Role Labeling Guided Multi-turn Dialogue ReWriter

TL;DR: Semantic role labeling (SRL), which highlights the core semantic information of who did what to whom, is proposed as additional guidance for the multi-turn dialogue rewriter, further improving a RoBERTa-based model that already outperforms previous state-of-the-art systems.
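
To make the idea of SRL guidance concrete, here is a minimal Python sketch of one way "who did what to whom" frames could be serialized alongside the dialogue context before it is fed to a rewriter; the "<srl>" marker and role labels are illustrative assumptions, not the paper's actual encoding.

def annotate_with_srl(utterance, frames):
    # Append SRL frames after the raw text so a sequence model
    # (e.g. a RoBERTa-based rewriter) can attend to them.
    # NOTE: the "<srl>" marker and the role names are hypothetical.
    parts = [utterance]
    for predicate, args in frames:
        arg_str = " ".join(f"[{role}: {text}]" for role, text in args)
        parts.append(f"<srl> {predicate} {arg_str}")
    return " ".join(parts)

# Toy usage on a two-turn context
context = "I met Anna yesterday. She recommended a book."
frames = [
    ("met", [("ARG0", "I"), ("ARG1", "Anna"), ("TMP", "yesterday")]),
    ("recommended", [("ARG0", "She"), ("ARG1", "a book")]),
]
print(annotate_with_srl(context, frames))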
Proceedings Article

Learning Methods in Multilingual Speech Recognition

TL;DR: Two learning methods, semi-automatic unit selection and global phonetic decision tree, are introduced to address the effective utilization of acoustic data from multiple source languages in multilingual speech recognition.
Proceedings Article

Cross-lingual speech recognition under runtime resource constraints

TL;DR: The results show that the acoustic model (AM) merging technique performs best, achieving a 60% relative WER reduction over the IPA-based technique.

A Bidirectional Target Filtering Model of Speech Coarticulation: Two-Stage Implementation for Phonetic Recognition

TL;DR: In this article, a structured generative model of speech coarticulation and reduction is described with a novel two-stage implementation, in which the dynamics of formants or vocal tract resonances (VTRs) in fluent speech are generated from prior information about resonance targets in the phone sequence, in the absence of acoustic data.
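
As a rough illustration of the target-filtering idea, the sketch below expands per-phone resonance targets into a stepwise track and smooths it with a symmetric (bidirectional) filter, so each frame is pulled toward neighbouring phones' targets; the kernel shape and parameter values are assumptions for illustration, not the paper's actual filter or its two-stage training procedure.

import numpy as np

def bidirectional_target_filter(targets, durations, gamma=0.85, span=15):
    # Expand per-phone VTR/formant targets into a stepwise target track.
    step_track = np.concatenate([np.full(d, t, dtype=float)
                                 for t, d in zip(targets, durations)])
    # A symmetric exponential kernel looks both backward and forward in time,
    # modelling coarticulation and target undershoot for short phones.
    k = np.arange(-span, span + 1)
    kernel = gamma ** np.abs(k)
    kernel /= kernel.sum()
    # Edge-pad so the smoothed trajectory keeps the original length.
    padded = np.pad(step_track, span, mode="edge")
    return np.convolve(padded, kernel, mode="valid")

# Toy usage: three phones with F1-like targets (Hz) and frame durations
trajectory = bidirectional_target_filter(targets=[300.0, 700.0, 400.0],
                                          durations=[20, 8, 20])
print(trajectory.shape)  # (48,)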
Posted Content

Highway Long Short-Term Memory RNNs for Distant Speech Recognition

TL;DR: In this article, the authors extend the deep LSTM (DLSTM) by introducing gated direct connections, called highway connections, between memory cells in adjacent layers; these enable unimpeded information flow across layers and thus alleviate the vanishing gradient problem when building deeper LSTMs.
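
As a concrete reading of the highway connection, the NumPy sketch below adds a gated "depth" term from the lower layer's cell state to the usual LSTM cell-state update; the gate wiring and weight names (W_d, b_d) are simplified assumptions rather than the paper's exact formulation.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def highway_cell_update(c_prev_time, c_lower_layer, i_gate, f_gate, g_input, W_d, b_d):
    # Depth (carry) gate: decides how much of the lower layer's cell state
    # flows straight into this layer, giving unimpeded cross-layer flow.
    d_gate = sigmoid(c_lower_layer @ W_d + b_d)
    c_new = (d_gate * c_lower_layer      # highway term from the layer below
             + f_gate * c_prev_time      # usual recurrence in time
             + i_gate * g_input)         # usual candidate input
    return c_new

# Toy usage with random values and hidden size 4
rng = np.random.default_rng(0)
h = 4
c_new = highway_cell_update(
    c_prev_time=rng.standard_normal(h),
    c_lower_layer=rng.standard_normal(h),
    i_gate=sigmoid(rng.standard_normal(h)),
    f_gate=sigmoid(rng.standard_normal(h)),
    g_input=np.tanh(rng.standard_normal(h)),
    W_d=rng.standard_normal((h, h)) * 0.1,
    b_d=np.zeros(h),
)
print(c_new.shape)  # (4,)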