Dong Yu

Researcher at Tencent

Publications: 389
Citations: 45,733

Dong Yu is an academic researcher from Tencent. The author has contributed to research in topics including Artificial neural network and Word error rate. The author has an h-index of 72 and has co-authored 339 publications receiving 39,098 citations. Previous affiliations of Dong Yu include Peking University and Microsoft.

Papers
Book Chapter

Training and Decoding Speedup

Dong Yu, +1 more
TL;DR: This chapter describes parallel training algorithms such as the pipelined backpropagation algorithm, the asynchronous stochastic gradient descent algorithm, and the augmented Lagrange multiplier algorithm, and introduces model-size reduction techniques based on low-rank approximation that can speed up both training and decoding.
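The low-rank speedup mentioned in this chapter summary is commonly realized by factoring a trained weight matrix with a truncated SVD and replacing one wide layer with two thin ones. The following is a minimal, generic NumPy sketch of that standard technique, not code from the chapter; the layer size and rank are arbitrary assumptions.

```python
import numpy as np

def low_rank_factorize(W, rank):
    """Approximate an m x n weight matrix W as A @ B with A (m x rank)
    and B (rank x n) via truncated SVD, cutting the multiply-adds of
    the layer from m*n to rank*(m + n)."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :rank] * s[:rank]   # fold the singular values into the first factor
    B = Vt[:rank, :]
    return A, B

# toy check: a 2048 x 2048 layer factored at rank 256
W = np.random.randn(2048, 2048)
A, B = low_rank_factorize(W, rank=256)
x = np.random.randn(2048)
y_full = W @ x
y_lr = A @ (B @ x)               # two thin matmuls instead of one square one
```

Because the factored form is itself just a pair of linear layers, the same trick accelerates both training (after an initial factorization) and decoding, which is the point the chapter summary makes.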

NeuralKalman: A Learnable Kalman Filter for Acoustic Echo Cancellation

TL;DR: In this article, the authors integrate the frequency-domain Kalman filter (FDKF) and deep neural networks (DNNs) into a hybrid method, called NeuralKalman, to leverage the advantages of deep learning and adaptive filtering algorithms.
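For context, a frequency-domain Kalman filter tracks a per-bin echo-path estimate from the far-end and microphone spectra, and NeuralKalman augments parts of this recursion with learned components. The sketch below shows only the standard per-bin FDKF update in NumPy, with a hypothetical dnn_residual_scale() stub marking one place a network could intervene; it is an assumption-laden illustration, not the paper's architecture, and all parameter values are placeholders.

```python
import numpy as np

def dnn_residual_scale(err_power):
    """Hypothetical stand-in for the learned component: a fixed scalar here;
    in a NeuralKalman-style system a DNN would predict such quantities."""
    return 1.0

def fdkf_step(W, P, X, D, A=0.999, Q=1e-4, R=1e-3):
    """One per-frequency-bin Kalman update of the echo-path estimate W.
    X: far-end spectrum, D: microphone spectrum (equal-length complex arrays).
    P: per-bin state-error variance; A, Q, R: transition coefficient,
    process-noise and observation-noise variances (illustrative values)."""
    W_pred = A * W                                              # state prediction
    P_pred = (A ** 2) * P + Q                                   # covariance prediction
    E = D - X * W_pred                                          # innovation: mic minus echo estimate
    K = P_pred * np.conj(X) / (np.abs(X) ** 2 * P_pred + R)     # Kalman gain
    W = W_pred + dnn_residual_scale(np.abs(E) ** 2) * K * E     # state update
    P = (1.0 - (K * X).real) * P_pred                           # covariance update
    return W, P, E                                              # E is the echo-cancelled frame
```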
Journal Article

Cooperative Fault-Tolerant Control for a Class of Nonlinear MASs by Resilient Learning Approach.

TL;DR: In this paper, a learning-based resilient fault-tolerant control method is proposed for a class of uncertain nonlinear multiagent systems (MASs) to enhance security and reliability against denial-of-service (DoS) attacks and actuator faults.
Patent

Configurable grammar templates

TL;DR: In this article, grammar extensions are provided that allow application developers to selectively include portions of grammar templates and to combine grammar elements into new grammar structures, making it easy to form customized grammars.
Posted Content

A Comparison of Lattice-free Discriminative Training Criteria for Purely Sequence-Trained Neural Network Acoustic Models

Chao Weng, +1 more
08 Nov 2018
TL;DR: It is demonstrated that, analogously to LF-MMI, a neural network acoustic model can also be trained from scratch using the LF-bMMI or LF-sMBR criterion, without the need for cross-entropy pre-training.
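For reference, the vanilla MMI and boosted-MMI objectives behind these criteria have standard forms in the discriminative-training literature; the lattice-free variants evaluate the denominator sum over a full phone-level graph instead of word lattices. In the usual notation:

```latex
\mathcal{F}_{\mathrm{MMI}}
  = \sum_{u} \log
    \frac{p(\mathbf{X}_u \mid \mathbb{M}_{w_u})^{\kappa}\, P(w_u)}
         {\sum_{w} p(\mathbf{X}_u \mid \mathbb{M}_{w})^{\kappa}\, P(w)}
\qquad
\mathcal{F}_{\mathrm{bMMI}}
  = \sum_{u} \log
    \frac{p(\mathbf{X}_u \mid \mathbb{M}_{w_u})^{\kappa}\, P(w_u)}
         {\sum_{w} p(\mathbf{X}_u \mid \mathbb{M}_{w})^{\kappa}\, P(w)\,
          e^{-b\, A(w, w_u)}}
```

where X_u is the acoustic observation sequence of utterance u, M_w the model for hypothesis w, κ the acoustic scale, and A(w, w_u) a raw accuracy of w against the reference w_u boosted by the factor b; sMBR instead maximizes the expected state-level accuracy under the same denominator distribution.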