Hui Jiang
Researcher at York University
Publications - 181
Citations - 8621
Hui Jiang is an academic researcher at York University whose work centers on hidden Markov models and artificial neural networks. He has an h-index of 35 and has co-authored 177 publications receiving 7,439 citations. His previous affiliations include Alcatel-Lucent and the University of Waterloo.
Papers
Journal ArticleDOI
Convolutional neural networks for speech recognition
TL;DR: It is shown that further error rate reduction can be obtained by using convolutional neural networks (CNNs), and a limited-weight-sharing scheme is proposed that can better model speech features.
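The limited-weight-sharing (LWS) idea can be sketched as follows: instead of one filter set shared across the whole frequency axis, each frequency-band section gets its own filters. This is a minimal illustrative sketch, not the paper's model; all sizes, the section count, and the random data are assumptions.

```python
import numpy as np

# Limited weight sharing: split the frequency bands into sections and give
# each section its own independent filter set, since speech patterns differ
# between low and high frequencies. Sizes here are illustrative.
rng = np.random.default_rng(0)
n_bands, filt_size, n_filters, n_sections = 24, 5, 4, 3
x = rng.standard_normal(n_bands)              # one frame of filterbank features

section_len = n_bands // n_sections           # 8 bands per section
# one independent weight tensor per section (the "limited" sharing)
weights = rng.standard_normal((n_sections, n_filters, filt_size))

feature_maps = []
for s in range(n_sections):
    seg = x[s * section_len:(s + 1) * section_len]
    # convolve this section's filters over its own band range only
    maps = np.array([np.convolve(seg, w, mode="valid") for w in weights[s]])
    feature_maps.append(maps.max(axis=1))     # max-pool within the section

out = np.concatenate(feature_maps)            # shape: (n_sections * n_filters,)
```

The motivation for restricting sharing is that, unlike images, the frequency axis of speech is not translation-invariant end to end: a filter useful in low-frequency bands need not be useful in high ones.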
Proceedings ArticleDOI
Enhanced LSTM for Natural Language Inference
TL;DR: This paper presents a new state-of-the-art result, achieving the accuracy of 88.6% on the Stanford Natural Language Inference Dataset, and demonstrates that carefully designing sequential inference models based on chain LSTMs can outperform all previous models.
Proceedings ArticleDOI
Applying Convolutional Neural Networks concepts to hybrid NN-HMM model for speech recognition
TL;DR: The proposed CNN architecture is applied to speech recognition within a hybrid NN-HMM framework, using local filtering and max-pooling in the frequency domain to normalize speaker variance and achieve higher multi-speaker speech recognition performance.
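A toy sketch of why frequency-domain max-pooling helps normalize speaker variance: a small formant shift between two speakers can land in the same pooling window, yielding identical pooled features. The pooling window size and feature vectors are illustrative assumptions.

```python
import numpy as np

def freq_maxpool(feature_map, pool=3):
    # non-overlapping max-pooling along the frequency dimension
    n = len(feature_map) // pool
    return feature_map[:n * pool].reshape(n, pool).max(axis=1)

spk_a = np.array([0., 1., 0., 0., 0., 0.])    # energy peak at band 1
spk_b = np.array([0., 0., 1., 0., 0., 0.])    # same peak shifted one band
pooled_a = freq_maxpool(spk_a)
pooled_b = freq_maxpool(spk_b)                # identical after pooling
```

The one-band shift vanishes after pooling, which is the sense in which pooling along frequency tolerates small spectral differences across speakers.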
Journal ArticleDOI
Confidence measures for speech recognition: A survey
TL;DR: A survey summarizing research on confidence measures (CMs) for speech recognition from the past 10–12 years; the capabilities and limitations of current CM techniques are discussed, with general comments on today's CM approaches.
Proceedings ArticleDOI
Fast speaker adaptation of hybrid NN/HMM model for speech recognition based on discriminative learning of speaker code
Ossama Abdel-Hamid, Hui Jiang +1 more
TL;DR: A new fast speaker adaptation method for the hybrid NN-HMM speech recognition model that can achieve over 10% relative reduction in phone error rate by using only seven utterances for adaptation.
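The speaker-code idea can be sketched in a few lines: the network weights stay frozen, and only a small per-speaker code vector is learned from a handful of adaptation frames. The tiny linear "network", all dimensions, and the random data below are illustrative assumptions, not the paper's actual model.

```python
import numpy as np

rng = np.random.default_rng(1)
d_in, d_code, n_frames = 8, 3, 7

W = rng.standard_normal(d_in + d_code)        # frozen network weights
W_code = W[d_in:]                             # portion that multiplies the code
X = rng.standard_normal((n_frames, d_in))     # a few adaptation frames
t = rng.standard_normal(n_frames)             # adaptation targets

def loss(c):
    pred = X @ W[:d_in] + c @ W_code          # scalar code term, broadcast
    return np.mean((pred - t) ** 2)

c = np.zeros(d_code)                          # speaker code, the only learned part
loss_before = loss(c)
lr = 0.5 / (W_code @ W_code)                  # safe step size for this quadratic
for _ in range(200):                          # gradient descent on c only
    err = X @ W[:d_in] + c @ W_code - t
    c -= lr * 2 * np.mean(err) * W_code       # gradient of the mean squared error
loss_after = loss(c)
```

Because only the low-dimensional code is updated, adaptation needs very little speaker data, which matches the paper's setting of adapting from just a few utterances.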