Proceedings Article
Speech-Transformer: A No-Recurrence Sequence-to-Sequence Model for Speech Recognition
Linhao Dong, Shuang Xu, Bo Xu
pp. 5884-5888
TL;DR: The paper presents the Speech-Transformer, a no-recurrence sequence-to-sequence model that relies entirely on attention mechanisms to learn positional dependencies and can therefore be trained faster and more efficiently, together with a 2D-Attention mechanism that jointly attends to the time and frequency axes of the 2-dimensional speech inputs, providing more expressive representations for the Speech-Transformer.
Abstract: Recurrent sequence-to-sequence models using the encoder-decoder architecture have made great progress in speech recognition tasks. However, they suffer from slow training because their internal recurrence limits training parallelization. In this paper, we present the Speech-Transformer, a no-recurrence sequence-to-sequence model that relies entirely on attention mechanisms to learn positional dependencies and can be trained faster and more efficiently. We also propose a 2D-Attention mechanism that jointly attends to the time and frequency axes of the 2-dimensional speech inputs, thus providing more expressive representations for the Speech-Transformer. Evaluated on the Wall Street Journal (WSJ) speech recognition dataset, our best model achieves a competitive word error rate (WER) of 10.9%, while the whole training process takes only 1.2 days on 1 GPU, significantly faster than the published results of recurrent sequence-to-sequence models.
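The 2D-Attention idea can be illustrated with a minimal sketch: run scaled dot-product attention once along the frequency axis and once along the time axis of a (time, frequency, channel) feature map, then concatenate the two results. The numpy code below is an illustrative simplification, not the paper's exact layer; the convolutional query/key/value projections and multiple heads used in the paper are omitted here.

    import numpy as np

    def softmax(x, axis=-1):
        x = x - x.max(axis=axis, keepdims=True)
        e = np.exp(x)
        return e / e.sum(axis=axis, keepdims=True)

    def attend(q, k, v):
        # Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V,
        # batched over the first axis.
        scores = q @ k.swapaxes(-1, -2) / np.sqrt(q.shape[-1])
        return softmax(scores, axis=-1) @ v

    def two_d_attention(x):
        # x: (time, freq, channels).
        # Frequency attention: within each frame, bins attend across frequency.
        freq_out = attend(x, x, x)
        # Time attention: within each bin, frames attend across time
        # (swap axes so frequency becomes the batch dimension).
        xt = x.swapaxes(0, 1)
        time_out = attend(xt, xt, xt).swapaxes(0, 1)
        return np.concatenate([time_out, freq_out], axis=-1)

    x = np.random.randn(100, 80, 16)   # 100 frames, 80 mel bins, 16 channels
    print(two_d_attention(x).shape)    # (100, 80, 32)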
Citations
Posted Content
Conformer: Convolution-augmented Transformer for Speech Recognition
Anmol Gulati, James Qin, Chung-Cheng Chiu, Niki Parmar, Yu Zhang, Jiahui Yu, Wei Han, Shibo Wang, Zhengdong Zhang, Yonghui Wu, Ruoming Pang
TL;DR: This work proposes the convolution-augmented transformer for speech recognition, named Conformer, which significantly outperforms previous Transformer- and CNN-based models, achieving state-of-the-art accuracy.
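As a rough sketch of the architecture named in this summary: a Conformer block sandwiches self-attention and a depthwise-convolution module between two half-step feed-forward layers. The PyTorch code below is a simplified illustration (relative positional encoding and dropout are omitted, dimensions are arbitrary), not the authors' implementation.

    import torch
    import torch.nn as nn

    class ConvModule(nn.Module):
        # Convolution module: pointwise conv + GLU, depthwise conv
        # + BatchNorm + SiLU, then a final pointwise conv.
        def __init__(self, d, kernel=31):
            super().__init__()
            self.norm = nn.LayerNorm(d)
            self.pw1 = nn.Conv1d(d, 2 * d, 1)
            self.dw = nn.Conv1d(d, d, kernel, padding=kernel // 2, groups=d)
            self.bn = nn.BatchNorm1d(d)
            self.pw2 = nn.Conv1d(d, d, 1)

        def forward(self, x):                    # x: (batch, time, d)
            y = self.norm(x).transpose(1, 2)     # Conv1d wants (batch, d, time)
            y = nn.functional.glu(self.pw1(y), dim=1)
            y = nn.functional.silu(self.bn(self.dw(y)))
            return self.pw2(y).transpose(1, 2)

    class ConformerBlock(nn.Module):
        # Macaron ordering: 1/2 FFN -> self-attention -> conv module -> 1/2 FFN,
        # each as a residual branch, with a final LayerNorm.
        def __init__(self, d=256, heads=4):
            super().__init__()
            def ffn():
                return nn.Sequential(nn.LayerNorm(d), nn.Linear(d, 4 * d),
                                     nn.SiLU(), nn.Linear(4 * d, d))
            self.ffn1, self.ffn2 = ffn(), ffn()
            self.att_norm = nn.LayerNorm(d)
            self.att = nn.MultiheadAttention(d, heads, batch_first=True)
            self.conv = ConvModule(d)
            self.out_norm = nn.LayerNorm(d)

        def forward(self, x):                    # x: (batch, time, d)
            x = x + 0.5 * self.ffn1(x)
            a = self.att_norm(x)
            x = x + self.att(a, a, a, need_weights=False)[0]
            x = x + self.conv(x)
            x = x + 0.5 * self.ffn2(x)
            return self.out_norm(x)

    x = torch.randn(2, 50, 256)                  # (batch, frames, features)
    print(ConformerBlock()(x).shape)             # torch.Size([2, 50, 256])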
Proceedings Article
A Comparative Study on Transformer vs RNN in Speech Applications
Shigeki Karita, Xiaofei Wang, Shinji Watanabe, Takenori Yoshimura, Wangyou Zhang, Nanxin Chen, Tomoki Hayashi, Takaaki Hori, Hirofumi Inaguma, Ziyan Jiang, Masao Someki, Nelson Yalta, Ryuichi Yamamoto
TL;DR: Transformer, an emergent sequence-to-sequence model that achieves state-of-the-art performance in neural machine translation, is compared with RNNs across speech applications, including automatic speech recognition (ASR), speech translation (ST), and text-to-speech (TTS); Transformer proves superior in 13 of 15 ASR benchmarks.
Proceedings Article
Transformer Transducer: A Streamable Speech Recognition Model with Transformer Encoders and RNN-T Loss
TL;DR: An end-to-end speech recognition model with Transformer encoders that can be used in a streaming speech recognition system; the full-attention version of the model beats the state-of-the-art accuracy on the LibriSpeech benchmarks.
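The streaming constraint mentioned here is typically realized by masking self-attention so that each frame attends only to a bounded window of neighbors. A minimal numpy sketch of such a mask (parameter names are illustrative; the paper's exact context configuration may differ):

    import numpy as np

    def streaming_mask(num_frames, left_context, right_context):
        # mask[i, j] is True when frame i may attend to frame j.
        # right_context = 0 yields a fully streamable encoder; larger
        # values trade latency for accuracy.
        idx = np.arange(num_frames)
        rel = idx[None, :] - idx[:, None]   # rel[i, j] = j - i
        return (rel >= -left_context) & (rel <= right_context)

    print(streaming_mask(6, left_context=2, right_context=0).astype(int))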
Proceedings Article
Transformer-Based Acoustic Modeling for Hybrid Speech Recognition
Yongqiang Wang, Abdelrahman Mohamed, Duc Le, Chunxi Liu, Alex Xiao, Jay Mahadeokar, Hongzhao Huang, Andros Tjandra, Xiaohui Zhang, Frank Zhang, Christian Fuegen, Geoffrey Zweig, Michael L. Seltzer
TL;DR: This article proposes and evaluates transformer-based acoustic models (AMs) for hybrid speech recognition, including various positional embedding methods and an iterated loss that enables training deep transformers.
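One common positional embedding method in work like this is the fixed sinusoidal encoding from the original Transformer; as a concrete reference point (the paper evaluates several variants, which this sketch does not cover), a minimal numpy version:

    import numpy as np

    def sinusoidal_positions(num_positions, d_model):
        # PE[pos, 2i]   = sin(pos / 10000^(2i / d_model))
        # PE[pos, 2i+1] = cos(pos / 10000^(2i / d_model))
        pos = np.arange(num_positions)[:, None]
        i = np.arange(0, d_model, 2)[None, :]
        angles = pos / np.power(10000.0, i / d_model)
        pe = np.zeros((num_positions, d_model))
        pe[:, 0::2] = np.sin(angles)
        pe[:, 1::2] = np.cos(angles)
        return pe

    print(sinusoidal_positions(100, 256).shape)  # (100, 256)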
References
Proceedings Article
Adam: A Method for Stochastic Optimization
Diederik P. Kingma, Jimmy Ba
TL;DR: This work introduces Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments, and provides a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework.
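The update rule summarized here maintains exponential moving averages of the gradient and its square, with bias correction. A minimal numpy sketch of one Adam step, applied to a toy one-dimensional problem:

    import numpy as np

    def adam_step(theta, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
        # Moving averages of the gradient (1st moment) and squared gradient
        # (2nd moment), then a bias-corrected parameter update.
        m = beta1 * m + (1 - beta1) * grad
        v = beta2 * v + (1 - beta2) * grad ** 2
        m_hat = m / (1 - beta1 ** t)
        v_hat = v / (1 - beta2 ** t)
        theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
        return theta, m, v

    # Minimize f(x) = x^2 from x = 5; the gradient is 2x.
    theta, m, v = 5.0, 0.0, 0.0
    for t in range(1, 501):
        theta, m, v = adam_step(theta, 2 * theta, m, v, t, lr=0.1)
    print(theta)   # close to 0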
Posted Content
TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems
Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dan Mané, Rajat Monga, Sherry Moore, Derek G. Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul A. Tucker, Vincent Vanhoucke, Vijay K. Vasudevan, Fernanda B. Viégas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, Xiaoqiang Zheng
TL;DR: The TensorFlow interface and an implementation of that interface that is built at Google are described, which has been used for conducting research and for deploying machine learning systems into production across more than a dozen areas of computer science and other fields.
Posted Content
Empirical Evaluation of Gated Recurrent Neural Networks on Sequence Modeling
TL;DR: Advanced recurrent units that implement a gating mechanism, such as the long short-term memory (LSTM) unit and the more recently proposed gated recurrent unit (GRU), are evaluated on sequence modeling tasks; the GRU is found to be comparable to the LSTM.
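For reference, the GRU compared in this evaluation computes update and reset gates that interpolate between the previous hidden state and a candidate state. A minimal numpy sketch of one step (biases omitted for brevity):

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def gru_cell(x, h, Wz, Uz, Wr, Ur, Wh, Uh):
        z = sigmoid(Wz @ x + Uz @ h)            # update gate
        r = sigmoid(Wr @ x + Ur @ h)            # reset gate
        h_tilde = np.tanh(Wh @ x + Uh @ (r * h))  # candidate state
        return (1 - z) * h + z * h_tilde        # interpolate old and candidate

    d_in, d_h = 4, 8
    Wz, Wr, Wh = (np.random.randn(d_h, d_in) for _ in range(3))
    Uz, Ur, Uh = (np.random.randn(d_h, d_h) for _ in range(3))
    h = np.zeros(d_h)
    for x in np.random.randn(10, d_in):         # run over a 10-step sequence
        h = gru_cell(x, h, Wz, Uz, Wr, Ur, Wh, Uh)
    print(h.shape)                              # (8,)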
Posted Content
Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation
Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Łukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason A. Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg S. Corrado, Macduff Hughes, Jeffrey Dean
TL;DR: GNMT, Google's Neural Machine Translation system, is presented, which attempts to address many of the weaknesses of conventional phrase-based translation systems and provides a good balance between the flexibility of "character"-delimited models and the efficiency of "word"-delimited models.
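The balance between character- and word-delimited models refers to GNMT's wordpiece subword units. The greedy longest-match segmentation below is an illustrative sketch only: the "##" continuation marker follows a later convention, the vocabulary is a toy, and GNMT's actual wordpiece implementation differs in detail.

    def wordpiece_tokenize(word, vocab):
        # Greedy longest-match-first segmentation into subword units.
        pieces, start = [], 0
        while start < len(word):
            end = len(word)
            while end > start:
                piece = word[start:end] if start == 0 else "##" + word[start:end]
                if piece in vocab:
                    pieces.append(piece)
                    break
                end -= 1
            if end == start:        # nothing matched: unknown word
                return ["<unk>"]
            start = end
        return pieces

    vocab = {"trans", "##former", "##lation", "speech"}
    print(wordpiece_tokenize("translation", vocab))  # ['trans', '##lation']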
Posted Content
Speech Recognition with Deep Recurrent Neural Networks
TL;DR: In this paper, deep recurrent neural networks (RNNs) are used to combine the multiple levels of representation that have proved so effective in deep networks with the flexible use of long range context that empowers RNNs.
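The "multiple levels of representation" come from stacking recurrent layers, while bidirectionality supplies long-range context from both directions. A minimal PyTorch sketch of such a deep bidirectional LSTM (dimensions are arbitrary; this is not the paper's exact configuration):

    import torch
    import torch.nn as nn

    # Each layer's hidden-state sequence feeds the next layer, combining
    # depth with recurrence over the whole utterance.
    rnn = nn.LSTM(input_size=40, hidden_size=256, num_layers=3,
                  bidirectional=True, batch_first=True)
    x = torch.randn(2, 100, 40)   # (batch, frames, acoustic features)
    out, _ = rnn(x)
    print(out.shape)              # torch.Size([2, 100, 512])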