Proceedings Article
Connectionist temporal classification: labelling unsegmented sequence data with recurrent neural networks
Alex Graves, Santiago Fernández, Faustino Gomez, Jürgen Schmidhuber
pp. 369–376
TL;DR: This paper presents a novel method for training RNNs to label unsegmented sequences directly, thereby removing both the need for pre-segmented training data and the need to post-process the network outputs into label sequences.
Abstract:
Many real-world sequence learning tasks require the prediction of sequences of labels from noisy, unsegmented input data. In speech recognition, for example, an acoustic signal is transcribed into words or sub-word units. Recurrent neural networks (RNNs) are powerful sequence learners that would seem well suited to such tasks. However, because they require pre-segmented training data, and post-processing to transform their outputs into label sequences, their applicability has so far been limited. This paper presents a novel method for training RNNs to label unsegmented sequences directly, thereby solving both problems. An experiment on the TIMIT speech corpus demonstrates its advantages over both a baseline HMM and a hybrid HMM-RNN.
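The training objective the abstract describes is now available as a standard loss in common deep learning frameworks. Below is a minimal, hedged sketch of CTC training using PyTorch's nn.CTCLoss rather than the paper's original implementation; the network sizes, feature dimension, and label set are illustrative assumptions.

```python
# Minimal CTC training sketch (PyTorch); sizes and data are placeholders.
import torch
import torch.nn as nn

num_features, num_labels = 26, 40          # e.g. acoustic features and phone set
rnn = nn.LSTM(num_features, 128, bidirectional=True, batch_first=True)
proj = nn.Linear(2 * 128, num_labels + 1)  # +1 output for the CTC blank symbol
ctc_loss = nn.CTCLoss(blank=num_labels)    # blank is the last class index

x = torch.randn(4, 200, num_features)               # unsegmented input frames
targets = torch.randint(0, num_labels, (4, 30))      # label sequences, no alignment
input_lengths = torch.full((4,), 200, dtype=torch.long)
target_lengths = torch.full((4,), 30, dtype=torch.long)

h, _ = rnn(x)                                        # (batch, time, 2*hidden)
log_probs = proj(h).log_softmax(-1).transpose(0, 1)  # (time, batch, classes)
loss = ctc_loss(log_probs, targets, input_lengths, target_lengths)
loss.backward()                            # gradients flow without pre-segmentation
```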
Citations
Journal Article
Recent progresses in deep learning based acoustic models
TL;DR: The authors summarize recent progress in deep-learning-based acoustic models and the motivation and insights behind the surveyed techniques, illustrate robustness issues in speech recognition systems, and discuss acoustic model adaptation, speech enhancement, and separation.
Proceedings Article
Lower Frame Rate Neural Network Acoustic Models
Golan Pundak, Tara N. Sainath
TL;DR: On a large-vocabulary Voice Search task, it is shown that conventional models can run at a lower frame rate of 40 ms while improving WER by 3% relative over a CTC-based model, thus improving overall system speed.
Posted Content
A Study of BFLOAT16 for Deep Learning Training
Dhiraj D. Kalamkar, Dheevatsa Mudigere, Naveen Mellempudi, Dipankar Das, Kunal Banerjee, Sasikanth Avancha, Dharma Teja Vooturi, Nataraj Jammalamadaka, Jianyu Huang, Hector Yuen, Jiyan Yang, Jongsoo Park, Alexander Heinecke, Evangelos Georganas, Sudarshan Srinivasan, Abhisek Kundu, Misha Smelyanskiy, Bharat Kaul, Pradeep Dubey
TL;DR: The results show that deep learning training using BFLOAT16 tensors achieves the same state-of-the-art (SOTA) results across domains as FP32 tensors in the same number of iterations and with no changes to hyper-parameters.
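As a rough illustration of how such training looks in practice, here is a hedged sketch of one training step using PyTorch's autocast with bfloat16; the model, data, and device are placeholders and are not taken from the paper, which evaluates a range of workloads and hardware.

```python
# Sketch of a bfloat16 mixed-precision training step (PyTorch); toy model/data.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 10))
opt = torch.optim.SGD(model.parameters(), lr=0.1)

x = torch.randn(32, 512)
y = torch.randint(0, 10, (32,))

opt.zero_grad()
# bfloat16 keeps FP32's exponent range, so no loss scaling is needed,
# unlike float16 mixed precision.
with torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    loss = nn.functional.cross_entropy(model(x), y)
loss.backward()
opt.step()
```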
Journal Article
TextBoxes++: A Single-Shot Oriented Scene Text Detector
TL;DR: TextBoxes++ is an end-to-end trainable fast scene text detector that detects arbitrarily oriented scene text with both high accuracy and efficiency in a single network forward pass.
Proceedings Article
Wav2Letter++: A Fast Open-source Speech Recognition System
Vineel Pratap, Awni Hannun, Qiantong Xu, Jeff Cai, Jacob Kahn, Gabriel Synnaeve, Vitaliy Liptchinsky, Ronan Collobert
TL;DR: Wav2letter++ is a fast open-source deep learning speech recognition framework that uses the ArrayFire tensor library for maximum efficiency and is more than 2× faster than other optimized frameworks at training end-to-end speech recognition networks.
References
Journal Article
Long short-term memory
TL;DR: A novel, efficient, gradient-based method called long short-term memory (LSTM) is introduced, which can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units.
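To make the "constant error carousel" idea concrete, here is a minimal sketch of a single LSTM step in PyTorch; it uses the modern formulation with a forget gate (which postdates the 1997 paper), and all dimensions and weight initializations are illustrative assumptions.

```python
# Minimal LSTM cell sketch; not the original 1997 formulation.
import torch

def lstm_cell(x, h, c, W, U, b):
    """One step: gates control what enters and leaves the memory cell c,
    whose additive update gives near-constant error flow (the 'carousel')."""
    z = x @ W + h @ U + b                  # all four gate pre-activations at once
    i, f, g, o = z.chunk(4, dim=-1)
    i, f, o = i.sigmoid(), f.sigmoid(), o.sigmoid()
    c_new = f * c + i * g.tanh()           # additive cell-state update
    h_new = o * c_new.tanh()
    return h_new, c_new

d_in, d_hid = 8, 16
W = torch.randn(d_in, 4 * d_hid) * 0.1
U = torch.randn(d_hid, 4 * d_hid) * 0.1
b = torch.zeros(4 * d_hid)
h = c = torch.zeros(1, d_hid)
for x_t in torch.randn(100, 1, d_in):      # run over a 100-step sequence
    h, c = lstm_cell(x_t, h, c, W, U, b)
```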
Journal Article
A tutorial on hidden Markov models and selected applications in speech recognition
TL;DR: In this paper, the authors provide an overview of the basic theory of hidden Markov models (HMMs) as originated by L.E. Baum and T. Petrie (1966) and give practical details on methods of implementation of the theory along with a description of selected applications of HMMs to distinct problems in speech recognition.
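One of the basic procedures the tutorial covers is the forward algorithm for computing the probability of an observation sequence under an HMM; the NumPy sketch below uses made-up toy parameters purely for illustration.

```python
# Toy HMM forward algorithm; parameters are illustrative placeholders.
import numpy as np

A = np.array([[0.7, 0.3],        # state transition probabilities
              [0.4, 0.6]])
B = np.array([[0.9, 0.1],        # emission probabilities per state
              [0.2, 0.8]])
pi = np.array([0.6, 0.4])        # initial state distribution
obs = [0, 1, 1, 0]               # observation sequence

alpha = pi * B[:, obs[0]]        # initialization
for o in obs[1:]:
    alpha = (alpha @ A) * B[:, o]          # induction step
print("P(observations | model) =", alpha.sum())  # termination
```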
Book
Neural networks for pattern recognition
TL;DR: This is the first comprehensive treatment of feed-forward neural networks from the perspective of statistical pattern recognition, and is designed as a text, with over 100 exercises, to benefit anyone involved in the fields of neural computation and pattern recognition.
Proceedings Article
Conditional Random Fields: Probabilistic Models for Segmenting and Labeling Sequence Data
TL;DR: This work presents iterative parameter estimation algorithms for conditional random fields and compares the performance of the resulting models to HMMs and MEMMs on synthetic and natural-language data.
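As a rough sketch of the quantity such parameter estimation optimizes, the following NumPy code computes the negative log-likelihood of one labelling under a linear-chain CRF; the potentials are random placeholders rather than learned feature weights, and this is not the paper's estimation algorithm itself.

```python
# Linear-chain CRF negative log-likelihood sketch; toy random potentials.
import numpy as np
from scipy.special import logsumexp

T, K = 5, 3                                  # sequence length, label set size
emit = np.random.randn(T, K)                 # per-position label scores
trans = np.random.randn(K, K)                # label-to-label transition scores
labels = [0, 2, 1, 1, 0]                     # a candidate label sequence

# Unnormalized score of this labelling.
score = emit[0, labels[0]] + sum(
    trans[labels[t - 1], labels[t]] + emit[t, labels[t]] for t in range(1, T))

# Log partition function over all K**T labellings via the forward recursion.
alpha = emit[0]
for t in range(1, T):
    alpha = logsumexp(alpha[:, None] + trans, axis=0) + emit[t]
log_Z = logsumexp(alpha)

neg_log_likelihood = log_Z - score           # minimized during training
```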