Proceedings ArticleDOI
Connectionist temporal classification: labelling unsegmented sequence data with recurrent neural networks
Alex Graves, Santiago Fernández, Faustino Gomez, Jürgen Schmidhuber
pp. 369–376
TL;DR
This paper presents a novel method for training RNNs to label unsegmented sequences directly, thereby removing the need for pre-segmented training data and for post-processing of the network outputs.
Abstract
Many real-world sequence learning tasks require the prediction of sequences of labels from noisy, unsegmented input data. In speech recognition, for example, an acoustic signal is transcribed into words or sub-word units. Recurrent neural networks (RNNs) are powerful sequence learners that would seem well suited to such tasks. However, because they require pre-segmented training data, and post-processing to transform their outputs into label sequences, their applicability has so far been limited. This paper presents a novel method for training RNNs to label unsegmented sequences directly, thereby solving both problems. An experiment on the TIMIT speech corpus demonstrates its advantages over both a baseline HMM and a hybrid HMM-RNN.
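As a concrete point of entry, the sketch below shows how the CTC objective is typically used in a modern framework: the network emits per-frame log-probabilities over the labels plus a reserved blank symbol, and the loss marginalizes over all alignments between those frames and the unsegmented target sequence. It uses PyTorch's torch.nn.CTCLoss with made-up dimensions; the paper itself, of course, predates this library.

```python
import torch
import torch.nn as nn

# Illustrative dimensions (assumed, not from the paper):
T, N, C = 50, 4, 28   # input frames, batch size, label count incl. blank (index 0)
S = 10                # length of each unsegmented target sequence

# Per-frame log-probabilities, standing in for the log-softmax output of an RNN.
log_probs = torch.randn(T, N, C, requires_grad=True).log_softmax(dim=2)

# Targets use label indices 1..C-1; index 0 is reserved for the CTC blank.
targets = torch.randint(1, C, (N, S), dtype=torch.long)
input_lengths = torch.full((N,), T, dtype=torch.long)
target_lengths = torch.full((N,), S, dtype=torch.long)

# The loss sums over all alignments of the targets to the T frames.
ctc_loss = nn.CTCLoss(blank=0)
loss = ctc_loss(log_probs, targets, input_lengths, target_lengths)
loss.backward()   # gradients flow back toward the (here: dummy) network output
print(loss.item())
```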
Citations
Journal ArticleDOI
Multi-stream LSTM-HMM decoding and histogram equalization for noise robust keyword spotting
TL;DR: It is shown how contextual information can be effectively exploited in a multi-stream ASR framework that dynamically models context-sensitive phoneme estimates generated by a long short-term memory neural network to better cope with conversational speaking styles.
Proceedings ArticleDOI
Offline Persian Handwriting Recognition with CNN and RNN-CTC
TL;DR: It is shown that the combination of deep convolutional and recurrent neural networks is a robust recognizer for Persian handwriting, as it has been for other applications such as scene text recognition.
Posted Content
Text Detection and Recognition in the Wild: A Review.
TL;DR: This survey reviews recent advances in scene text detection and recognition and presents the results of extensive experiments in a unified evaluation framework that assesses pre-trained models of the selected methods on challenging cases under the same evaluation criteria.
Proceedings ArticleDOI
End-to-End Code-Switching ASR for Low-Resourced Language Pairs
TL;DR: An E2E ASR pipeline is proposed for recognizing code-switched (CS) speech in which a low-resourced language is mixed with a high-resourced language, enabling better utilization of the available textual resources.
Posted Content
Handwritten digit string recognition by combination of residual network and RNN-CTC
TL;DR: This paper designs a residual network to extract features from input images, then employs an RNN to model the contextual information within feature sequences and predict recognition results, and uses a standard CTC to calculate the loss and yield the final results.
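As a rough sketch of the pipeline this summary describes (residual CNN features, an RNN over the feature sequence, CTC on top), the PyTorch model below wires the three stages together. Layer sizes, pooling choices, and the class count are assumptions for illustration, not the architecture from the paper.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """A minimal residual block: conv -> relu -> conv, plus a skip connection."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, x):
        return torch.relu(x + self.conv2(torch.relu(self.conv1(x))))

class DigitStringRecognizer(nn.Module):
    """Residual CNN features -> bidirectional RNN -> per-frame log-probs for CTC."""
    def __init__(self, num_classes=11):  # 10 digits + 1 CTC blank (assumed)
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            ResidualBlock(32),
            nn.MaxPool2d((2, 2)),            # halve height and width
            ResidualBlock(32),
            nn.AdaptiveAvgPool2d((1, None)), # collapse height, keep width as time
        )
        self.rnn = nn.LSTM(32, 64, bidirectional=True, batch_first=True)
        self.fc = nn.Linear(128, num_classes)

    def forward(self, images):             # images: (N, 1, H, W)
        f = self.cnn(images)               # (N, 32, 1, W')
        f = f.squeeze(2).permute(0, 2, 1)  # (N, W', 32): width acts as time axis
        h, _ = self.rnn(f)                 # (N, W', 128)
        # Per-frame log-probs; transpose to (T, N, C) before nn.CTCLoss.
        return self.fc(h).log_softmax(2)
```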
References
Journal ArticleDOI
Long short-term memory
TL;DR: A novel, efficient, gradient-based method called long short-term memory (LSTM) is introduced, which can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units.
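The key mechanism behind that claim is the additive cell-state update, the "constant error carousel". The sketch below spells out a single LSTM step in PyTorch; note that it includes the now-standard forget gate, which was added to LSTM after the 1997 paper, and that all weights here are random placeholders.

```python
import torch

def lstm_cell_step(x, h_prev, c_prev, W, U, b):
    """One LSTM step. The additive update of the cell state c is the
    'constant error carousel' that lets gradients survive long time lags.
    W: (4H, D) input weights, U: (4H, H) recurrent weights, b: (4H,) biases."""
    gates = W @ x + U @ h_prev + b
    i, f, g, o = gates.chunk(4)   # input gate, forget gate, candidate, output gate
    i, f, o = torch.sigmoid(i), torch.sigmoid(f), torch.sigmoid(o)
    g = torch.tanh(g)
    c = f * c_prev + i * g        # additive: error can flow through c unchanged
    h = o * torch.tanh(c)
    return h, c

# Tiny usage example with assumed sizes (D=3 inputs, H=2 hidden units).
D, H = 3, 2
W, U, b = torch.randn(4 * H, D), torch.randn(4 * H, H), torch.zeros(4 * H)
h, c = torch.zeros(H), torch.zeros(H)
for t in range(5):
    h, c = lstm_cell_step(torch.randn(D), h, c, W, U, b)
print(h, c)
```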
Journal ArticleDOI
A tutorial on hidden Markov models and selected applications in speech recognition
TL;DR: In this paper, the authors provide an overview of the basic theory of hidden Markov models (HMMs) as originated by L.E. Baum and T. Petrie (1966) and give practical details on methods of implementation of the theory along with a description of selected applications of HMMs to distinct problems in speech recognition.
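Of the classic HMM problems the tutorial covers, the evaluation problem has the shortest answer: the forward algorithm. The NumPy sketch below computes the likelihood of an observation sequence for a toy two-state HMM; the parameters are invented for illustration.

```python
import numpy as np

def forward(pi, A, B, obs):
    """Forward algorithm: P(observations | HMM).
    pi: (K,) initial state probs, A: (K, K) transitions with A[i, j] = P(j | i),
    B: (K, M) emissions with B[i, m] = P(symbol m | state i), obs: symbol ids."""
    alpha = pi * B[:, obs[0]]          # alpha_1(i) = pi_i * b_i(o_1)
    for o in obs[1:]:
        # alpha_t(j) = (sum_i alpha_{t-1}(i) * a_ij) * b_j(o_t)
        alpha = (alpha @ A) * B[:, o]
    return alpha.sum()

# Toy two-state, two-symbol HMM (assumed parameters for illustration).
pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3],
              [0.4, 0.6]])
B = np.array([[0.9, 0.1],
              [0.2, 0.8]])
print(forward(pi, A, B, [0, 1, 0]))  # likelihood of observing symbols 0, 1, 0
```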
Book
Neural networks for pattern recognition
TL;DR: This is the first comprehensive treatment of feed-forward neural networks from the perspective of statistical pattern recognition, and is designed as a text, with over 100 exercises, to benefit anyone involved in the fields of neural computation and pattern recognition.
Proceedings Article
Conditional Random Fields: Probabilistic Models for Segmenting and Labeling Sequence Data
TL;DR: This work presents iterative parameter estimation algorithms for conditional random fields and compares the performance of the resulting models to HMMs and MEMMs on synthetic and natural-language data.
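The parameter estimation this summary mentions maximizes conditional log-likelihood, which requires the log-partition function of the linear-chain CRF. The sketch below computes it with a log-space forward recursion over assumed unary and transition scores; it is an illustrative reading of the model, not code from the paper.

```python
import numpy as np
from scipy.special import logsumexp

def crf_log_partition(unary, trans):
    """Log-partition log Z of a linear-chain CRF.
    unary: (T, K) per-position label scores; trans: (K, K) transition scores,
    with trans[i, j] scoring label i followed by label j."""
    alpha = unary[0]                                   # log-scores at position 0
    for t in range(1, len(unary)):
        # alpha_t(j) = logsumexp_i(alpha_{t-1}(i) + trans[i, j]) + unary[t, j]
        alpha = logsumexp(alpha[:, None] + trans, axis=0) + unary[t]
    return logsumexp(alpha)

# Toy problem: T=4 positions, K=3 labels, random scores (illustrative only).
rng = np.random.default_rng(0)
unary, trans = rng.normal(size=(4, 3)), rng.normal(size=(3, 3))
log_Z = crf_log_partition(unary, trans)
# Log-likelihood of a label sequence y is score(y) - log_Z (the training objective).
print(log_Z)
```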