Proceedings ArticleDOI
Connectionist temporal classification: labelling unsegmented sequence data with recurrent neural networks
Alex Graves, Santiago Fernández, Faustino Gomez, Jürgen Schmidhuber
pp. 369–376
TL;DR: This paper presents a novel method for training RNNs to label unsegmented sequences directly, removing both the need for pre-segmented training data and the need for post-processing of network outputs.
Abstract: Many real-world sequence learning tasks require the prediction of sequences of labels from noisy, unsegmented input data. In speech recognition, for example, an acoustic signal is transcribed into words or sub-word units. Recurrent neural networks (RNNs) are powerful sequence learners that would seem well suited to such tasks. However, because they require pre-segmented training data, and post-processing to transform their outputs into label sequences, their applicability has so far been limited. This paper presents a novel method for training RNNs to label unsegmented sequences directly, thereby solving both problems. An experiment on the TIMIT speech corpus demonstrates its advantages over both a baseline HMM and a hybrid HMM-RNN.
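As a concrete illustration of the training setup the abstract describes, here is a minimal sketch that wires an RNN to a CTC-style loss using PyTorch's nn.CTCLoss; the feature dimensions, alphabet size, and network are illustrative assumptions, not the paper's original configuration.

```python
import torch
import torch.nn as nn

# Illustrative dimensions only: T time steps, N utterances, C output labels
# (index 0 reserved for the CTC blank), 13-dimensional acoustic features.
T, N, C = 50, 4, 28
rnn = nn.LSTM(input_size=13, hidden_size=64, bidirectional=True)
fc = nn.Linear(2 * 64, C)          # map RNN outputs to per-frame label scores
ctc = nn.CTCLoss(blank=0)

x = torch.randn(T, N, 13)                        # unsegmented input sequence
targets = torch.randint(1, C, (N, 20))           # label sequences, no alignment given
input_lengths = torch.full((N,), T, dtype=torch.long)
target_lengths = torch.full((N,), 20, dtype=torch.long)

h, _ = rnn(x)                                    # (T, N, 2 * 64)
log_probs = fc(h).log_softmax(dim=-1)            # (T, N, C), as nn.CTCLoss expects
loss = ctc(log_probs, targets, input_lengths, target_lengths)
loss.backward()                                  # trains the RNN without pre-segmentation
```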
Citations
Posted Content
Non-Autoregressive Dialog State Tracking
TL;DR: A novel framework, Non-Autoregressive Dialog State Tracking (NADST), is proposed; it factors in potential dependencies among domains and slots to optimize the model towards predicting dialogue states as a complete set rather than as separate slots.
Proceedings ArticleDOI
Preparatory KWS Experiments for Large-Scale Indexing of a Vast Medieval Manuscript Collection in the HIMANIS Project
Theodore Bluche, Sébastien Hamel, Christopher Kermorvant, Joan Puigcerver, Dominique Stutzmann, Alejandro Héctor Toselli, Enrique Vidal, et al.
TL;DR: Results confirm the viability of the chosen approach for large-scale indexing of the "Chancery" collection and show that the proposed modeling and training approaches properly handle the abbreviation difficulties described.
Proceedings ArticleDOI
Boosting Handwriting Text Recognition in Small Databases with Transfer Learning
TL;DR: In this article, transfer learning is used to re-train the whole CRNN, with its parameters initialized to the values obtained after training the CRNN on a larger database; the authors also examine which layers of the network need not be re-trained.
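The following sketch illustrates this kind of layer-wise transfer learning, assuming a toy CRNN definition and a hypothetical checkpoint from the larger database; which layers to freeze is precisely the question the paper studies, and the choice below (freezing the convolutional front end) is only an example.

```python
import torch
import torch.nn as nn

class CRNN(nn.Module):
    """Tiny stand-in CRNN (conv front end + BLSTM + linear), for illustration only."""
    def __init__(self, num_classes: int):
        super().__init__()
        self.conv = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                                  nn.MaxPool2d(2))
        self.rnn = nn.LSTM(16 * 16, 128, bidirectional=True)
        self.fc = nn.Linear(256, num_classes)

    def forward(self, x):                      # x: (N, 1, 32, W) text-line images
        f = self.conv(x)                       # (N, 16, 16, W/2)
        f = f.permute(3, 0, 1, 2).flatten(2)   # (W/2, N, 256) frame sequence
        h, _ = self.rnn(f)
        return self.fc(h).log_softmax(-1)      # per-frame label scores

model = CRNN(num_classes=80)
# model.load_state_dict(torch.load("crnn_big_db.pt"))  # hypothetical checkpoint from the larger database

# Freeze the convolutional front end before fine-tuning on the small target
# database; this particular freezing choice is illustrative, not the paper's.
for name, p in model.named_parameters():
    if name.startswith("conv"):
        p.requires_grad = False

optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4)
```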
Posted Content
Convolutional Character Networks
TL;DR: CharNet directly outputs bounding boxes of words and characters, together with the corresponding character labels; by using the character as the basic element, it overcomes the main difficulty of existing approaches that attempt to optimize text detection jointly with an RNN-based recognition branch.
Posted Content
SF-Net: Structured Feature Network for Continuous Sign Language Recognition.
TL;DR: The proposed Structured Feature Network (SF-Net) extracts features in a structured manner and gradually encodes information at the frame level, the gloss level and the sentence level into the feature representation.
References
Journal ArticleDOI
Long short-term memory
TL;DR: A novel, efficient, gradient-based method called long short-term memory (LSTM) is introduced, which can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units.
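For reference, a minimal usage sketch of an LSTM layer in a modern framework (PyTorch's nn.LSTM, with arbitrary illustrative sizes); the gating and constant-error-carousel mechanics are handled inside the layer.

```python
import torch
import torch.nn as nn

# Illustrative sizes only: a long sequence processed by a single LSTM layer,
# whose internal cell state carries information across all time steps.
seq_len, batch, features, hidden = 1000, 2, 8, 32
lstm = nn.LSTM(input_size=features, hidden_size=hidden)

x = torch.randn(seq_len, batch, features)
outputs, (h_n, c_n) = lstm(x)    # outputs: (seq_len, batch, hidden)
loss = outputs[-1].sum()
loss.backward()                  # backpropagation through time over all 1000 steps
```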
Journal ArticleDOI
A tutorial on hidden Markov models and selected applications in speech recognition
TL;DR: In this paper, the author provides an overview of the basic theory of hidden Markov models (HMMs) as originated by L.E. Baum and T. Petrie (1966) and gives practical details on methods of implementation of the theory, along with a description of selected applications of HMMs to distinct problems in speech recognition.
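The core machinery of the tutorial is dynamic programming over hidden states; the sketch below shows the forward algorithm for computing the probability of an observation sequence, with made-up toy parameters.

```python
import numpy as np

# Toy forward algorithm for an HMM with 2 hidden states and 3 observation symbols.
# A[i, j] = P(state j at t+1 | state i at t), B[i, k] = P(symbol k | state i),
# pi[i] = P(state i at t=0). All numbers are illustrative.
A  = np.array([[0.7, 0.3],
               [0.4, 0.6]])
B  = np.array([[0.5, 0.4, 0.1],
               [0.1, 0.3, 0.6]])
pi = np.array([0.6, 0.4])

obs = [0, 2, 1]                      # observed symbol indices

alpha = pi * B[:, obs[0]]            # initialization
for o in obs[1:]:
    alpha = (alpha @ A) * B[:, o]    # induction: sum over previous states

print(alpha.sum())                   # P(obs | model)
```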
Book
Neural networks for pattern recognition
TL;DR: This is the first comprehensive treatment of feed-forward neural networks from the perspective of statistical pattern recognition, and is designed as a text, with over 100 exercises, to benefit anyone involved in the fields of neural computation and pattern recognition.
Proceedings Article
Conditional Random Fields: Probabilistic Models for Segmenting and Labeling Sequence Data
TL;DR: This work presents iterative parameter estimation algorithms for conditional random fields and compares the performance of the resulting models to HMMs and MEMMs on synthetic and natural-language data.
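Parameter estimation for a linear-chain CRF hinges on computing the partition function over all label sequences; the sketch below shows the forward (log-sum-exp) recursion for log Z with toy scores, not the paper's iterative estimation algorithms themselves.

```python
import numpy as np

def logsumexp(v, axis=None):
    # Numerically stable log-sum-exp.
    m = v.max(axis=axis, keepdims=True)
    return (m + np.log(np.exp(v - m).sum(axis=axis, keepdims=True))).squeeze(axis)

# Toy linear-chain CRF scores: L positions, K labels (random values for illustration).
L, K = 4, 3
emissions = np.random.randn(L, K)             # score(label k at position t)
transitions = np.random.randn(K, K)           # score(label i -> label j)

alpha = emissions[0]                          # log-space forward variables
for t in range(1, L):
    # alpha[j] = logsumexp_i( alpha[i] + transitions[i, j] ) + emissions[t, j]
    alpha = logsumexp(alpha[:, None] + transitions, axis=0) + emissions[t]

log_Z = logsumexp(alpha)                      # normalizer over all label sequences
print(log_Z)
```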