Proceedings ArticleDOI

Connectionist temporal classification: labelling unsegmented sequence data with recurrent neural networks

TL;DR: This paper presents a novel method for training RNNs to label unsegmented sequences directly, removing both the need for pre-segmented training data and the need for post-processing of the network outputs.
Abstract
Many real-world sequence learning tasks require the prediction of sequences of labels from noisy, unsegmented input data. In speech recognition, for example, an acoustic signal is transcribed into words or sub-word units. Recurrent neural networks (RNNs) are powerful sequence learners that would seem well suited to such tasks. However, because they require pre-segmented training data, and post-processing to transform their outputs into label sequences, their applicability has so far been limited. This paper presents a novel method for training RNNs to label unsegmented sequences directly, thereby solving both problems. An experiment on the TIMIT speech corpus demonstrates its advantages over both a baseline HMM and a hybrid HMM-RNN.
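The criterion the abstract describes is now available in standard deep-learning toolkits. Below is a minimal sketch of training an RNN with a CTC loss using PyTorch's torch.nn.CTCLoss; the network sizes, sequence lengths, and blank index are illustrative assumptions, not details from the paper.

```python
# Minimal sketch of training an RNN with the CTC criterion via PyTorch's
# torch.nn.CTCLoss. Illustration only; all sizes are arbitrary assumptions.
import torch
import torch.nn as nn

T, N, F, C = 50, 4, 40, 28   # frames, batch, features, labels (incl. blank)

rnn = nn.LSTM(input_size=F, hidden_size=128, bidirectional=True)
proj = nn.Linear(2 * 128, C)              # per-frame label scores
ctc = nn.CTCLoss(blank=0)                 # index 0 reserved for "blank"

x = torch.randn(T, N, F)                  # unsegmented input sequences
targets = torch.randint(1, C, (N, 30))    # label sequences, no alignment given
input_lengths = torch.full((N,), T, dtype=torch.long)
target_lengths = torch.full((N,), 30, dtype=torch.long)

h, _ = rnn(x)
log_probs = proj(h).log_softmax(dim=-1)   # (T, N, C) frame-level posteriors
loss = ctc(log_probs, targets, input_lengths, target_lengths)
loss.backward()                           # gradients flow through the RNN
```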

Citations
Dissertation

On internal language representations in deep learning: an analysis of machine translation and speech recognition

TL;DR: A unified methodology for evaluating internal representations in neural networks, consisting of three steps: training a model on a complex end-to-end task; generating feature representations from different parts of the trained model; and training classifiers on simple supervised learning tasks using the representations.
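As a rough illustration of the three-step methodology, here is a sketch using scikit-learn; `encoder.hidden_states` and the phone-label data are hypothetical stand-ins for illustration, not artifacts from the dissertation.

```python
# Sketch of the three-step probing methodology: (1) take a model trained
# end-to-end, (2) extract internal representations from one of its layers,
# (3) fit a simple classifier on a supervised task. The `encoder` object
# and its hidden_states() method are hypothetical stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

def extract_features(encoder, utterances, layer):
    """Collect the frame-level hidden states of one layer."""
    feats = [encoder.hidden_states(u)[layer] for u in utterances]
    return np.concatenate(feats, axis=0)        # (total_frames, dim)

def probe(encoder, train, test, layer):
    X_tr = extract_features(encoder, train["audio"], layer)
    X_te = extract_features(encoder, test["audio"], layer)
    clf = LogisticRegression(max_iter=1000).fit(X_tr, train["phones"])
    # Higher probe accuracy suggests the layer encodes more phonetic detail.
    return accuracy_score(test["phones"], clf.predict(X_te))
```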
Proceedings ArticleDOI

Listen Attentively, and Spell Once: Whole Sentence Generation via a Non-Autoregressive Architecture for Low-Latency Speech Recognition

TL;DR: Li et al. propose LASO (Listen Attentively, and Spell Once), a non-autoregressive end-to-end speech recognition system that predicts each textual token in the sequence without depending on the other tokens.
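A minimal sketch of the non-autoregressive idea, assuming a transformer-style attention module in PyTorch: a fixed set of learned positional queries attends to the acoustic encoding, and all tokens are predicted in one parallel pass. Module names and sizes below are illustrative assumptions, not the paper's code.

```python
# Sketch of non-autoregressive "spell once" decoding: positional queries
# attend to the acoustic encoding and every token is predicted in a single
# parallel step, with no left-to-right dependence between tokens.
import torch
import torch.nn as nn

class NonAutoregressiveSpeller(nn.Module):
    def __init__(self, d_model=256, vocab=5000, max_len=100):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(max_len, d_model))
        self.attn = nn.MultiheadAttention(d_model, num_heads=4)
        self.out = nn.Linear(d_model, vocab)

    def forward(self, acoustic):                         # acoustic: (T, N, d)
        N = acoustic.size(1)
        q = self.queries.unsqueeze(1).expand(-1, N, -1)  # (L, N, d)
        ctx, _ = self.attn(q, acoustic, acoustic)        # queries read audio
        return self.out(ctx)                             # (L, N, vocab), one shot
```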
Proceedings ArticleDOI

ICDAR 2015 competition HTRtS: Handwritten Text Recognition on the tranScriptorium dataset

TL;DR: This paper describes the second edition of the Handwritten Text Recognition (HTR) contest on the tranScriptorium dataset, held at the International Conference on Document Analysis and Recognition (ICDAR) 2015.
Proceedings ArticleDOI

A Compact CNN-DBLSTM Based Character Model for Offline Handwriting Recognition with Tucker Decomposition

TL;DR: The results show that Tucker decomposition alone offers a good way to build a compact CNN-DBLSTM model, significantly reducing both footprint and latency without degrading recognition accuracy.
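For intuition, here is a sketch of a mode-truncated (Tucker-2) compression of a convolution kernel using plain NumPy, via truncated HOSVD over the two channel modes; the ranks and shapes are arbitrary assumptions, and the paper's exact decomposition setup may differ.

```python
# Sketch of compressing a conv kernel with a Tucker decomposition truncated
# along the input/output channel modes. Ranks are illustrative assumptions.
import numpy as np

def tucker2_compress(W, r_out, r_in):
    """W: (C_out, C_in, kH, kW) -> core (r_out, r_in, kH, kW) + two factors."""
    C_out, C_in, kH, kW = W.shape
    # Mode-0 factor: left singular vectors of the mode-0 unfolding.
    U0, _, _ = np.linalg.svd(W.reshape(C_out, -1), full_matrices=False)
    U0 = U0[:, :r_out]
    # Mode-1 factor: unfold along the input-channel mode.
    W1 = np.transpose(W, (1, 0, 2, 3)).reshape(C_in, -1)
    U1, _, _ = np.linalg.svd(W1, full_matrices=False)
    U1 = U1[:, :r_in]
    # Core tensor: project both channel modes onto the truncated bases.
    core = np.einsum('oikl,or,is->rskl', W, U0, U1)
    return core, U0, U1   # fewer parameters when r_out * r_in << C_out * C_in

W = np.random.randn(128, 64, 3, 3)
core, U0, U1 = tucker2_compress(W, r_out=32, r_in=16)
W_approx = np.einsum('rskl,or,is->oikl', core, U0, U1)  # low-rank reconstruction
```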
Proceedings ArticleDOI

End-to-End Speech Command Recognition with Capsule Network

TL;DR: This work proposes an end-to-end speech command recognition system with capsule networks, capturing the spatial relationships and pose information of speech spectrogram features along both the frequency and time axes; the proposed system outperforms baseline CNN models on both clean and noise-added test sets.
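For reference, a sketch of the standard capsule "squash" nonlinearity from Sabour et al.'s capsule networks: it maps each capsule's output length into [0, 1), so length can encode detection probability while the vector's orientation carries the pose information the paper exploits. The tensor sizes are assumptions.

```python
# Capsule "squash" nonlinearity: scales each capsule vector so its length
# lies in [0, 1) while preserving its direction (the pose).
import torch

def squash(s, dim=-1, eps=1e-8):
    sq_norm = (s ** 2).sum(dim=dim, keepdim=True)
    scale = sq_norm / (1.0 + sq_norm)
    return scale * s / torch.sqrt(sq_norm + eps)

caps = torch.randn(32, 8)   # 32 capsules, 8-D pose vectors (assumed sizes)
out = squash(caps)          # lengths now encode detection confidence
```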
References
Journal ArticleDOI

Long short-term memory

TL;DR: A novel, efficient, gradient based method called long short-term memory (LSTM) is introduced, which can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units.
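A minimal sketch of one LSTM step, illustrating the constant error carousel: the cell state is updated additively, so error can flow across long time lags without vanishing. Note that the gate layout below follows the modern formulation with a forget gate, which slightly post-dates the original 1997 paper.

```python
# One LSTM step. The cell state c is updated additively (the "constant
# error carousel"), letting gradients bridge long time lags.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, b):
    """x: input, h: hidden, c: cell state; W stacks the four gate weights."""
    z = W @ np.concatenate([x, h]) + b
    i, f, g, o = np.split(z, 4)                        # input/forget/cell/output
    c_new = sigmoid(f) * c + sigmoid(i) * np.tanh(g)   # additive carousel
    h_new = sigmoid(o) * np.tanh(c_new)
    return h_new, c_new

n_in, n_h = 4, 8
rng = np.random.default_rng(0)
W, b = rng.normal(size=(4 * n_h, n_in + n_h)), np.zeros(4 * n_h)
h, c = np.zeros(n_h), np.zeros(n_h)
h, c = lstm_step(rng.normal(size=n_in), h, c, W, b)
```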
Journal ArticleDOI

A tutorial on hidden Markov models and selected applications in speech recognition

TL;DR: In this paper, the authors provide an overview of the basic theory of hidden Markov models (HMMs) as originated by L.E. Baum and T. Petrie (1966) and give practical details on methods of implementation of the theory along with a description of selected applications of HMMs to distinct problems in speech recognition.
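As a concrete illustration of the first of the tutorial's classic problems, here is the forward algorithm for computing the likelihood of an observation sequence under an HMM; the toy parameters are arbitrary.

```python
# Forward algorithm: computes P(O | lambda) for an HMM with transition
# matrix A, emission matrix B, and initial distribution pi.
import numpy as np

def forward(obs, A, B, pi):
    """obs: observation indices; A: (S, S); B: (S, V); pi: (S,)."""
    alpha = pi * B[:, obs[0]]                 # initialisation
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]         # induction step
    return alpha.sum()                        # termination: P(O | lambda)

A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.9, 0.1], [0.2, 0.8]])
pi = np.array([0.6, 0.4])
print(forward([0, 1, 0], A, B, pi))           # likelihood of the toy sequence
```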
Book

Neural networks for pattern recognition

TL;DR: This is the first comprehensive treatment of feed-forward neural networks from the perspective of statistical pattern recognition, and is designed as a text, with over 100 exercises, to benefit anyone involved in the fields of neural computation and pattern recognition.
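For illustration, a tiny feed-forward network of the kind the book treats as a statistical pattern recogniser, with a softmax output interpretable as class posterior probabilities; all sizes are arbitrary.

```python
# Two-layer feed-forward network whose softmax output approximates
# class posteriors P(class | x), the book's statistical view.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(16, 4)), np.zeros(16)   # hidden layer
W2, b2 = rng.normal(size=(3, 16)), np.zeros(3)    # output layer

def mlp_posteriors(x):
    h = np.tanh(W1 @ x + b1)
    z = W2 @ h + b2
    e = np.exp(z - z.max())          # numerically stable softmax
    return e / e.sum()               # sums to 1 over classes

print(mlp_posteriors(rng.normal(size=4)))
```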
Proceedings Article

Conditional Random Fields: Probabilistic Models for Segmenting and Labeling Sequence Data

TL;DR: This work presents iterative parameter estimation algorithms for conditional random fields and compares the performance of the resulting models to HMMs and MEMMs on synthetic and natural-language data.
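A sketch of the linear-chain CRF log-likelihood that such estimation algorithms maximise: the score of the gold label path minus the log partition function, computed with a forward recursion in log space. The random emission and transition scores below stand in for real feature-derived potentials.

```python
# Linear-chain CRF log-likelihood: gold-path score minus log Z, where
# log Z is computed by a forward recursion over all label sequences.
import numpy as np
from scipy.special import logsumexp

def crf_log_likelihood(emissions, transitions, labels):
    """emissions: (T, K) per-position label scores; transitions: (K, K)."""
    T, K = emissions.shape
    # Unnormalised score of the observed label sequence.
    score = emissions[0, labels[0]]
    for t in range(1, T):
        score += transitions[labels[t - 1], labels[t]] + emissions[t, labels[t]]
    # log Z via the forward algorithm in log space.
    alpha = emissions[0]
    for t in range(1, T):
        alpha = logsumexp(alpha[:, None] + transitions, axis=0) + emissions[t]
    return score - logsumexp(alpha)

em, tr = np.random.randn(5, 3), np.random.randn(3, 3)
print(crf_log_likelihood(em, tr, [0, 1, 1, 2, 0]))  # always <= 0
```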