Proceedings Article

Connectionist temporal classification: labelling unsegmented sequence data with recurrent neural networks

TL;DR: This paper presents a novel method for training RNNs to label unsegmented sequences directly, thereby solving both the pre-segmentation and post-processing problems.
Abstract
Many real-world sequence learning tasks require the prediction of sequences of labels from noisy, unsegmented input data. In speech recognition, for example, an acoustic signal is transcribed into words or sub-word units. Recurrent neural networks (RNNs) are powerful sequence learners that would seem well suited to such tasks. However, because they require pre-segmented training data, and post-processing to transform their outputs into label sequences, their applicability has so far been limited. This paper presents a novel method for training RNNs to label unsegmented sequences directly, thereby solving both problems. An experiment on the TIMIT speech corpus demonstrates its advantages over both a baseline HMM and a hybrid HMM-RNN.
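As a minimal, illustrative sketch of the training setup the abstract describes, the CTC objective is available in modern frameworks such as PyTorch (nn.CTCLoss); the dimensions, blank index, and feature type below are assumptions for illustration, not details from the paper.

```python
import torch
import torch.nn as nn

# Toy dimensions (illustrative, not from the paper).
T, N, C = 50, 4, 28   # input frames, batch size, labels incl. blank

# An RNN emits per-frame log-probabilities over labels plus a blank.
rnn = nn.LSTM(input_size=13, hidden_size=64)
proj = nn.Linear(64, C)

x = torch.randn(T, N, 13)                 # e.g. acoustic feature frames
h, _ = rnn(x)
log_probs = proj(h).log_softmax(dim=-1)   # (T, N, C), as CTCLoss expects

# Unsegmented targets: label sequences only, no frame-level alignment.
targets = torch.randint(1, C, (N, 10), dtype=torch.long)  # 0 = blank
input_lengths = torch.full((N,), T, dtype=torch.long)
target_lengths = torch.full((N,), 10, dtype=torch.long)

ctc = nn.CTCLoss(blank=0)
loss = ctc(log_probs, targets, input_lengths, target_lengths)
loss.backward()   # trains the RNN end-to-end, no pre-segmentation needed
```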


Citations
Proceedings Article

Subword Regularization and Beam Search Decoding for End-to-end Automatic Speech Recognition

TL;DR: It is found that subword regularization improves the performance of both types of ASR systems, with the regularized attention-based model performing best overall.
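As a rough illustration of the subword regularization idea (not the paper's exact recipe), the sentencepiece library can sample a different segmentation of the same transcript on each call; the model path and sampling hyperparameters below are placeholders.

```python
import sentencepiece as spm

# Assumes a trained SentencePiece unigram model; the path is a placeholder.
sp = spm.SentencePieceProcessor(model_file="unigram.model")

text = "speech recognition"

# Deterministic segmentation (standard decoding-time behaviour).
print(sp.encode(text, out_type=str))

# Subword regularization: sample a different segmentation each epoch,
# exposing the ASR model to many tokenizations of the same transcript.
for _ in range(3):
    print(sp.encode(text, out_type=str,
                    enable_sampling=True, alpha=0.1, nbest_size=-1))
```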
Book Chapter

Ground Truth Data, Content, Metrics, and Analysis

Scott Krig
TL;DR: This chapter proposes a method and a corresponding ground truth dataset for measuring interest point detector response against human visual system response and human expectations, and surveys the current state of the art, its best practices, and available ground truth datasets.
Proceedings Article

Leveraging Native Language Information for Improved Accented Speech Recognition

TL;DR: The proposed MTL model performs better than the pre-training approach and outperforms a baseline model trained on English data alone; a new MTL setting is also suggested in which the secondary task is trained on both English and the native language, using the same output set.
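A hedged sketch of the multi-task idea: a shared encoder with separate heads for the primary English task and the native-language secondary task, combined via an assumed weight lambda_aux. The architecture, dimensions, and stand-in losses below are illustrative, not the paper's.

```python
import torch
import torch.nn as nn

class SharedEncoderMTL(nn.Module):
    """Illustrative MTL skeleton: shared encoder, two task heads."""
    def __init__(self, feat_dim=40, hidden=128, n_labels=30):
        super().__init__()
        self.encoder = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head_english = nn.Linear(hidden, n_labels)   # primary task
        self.head_native = nn.Linear(hidden, n_labels)    # secondary task

    def forward(self, x):
        h, _ = self.encoder(x)
        return self.head_english(h), self.head_native(h)

model = SharedEncoderMTL()
x = torch.randn(2, 100, 40)            # (batch, frames, features)
out_en, out_native = model(x)

# Hypothetical per-task losses; in practice these would be CTC or
# cross-entropy losses against real transcripts.
loss_en = out_en.pow(2).mean()         # stand-in for the primary loss
loss_native = out_native.pow(2).mean() # stand-in for the secondary loss
lambda_aux = 0.3                       # assumed task weighting
loss = loss_en + lambda_aux * loss_native
loss.backward()
```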
Proceedings Article

Large Margin Neural Language Model

TL;DR: The proposed method aims to enlarge the margin between the “good” and “bad” sentences in a task-specific sense; it is trained end-to-end and can be widely applied to tasks that involve re-scoring of generated text.
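The large-margin objective can be pictured as a hinge penalty on the score gap between “good” and “bad” sentences; the generic ranking loss below is a sketch under that reading, not the paper's exact formulation, and the scores are placeholder tensors rather than real LM outputs.

```python
import torch

def margin_rank_loss(score_good, score_bad, margin=1.0):
    """Hinge penalty: require score_good to beat score_bad by `margin`.

    In a re-scoring setup these scores would come from a language model
    evaluated on n-best hypotheses; here they are placeholders.
    """
    return torch.clamp(margin - (score_good - score_bad), min=0.0).mean()

score_good = torch.tensor([2.0, 0.5], requires_grad=True)  # reference-like
score_bad = torch.tensor([1.5, 1.0])                       # competitors
loss = margin_rank_loss(score_good, score_bad)
loss.backward()
print(loss.item())   # positive whenever the margin is violated
```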
Posted Content

On the Comparison of Popular End-to-End Models for Large Scale Speech Recognition

TL;DR: In this paper, an empirical comparison of RNN-T, AED, and Transformer-AED models in both non-streaming and streaming modes was conducted, and it was shown that both streaming RNN-T and AED models can obtain better accuracy than a highly-optimized hybrid model.
References
Journal Article

Long short-term memory

TL;DR: A novel, efficient, gradient-based method called long short-term memory (LSTM) is introduced, which can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units.
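A minimal PyTorch illustration of an LSTM layer carrying state across a long sequence; the shapes are arbitrary, and the example only demonstrates the interface, not the original 1997 experiments.

```python
import torch
import torch.nn as nn

# Gated memory cells let gradients flow across long lags without
# vanishing, which is what enables learning over 1000+ steps.
lstm = nn.LSTM(input_size=8, hidden_size=32, batch_first=True)

x = torch.randn(1, 1200, 8)   # one sequence, 1200 time steps
out, (h_n, c_n) = lstm(x)
print(out.shape)              # (1, 1200, 32): per-step hidden states
print(c_n.shape)              # (1, 1, 32): final cell state, the
                              # "carousel" that carries long-term memory
```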
Journal Article

A tutorial on hidden Markov models and selected applications in speech recognition

TL;DR: In this paper, the authors provide an overview of the basic theory of hidden Markov models (HMMs) as originated by L.E. Baum and T. Petrie (1966) and give practical details on methods of implementation of the theory along with a description of selected applications of HMMs to distinct problems in speech recognition.
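The forward algorithm, the core likelihood computation in Rabiner's tutorial, fits in a few lines of numpy; the toy model parameters below are made up for illustration.

```python
import numpy as np

def forward(pi, A, B, obs):
    """Forward algorithm: P(obs | HMM) for a discrete-emission HMM.

    pi:  (S,)    initial state probabilities
    A:   (S, S)  transition matrix, A[i, j] = P(state j | state i)
    B:   (S, V)  emission matrix,   B[i, k] = P(symbol k | state i)
    obs: sequence of observed symbol indices
    """
    alpha = pi * B[:, obs[0]]          # initialize with first observation
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]  # propagate and absorb next symbol
    return alpha.sum()                 # total likelihood over end states

# Toy 2-state, 2-symbol model (numbers are made up).
pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3],
              [0.4, 0.6]])
B = np.array([[0.9, 0.1],
              [0.2, 0.8]])
print(forward(pi, A, B, [0, 1, 0]))
```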
Book

Neural networks for pattern recognition

TL;DR: This is the first comprehensive treatment of feed-forward neural networks from the perspective of statistical pattern recognition, and is designed as a text, with over 100 exercises, to benefit anyone involved in the fields of neural computation and pattern recognition.
Proceedings Article

Conditional Random Fields: Probabilistic Models for Segmenting and Labeling Sequence Data

TL;DR: This work presents iterative parameter estimation algorithms for conditional random fields and compares the performance of the resulting models to HMMs and MEMMs on synthetic and natural-language data.
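For a linear-chain CRF, a label sequence's score decomposes into per-position emission potentials plus label-to-label transition potentials; the toy Viterbi decoder below illustrates decoding over such potentials (the numbers are made up, and parameter estimation is omitted).

```python
import numpy as np

def viterbi(emissions, transitions):
    """Best label path for a linear-chain CRF.

    emissions:   (T, L) per-position label scores
    transitions: (L, L) score of label j following label i
    """
    T, L = emissions.shape
    score = emissions[0].copy()
    back = np.zeros((T, L), dtype=int)
    for t in range(1, T):
        # cand[i, j] = score ending in label i, then transitioning to j.
        cand = score[:, None] + transitions + emissions[t][None, :]
        back[t] = cand.argmax(axis=0)
        score = cand.max(axis=0)
    path = [int(score.argmax())]
    for t in range(T - 1, 0, -1):      # backtrack through pointers
        path.append(int(back[t][path[-1]]))
    return path[::-1]

# Toy potentials (made up): 3 positions, 2 labels.
em = np.array([[2.0, 0.5], [0.1, 1.5], [1.0, 1.2]])
tr = np.array([[0.5, -0.5], [-0.2, 0.8]])
print(viterbi(em, tr))   # -> [0, 1, 1]
```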