Proceedings ArticleDOI

Connectionist temporal classification: labelling unsegmented sequence data with recurrent neural networks

TLDR
This paper presents a novel method for training RNNs to label unsegmented sequences directly, thereby removing both the need for pre-segmented training data and the need to post-process the network outputs into label sequences.
Abstract
Many real-world sequence learning tasks require the prediction of sequences of labels from noisy, unsegmented input data. In speech recognition, for example, an acoustic signal is transcribed into words or sub-word units. Recurrent neural networks (RNNs) are powerful sequence learners that would seem well suited to such tasks. However, because they require pre-segmented training data, and post-processing to transform their outputs into label sequences, their applicability has so far been limited. This paper presents a novel method for training RNNs to label unsegmented sequences directly, thereby solving both problems. An experiment on the TIMIT speech corpus demonstrates its advantages over both a baseline HMM and a hybrid HMM-RNN.
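Below is a minimal, illustrative sketch of the kind of training setup the abstract describes: an RNN whose per-frame outputs are trained with a CTC loss against unsegmented label sequences. It assumes PyTorch (nn.LSTM, nn.CTCLoss); the feature, label, and sequence sizes are hypothetical placeholders, not values from the paper.

    import torch
    import torch.nn as nn

    NUM_FEATURES, NUM_LABELS, HIDDEN = 40, 61, 128    # hypothetical sizes
    BLANK = 0                                          # index of CTC's extra "blank" label

    rnn = nn.LSTM(NUM_FEATURES, HIDDEN, bidirectional=True)
    proj = nn.Linear(2 * HIDDEN, NUM_LABELS + 1)       # +1 output unit for the blank
    ctc = nn.CTCLoss(blank=BLANK)

    # Toy batch: two unsegmented input sequences (time x batch x features) and their
    # target label sequences; no frame-level alignment between inputs and labels is given.
    x = torch.randn(100, 2, NUM_FEATURES)
    targets = torch.randint(1, NUM_LABELS + 1, (2, 30))
    input_lengths = torch.tensor([100, 80])
    target_lengths = torch.tensor([30, 25])

    h, _ = rnn(x)                                      # (T, N, 2*HIDDEN)
    log_probs = proj(h).log_softmax(dim=-1)            # per-frame label log-probabilities
    loss = ctc(log_probs, targets, input_lengths, target_lengths)
    loss.backward()                                     # gradients for end-to-end RNN training

The CTC loss marginalizes over all frame-level alignments consistent with each target label sequence, which is what lets the RNN be trained without pre-segmented data.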


Citations
Proceedings ArticleDOI

End-to-End Speech Emotion Recognition Combined with Acoustic-to-Word ASR Model.

TL;DR: This paper proposes speech emotion recognition (SER) combined with an acoustic-to-word automatic speech recognition (ASR) model, which achieves state-of-the-art performance of 68.63% weighted accuracy and 69.67% unweighted accuracy on the IEMOCAP database.
Proceedings ArticleDOI

Exploiting Depth and Highway Connections in Convolutional Recurrent Deep Neural Networks for Speech Recognition.

TL;DR: The CLDNN model is extended by introducing highway connections between LSTM layers, which enable direct information flow from cells of lower layers to cells of upper layers, so that the model can better exploit the advantages of a deeper structure.
Proceedings Article

Glow-TTS: A Generative Flow for Text-to-Speech via Monotonic Alignment Search

TL;DR: Glow-TTS is proposed, a flow-based generative model for parallel TTS that does not require any external aligner and obtains an order-of-magnitude speed-up over the autoregressive model, Tacotron 2, at synthesis with comparable speech quality.
Proceedings ArticleDOI

Domain Adaptation via Teacher-Student Learning for End-to-End Speech Recognition

TL;DR: This work extends teacher-student (T/S) learning to large-scale unsupervised domain adaptation of an attention-based end-to-end (E2E) model through two levels of knowledge transfer: the teacher's token posteriors as soft labels and its one-best predictions as decoder guidance.
Posted Content

Attention based on-device streaming speech recognition with large speech corpus

TL;DR: This paper presents a new on-device automatic speech recognition (ASR) system based on monotonic chunk-wise attention (MoChA) models trained on a large (>10K hours) corpus, which attains a word recognition rate of around 90% on a general domain, mainly through joint training with connectionist temporal classification and cross-entropy losses.
References
Journal ArticleDOI

Long short-term memory

TL;DR: A novel, efficient, gradient-based method called long short-term memory (LSTM) is introduced, which can learn to bridge minimal time lags in excess of 1000 discrete time steps by enforcing constant error flow through constant error carousels within special units.
Journal ArticleDOI

A tutorial on hidden Markov models and selected applications in speech recognition

TL;DR: In this paper, the authors provide an overview of the basic theory of hidden Markov models (HMMs) as originated by L.E. Baum and T. Petrie (1966) and give practical details on methods of implementation of the theory along with a description of selected applications of HMMs to distinct problems in speech recognition.
Book

Neural networks for pattern recognition

TL;DR: This is the first comprehensive treatment of feed-forward neural networks from the perspective of statistical pattern recognition, and is designed as a text, with over 100 exercises, to benefit anyone involved in the fields of neural computation and pattern recognition.
Proceedings Article

Conditional Random Fields: Probabilistic Models for Segmenting and Labeling Sequence Data

TL;DR: This work presents iterative parameter estimation algorithms for conditional random fields and compares the performance of the resulting models to HMMs and MEMMs on synthetic and natural-language data.