Proceedings ArticleDOI

Connectionist temporal classification: labelling unsegmented sequence data with recurrent neural networks

TL;DR: This paper presents a novel method for training RNNs to label unsegmented sequences directly, removing both the need for pre-segmented training data and the need to post-process network outputs into label sequences.
Abstract
Many real-world sequence learning tasks require the prediction of sequences of labels from noisy, unsegmented input data. In speech recognition, for example, an acoustic signal is transcribed into words or sub-word units. Recurrent neural networks (RNNs) are powerful sequence learners that would seem well suited to such tasks. However, because they require pre-segmented training data, and post-processing to transform their outputs into label sequences, their applicability has so far been limited. This paper presents a novel method for training RNNs to label unsegmented sequences directly, thereby solving both problems. An experiment on the TIMIT speech corpus demonstrates its advantages over both a baseline HMM and a hybrid HMM-RNN.
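As a concrete illustration of the training setup the abstract describes, here is a minimal sketch assuming PyTorch: a bidirectional LSTM trained with the CTC objective via nn.CTCLoss, a standard modern implementation of the loss introduced in this paper. The feature dimension, network size, and sequence lengths are illustrative assumptions, not the paper's TIMIT configuration.

```python
# A minimal sketch, assuming PyTorch; shapes and sizes are illustrative.
import torch
import torch.nn as nn

T, N, C, H = 50, 4, 28, 64            # frames, batch, classes (incl. blank), hidden
rnn = nn.LSTM(input_size=13, hidden_size=H, bidirectional=True)
proj = nn.Linear(2 * H, C)            # per-frame class scores
ctc = nn.CTCLoss(blank=0)             # index 0 reserved for the CTC blank label

x = torch.randn(T, N, 13)                      # unsegmented input features
targets = torch.randint(1, C, (N, 10))         # label sequences, no alignment given
input_lengths = torch.full((N,), T, dtype=torch.long)
target_lengths = torch.full((N,), 10, dtype=torch.long)

out, _ = rnn(x)                                # (T, N, 2H)
log_probs = proj(out).log_softmax(dim=-1)      # (T, N, C) frame-wise log-probs
loss = ctc(log_probs, targets, input_lengths, target_lengths)
loss.backward()                                # gradients marginalize over alignments
```

Because the loss marginalizes over all alignments of the label sequence to the input frames, no frame-level segmentation of the training data is needed.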


Citations
Proceedings ArticleDOI

Scene Text Recognition with Permuted Autoregressive Sequence Models

TL;DR: This method, PARSeq, uses Permutation Language Modeling to learn an ensemble of internal autoregressive (AR) language models with shared weights, unifying context-free non-AR inference, context-aware AR inference, and iterative refinement with bidirectional context.
Proceedings ArticleDOI

Comparable Study Of Modeling Units For End-To-End Mandarin Speech Recognition

TL;DR: This work explores two major end-to-end models for Mandarin speech recognition, the connectionist temporal classification (CTC) model and the attention-based encoder-decoder model, and finds that the Chinese character is a reasonable modeling unit for Mandarin speech recognition.
Proceedings ArticleDOI

Factoring Fact-Checks: Structured Information Extraction from Fact-Checking Articles

TL;DR: The task of factoring fact-checks, automatically extracting structured information from fact-checking articles, is proposed as a sequence-tagging problem; the models perform well for well-known fact-checkers and show promising initial results for under-represented fact-checkers.
Proceedings ArticleDOI

Padding Methods in Convolutional Sequence Model: An Application in Japanese Handwriting Recognition

TL;DR: The impact of various padding and non-padding methods on the same convolutional sequence model for Japanese handwriting recognition is examined, concluding which method keeps training time reasonable while producing an accuracy rate of up to 95%.
Proceedings ArticleDOI

End-to-end Dysarthric Speech Recognition Using Multiple Databases

TL;DR: An end-to-end ASR framework is trained on not only the speech data of a Japanese person with an articulation disorder but also the speech data of a physically unimpaired Japanese person and of a non-Japanese person with an articulation disorder, to relieve the lack of training data for the target speaker; experimental results show the merit of using multiple databases.
References
Journal ArticleDOI

Long short-term memory

TL;DR: A novel, efficient, gradient-based method called long short-term memory (LSTM) is introduced, which can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units.
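To make the constant error carousel concrete, here is a minimal NumPy sketch of a single LSTM step in the common modern gated form (an assumption; the 1997 formulation differs in details such as the forget gate, which was added later). The cell state is updated additively, which is what lets error flow across long time lags.

```python
# A minimal sketch of one LSTM step, assuming the modern gated formulation.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W, b):
    """x: (D,) input; h, c: (H,) states; W: (4H, D+H); b: (4H,)."""
    z = W @ np.concatenate([x, h]) + b
    i, f, g, o = np.split(z, 4)
    i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)   # input, forget, output gates
    c_new = f * c + i * np.tanh(g)                 # additive "carousel" update
    h_new = o * np.tanh(c_new)
    return h_new, c_new
```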
Journal ArticleDOI

A tutorial on hidden Markov models and selected applications in speech recognition

TL;DR: In this paper, the authors provide an overview of the basic theory of hidden Markov models (HMMs) as originated by L.E. Baum and T. Petrie (1966) and give practical details on methods of implementation of the theory along with a description of selected applications of HMMs to distinct problems in speech recognition.
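As a pointer to the tutorial's content, the sketch below implements the forward algorithm, Rabiner's solution to the first basic HMM problem of computing P(O | λ); the (A, B, π) arrays are assumed, illustrative inputs in his notation.

```python
# A minimal sketch of the HMM forward algorithm in Rabiner's notation.
import numpy as np

def forward(A, B, pi, obs):
    """A: (S,S) transitions; B: (S,V) emissions; pi: (S,) initial; obs: int sequence."""
    alpha = pi * B[:, obs[0]]              # initialization
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]      # induction step
    return alpha.sum()                     # P(obs | model)
```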
Book

Neural networks for pattern recognition

TL;DR: This is the first comprehensive treatment of feed-forward neural networks from the perspective of statistical pattern recognition, and is designed as a text, with over 100 exercises, to benefit anyone involved in the fields of neural computation and pattern recognition.
Proceedings Article

Conditional Random Fields: Probabilistic Models for Segmenting and Labeling Sequence Data

TL;DR: This work presents iterative parameter estimation algorithms for conditional random fields and compares the performance of the resulting models to HMMs and MEMMs on synthetic and natural-language data.
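For context, the quantity such estimation algorithms maximize is the conditional log-likelihood of the label sequence; the sketch below computes it for a linear-chain CRF with an illustrative emission/transition parameterization (an assumption, not the paper's feature-based form).

```python
# A minimal sketch of a linear-chain CRF log-likelihood; parameterization assumed.
import numpy as np
from scipy.special import logsumexp

def crf_log_likelihood(emissions, transitions, tags):
    """emissions: (T, K) per-position scores; transitions: (K, K); tags: length-T ints."""
    # Score of the gold tag path under the model.
    score = emissions[0, tags[0]] + sum(
        transitions[tags[t - 1], tags[t]] + emissions[t, tags[t]]
        for t in range(1, len(tags)))
    # Log partition function via the forward recursion in log space.
    alpha = emissions[0].copy()
    for t in range(1, emissions.shape[0]):
        alpha = logsumexp(alpha[:, None] + transitions, axis=0) + emissions[t]
    return score - logsumexp(alpha)
```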