Proceedings ArticleDOI

Connectionist temporal classification: labelling unsegmented sequence data with recurrent neural networks

TLDR
This paper presents a novel method for training RNNs to label unsegmented sequences directly, thereby removing both the need for pre-segmented training data and the need to post-process network outputs into label sequences.
Abstract
Many real-world sequence learning tasks require the prediction of sequences of labels from noisy, unsegmented input data. In speech recognition, for example, an acoustic signal is transcribed into words or sub-word units. Recurrent neural networks (RNNs) are powerful sequence learners that would seem well suited to such tasks. However, because they require pre-segmented training data, and post-processing to transform their outputs into label sequences, their applicability has so far been limited. This paper presents a novel method for training RNNs to label unsegmented sequences directly, thereby solving both problems. An experiment on the TIMIT speech corpus demonstrates its advantages over both a baseline HMM and a hybrid HMM-RNN.
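
The method introduced here is connectionist temporal classification (CTC). As a rough sketch of how the objective is used in practice (assuming PyTorch, whose torch.nn.CTCLoss implements it; the shapes, sizes, and random data below are illustrative, not the paper's TIMIT setup):

# Minimal sketch of CTC training, assuming PyTorch's torch.nn.CTCLoss.
# Shapes and values are illustrative, not taken from the paper.
import torch
import torch.nn as nn

T, N, C = 50, 4, 28           # input timesteps, batch size, classes (27 labels + blank)
S = 10                        # max target length

rnn = nn.LSTM(input_size=13, hidden_size=64, bidirectional=True)
proj = nn.Linear(2 * 64, C)   # map RNN states to per-frame class scores
ctc = nn.CTCLoss(blank=0)     # label index 0 reserved for the CTC "blank"

x = torch.randn(T, N, 13)                        # unsegmented input features
targets = torch.randint(1, C, (N, S))            # label sequences (no blanks)
input_lengths = torch.full((N,), T, dtype=torch.long)
target_lengths = torch.randint(5, S + 1, (N,))

h, _ = rnn(x)
log_probs = proj(h).log_softmax(dim=-1)          # (T, N, C), as CTCLoss expects
loss = ctc(log_probs, targets, input_lengths, target_lengths)
loss.backward()                                  # no per-frame alignment needed

The point the abstract makes is visible in the last two lines: the loss needs only the label sequence and the lengths, never a frame-level segmentation.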


Citations
Journal ArticleDOI

Street View Text Recognition With Deep Learning for Urban Scene Understanding in Intelligent Transportation Systems

TL;DR: It is demonstrated that language has a critical influence on scene text detection, and a comparison of the accuracy of four scene text recognition algorithms shows there is still very large room for improvement before street view text recognition fits real-world ITS applications.
Proceedings ArticleDOI

Exploring ROI size in deep learning based lipreading.

TL;DR: The results show that ROI design choices significantly affect automatic speechreading performance: the best visual-only word error rate corresponds to a ROI that contains a large part of the lower face in addition to the mouth, at a relatively high resolution.
Journal ArticleDOI

Automatic brain labeling via multi-atlas guided fully convolutional networks.

TL;DR: The proposed multi-atlas guided fully convolutional network (MA-FCN) further improves labeling performance with the aid of prior knowledge from the training atlases, significantly outperforming the conventional FCN and several state-of-the-art MR brain labeling methods.
Journal ArticleDOI

Bidirectional Grid Long Short-Term Memory (BiGridLSTM): A Method to Address Context-Sensitivity and Vanishing Gradient

Hongxiao Fei, +1 more · 30 Oct 2018
TL;DR: This paper proposes a method that addresses both context-sensitivity and gradient problems, namely the Bidirectional Grid Long Short-Term Memory (BiGridLSTM) recurrent neural network, which not only takes advantage of the grid architecture but also captures information on both sides of the current moment.
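
The grid (depth-direction) part of BiGridLSTM is not in stock libraries, but the bidirectional half of the idea can be sketched with an off-the-shelf bidirectional LSTM (PyTorch assumed; this is not the paper's architecture):

# Sketch of the bidirectional half of the idea only: a stock bidirectional
# LSTM reads the sequence in both directions, so the output at each step
# combines past and future context. The grid (depth) LSTM is not shown.
import torch
import torch.nn as nn

bilstm = nn.LSTM(input_size=32, hidden_size=64, num_layers=2,
                 bidirectional=True, batch_first=True)
x = torch.randn(8, 100, 32)          # (batch, time, features)
out, _ = bilstm(x)                   # (8, 100, 128): forward + backward states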
Proceedings ArticleDOI

How Accents Confound: Probing for Accent Information in End-to-End Speech Recognition Systems

TL;DR: This work uses a state-of-the-art end-to-end ASR system that is trained on a large amount of US-accented English speech, and examines the effects of accent on the internal representation using three main probing techniques.
References
Journal ArticleDOI

Long short-term memory

TL;DR: A novel, efficient, gradient-based method called long short-term memory (LSTM) is introduced, which can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units.
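
The "constant error carousel" is the additive update of the cell state. A single LSTM step written out by hand makes it visible (a sketch in the modern form with a forget gate, which the original 1997 formulation did not include):

# One LSTM step, written out to show the constant error carousel: the cell
# state c is updated additively, so error can flow through c across many
# steps. (Modern form; the 1997 paper's cell had no forget gate, i.e. f = 1.)
import torch

def lstm_step(x, h, c, W, U, b):
    # W: (4H, D), U: (4H, H), b: (4H,) -- gates stacked as [i, f, g, o]
    z = W @ x + U @ h + b
    i, f, g, o = z.chunk(4)
    i, f, o = i.sigmoid(), f.sigmoid(), o.sigmoid()
    g = g.tanh()
    c_new = f * c + i * g            # additive carousel: gradient path through c
    h_new = o * c_new.tanh()
    return h_new, c_new

D, H = 8, 16
W, U, b = torch.randn(4 * H, D), torch.randn(4 * H, H), torch.zeros(4 * H)
h = c = torch.zeros(H)
for x in torch.randn(20, D):         # run 20 timesteps
    h, c = lstm_step(x, h, c, W, U, b)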
Journal ArticleDOI

A tutorial on hidden Markov models and selected applications in speech recognition

TL;DR: In this paper, the authors provide an overview of the basic theory of hidden Markov models (HMMs) as originated by L.E. Baum and T. Petrie (1966) and give practical details on methods of implementation of the theory along with a description of selected applications of HMMs to distinct problems in speech recognition.
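
The centerpiece of that basic theory is the forward algorithm for evaluating how likely an observation sequence is under a model. A minimal sketch with toy parameters, using Rabiner's (A, B, pi) notation:

# Forward algorithm for a discrete HMM: computes P(observations | model).
# Toy parameters; names (A, B, pi) follow Rabiner's tutorial notation.
import numpy as np

A  = np.array([[0.7, 0.3], [0.4, 0.6]])   # state transition probabilities
B  = np.array([[0.9, 0.1], [0.2, 0.8]])   # emission probabilities B[state, symbol]
pi = np.array([0.5, 0.5])                 # initial state distribution
obs = [0, 1, 1, 0]                        # observed symbol sequence

alpha = pi * B[:, obs[0]]                 # initialization
for o in obs[1:]:
    alpha = (alpha @ A) * B[:, o]         # induction step
print(alpha.sum())                        # termination: P(obs | model)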
Book

Neural networks for pattern recognition

TL;DR: This is the first comprehensive treatment of feed-forward neural networks from the perspective of statistical pattern recognition, and is designed as a text, with over 100 exercises, to benefit anyone involved in the fields of neural computation and pattern recognition.
Proceedings Article

Conditional Random Fields: Probabilistic Models for Segmenting and Labeling Sequence Data

TL;DR: This work presents iterative parameter estimation algorithms for conditional random fields and compares the performance of the resulting models to HMMs and MEMMs on synthetic and natural-language data.
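
All of the estimation algorithms compared there need the normalizer (partition function) of the linear-chain CRF; a minimal sketch of computing it by forward recursion in log space (NumPy/SciPy, toy scores; emission and transition scores would normally come from learned feature weights):

# Log-partition function Z(x) of a linear-chain CRF via forward recursion
# in log space. Toy random scores stand in for learned feature weights.
import numpy as np
from scipy.special import logsumexp

T, K = 5, 3                               # sequence length, label set size
rng = np.random.default_rng(0)
emissions = rng.normal(size=(T, K))       # per-position label scores
trans = rng.normal(size=(K, K))           # label-to-label transition scores

log_alpha = emissions[0]
for t in range(1, T):
    # log_alpha_new[y'] = logsumexp_y(log_alpha[y] + trans[y, y']) + emissions[t, y']
    log_alpha = logsumexp(log_alpha[:, None] + trans, axis=0) + emissions[t]
log_Z = logsumexp(log_alpha)              # normalizer for P(y | x)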