Proceedings ArticleDOI

Connectionist temporal classification: labelling unsegmented sequence data with recurrent neural networks

TL;DR
This paper presents a novel method for training RNNs to label unsegmented sequences directly, removing both the need for pre-segmented training data and the need to post-process network outputs into label sequences.
Abstract
Many real-world sequence learning tasks require the prediction of sequences of labels from noisy, unsegmented input data. In speech recognition, for example, an acoustic signal is transcribed into words or sub-word units. Recurrent neural networks (RNNs) are powerful sequence learners that would seem well suited to such tasks. However, because they require pre-segmented training data, and post-processing to transform their outputs into label sequences, their applicability has so far been limited. This paper presents a novel method for training RNNs to label unsegmented sequences directly, thereby solving both problems. An experiment on the TIMIT speech corpus demonstrates its advantages over both a baseline HMM and a hybrid HMM-RNN.
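To make the training setup concrete, below is a minimal sketch of CTC training using PyTorch's nn.CTCLoss, which implements the loss this paper introduces. The toy model, feature dimension, and tensor sizes are illustrative assumptions, not the paper's experimental setup.

    import torch
    import torch.nn as nn

    T, N, C = 50, 4, 28      # input timesteps, batch size, label classes incl. blank
    S = 10                   # maximum target sequence length

    rnn = nn.LSTM(input_size=13, hidden_size=64, bidirectional=True)
    proj = nn.Linear(128, C)       # map RNN features to per-frame label scores
    ctc = nn.CTCLoss(blank=0)      # index 0 is reserved for the CTC blank

    x = torch.randn(T, N, 13)                # e.g. a batch of acoustic feature frames
    targets = torch.randint(1, C, (N, S))    # unsegmented label sequences
    input_lengths = torch.full((N,), T, dtype=torch.long)
    target_lengths = torch.randint(1, S + 1, (N,), dtype=torch.long)

    h, _ = rnn(x)                                # (T, N, 128)
    log_probs = proj(h).log_softmax(dim=-1)      # (T, N, C) per-frame log-probs
    loss = ctc(log_probs, targets, input_lengths, target_lengths)
    loss.backward()                              # trains the RNN end to end

Note that only the label sequence and its length are supplied per example; no frame-level alignment is given, which is exactly the point of the method.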


Citations
Journal ArticleDOI

Pair consensus decoding improves accuracy of neural network basecallers for nanopore sequencing

TL;DR: PoreOver improves the accuracy of basecalling with Oxford Nanopore's 1D2 and related sequencing protocols by aligning the probability profiles of paired reads, and is compatible with multiple nanopore basecallers.
Journal ArticleDOI

Learning Deep Representations for Video-Based Intake Gesture Detection

TL;DR: In this article, a deep learning architecture is applied to video-based detection of intake gestures during eating occasions; the best model achieves an F1 score of 0.858, appearance features contribute more than motion features, and temporal context in the form of multiple video frames is essential for top performance.
Proceedings ArticleDOI

Speaker Adaptation for Attention-Based End-to-End Speech Recognition

TL;DR: Three regularization-based speaker adaptation approaches are proposed to adapt the attention-based encoder-decoder (AED) model for end-to-end automatic speech recognition using very limited adaptation data from target speakers.
Journal ArticleDOI

Deep Belief Neural Networks and Bidirectional Long-Short Term Memory Hybrid for Speech Recognition

TL;DR: Results show that using the new DBNN-BLSTM hybrid as the acoustic model for Large Vocabulary Continuous Speech Recognition (LVCSR) increases word recognition accuracy, although the hybrid has many parameters and may in some cases suffer performance issues in real-time applications.
Posted Content

Self-Delimiting Neural Networks

TL;DR: To apply asymptotically optimal program search (AOPS) to (possibly recurrent) neural networks (NNs) and to efficiently teach a self-delimiting (SLIM) NN to solve many tasks, each connection keeps a list of the tasks it is used for, which can be efficiently updated during training.
References
Journal ArticleDOI

Long short-term memory

TL;DR: A novel, efficient, gradient based method called long short-term memory (LSTM) is introduced, which can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units.
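As a rough illustration of the constant error carousel, here is a single LSTM step in NumPy: the cell state c is updated additively, which is what lets error flow across long time lags. This is the modern gated formulation (the original 1997 paper has no forget gate), and the names and weight layout are illustrative assumptions.

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def lstm_step(x, h, c, W, U, b):
        """One timestep. x: (D,), h and c: (H,), W: (4H, D), U: (4H, H), b: (4H,)."""
        z = W @ x + U @ h + b
        i, f, g, o = np.split(z, 4)
        i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)   # input, forget, output gates
        g = np.tanh(g)                                 # candidate cell input
        c_new = f * c + i * g        # additive update: the constant error carousel
        h_new = o * np.tanh(c_new)   # gated output exposed to the rest of the network
        return h_new, c_new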
Journal ArticleDOI

A tutorial on hidden Markov models and selected applications in speech recognition

TL;DR: In this paper, the authors provide an overview of the basic theory of hidden Markov models (HMMs) as originated by L.E. Baum and T. Petrie (1966), give practical details on implementing the theory, and describe selected applications of HMMs to problems in speech recognition.
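One of the tutorial's core computations is the forward algorithm, which evaluates the probability of an observation sequence under a model by summing over all state paths. A minimal sketch, where the toy model (pi, A, B) is an illustrative assumption:

    import numpy as np

    def forward(obs, pi, A, B):
        """P(O | model). obs: observation indices; pi: (N,) initial state probs;
        A: (N, N) state transitions; B: (N, M) emission probabilities."""
        alpha = pi * B[:, obs[0]]              # initialization
        for o in obs[1:]:
            alpha = (alpha @ A) * B[:, o]      # induction over timesteps
        return alpha.sum()                     # termination

    pi = np.array([0.6, 0.4])
    A = np.array([[0.7, 0.3], [0.4, 0.6]])
    B = np.array([[0.5, 0.5], [0.1, 0.9]])
    print(forward([0, 1, 1], pi, A, B))        # likelihood of the observation sequence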
Book

Neural networks for pattern recognition

TL;DR: This is the first comprehensive treatment of feed-forward neural networks from the perspective of statistical pattern recognition, and is designed as a text, with over 100 exercises, to benefit anyone involved in the fields of neural computation and pattern recognition.
Proceedings Article

Conditional Random Fields: Probabilistic Models for Segmenting and Labeling Sequence Data

TL;DR: This work presents iterative parameter estimation algorithms for conditional random fields and compares the performance of the resulting models to HMMs and MEMMs on synthetic and natural-language data.
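The maximum-likelihood training this paper describes requires the partition function Z(x) of the linear-chain CRF. A minimal log-space sketch of that normalizer, with the emission and transition scores assumed as given inputs:

    import numpy as np
    from scipy.special import logsumexp

    def crf_log_partition(emissions, transitions):
        """log Z(x) for a linear-chain CRF.
        emissions: (T, K) per-position label scores; transitions: (K, K)."""
        alpha = emissions[0]                   # log-alpha at the first position
        for t in range(1, len(emissions)):
            alpha = logsumexp(alpha[:, None] + transitions, axis=0) + emissions[t]
        return logsumexp(alpha)

Subtracting log Z(x) from the score of the gold label sequence gives the conditional log-likelihood that the parameter estimation algorithms maximize.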