Proceedings ArticleDOI

Connectionist temporal classification: labelling unsegmented sequence data with recurrent neural networks

TLDR
This paper presents a novel method for training RNNs to label unsegmented sequences directly, thereby removing both the need for pre-segmented training data and the post-processing of network outputs into label sequences.
Abstract
Many real-world sequence learning tasks require the prediction of sequences of labels from noisy, unsegmented input data. In speech recognition, for example, an acoustic signal is transcribed into words or sub-word units. Recurrent neural networks (RNNs) are powerful sequence learners that would seem well suited to such tasks. However, because they require pre-segmented training data, and post-processing to transform their outputs into label sequences, their applicability has so far been limited. This paper presents a novel method for training RNNs to label unsegmented sequences directly, thereby solving both problems. An experiment on the TIMIT speech corpus demonstrates its advantages over both a baseline HMM and a hybrid HMM-RNN.
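At its core, the method has the network emit per-frame label probabilities (plus a special blank symbol) and trains by summing over all possible alignments between input frames and the target label sequence. As a rough illustration, here is a minimal sketch using PyTorch's nn.CTCLoss, a widely used implementation of this objective; the architecture, dimensions, and data below are illustrative assumptions, not the paper's TIMIT setup.

import torch
import torch.nn as nn

num_classes = 62                       # assumed: 61 phoneme labels + 1 CTC blank
rnn = nn.LSTM(input_size=26, hidden_size=100, bidirectional=True)
proj = nn.Linear(200, num_classes)     # 2 * hidden_size because bidirectional
ctc = nn.CTCLoss(blank=0)              # index 0 reserved for the blank symbol

x = torch.randn(300, 4, 26)            # (time, batch, features): unsegmented input
targets = torch.randint(1, num_classes, (4, 40))  # label sequences, no frame alignment
input_lengths = torch.full((4,), 300, dtype=torch.long)
target_lengths = torch.full((4,), 40, dtype=torch.long)

h, _ = rnn(x)                          # (time, batch, 2 * hidden_size)
log_probs = proj(h).log_softmax(-1)    # per-frame log label posteriors
loss = ctc(log_probs, targets, input_lengths, target_lengths)
loss.backward()                        # gradient marginalizes over all alignments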


Citations
Posted Content

Improving Generalization of Transformer for Speech Recognition with Parallel Schedule Sampling and Relative Positional Embedding

TL;DR: This work proposes parallel schedule sampling (PSS) and relative positional embedding (RPE) to help the Transformer generalize to unseen data, achieving a 7% relative improvement for short utterances and 30% absolute gains for long utterances on a 10,000-hour ASR task.
Journal ArticleDOI

Describing Multimedia Content using Attention-based Encoder-Decoder Networks

TL;DR: In this article, the authors describe systems that learn to attend to different places in the input for each element of the output, across a variety of tasks: machine translation, image caption generation, video clip description, and speech recognition. The systems are built from a shared set of building blocks: gated recurrent neural networks and convolutional neural networks, together with trained attention mechanisms.
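As a rough illustration of the shared attention idea (a weighted average over input positions, recomputed for each output element), here is a minimal NumPy sketch; the dot-product scoring below is a simplifying assumption, whereas the article's models learn their attention functions.

import numpy as np

def attend(query, keys, values):
    scores = keys @ query                      # relevance of each input position
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                   # softmax over input positions
    return weights @ values                    # context: weighted average of inputs

rng = np.random.default_rng(0)
keys = values = rng.standard_normal((10, 16))  # 10 encoded input positions
query = rng.standard_normal(16)                # decoder state at one output step
context = attend(query, keys, values)          # attended summary for this step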
Posted Content

DARTS-ASR: Differentiable Architecture Search for Multilingual Speech Recognition and Adaptation

TL;DR: This paper proposes DARTS-ASR, an ASR approach with efficient gradient-based architecture search, and applies it not only to many languages for monolingual ASR but also to a multilingual ASR setting.
Proceedings ArticleDOI

LipType: A Silent Speech Recognizer Augmented with an Independent Repair Model

TL;DR: In this article, an optimized version of LipNet is developed for improved speed and accuracy, along with an independent repair model that processes video input under poor lighting conditions, when applicable, and corrects potential errors in the output for increased accuracy.
Proceedings ArticleDOI

An Analysis of Local Monotonic Attention Variants

TL;DR: This paper presents a simple technique for implementing windowed attention on top of an existing global attention model, and shows that the resulting model can be trained from random initialization and achieves results comparable to the global attention baseline.
References
Journal ArticleDOI

Long short-term memory

TL;DR: A novel, efficient, gradient-based method called long short-term memory (LSTM) is introduced, which can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units.
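As a rough illustration of that mechanism, here is a minimal NumPy sketch of a single LSTM step; it uses the now-common variant with a forget gate (added after the original paper), and the weight layout and sizes are illustrative assumptions.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, W, b):
    # All four gate pre-activations in one matrix product.
    z = W @ np.concatenate([x, h_prev]) + b
    i, f, o, g = np.split(z, 4)
    i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)   # input, forget, output gates
    g = np.tanh(g)                                 # candidate cell update
    # Additive cell update: error flows through c largely unattenuated,
    # the "constant error carousel" that bridges long time lags.
    c = f * c_prev + i * g
    h = o * np.tanh(c)                             # gated hidden output
    return h, c

# Toy usage with assumed sizes: 8 inputs, 16 hidden units.
rng = np.random.default_rng(0)
W = rng.standard_normal((4 * 16, 8 + 16)) * 0.1
b = np.zeros(4 * 16)
h = c = np.zeros(16)
for x in rng.standard_normal((5, 8)):              # a short input sequence
    h, c = lstm_step(x, h, c, W, b)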
Journal ArticleDOI

A tutorial on hidden Markov models and selected applications in speech recognition

TL;DR: In this paper, the authors provide an overview of the basic theory of hidden Markov models (HMMs) as originated by L.E. Baum and T. Petrie (1966) and give practical details on methods of implementation of the theory along with a description of selected applications of HMMs to distinct problems in speech recognition.
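As a rough illustration of the tutorial's basic machinery, here is a minimal NumPy sketch of the forward algorithm, which evaluates the probability of an observation sequence under a given model; the toy parameters are illustrative assumptions.

import numpy as np

A = np.array([[0.7, 0.3],
              [0.4, 0.6]])               # state transition probabilities
B = np.array([[0.9, 0.1],
              [0.2, 0.8]])               # per-state emission probabilities
pi = np.array([0.6, 0.4])                # initial state distribution
obs = [0, 1, 1, 0]                       # an observation sequence

alpha = pi * B[:, obs[0]]                # initialization: alpha_1(i) = pi_i * b_i(o_1)
for o in obs[1:]:
    alpha = (alpha @ A) * B[:, o]        # induction: sum over predecessor states
print(alpha.sum())                       # termination: P(obs | model)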
Book

Neural networks for pattern recognition

TL;DR: This is the first comprehensive treatment of feed-forward neural networks from the perspective of statistical pattern recognition, and is designed as a text, with over 100 exercises, to benefit anyone involved in the fields of neural computation and pattern recognition.
Proceedings Article

Conditional Random Fields: Probabilistic Models for Segmenting and Labeling Sequence Data

TL;DR: This work presents iterative parameter estimation algorithms for conditional random fields and compares the performance of the resulting models to HMMs and MEMMs on synthetic and natural-language data.