Open Access · Proceedings Article (DOI)

Audio-Visual Speech Recognition with a Hybrid CTC/Attention Architecture

TL;DR
In this article, the authors apply a hybrid CTC/attention architecture to audio-visual recognition of speech in the wild, leading to a 1.3% absolute decrease in word error rate over the audio-only model and achieving state-of-the-art performance on the LRS2 database.
Abstract: 
Recent works in speech recognition rely either on connectionist temporal classification (CTC) or on sequence-to-sequence models for character-level recognition. CTC assumes conditional independence of individual characters, whereas attention-based models can produce non-sequential alignments. A CTC loss can therefore be used in combination with an attention-based model to force monotonic alignments while dispensing with the conditional independence assumption. In this paper, we use the recently proposed hybrid CTC/attention architecture for audio-visual recognition of speech in the wild. To the best of our knowledge, this is the first time such a hybrid architecture has been used for audio-visual speech recognition. Using the LRS2 database, we show that the proposed audio-visual model leads to a 1.3% absolute decrease in word error rate over the audio-only model and achieves new state-of-the-art performance on LRS2 (7% word error rate). We also observe that the audio-visual model significantly outperforms the audio-only model (up to a 32.9% absolute improvement in word error rate) for several different types of noise as the signal-to-noise ratio decreases.
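
As a rough illustration of the objective described above (not the authors' exact implementation), hybrid CTC/attention training typically minimizes a weighted sum of the two losses, L = α·L_CTC + (1 − α)·L_att. The PyTorch sketch below assumes this form; the tensor shapes, padding convention, and the weight alpha are illustrative assumptions.

```python
# Minimal sketch of a hybrid CTC/attention objective (illustrative
# assumptions throughout; this is not the paper's code).
import torch.nn.functional as F

def hybrid_loss(ctc_log_probs, att_logits, targets,
                input_lengths, target_lengths, alpha=0.2):
    """ctc_log_probs: (T, B, C) log-softmax encoder outputs for CTC.
    att_logits:       (B, U, C) attention decoder outputs per target character.
    targets:          (B, U) character indices; 0 assumed to be blank/padding."""
    # CTC branch: enforces a monotonic alignment between frames and characters.
    ctc = F.ctc_loss(ctc_log_probs, targets, input_lengths, target_lengths,
                     blank=0, zero_infinity=True)
    # Attention branch: character-level cross-entropy, free of CTC's
    # conditional-independence assumption.
    att = F.cross_entropy(att_logits.transpose(1, 2), targets, ignore_index=0)
    # Weighted combination of the two objectives.
    return alpha * ctc + (1.0 - alpha) * att
```

The weight alpha is a hyperparameter; values in the 0.1–0.3 range are common in hybrid CTC/attention systems.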


Citations
Proceedings Article (DOI)

End-To-End Audio-Visual Speech Recognition with Conformers

TL;DR: In this article, a hybrid CTC/attention model based on a ResNet-18 and a Convolution-augmented Transformer (Conformer) is proposed for sentence-level speech recognition.
Proceedings Article (DOI)

Lipreading Using Temporal Convolutional Networks

TL;DR: In this article, the BGRU layers are replaced with Temporal Convolutional Networks (TCN), which greatly simplifies the training procedure and allows the model to be trained in a single stage.
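
As background for the citation above (a generic sketch, not the cited paper's architecture), a TCN is built from stacks of dilated 1D convolutions with residual connections; the channel count, kernel size, and dilations below are assumptions.

```python
# Generic temporal convolutional block (illustrative; not the cited paper's code).
import torch
import torch.nn as nn

class TemporalBlock(nn.Module):
    def __init__(self, channels=256, kernel_size=3, dilation=1):
        super().__init__()
        pad = (kernel_size - 1) // 2 * dilation  # keeps the temporal length fixed
        self.conv1 = nn.Conv1d(channels, channels, kernel_size,
                               padding=pad, dilation=dilation)
        self.conv2 = nn.Conv1d(channels, channels, kernel_size,
                               padding=pad, dilation=dilation)
        self.relu = nn.ReLU()

    def forward(self, x):          # x: (batch, channels, time)
        y = self.conv2(self.relu(self.conv1(x)))
        return self.relu(x + y)    # residual connection

# Stacking blocks with growing dilation widens the temporal receptive field.
tcn = nn.Sequential(*[TemporalBlock(dilation=2 ** i) for i in range(3)])
out = tcn(torch.randn(4, 256, 29))  # e.g. a batch of 29-frame sequences
```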
Posted Content

Lipreading using Temporal Convolutional Networks

TL;DR: It is shown that the current state-of-the-art methodology produces models that do not generalize well to variations in sequence length, and this work addresses the issue by proposing a variable-length augmentation.
Proceedings Article (DOI)

Recurrent Neural Network Transducer for Audio-Visual Speech Recognition

TL;DR: This work presents a large-scale audio-visual speech recognition system based on a recurrent neural network transducer (RNN-T) architecture and significantly improves the state-of-the-art on the LRS3-TED set.
Journal Article (DOI)

End-to-End Audiovisual Speech Recognition System With Multitask Learning

TL;DR: A novel end-to-end, multitask learning (MTL), audiovisual ASR (AV-ASR) system is presented that considers the temporal dynamics within and across modalities, providing an appealing and practical fusion scheme.
References
Proceedings Article

Adam: A Method for Stochastic Optimization

TL;DR: This work introduces Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments, and provides a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework.
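
For reference, the Adam update from the cited paper maintains exponentially decaying averages of the gradient and its elementwise square, with bias-corrected estimates driving the parameter step:

```latex
m_t = \beta_1 m_{t-1} + (1-\beta_1)\, g_t, \qquad
v_t = \beta_2 v_{t-1} + (1-\beta_2)\, g_t^2,
\hat{m}_t = \frac{m_t}{1-\beta_1^t}, \qquad
\hat{v}_t = \frac{v_t}{1-\beta_2^t}, \qquad
\theta_t = \theta_{t-1} - \alpha\, \frac{\hat{m}_t}{\sqrt{\hat{v}_t} + \epsilon}
```

The paper suggests the defaults α = 0.001, β₁ = 0.9, β₂ = 0.999, and ε = 10⁻⁸.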
Proceedings Article (DOI)

Librispeech: An ASR corpus based on public domain audio books

TL;DR: It is shown that acoustic models trained on LibriSpeech give a lower error rate on the Wall Street Journal (WSJ) test sets than models trained on WSJ itself.
Proceedings Article

Multimodal Deep Learning

TL;DR: This work presents a series of tasks for multimodal learning, shows how to train deep networks that learn features to address these tasks, and demonstrates cross-modality feature learning, where better features for one modality can be learned if multiple modalities are present at feature-learning time.
Book

Speech Enhancement: Theory and Practice

TL;DR: Clear and concise, this book explores how human listeners compensate for acoustic noise in noisy environments and suggests steps that can be taken to realize the full potential of speech enhancement algorithms under realistic conditions.
Journal Article (DOI)

Assessment for automatic speech recognition II: NOISEX-92: a database and an experiment to study the effect of additive noise on speech recognition systems

TL;DR: NOISEX-92 specifies a carefully controlled experiment on artificially noisy speech data, examining performance on a limited digit recognition task but with a relatively wide range of noises and signal-to-noise ratios.