Open Access · Posted Content

Towards Building ASR Systems for the Next Billion Users.

TLDR
This article curates 17,000 hours of raw speech data in 40 Indian languages, spanning domains such as education, news, technology, and finance, and uses it to build ASR systems for low-resource languages of the Indian subcontinent.
Abstract
Recent methods in speech and language technology pretrain very large models which are fine-tuned for specific tasks. However, the benefits of such large models are often limited to a few resource-rich languages of the world. In this work, we make multiple contributions towards building ASR systems for low-resource languages from the Indian subcontinent. First, we curate 17,000 hours of raw speech data for 40 Indian languages from a wide variety of domains including education, news, technology, and finance. Second, using this raw speech data we pretrain several variants of wav2vec style models for 40 Indian languages. Third, we analyze the pretrained models to find key features: codebook vectors of similar sounding phonemes are shared across languages, representations across layers are discriminative of the language family, and attention heads often pay attention within small local windows. Fourth, we fine-tune this model for downstream ASR for 9 languages and obtain state-of-the-art results on 3 public datasets, including on very low-resource languages such as Sinhala and Nepali. Our work establishes that multilingual pretraining is an effective strategy for building ASR systems for the linguistically diverse speakers of the Indian subcontinent.
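The abstract's third finding, that codebook vectors of similar sounding phonemes are shared across languages, can be illustrated with a toy sketch. This is not the paper's code: wav2vec-style models quantize frame representations to discrete codebook entries, and the check below mimics the analysis with synthetic vectors. All data, dimensions, and the `nearest_code` helper are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
codebook = rng.normal(size=(8, 16))  # 8 hypothetical codebook vectors, dim 16

def nearest_code(frame, codebook):
    """Index of the codebook vector with highest cosine similarity to `frame`."""
    sims = codebook @ frame / (np.linalg.norm(codebook, axis=1) * np.linalg.norm(frame))
    return int(np.argmax(sims))

# Two synthetic "phoneme" frames from different languages, built as noisy
# copies of the same codebook entry, stand in for similar sounding phonemes.
hindi_frame = codebook[3] + 0.05 * rng.normal(size=16)
tamil_frame = codebook[3] + 0.05 * rng.normal(size=16)

# Both frames quantize to the same shared entry (index 3).
print(nearest_code(hindi_frame, codebook), nearest_code(tamil_frame, codebook))
```

In the paper's actual analysis the frames would come from a pretrained quantizer's outputs rather than synthetic perturbations; the nearest-entry assignment logic is the same.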


Citations
Proceedings Article

Exploring a Unified ASR for Multiple South Indian Languages Leveraging Multilingual Acoustic and Language Models

TL;DR: This paper builds a single automatic speech recognition (ASR) model for several South Indian languages using a common set of intermediary labels, which can be easily mapped to the desired native script through simple lookup tables and a few rules.
Proceedings Article

Accidental Learners: Spoken Language Identification in Multilingual Self-Supervised Models

TL;DR: This paper uses a pre-trained Conformer model for spoken language identification within a multilingual pre-training paradigm, achieving state-of-the-art results.
Proceedings Article

Effectiveness of Mining Audio and Text Pairs from Public Data for Improving ASR Systems for Low-Resource Languages

TL;DR: Shrutilipi is a dataset containing over 6,400 hours of labelled audio and 3.3M sentences across 12 Indian languages, mined from public data to improve ASR for low-resource languages.
Proceedings Article

Towards Building Text-to-Speech Systems for the Next Billion Users

TL;DR: The authors evaluate the choice of acoustic models, vocoders, supplementary loss functions, training schedules, and speaker and language diversity for building text-to-speech systems for Dravidian and Indo-Aryan languages.
References
Proceedings Article

BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding

TL;DR: BERT pre-trains deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers, and can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks.
Proceedings Article

Connectionist temporal classification: labelling unsegmented sequence data with recurrent neural networks

TL;DR: This paper presents a novel method for training RNNs to label unsegmented sequences directly, removing the need for pre-segmented training data and post-processing of outputs.
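The core decoding rule behind CTC can be shown in a few lines. This is a minimal sketch of greedy (best-path) decoding only, assuming per-frame argmax labels are already available: collapse runs of repeated labels, then drop the blank symbol. The network and training loss from the paper are omitted.

```python
BLANK = "-"  # conventional CTC blank symbol (represented here as a dash)

def ctc_collapse(frame_labels):
    """Collapse repeated labels, then remove blanks (CTC best-path decoding)."""
    out = []
    prev = None
    for lab in frame_labels:
        if lab != prev and lab != BLANK:  # keep first of each run, skip blanks
            out.append(lab)
        prev = lab
    return "".join(out)

# Per-frame argmax labels for a short utterance; note the blank between the
# two l's, which is what allows a genuinely doubled letter to survive collapse.
print(ctc_collapse(list("--hh-e-l-l-o-")))  # → "hello"
print(ctc_collapse(list("ll")))             # → "l" (no blank, so the run collapses)
```

This collapse rule is why CTC needs no frame-level alignment: many frame labelings map to the same output string, and training sums over all of them.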
Proceedings Article

Librispeech: An ASR corpus based on public domain audio books

TL;DR: It is shown that acoustic models trained on LibriSpeech give a lower error rate on the Wall Street Journal (WSJ) test sets than models trained on WSJ itself.
Proceedings Article

Unsupervised Cross-lingual Representation Learning at Scale

TL;DR: It is shown that pretraining multilingual language models at scale leads to significant performance gains for a wide range of cross-lingual transfer tasks, and the possibility of multilingual modeling without sacrificing per-language performance is shown for the first time.
Proceedings Article

SpecAugment: A Simple Data Augmentation Method for Automatic Speech Recognition

TL;DR: This work presents SpecAugment, a simple data augmentation method for speech recognition that is applied directly to the feature inputs of a neural network (i.e., filter bank coefficients) and achieves state-of-the-art performance on the LibriSpeech 960h and Switchboard 300h tasks, outperforming all prior work.
Trending Questions
How many hours of annotated audio is required to build an asr system for an under-resourced language?

The paper does not prescribe a fixed amount of annotated audio: it curates 17,000 hours of raw (unlabelled) speech across 40 Indian languages for pretraining, then fine-tunes on smaller labelled datasets for downstream ASR in 9 languages.