Open Access · Posted Content
Towards Building ASR Systems for the Next Billion Users.
Tahir Javed, Sumanth Doddapaneni, Abhigyan Raman, Kaushal Santosh Bhogale, G. Ramesh, Anoop Kunchukuttan, Pratyush Kumar, Mitesh M. Khapra, et al.
TLDR
This article used 17,000 hours of raw speech data for 40 Indian languages from a wide variety of domains including education, news, technology, and finance to build ASR systems for low-resource languages from the Indian subcontinent.
Abstract
Recent methods in speech and language technology pretrain very large models which are fine-tuned for specific tasks. However, the benefits of such large models are often limited to a few resource-rich languages of the world. In this work, we make multiple contributions towards building ASR systems for low-resource languages from the Indian subcontinent. First, we curate 17,000 hours of raw speech data for 40 Indian languages from a wide variety of domains including education, news, technology, and finance. Second, using this raw speech data we pretrain several variants of wav2vec-style models for 40 Indian languages. Third, we analyze the pretrained models to find key features: codebook vectors of similar-sounding phonemes are shared across languages, representations across layers are discriminative of the language family, and attention heads often pay attention within small local windows. Fourth, we fine-tune this model for downstream ASR for 9 languages and obtain state-of-the-art results on 3 public datasets, including on very low-resource languages such as Sinhala and Nepali. Our work establishes that multilingual pretraining is an effective strategy for building ASR systems for the linguistically diverse speakers of the Indian subcontinent.
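The core of the wav2vec-style pretraining the abstract refers to is a contrastive objective: the transformer's context vector at a masked timestep must identify the true quantized latent among distractors. A minimal NumPy sketch of such an InfoNCE-style loss (toy dimensions and in-batch negatives; not the authors' exact implementation) looks like:

```python
import numpy as np

def contrastive_loss(context, quantized, temperature=0.1):
    """InfoNCE-style loss as in wav2vec 2.0 pretraining (toy sketch).

    context:   (T, D) transformer outputs at masked timesteps
    quantized: (T, D) quantized latent targets; for each timestep t the
               true target is quantized[t], the other rows act as negatives.
    """
    # Cosine similarity between every context vector and every target
    c = context / np.linalg.norm(context, axis=1, keepdims=True)
    q = quantized / np.linalg.norm(quantized, axis=1, keepdims=True)
    sim = c @ q.T / temperature                                  # (T, T)
    # Cross-entropy with the diagonal (true target) as the label
    log_probs = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

rng = np.random.default_rng(0)
ctx = rng.normal(size=(8, 16))
loss_random = contrastive_loss(ctx, rng.normal(size=(8, 16)))
loss_aligned = contrastive_loss(ctx, ctx)  # perfectly predictive context
```

Because the context vectors match their targets exactly in the second call, `loss_aligned` comes out far lower than `loss_random`, which is what drives the model to learn predictive speech representations.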
Citations
Proceedings Article
Exploring a Unified ASR for Multiple South Indian Languages Leveraging Multilingual Acoustic and Language Models
TL;DR: This paper proposes a single automatic speech recognition (ASR) model for several south Indian languages using a common set of intermediary labels, which can be easily mapped to the desired native script through simple lookup tables and a few rules.
Proceedings Article
Accidental Learners: Spoken Language Identification in Multilingual Self-Supervised Models
TL;DR: This paper uses a pre-trained Conformer model from a multilingual pre-training paradigm for spoken language identification, achieving state-of-the-art results.
Proceedings Article
Effectiveness of Mining Audio and Text Pairs from Public Data for Improving ASR Systems for Low-Resource Languages
TL;DR: This paper introduces Shrutilipi, a dataset mined from public data that contains over 6,400 hours of labelled audio (3.3M sentences) across 12 Indian languages, for improving ASR for low-resource languages.
Book Chapter
A Multi-modal Approach to Mining Intent from Code-Mixed Hindi-English Calls in the Hyperlocal-Delivery Domain
Jose Mathew, Pranjal Sahu, Bhavuk Singhal, Aniket Joshi, Krishna Reddy Medikonda, Jairaj Sathyanarayana, et al.
Proceedings Article
Towards Building Text-to-Speech Systems for the Next Billion Users
TL;DR: In this paper, the authors evaluate the choice of acoustic models, vocoders, supplementary loss functions, training schedules, and speaker and language diversity for building text-to-speech systems for Dravidian and Indo-Aryan languages.
References
Proceedings Article
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
TL;DR: BERT pre-trains deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers; the pre-trained model can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks.
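The masked-language-model corruption that BERT-style pretraining relies on can be sketched in a few lines. This is a simplified illustration (the function name, 15% rate, and 80/10/10 split follow the BERT paper; the toy vocabulary is hypothetical):

```python
import random

def mask_tokens(tokens, vocab, mask_prob=0.15, seed=0):
    """BERT-style masked-LM corruption (sketch): select ~15% of positions
    as prediction targets; replace 80% of those with [MASK], 10% with a
    random vocabulary token, and leave 10% unchanged.
    Returns (corrupted token list, list of target positions)."""
    rng = random.Random(seed)
    out, targets = list(tokens), []
    for i in range(len(tokens)):
        if rng.random() < mask_prob:
            targets.append(i)
            r = rng.random()
            if r < 0.8:
                out[i] = "[MASK]"
            elif r < 0.9:
                out[i] = rng.choice(vocab)
            # else: keep the original token (model must still predict it)
    return out, targets

toks = "the model predicts the hidden words from both directions".split()
corrupted, targets = mask_tokens(toks, vocab=toks)
```

Keeping 10% of target tokens unchanged is what forces the encoder to produce useful representations for every position, not just the ones showing `[MASK]`.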
Proceedings Article
Connectionist temporal classification: labelling unsegmented sequence data with recurrent neural networks
TL;DR: This paper presents a novel method for training RNNs to label unsegmented sequences directly, thereby solving both the sequence-learning and the post-processing problems.
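The decoding rule that makes CTC work, collapsing a per-frame best path into an output sequence, is easy to sketch (greedy best-path decoding only; beam search over the full CTC lattice is omitted):

```python
def ctc_greedy_decode(frame_labels, blank="-"):
    """Collapse a per-frame best-path labelling into an output string:
    merge consecutive repeated labels, then drop blanks (the CTC rule)."""
    out, prev = [], None
    for label in frame_labels:
        if label != prev and label != blank:
            out.append(label)
        prev = label
    return "".join(out)

# Per-frame argmax over the network's softmax outputs for the word "cat":
assert ctc_greedy_decode(list("cc-aa-tt")) == "cat"
# A blank between repeats is what lets CTC emit doubled letters:
assert ctc_greedy_decode(list("hh-e-ll-l-oo")) == "hello"
```

The blank symbol is what frees the network from needing pre-segmented training data: any alignment of the label sequence over the frames collapses to the same output.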
Proceedings Article
Librispeech: An ASR corpus based on public domain audio books
TL;DR: It is shown that acoustic models trained on LibriSpeech give a lower error rate on the Wall Street Journal (WSJ) test sets than models trained on WSJ itself.
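The error-rate comparisons such benchmarks report are word error rates, a word-level edit distance normalized by the reference length. A minimal self-contained implementation (real evaluations typically use a library and text normalization, omitted here):

```python
def wer(reference, hypothesis):
    """Word error rate: Levenshtein distance over words (substitutions,
    insertions, deletions) divided by the number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # Standard dynamic-programming edit distance table
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / len(ref)

# One deleted word out of six reference words -> WER of 1/6
print(wer("the cat sat on the mat", "the cat sat on mat"))
```

Note the metric can exceed 1.0 when the hypothesis contains many insertions, which is why it is reported as a rate rather than an accuracy.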
Proceedings Article
Unsupervised Cross-lingual Representation Learning at Scale
Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, Veselin Stoyanov, et al.
TL;DR: It is shown that pretraining multilingual language models at scale leads to significant performance gains for a wide range of cross-lingual transfer tasks, and the possibility of multilingual modeling without sacrificing per-language performance is shown for the first time.
Proceedings Article
SpecAugment: A Simple Data Augmentation Method for Automatic Speech Recognition
TL;DR: This work presents SpecAugment, a simple data augmentation method for speech recognition that is applied directly to the feature inputs of a neural network (i.e., filter bank coefficients) and achieves state-of-the-art performance on the LibriSpeech 960h and Switchboard 300h tasks, outperforming all prior work.
Related Papers (5)
Multilingual and code-switching ASR challenges for low resource Indian languages.
Anuj Diwan, Rakesh Vaideeswaran, Sanket Shah, Ankita Singh, K M Srinivasa Raghavan, Shreya Khare, Vinit Unni, Saurabh Vyas, Akash Rajpuria, Chiranjeevi Yarra, Ashish Mittal, Prasanta Kumar Ghosh, Preethi Jyothi, Kalika Bali, Vivek Seshadri, Sunayana Sitaram, Samarth Bharadwaj, Jai Nanavati, Raoul Nanavati, Karthik Sankaranarayanan, Tejaswi Seeram, Basil Abraham, et al.