Topic

TIMIT

About: TIMIT is a research topic. Over its lifetime, 1,401 publications have been published within this topic, receiving 59,888 citations. The topic is also known as: TIMIT Acoustic-Phonetic Continuous Speech Corpus.


Papers
Posted Content
TL;DR: An expectation-maximization (EM) based online CTC algorithm is introduced that enables unidirectional RNNs to learn sequences that are longer than the amount of unrolling and can also be trained to process an infinitely long input sequence without pre-segmentation or external reset.
Abstract: Connectionist temporal classification (CTC) based supervised sequence training of recurrent neural networks (RNNs) has shown great success in many machine learning areas, including end-to-end speech and handwritten character recognition. For CTC training, however, the RNN must be unrolled (or unfolded) to the length of the input sequence. This unrolling requires a lot of memory and hinders small-footprint implementations of online learning or adaptation. Furthermore, the lengths of training sequences are usually not uniform, which makes parallel training with multiple sequences inefficient on shared-memory models such as graphics processing units (GPUs). In this work, we introduce an expectation-maximization (EM) based online CTC algorithm that enables unidirectional RNNs to learn sequences that are longer than the amount of unrolling. The RNNs can also be trained to process an infinitely long input sequence without pre-segmentation or external reset. Moreover, the proposed approach allows efficient parallel training on GPUs. For evaluation, phoneme recognition and end-to-end speech recognition examples are presented on the TIMIT and Wall Street Journal (WSJ) corpora, respectively. Our online model achieves a 20.7% phoneme error rate (PER) on a very long input sequence generated by concatenating all 192 utterances in the TIMIT core test set. On WSJ, a network can be trained with only 64 steps of unrolling at the cost of a 4.5% relative increase in word error rate (WER).
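The paper's EM-based online CTC algorithm is not reproduced here, but the standard fully unrolled CTC training it improves on can be sketched in a few lines of PyTorch. The model size, feature dimension, and the 61-phone label set below are illustrative assumptions, not the paper's configuration.

```python
# Minimal sketch of a standard (fully unrolled) CTC training step in PyTorch.
# This is the conventional baseline the paper's EM-based online CTC improves on,
# not the paper's algorithm; model sizes and feature dimensions are assumptions.
import torch
import torch.nn as nn

NUM_PHONES = 61          # TIMIT phone set (one extra index is reserved for the CTC blank)
FEAT_DIM = 40            # e.g. 40-dim filterbank features (assumption)

class PhoneRecognizer(nn.Module):
    def __init__(self):
        super().__init__()
        self.rnn = nn.LSTM(FEAT_DIM, 256, num_layers=2, batch_first=True)
        self.out = nn.Linear(256, NUM_PHONES + 1)   # +1 for the blank symbol

    def forward(self, x):
        h, _ = self.rnn(x)               # unrolled over the full input length
        return self.out(h).log_softmax(-1)

model = PhoneRecognizer()
ctc = nn.CTCLoss(blank=NUM_PHONES)       # blank is the last class index
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# One toy batch: 4 utterances, 300 frames each, random phone targets of length 50.
feats = torch.randn(4, 300, FEAT_DIM)
targets = torch.randint(0, NUM_PHONES, (4, 50))
in_lens = torch.full((4,), 300, dtype=torch.long)
tgt_lens = torch.full((4,), 50, dtype=torch.long)

log_probs = model(feats).transpose(0, 1)          # CTCLoss expects (T, N, C)
loss = ctc(log_probs, targets, in_lens, tgt_lens)
opt.zero_grad()
loss.backward()                                   # gradients flow through the full unrolling
opt.step()
```

The memory cost the paper targets comes from the `backward()` call above, which must keep activations for every unrolled time step of the input.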

9 citations

Book Chapter
12 Aug 2002
TL;DR: This paper compares two approaches to using HMMs (hidden Markov models) to convert audio signals into a sequence of visemes, the generic face images corresponding to particular sounds.
Abstract: Visual images synchronized with audio signals can provide a user-friendly interface for man-machine interaction. Visual speech can be represented as a sequence of visemes, which are the generic face images corresponding to particular sounds. We use HMMs (hidden Markov models) to convert audio signals into a sequence of visemes. In this paper, we compare two approaches to using HMMs. In the first approach, an HMM is trained for each triviseme, which is a viseme with its left and right context, and the audio signals are directly recognized as a sequence of trivisemes. In the second approach, each triphone is modeled with an HMM, and a general triphone recognizer is used to produce a triphone sequence from the audio signals. The triviseme or triphone sequence is then converted to a viseme sequence. The performance of the two viseme recognition systems is evaluated on the TIMIT speech corpus.
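The second approach hinges on a many-to-one mapping from recognized phones to visemes. Below is a minimal Python sketch of such a mapping; the viseme classes and phone groupings are an illustrative assumption, not the inventory used in the paper.

```python
# Illustrative sketch of the many-to-one phone-to-viseme mapping used in the
# second approach: a triphone recognizer outputs phones, which are then collapsed
# to visemes. The viseme classes below are a common illustrative grouping, not
# necessarily the inventory used in the paper.
PHONE_TO_VISEME = {
    "p": "bilabial", "b": "bilabial", "m": "bilabial",
    "f": "labiodental", "v": "labiodental",
    "t": "alveolar", "d": "alveolar", "s": "alveolar", "z": "alveolar", "n": "alveolar",
    "k": "velar", "g": "velar", "ng": "velar",
    "iy": "spread_vowel", "ih": "spread_vowel", "eh": "spread_vowel",
    "aa": "open_vowel", "ae": "open_vowel",
    "uw": "rounded_vowel", "ow": "rounded_vowel",
    "sil": "neutral",
}

def phones_to_visemes(phones):
    """Map a recognized phone sequence to visemes and merge adjacent duplicates."""
    visemes = []
    for p in phones:
        v = PHONE_TO_VISEME.get(p, "neutral")    # unknown phones fall back to neutral
        if not visemes or visemes[-1] != v:      # collapse repeated visemes
            visemes.append(v)
    return visemes

print(phones_to_visemes(["sil", "b", "aa", "b", "iy", "sil"]))
# ['neutral', 'bilabial', 'open_vowel', 'bilabial', 'spread_vowel', 'neutral']
```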

9 citations

Proceedings Article
17 May 2004
TL;DR: Maximum model distance (MMD) training is applied to speaker identification together with a new strategy for selecting competitive speakers; the results show that identification performance can be improved greatly when the training data is limited.
Abstract: In this paper we apply maximum model distance (MMD) training to speaker identification and propose a new strategy for selecting competitive speakers. The traditional maximum-likelihood (ML) method uses only the utterances of each speaker to train that speaker's model, which can lead to a locally optimal solution. By maximizing the dissimilarities among similar speaker models, MMD adds discriminative capability to the training procedure and thereby improves identification performance. Based on the TIMIT corpus, we designed word and sentence experiments to evaluate the proposed training approach. The results show that identification performance can be improved greatly when the training data is limited.
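For context, here is a minimal sketch of the conventional ML-trained GMM speaker-identification baseline that MMD training is compared against, using scikit-learn. The MMD criterion itself is not implemented; the mixture size and the MFCC features are placeholders.

```python
# Minimal sketch of the conventional maximum-likelihood (ML) baseline the paper
# improves on: one GMM per speaker, identification by highest average log-likelihood.
# The MMD training criterion is not implemented here; MFCC extraction is assumed
# to have been done already (one feature matrix per utterance).
import numpy as np
from sklearn.mixture import GaussianMixture

def train_speaker_models(train_feats, n_components=16):
    """train_feats: dict speaker_id -> (num_frames, feat_dim) array of MFCCs."""
    models = {}
    for spk, feats in train_feats.items():
        gmm = GaussianMixture(n_components=n_components,
                              covariance_type="diag", random_state=0)
        gmm.fit(feats)                       # ML training via EM on this speaker's data only
        models[spk] = gmm
    return models

def identify(models, utterance_feats):
    """Return the speaker whose GMM gives the highest average frame log-likelihood."""
    scores = {spk: gmm.score(utterance_feats) for spk, gmm in models.items()}
    return max(scores, key=scores.get)

# Toy example with random stand-in "MFCC" features (2 speakers, 12-dim features).
rng = np.random.default_rng(0)
train = {"spk1": rng.normal(0.0, 1.0, (500, 12)),
         "spk2": rng.normal(2.0, 1.0, (500, 12))}
models = train_speaker_models(train, n_components=4)
test_utt = rng.normal(2.0, 1.0, (200, 12))
print(identify(models, test_utt))            # expected: 'spk2'
```

MMD training modifies the objective above so that each speaker's model is pushed away from its most confusable (competitive) speakers rather than fit to its own data alone.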

9 citations

Proceedings Article
04 Aug 2002
TL;DR: The proposed partial-correlation (PARCOR) coefficients scheme to model the cross areas of the several cylinders from the vocal tract can yield better identification performance than the conventional approach.
Abstract: In this work, we propose the partial-correlation (PARCOR) coefficients scheme to model the cross areas of the several cylinders from the vocal tract. By using the relationship of the acoustic impedance proportional to the reciprocal of cross areas, the ratios of cross areas between each neighboring cylinders are used to model a speaker's vocal tract. The autoregressive model (AR model) is performed on the speech residual signals, that are produced from the inverse vocal tract transform based on the PARCOR, to generate features. These features with the conventional features from the Mel-Frequency Cepstral Coefficient (MFCC) are used for the identification engine of the Gaussian Mixture Model (GMM). According to our computer analyses in the TIMIT speech database, the proposed system can yield better identification performance than the conventional approach.
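PARCOR (reflection) coefficients can be obtained from a frame's autocorrelation with the Levinson-Durbin recursion, and each coefficient implies a ratio between neighboring tube cross-sectional areas. The NumPy sketch below illustrates that computation only; the analysis order, windowing, and the sign convention of the area-ratio formula are assumptions, and the paper's full feature pipeline is not reproduced.

```python
# Sketch of computing PARCOR (reflection) coefficients for one speech frame with
# the Levinson-Durbin recursion, and converting them to vocal-tract area ratios.
# The sign convention of the area-ratio formula varies in the literature; the one
# used here is an assumption, as is the analysis order of 12.
import numpy as np

def autocorr(frame, order):
    """Biased autocorrelation r[0..order] of a windowed speech frame."""
    n = len(frame)
    return np.array([np.dot(frame[:n - k], frame[k:]) for k in range(order + 1)])

def levinson_parcor(r):
    """Levinson-Durbin recursion; returns the PARCOR (reflection) coefficients."""
    order = len(r) - 1
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    parcor = np.zeros(order)
    for i in range(1, order + 1):
        k = -np.dot(a[:i], r[i:0:-1]) / err      # new reflection coefficient
        parcor[i - 1] = k
        a[1:i + 1] = a[1:i + 1] + k * a[i - 1::-1]   # update the LPC polynomial
        err *= (1.0 - k * k)                     # prediction error shrinks each order
    return parcor

def area_ratios(parcor):
    """Ratios of neighboring tube cross-sectional areas implied by the PARCORs."""
    return (1.0 - parcor) / (1.0 + parcor)

frame = np.hamming(400) * np.random.randn(400)   # stand-in for a 25 ms frame at 16 kHz
r = autocorr(frame, order=12)
k = levinson_parcor(r)
print(k, area_ratios(k))
```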

9 citations

Posted Content
TL;DR: In this article, a Generative Adversarial Network (GAN) is proposed for completely unsupervised speech recognition learned from unlabeled speech only, in which a generator and a discriminator learn from each other iteratively to improve performance.
Abstract: Producing a large annotated speech corpus for training ASR systems remains difficult for more than 95% of the world's languages, which are low-resourced, but collecting a relatively large unlabeled data set for such languages is more achievable. This is why some initial efforts have been reported on completely unsupervised speech recognition learned from unlabeled data only, although with relatively high error rates. In this paper, we develop a Generative Adversarial Network (GAN) to achieve this purpose, in which a generator and a discriminator learn from each other iteratively to improve the performance. We further use a set of Hidden Markov Models (HMMs), iteratively refined from the machine-generated labels, to work in harmony with the GAN. Initial experiments on the TIMIT data set achieve a phone error rate of 33.1%, which is 8.5% lower than the previous state-of-the-art.
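The adversarial setup described above can be illustrated with a stripped-down PyTorch training loop in which a generator produces frame-wise phone posteriors from acoustic features and a discriminator tries to distinguish them from unpaired real phone sequences. All architectures, dimensions, and data below are toy placeholders, not the configuration used in the paper (which additionally adds HMM-based refinement of the generated labels).

```python
# Stripped-down sketch of the adversarial idea: a generator maps acoustic features
# to phone posteriors, and a discriminator tries to tell those posteriors apart from
# (unpaired) real phone sequences encoded one-hot. All sizes and data are toy
# placeholders, not the paper's actual setup.
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_PHONES, FEAT_DIM, SEQ_LEN = 48, 39, 60

generator = nn.Sequential(nn.Linear(FEAT_DIM, 128), nn.ReLU(),
                          nn.Linear(128, NUM_PHONES))          # frame-wise phone scores
discriminator = nn.Sequential(nn.Flatten(),
                              nn.Linear(SEQ_LEN * NUM_PHONES, 128), nn.ReLU(),
                              nn.Linear(128, 1))                # real/fake score per sequence

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

for step in range(100):
    feats = torch.randn(8, SEQ_LEN, FEAT_DIM)                   # unlabeled speech (toy)
    real_ids = torch.randint(0, NUM_PHONES, (8, SEQ_LEN))       # unpaired phone text (toy)
    real_onehot = F.one_hot(real_ids, NUM_PHONES).float()

    # Discriminator update: real phone sequences -> 1, generated posteriors -> 0.
    fake = generator(feats).softmax(-1)
    d_loss = bce(discriminator(real_onehot), torch.ones(8, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(8, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator update: produce posteriors the discriminator accepts as real.
    g_loss = bce(discriminator(generator(feats).softmax(-1)), torch.ones(8, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```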

9 citations


Network Information
Related Topics (5)
Recurrent neural network: 29.2K papers, 890K citations (76% related)
Feature (machine learning): 33.9K papers, 798.7K citations (75% related)
Feature vector: 48.8K papers, 954.4K citations (74% related)
Natural language: 31.1K papers, 806.8K citations (73% related)
Deep learning: 79.8K papers, 2.1M citations (72% related)
Performance Metrics
No. of papers in the topic in previous years:

Year  Papers
2023  24
2022  62
2021  67
2020  86
2019  77
2018  95