Topic

TIMIT

About: TIMIT is a research topic. Over its lifetime, 1,401 publications have been published within this topic, receiving 59,888 citations. The topic is also known as: TIMIT Acoustic-Phonetic Continuous Speech Corpus.


Papers
Journal ArticleDOI
TL;DR: A novel technique is introduced that combines methods from these two categories, resolving their structural incompatibility while improving the accuracy of the combined methods.
Abstract: Two main categories of speech recognition robustness through missing data are spectral imputation and classifier modification. In this paper, we introduce a novel technique that combines methods from these two categories while improving the accuracy of the combined methods. Methods in these two categories are rarely employed together due to their incompatible structures. Based on our previous work, we propose a technique to solve this incompatibility, built on the idea of partial restoration of the log-spectrum: for each missing component, we decide whether to restore it or to estimate a possible range for it. We also propose a method to employ dynamic features more effectively. The combined techniques are a classic spectral imputation method and our previously proposed classifier-modification technique, spectral variance learning. The experiments show that the proposed technique significantly improves the accuracies of both combined techniques, with recognition-accuracy gains of nearly four percent on Aurora 2.0 data and more than two percent on a noisy version of TIMIT data.
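The partial-restoration idea can be illustrated with a minimal missing-data sketch (this is an illustration of the general technique, not the authors' implementation; the function name, the prior mean, and the floor value are assumptions). Reliable spectral bins are kept as observed; unreliable bins are imputed from a clean-speech prior, clipped to a feasible range whose upper end is the noisy observation, since under additive noise the clean log-energy cannot exceed the noisy one:

```python
import numpy as np

def partial_restore(noisy_logspec, reliable_mask, clean_prior_mean, floor=-10.0):
    """Keep reliable bins; for unreliable bins impute the clean value from
    a prior mean, clipped to the feasible range [floor, noisy observation]."""
    imputed = np.clip(clean_prior_mean, floor, noisy_logspec)
    return np.where(reliable_mask, noisy_logspec, imputed)

y = np.array([3.0, 1.0, 4.0, 2.0])           # noisy log-spectrum
mask = np.array([True, False, True, False])  # True = reliable bin
prior = np.array([2.5, 2.0, 3.5, 1.0])       # clean-speech prior mean
print(partial_restore(y, mask, prior))       # [3. 1. 4. 1.]
```

In bin 1 the prior (2.0) exceeds the noisy observation (1.0), so the estimate is capped at the upper bound rather than restored directly, which is the "estimate a possible range" branch of the decision.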

3 citations

Proceedings ArticleDOI
01 Jul 2019
TL;DR: In this article, the authors proposed Deep Vocoder, a direct end-to-end low bit rate speech compression method with deep autoencoder (DAE) for extracting the latent representing features (LRFs) of speech, which are then efficiently quantized by an analysis-by-synthesis vector quantization (AbS VQ) method.
Abstract: Inspired by the success of deep neural networks (DNNs) in speech processing, this paper presents Deep Vocoder, a direct end-to-end low bit rate speech compression method with deep autoencoder (DAE). In Deep Vocoder, DAE is used for extracting the latent representing features (LRFs) of speech, which are then efficiently quantized by an analysis-by-synthesis vector quantization (AbS VQ) method. AbS VQ aims to minimize the perceptual spectral reconstruction distortion rather than the distortion of the LRF vector itself. Also, a suboptimal codebook searching technique is proposed to further reduce the computational complexity. Experimental results demonstrate that Deep Vocoder yields substantial improvements in terms of frequency-weighted segmental SNR, STOI and PESQ score when compared to the output of the conventional SQ- or VQ-based codec. The yielded PESQ score over the TIMIT corpus is 3.34 and 3.08 for speech coding at 2400 bit/s and 1200 bit/s, respectively.
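The key distinction in AbS VQ is that codewords are compared after decoding, not in latent space. A toy sketch (the linear "decoder", codebook, and vectors below are invented for illustration; the paper's decoder is a trained DAE) shows how the two criteria can disagree:

```python
import numpy as np

def abs_vq(latent, codebook, decode):
    """Analysis-by-synthesis VQ: pick the codeword whose *decoded* output is
    closest to the decoded target, not the codeword closest to the latent."""
    target = decode(latent)
    dists = [np.sum((decode(c) - target) ** 2) for c in codebook]
    return int(np.argmin(dists))

# Toy linear decoder standing in for the DAE decoder network:
# the first latent dimension is amplified 3x in the output spectrum.
W = np.diag([3.0, 1.0])
decode = lambda z: W @ z

codebook = np.array([[1.0, 0.0], [0.0, 1.0]])
z = np.array([0.45, 0.4])

# Plain VQ minimizes latent-space distance...
plain = int(np.argmin([np.sum((c - z) ** 2) for c in codebook]))
# ...AbS VQ minimizes reconstruction distortion instead.
print(plain, abs_vq(z, codebook, decode))  # 0 1
```

Because the decoder weights errors in the first dimension more heavily, the codeword that is farther in latent space reconstructs the output with less distortion, which is exactly why the paper quantizes against spectral reconstruction error.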

3 citations

Journal ArticleDOI
Ha-Jin Yu, Y.-H. Oh
TL;DR: A non-uniform unit which can model phoneme variations caused by co-articulation spread over several phonemes and between words is introduced to neural networks for speaker-independent continuous speech recognition.

3 citations

Proceedings ArticleDOI
01 Oct 2013
TL;DR: The proposed joint optimization framework for learning the parameters of acoustic and language models using minimum classification error criterion can achieve significant reduction in phone, word and sentence error rates on both TIMIT and RM1 when compared with conventional parameter estimation approaches.
Abstract: In traditional models of speech recognition, acoustic and language models are treated independently and usually estimated separately, which may yield suboptimal recognition performance. In this paper, we propose a joint optimization framework for learning the parameters of acoustic and language models using the minimum classification error criterion. The joint optimization is performed over a decoding graph constructed using weighted finite-state transducers based on context-dependent hidden Markov models and trigram language models. To demonstrate the effectiveness of the proposed framework, two speech corpora, TIMIT and Resource Management (RM1), are used in the experiments. The preliminary experiments show that the proposed approach can achieve significant reductions in phone, word and sentence error rates on both TIMIT and RM1 when compared with conventional parameter estimation approaches.
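The minimum classification error (MCE) criterion used here is commonly built from a misclassification measure that contrasts the correct class score with a smoothed maximum over competitors, passed through a sigmoid to make it differentiable. A minimal sketch of that standard loss (not the paper's WFST-level implementation; the smoothing constants are assumptions):

```python
import numpy as np

def mce_loss(scores, correct, eta=1.0, gamma=1.0):
    """MCE sketch: d = -g_correct + soft-max of competitor scores,
    mapped through a sigmoid so gradient-based updates are possible."""
    g_c = scores[correct]
    competitors = np.delete(scores, correct)
    # log-sum-exp smoothing of the best competing score
    g_comp = np.log(np.mean(np.exp(eta * competitors))) / eta
    d = -g_c + g_comp
    return 1.0 / (1.0 + np.exp(-gamma * d))

scores = np.array([2.0, 1.0, 0.5])
print(mce_loss(scores, correct=0))  # well below 0.5: correct class wins
print(mce_loss(scores, correct=2))  # above 0.5: correct class loses
```

In the paper this measure is minimized jointly over acoustic- and language-model parameters through the WFST decoding graph, rather than per-classifier as in this sketch.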

3 citations

Proceedings ArticleDOI
22 Sep 2008
TL;DR: On the noisy TIMIT task, it is found that the acoustic and phonetic segmentation approaches offer significant improvements over two baseline methods used in the SUMMIT segment-based speech recognizer, a sinusoidal model method and a spectral change approach.
Abstract: In this paper, we compare speech recognition performance using broad phonetically- and acoustically-motivated units as a pre-processor in designing a novel noise-robust landmark detection and segmentation algorithm. We introduce a cluster evaluation method to measure acoustic unit cluster quality. On the noisy TIMIT task, we find that the acoustic and phonetic segmentation approaches offer significant improvements over two baseline methods used in the SUMMIT segment-based speech recognizer, a sinusoidal model method and a spectral change approach. In addition, we find that the acoustic method has much faster computation time in stationary noises, while the phonetic approach is faster in non-stationary noise conditions.
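The spectral-change baseline mentioned above follows a simple pattern: mark a landmark wherever the distance between adjacent spectral frames is a local maximum above a threshold. A minimal sketch of that general idea (the frame data, threshold, and peak-picking rule are illustrative assumptions, not SUMMIT's actual algorithm):

```python
import numpy as np

def spectral_change_landmarks(logspec, threshold):
    """Mark frame t as a landmark when the frame-to-frame spectral
    distance peaks locally and exceeds the threshold."""
    # Euclidean distance between consecutive spectral frames
    d = np.linalg.norm(np.diff(logspec, axis=0), axis=1)
    return [t + 1 for t in range(1, len(d) - 1)
            if d[t] > threshold and d[t] >= d[t - 1] and d[t] >= d[t + 1]]

# Five toy 2-bin log-spectral frames with one abrupt change at frame 2.
frames = np.array([[0.0, 0.0], [0.1, 0.0],
                   [2.0, 2.0], [2.1, 2.0], [2.0, 2.1]])
print(spectral_change_landmarks(frames, threshold=1.0))  # [2]
```

The paper's contribution is to replace this purely signal-level cue with broad acoustic or phonetic unit boundaries, which the authors show are both more noise-robust and, depending on the noise type, faster.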

3 citations


Network Information
Related Topics (5)
Recurrent neural network
29.2K papers, 890K citations
76% related
Feature (machine learning)
33.9K papers, 798.7K citations
75% related
Feature vector
48.8K papers, 954.4K citations
74% related
Natural language
31.1K papers, 806.8K citations
73% related
Deep learning
79.8K papers, 2.1M citations
72% related
Performance
Metrics
No. of papers in the topic in previous years
Year  Papers
2023  24
2022  62
2021  67
2020  86
2019  77
2018  95