Topic

TIMIT

About: TIMIT is a research topic. Over its lifetime, 1,401 publications have been published within this topic, receiving 59,888 citations. The topic is also known as: TIMIT Acoustic-Phonetic Continuous Speech Corpus.


Papers
Journal ArticleDOI
TL;DR: The results show that the best SV accuracy was obtained with the MFCC + pH feature fusion combined with MS/SS speech enhancement and colored-noise multi-style training (Colored-MT).
Abstract: This paper investigates the fusion of Mel-frequency cepstral coefficients (MFCC) and statistical pH features to improve the performance of speaker verification (SV) in non-stationary noise conditions. The α-integrated Gaussian Mixture Model (α-GMM) classifier is adopted for speaker modeling. Two different approaches are applied to reduce the effects of noise corruption in the SV task: speech enhancement and multi-style training (MT). Spectral subtraction with minimum statistics (MS/SS) and the optimally-modified log-spectral amplitude with improved minima controlled recursive averaging (IMCRA/OMLSA) are examined for the speech enhancement procedure. The MT techniques are based on colored (Colored-MT), white (White-MT) and narrow-band (Narrow-MT) noises. Six real non-stationary noises, collected from different acoustic sources, are used to corrupt the TIMIT speech database at four different signal-to-noise ratios (SNRs). The index of non-stationarity (INS) is chosen for the stationarity tests of the acoustic noises. Complementary SV experiments are conducted in realistic noisy conditions using the MIT database. The results show that the best SV accuracy was obtained with the MFCC + pH feature fusion combined with MS/SS enhancement and Colored-MT.

18 citations
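For readers unfamiliar with feature-level fusion and GMM-based speaker verification, the sketch below illustrates the general pipeline under stated assumptions. It is not the authors' implementation: the α-integrated GMM and the pH (Hurst-based) statistics are not reproduced; a standard scikit-learn GaussianMixture and a placeholder per-frame statistic (RMS energy) stand in for them, and the function names are illustrative.

```python
# Hedged sketch: feature-level fusion of MFCCs with an auxiliary per-frame
# statistic, scored by per-speaker GMMs against a background model.
import numpy as np
import librosa
from sklearn.mixture import GaussianMixture

def fused_features(wav_path, n_mfcc=13):
    """Concatenate MFCCs with an auxiliary per-frame statistic (feature-level fusion)."""
    y, sr = librosa.load(wav_path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)  # shape (n_mfcc, T)
    aux = librosa.feature.rms(y=y)                          # shape (1, T); placeholder for the pH statistics
    return np.vstack([mfcc, aux]).T                         # shape (T, n_mfcc + 1)

def train_speaker_model(wav_paths, n_components=32):
    """Fit one GMM per enrolled speaker on fused features (stand-in for the alpha-GMM)."""
    X = np.vstack([fused_features(p) for p in wav_paths])
    return GaussianMixture(n_components=n_components, covariance_type="diag").fit(X)

def verify(wav_path, claimed_model, background_model, threshold=0.0):
    """Accept the identity claim if the log-likelihood ratio exceeds a threshold."""
    X = fused_features(wav_path)
    llr = claimed_model.score(X) - background_model.score(X)
    return llr > threshold
```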

Proceedings ArticleDOI
05 Jun 2000
TL;DR: The novel approach here is to encode the clean models using principal component analysis (PCA) and pre-compute the prototype vectors and matrices for the means and covariances in the linear spectral domain using rectangular DCT and inverse DCT matrices.
Abstract: This paper describes an algorithm that reduces the computational complexity of the parallel model combination (PMC) method for robust speech recognition while retaining the same level of performance. Although PMC is effective in composing a noise-corrupted acoustic model from clean speech and noise models, its intense computational complexity limits its use in real-time applications. The novel approach here is to encode the clean models using principal component analysis (PCA) and pre-compute the prototype vectors and matrices for the means and covariances in the linear spectral domain using rectangular DCT and inverse DCT matrices. Transformation into the linear spectral domain is therefore reduced to finding the projection of each vector in the eigenspace of the means and covariances, followed by a linear combination of the vectors and matrices obtained from the projections. Furthermore, the eigenspace allows a better trade-off between computational complexity and accuracy. The computational savings are demonstrated both analytically and through experimental evaluations. Experiments on context-independent phone recognition with TIMIT data show that the new PMC framework outperforms the baseline method by a factor of 1.9 while achieving the same level of accuracy.

18 citations
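A minimal sketch of the underlying PMC transformation may help here. It only shows the domain changes (cepstral to log-spectral via the inverse DCT, exponentiation into the linear spectral domain, noise addition, and the return trip) that the paper's PCA encoding is designed to accelerate; the covariance combination, the rectangular (truncated) transforms, and the eigenspace projection itself are omitted, and the square scipy DCTs are a simplifying assumption.

```python
# Hedged sketch of log-normal PMC for the mean vectors only.
import numpy as np
from scipy.fftpack import dct, idct

def pmc_combine_means(clean_cep_mean, noise_cep_mean, gain=1.0):
    """Combine a clean-speech cepstral mean with a noise cepstral mean (log-normal approximation)."""
    # Cepstral -> log filter-bank domain (inverse DCT), then exponentiate to linear spectra.
    clean_lin = np.exp(idct(clean_cep_mean, type=2, norm='ortho'))
    noise_lin = np.exp(idct(noise_cep_mean, type=2, norm='ortho'))
    # Additive-noise assumption holds in the linear spectral domain.
    noisy_lin = clean_lin + gain * noise_lin
    # Back to the cepstral domain: log, then DCT.
    return dct(np.log(noisy_lin), type=2, norm='ortho')
```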

Journal ArticleDOI
TL;DR: The classical problem of delta feature computation is addressed and interpreted as Savitzky-Golay (SG) filtering with a fixed impulse response, which brings significantly lower latency with no loss in accuracy.
Abstract: We address the classical problem of delta feature computation, and interpret the operation involved in terms of Savitzky-Golay (SG) filtering. Features such as the mel-frequency cepstral coefficients (MFCCs), obtained based on short-time spectra of the speech signal, are commonly used in speech recognition tasks. In order to incorporate the dynamics of speech, auxiliary delta and delta-delta features, which are computed as temporal derivatives of the original features, are used. Typically, the delta features are computed in a smooth fashion using local least-squares (LS) polynomial fitting on each feature vector component trajectory. In the light of the original work of Savitzky and Golay, and a recent article by Schafer in IEEE Signal Processing Magazine, we interpret the dynamic feature vector computation for arbitrary derivative orders as SG filtering with a fixed impulse response. This filtering equivalence brings in significantly lower latency with no loss in accuracy, as validated by results on a TIMIT phoneme recognition task. The SG filters involved in dynamic parameter computation can be viewed as modulation filters, proposed by Hermansky.

18 citations
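The equivalence claimed above is easy to verify numerically. The sketch below, assuming scipy is available, computes deltas with the standard HTK-style regression formula and with scipy.signal.savgol_filter; away from the boundary frames (where only the padding conventions differ), the two agree.

```python
# Deltas as local least-squares slopes vs. Savitzky-Golay first-derivative filtering.
import numpy as np
from scipy.signal import savgol_filter

def delta_regression(feats, N=2):
    """HTK-style deltas: feats has shape (T, D); the regression window spans 2N+1 frames."""
    T, _ = feats.shape
    padded = np.pad(feats, ((N, N), (0, 0)), mode='edge')
    denom = 2 * sum(n * n for n in range(1, N + 1))
    return sum(n * (padded[N + n:N + n + T] - padded[N - n:N - n + T])
               for n in range(1, N + 1)) / denom

def delta_savgol(feats, N=2):
    """The same deltas as SG filtering (first derivative) along the time axis."""
    return savgol_filter(feats, window_length=2 * N + 1, polyorder=1, deriv=1, axis=0)

feats = np.random.randn(100, 13)          # e.g. 100 frames of 13 MFCCs
d1, d2 = delta_regression(feats), delta_savgol(feats)
assert np.allclose(d1[2:-2], d2[2:-2])    # interior frames agree
```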

Book ChapterDOI
TL;DR: This work proposes an iterative speaker pruning algorithm for speeding up identification in real-time systems by dropping out unlikely speakers as more data arrives in the processing buffer.
Abstract: Speaker identification is a computationally expensive task. In this work, we propose an iterative speaker pruning algorithm for speeding up identification in the context of real-time systems. The proposed algorithm reduces the computational load by dropping out unlikely speakers as more data arrives in the processing buffer. The process is repeated until just one speaker is left in the candidate set. Care must be taken in designing the pruning heuristics so that the correct speaker is not pruned. Two variants of the pruning algorithm are presented, and simulations with the TIMIT corpus show that an error rate of 10% can be achieved in 10 seconds for 630 speakers.

18 citations
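The pruning loop itself can be sketched in a few lines. The code below is an illustration, not the chapter's algorithm: the function and parameter names (identify_with_pruning, prune_fraction) are hypothetical, and any model exposing a per-frame log-likelihood (for example sklearn's GaussianMixture.score_samples) can play the role of the speaker models.

```python
# Hedged sketch: candidates accumulate log-likelihoods as frame blocks arrive,
# and the lowest-scoring fraction is dropped each step until one speaker remains.
def identify_with_pruning(frame_stream, speaker_models, prune_fraction=0.25):
    """frame_stream yields blocks of feature frames, shape (n, D);
    speaker_models maps speaker id -> model with a score_samples(X) method."""
    candidates = {spk: 0.0 for spk in speaker_models}
    for block in frame_stream:
        # Score the new block for every surviving candidate (same data for all).
        for spk in candidates:
            candidates[spk] += speaker_models[spk].score_samples(block).sum()
        if len(candidates) == 1:
            break
        # Prune the lowest-scoring candidates, always keeping at least one.
        n_drop = max(1, int(prune_fraction * len(candidates)))
        ranked = sorted(candidates, key=candidates.get)
        for spk in ranked[:min(n_drop, len(candidates) - 1)]:
            del candidates[spk]
    return max(candidates, key=candidates.get)
```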

Proceedings ArticleDOI
19 Apr 2015
TL;DR: Evaluated in a hybrid neural network - hidden Markov model (NN-HMM) system on the TIMIT phoneme sequence recognition task, the cochleogram-spectrogram feature combination is shown to provide significant advantages.
Abstract: This paper explores the use of auditory features based on cochleograms, two-dimensional speech features derived from gammatone filters, within the convolutional neural network (CNN) framework. Furthermore, we also propose various ways to combine cochleogram features with log-mel filter banks or spectrogram features. In particular, we combine them at low and high levels of the CNN framework, which we refer to as low-level and high-level feature combination. As a comparison, we also construct a similar configuration with a deep neural network (DNN). Performance was evaluated in a hybrid neural network - hidden Markov model (NN-HMM) system on the TIMIT phoneme sequence recognition task. The results reveal that the cochleogram-spectrogram feature combination provides significant advantages. The best accuracy was obtained by high-level combination of two-dimensional cochleogram-spectrogram features using the CNN, achieving up to an 8.2% relative phoneme error rate (PER) reduction over CNN single features and a 19.7% relative PER reduction over DNN single features.

18 citations
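The high-level combination idea can be made concrete with a small two-branch network. The PyTorch sketch below is not the authors' architecture: the layer sizes, the input patch shape (a 40-band, 11-frame time-frequency patch per representation) and the 39 phone classes are illustrative assumptions; it only shows feature maps from a cochleogram branch and a spectrogram branch being concatenated before the fully connected classifier that would feed an NN-HMM hybrid.

```python
# Hedged sketch of "high-level" feature combination with two CNN branches.
import torch
import torch.nn as nn

class TwoStreamCNN(nn.Module):
    def __init__(self, n_classes=39):
        super().__init__()
        def branch():
            return nn.Sequential(
                nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d((4, 4)),
            )
        self.cochleogram_branch = branch()
        self.spectrogram_branch = branch()
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(2 * 64 * 4 * 4, 512), nn.ReLU(),
            nn.Linear(512, n_classes),
        )

    def forward(self, cochleogram, spectrogram):
        # Each input: (batch, 1, freq_bins, context) time-frequency patch.
        h = torch.cat([self.cochleogram_branch(cochleogram),
                       self.spectrogram_branch(spectrogram)], dim=1)
        return self.classifier(h)  # per-frame phone posteriors for the NN-HMM hybrid

# Example: a batch of 8 frame-centered patches from each representation.
model = TwoStreamCNN()
logits = model(torch.randn(8, 1, 40, 11), torch.randn(8, 1, 40, 11))
```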


Network Information
Related Topics (5)
Recurrent neural network: 29.2K papers, 890K citations, 76% related
Feature (machine learning): 33.9K papers, 798.7K citations, 75% related
Feature vector: 48.8K papers, 954.4K citations, 74% related
Natural language: 31.1K papers, 806.8K citations, 73% related
Deep learning: 79.8K papers, 2.1M citations, 72% related
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2023    24
2022    62
2021    67
2020    86
2019    77
2018    95