
TIMIT

About: TIMIT is a research topic. Over its lifetime, 1,401 publications have been published within this topic, receiving 59,888 citations. The topic is also known as: TIMIT Acoustic-Phonetic Continuous Speech Corpus.


Papers
Proceedings ArticleDOI
01 Dec 2013
TL;DR: This work treats vowels and semivowels as vowel-like regions (VLRs) and analyzes the spurious VLRs detected by a signal-processing method that uses excitation source information.
Abstract: This work treats vowels and semivowels as vowel-like regions. An analysis of the spurious vowel-like regions (VLRs) detected by a signal-processing method using excitation source information is presented. The limitation of excitation information in detecting some nasals and voiced consonants as non-VLRs is discussed. An attempt is made to reduce spurious VLRs compared with the existing signal-processing method for VLR detection [1]. A multi-class statistical phone classifier that classifies speech into broad vowel, consonant and silence categories is trained. The outputs of the classifier are suitably combined to obtain evidence for vowel-like regions, different broad categories of consonants, and silence regions. The output of the existing signal-processing method is compared with the different evidences from the statistical method, and the spurious regions are eliminated using the statistical evidence. Experimental studies conducted on the TIMIT and in-house databases demonstrate a significant reduction in spurious VLRs with only a small loss in VLR detection performance. A net gain of 4.21% and 7.71% in frame error rate is achieved for the TIMIT and in-house databases, respectively.

11 citations
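As a rough illustration of the evidence-combination step above, the sketch below keeps a detected vowel-like region only when the frame classifier's mean vowel posterior over that region agrees. This is a minimal sketch of the general idea, not the paper's implementation; the region format, the posterior layout, and the 0.5 threshold are illustrative assumptions.

```python
# Minimal sketch: prune signal-processing VLR detections using the
# posteriors of a statistical frame classifier. All names and the 0.5
# threshold are illustrative assumptions, not the paper's method.
import numpy as np

def prune_spurious_vlrs(vlr_regions, class_posteriors, vowel_idx=0, threshold=0.5):
    """Keep a (start, end) frame region only if the mean vowel posterior
    inside it exceeds `threshold`.

    vlr_regions      : list of (start_frame, end_frame) from the SP detector
    class_posteriors : (num_frames, num_classes) array from the classifier,
                       e.g. columns = [vowel, consonant, silence]
    """
    kept = []
    for start, end in vlr_regions:
        if class_posteriors[start:end, vowel_idx].mean() > threshold:
            kept.append((start, end))
    return kept

# Toy usage: three candidate regions, 100 frames, 3 classes.
rng = np.random.default_rng(0)
posteriors = rng.dirichlet(alpha=[1.0, 1.0, 1.0], size=100)
posteriors[20:40, 0] += 0.6                       # make one region strongly vowel-like
posteriors /= posteriors.sum(axis=1, keepdims=True)
print(prune_spurious_vlrs([(0, 10), (20, 40), (60, 80)], posteriors))
```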

Journal ArticleDOI
TL;DR: This paper improves upon previous work, finding that several different dimensionality reduction techniques (SVD, PARAFAC2, KLT), followed by a nonlinear transform provided by a multilayer perceptron, provide a significant gain in phone recognition accuracy on the TIMIT task.
Abstract: In recent studies, we and others have found that conditional random fields (CRFs) can be effectively used to perform phone classification and recognition tasks by combining non-Gaussian distributed representations of acoustic input. In previous work by I. Heintz ("Latent phonetic analysis: Use of singular value decomposition to determine features for CRF phone recognition," Proc. ICASSP, pp. 4541-4544, 2008), we experimented with combining phonological feature posterior estimators and phone posterior estimators within a CRF framework; we found that treating posterior estimates as terms in a "phoneme information retrieval" task allowed for a more effective use of multiple posterior streams than directly feeding these acoustic representations to the CRF recognizer. In this paper, we examine some of the design choices in our previous work and extend our results to up to six acoustic feature streams. We concentrate on feature design, rather than feature selection, to find the best way of combining features for introduction into a log-linear model. We improve upon our previous work, finding that several different dimensionality reduction techniques (SVD, PARAFAC2, KLT), followed by a nonlinear transform provided by a multilayer perceptron, provide a significant gain in phone recognition accuracy on the TIMIT task.

11 citations
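As a rough sketch of the feature-design pipeline reported above (linear dimensionality reduction followed by a nonlinear MLP transform), the snippet below chains scikit-learn's TruncatedSVD and MLPClassifier on synthetic data. It is a minimal illustration under assumed dimensions and layer sizes; the paper's PARAFAC2/KLT variants and CRF back end are not reproduced.

```python
# Minimal sketch: SVD dimensionality reduction followed by a nonlinear
# transform learned by a multilayer perceptron. Data is synthetic;
# feature and layer sizes are illustrative assumptions.
import numpy as np
from sklearn.decomposition import TruncatedSVD
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.random((500, 120))    # e.g. 500 frames x 120 stacked posterior features
y = rng.integers(0, 39, 500)  # 39 phone classes, as in common TIMIT setups

pipeline = make_pipeline(
    TruncatedSVD(n_components=40),                            # linear reduction
    MLPClassifier(hidden_layer_sizes=(100,), max_iter=300),   # nonlinear transform
)
pipeline.fit(X, y)
print(pipeline.predict(X[:5]))
```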

Journal ArticleDOI
TL;DR: A conditional generative model is designed to model the joint and symmetric conditions of both noisy and estimated clean spectra, leading to better PESQ and STOI in all tested noise conditions.
Abstract: Deep learning-based speech enhancement approaches such as deep neural networks (DNN) and Long Short-Term Memory (LSTM) have already demonstrated superior results to classical methods. However, these methods do not take full advantage of temporal context information. While DNN and LSTM consider temporal context in the noisy source speech, they do not do so for the estimated clean speech. Both DNN and LSTM also have a tendency to over-smooth spectra, which causes the enhanced speech to sound muffled. This paper proposes a novel architecture to address both issues, which we term a conditional generative model (CGM). By adopting an adversarial training scheme applied to a generator of deep dilated convolutional layers, the CGM is designed to model the joint and symmetric conditions of both noisy and estimated clean spectra. We evaluate the CGM against both DNN and LSTM in terms of Perceptual Evaluation of Speech Quality (PESQ) and Short-Time Objective Intelligibility (STOI) on TIMIT sentences corrupted by ITU-T P.501 and NOISEX-92 noise in a range of matched and mismatched noise conditions. Results show that both the CGM architecture and the adversarial training mechanism lead to better PESQ and STOI in all tested noise conditions. In addition to yielding significant improvements in PESQ and STOI, the CGM and adversarial training both mitigate over-smoothing.

11 citations
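The two ingredients of the CGM described above, a generator built from dilated 1-D convolutions and a discriminator conditioned on the noisy input, might look roughly like the PyTorch sketch below. Layer counts, channel widths, and the 257-bin spectrum size are illustrative assumptions, not the paper's architecture.

```python
# Minimal sketch: a dilated-convolution generator mapping noisy spectra to
# enhanced spectra, plus a discriminator that scores (noisy, clean) pairs so
# the adversarial loss is conditioned on both. All sizes are assumptions.
import torch
import torch.nn as nn

class DilatedGenerator(nn.Module):
    def __init__(self, bins=257, channels=64):
        super().__init__()
        layers, in_ch = [], bins
        for dilation in (1, 2, 4, 8):          # growing receptive field over time
            layers += [nn.Conv1d(in_ch, channels, kernel_size=3,
                                 padding=dilation, dilation=dilation),
                       nn.ReLU()]
            in_ch = channels
        layers.append(nn.Conv1d(channels, bins, kernel_size=1))
        self.net = nn.Sequential(*layers)

    def forward(self, noisy):                  # (batch, bins, frames)
        return self.net(noisy)

class PairDiscriminator(nn.Module):
    """Scores a (noisy, clean-or-enhanced) spectrum pair: the conditional
    setup, where D sees both the condition and the sample."""
    def __init__(self, bins=257, channels=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(2 * bins, channels, 3, padding=1), nn.LeakyReLU(0.2),
            nn.Conv1d(channels, 1, 1),
        )

    def forward(self, noisy, clean):
        return self.net(torch.cat([noisy, clean], dim=1)).mean(dim=(1, 2))

noisy = torch.randn(4, 257, 100)
gen, disc = DilatedGenerator(), PairDiscriminator()
enhanced = gen(noisy)
print(enhanced.shape, disc(noisy, enhanced).shape)  # (4, 257, 100), (4,)
```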

Proceedings ArticleDOI
25 Aug 2013
TL;DR: This paper investigates a non-negative matrix factorization (NMF)-based approach to the semi-supervised single-channel speech enhancement problem, where only non-stationary additive noise signals are given.
Abstract: This paper investigates a non-negative matrix factorization (NMF)-based approach to the semi-supervised single-channel speech enhancement problem, where only non-stationary additive noise signals are given. The proposed method relies on a sinusoidal model of speech production, which is integrated into the NMF framework using linear constraints on the dictionary atoms. The method is further developed to regularize harmonic amplitudes, and simple multiplicative algorithms are presented. The experimental evaluation was carried out on the TIMIT corpus mixed with various types of noise. It is shown that the proposed method outperforms some state-of-the-art noise suppression techniques in terms of signal-to-noise ratio.

11 citations
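The NMF machinery underlying this method can be sketched with the standard multiplicative updates (here for the Euclidean cost, after Lee and Seung): factorize a magnitude spectrogram V into a spectral dictionary W and activations H. The paper's actual contributions, sinusoidal linear constraints on the dictionary atoms and harmonic-amplitude regularization, are not reproduced; dimensions and iteration counts are assumptions.

```python
# Minimal sketch: plain NMF with multiplicative updates, V ~= W @ H.
# No sinusoidal constraints or harmonic regularization are included.
import numpy as np

def nmf(V, rank=20, iters=200, eps=1e-9):
    """V: (freq_bins, frames) non-negative magnitude spectrogram."""
    rng = np.random.default_rng(0)
    F, T = V.shape
    W = rng.random((F, rank)) + eps   # dictionary of spectral atoms
    H = rng.random((rank, T)) + eps   # per-frame activations
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # multiplicative update for H
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # multiplicative update for W
    return W, H

V = np.random.default_rng(1).random((257, 300))  # toy "spectrogram"
W, H = nmf(V)
print(np.linalg.norm(V - W @ H) / np.linalg.norm(V))  # relative error
```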

Proceedings ArticleDOI
25 Oct 2020
TL;DR: It is shown that the proposed method outperforms speech enhancement methods based on a Deep Neural Network (DNN) or on a Speech Enhancement Generative Adversarial Network (SEGAN).
Abstract: Speech enhancement is an essential component in robust automatic speech recognition (ASR) systems. Most speech enhancement methods nowadays are based on neural networks that use feature mapping or mask learning. This paper proposes a novel speech enhancement method that integrates time-domain feature mapping and mask learning into a unified framework using a Generative Adversarial Network (GAN). The proposed framework processes the received waveform and decouples speech and noise signals, which are fed into two short-time Fourier transform (STFT) convolution 1-D layers that map the waveforms to spectrograms in the complex domain. These speech and noise spectrograms are then used to compute the speech mask loss. The proposed method is evaluated using the TIMIT data set for seen and unseen signal-to-noise ratio conditions. It is shown that the proposed method outperforms speech enhancement methods based on a Deep Neural Network (DNN) or on a Speech Enhancement Generative Adversarial Network (SEGAN).

11 citations
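One piece of the framework above, an STFT realized as a fixed 1-D convolution whose kernels are the windowed Fourier basis so that waveforms map to complex spectrograms inside the network graph, might be sketched in PyTorch as follows. The window length, hop size, and all names are illustrative assumptions, not the paper's layer design.

```python
# Minimal sketch: an STFT as a fixed Conv1d whose kernels are the Hann-
# windowed Fourier basis, giving real/imaginary spectrogram channels.
import math
import torch
import torch.nn as nn

class ConvSTFT(nn.Module):
    def __init__(self, n_fft=512, hop=128):
        super().__init__()
        window = torch.hann_window(n_fft)
        k = torch.arange(n_fft // 2 + 1).unsqueeze(1)    # frequency bins
        n = torch.arange(n_fft).unsqueeze(0)             # time samples
        angle = 2 * math.pi * k * n / n_fft
        real = (torch.cos(angle) * window).unsqueeze(1)  # (bins, 1, n_fft)
        imag = (-torch.sin(angle) * window).unsqueeze(1)
        self.conv = nn.Conv1d(1, 2 * (n_fft // 2 + 1), n_fft,
                              stride=hop, bias=False)
        self.conv.weight.data.copy_(torch.cat([real, imag], dim=0))
        self.conv.weight.requires_grad_(False)           # fixed transform

    def forward(self, wav):                              # (batch, 1, samples)
        spec = self.conv(wav)                            # (batch, 2*bins, frames)
        real, imag = spec.chunk(2, dim=1)
        return real, imag

wav = torch.randn(2, 1, 16000)                           # 1 s at 16 kHz
real, imag = ConvSTFT()(wav)
magnitude = torch.sqrt(real**2 + imag**2)
print(magnitude.shape)                                   # (2, 257, frames)
```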


Network Information

Related Topics (5)
Recurrent neural network: 29.2K papers, 890K citations (76% related)
Feature (machine learning): 33.9K papers, 798.7K citations (75% related)
Feature vector: 48.8K papers, 954.4K citations (74% related)
Natural language: 31.1K papers, 806.8K citations (73% related)
Deep learning: 79.8K papers, 2.1M citations (72% related)
Performance Metrics

No. of papers in the topic in previous years:

Year    Papers
2023    24
2022    62
2021    67
2020    86
2019    77
2018    95