
A. Revathi

Researcher at Shanmugha Arts, Science, Technology & Research Academy

Publications -  33
Citations -  254

A. Revathi is an academic researcher from Shanmugha Arts, Science, Technology & Research Academy. The author has contributed to research in topics: Speaker recognition & Feature (machine learning). The author has an h-index of 8, has co-authored 30 publications receiving 214 citations. Previous affiliations of A. Revathi include National Institute of Technology, Tiruchirappalli.

Papers

Text Independent Speaker Recognition and Speaker Independent Speech Recognition Using Iterative Clustering Approach

TL;DR: The paper emphasizes clustering models built from the training data, achieving accuracies of 91%, 91%, and 99.5% with the mel-frequency perceptual linear predictive cepstrum for speaker identification, isolated digit recognition, and continuous speech recognition, respectively.
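For context, iterative clustering of cepstral feature vectors is commonly realized as k-means vector quantization: each speaker (or word) gets its own codebook, and a test utterance is assigned to the class whose codebook quantizes it with the lowest distortion. A minimal sketch of that generic scheme (not the paper's exact algorithm; function names and the choice of k are illustrative):

```python
import numpy as np

def kmeans_codebook(features, k=16, iters=20, seed=0):
    """Build a k-entry codebook from feature vectors via plain k-means.

    features: (n_frames, dim) array of cepstral vectors for one class
    (speaker or word). Returns the (k, dim) codebook centroids.
    """
    rng = np.random.default_rng(seed)
    codebook = features[rng.choice(len(features), k, replace=False)]
    for _ in range(iters):
        # Assign each frame to its nearest codeword (Euclidean distance).
        d = np.linalg.norm(features[:, None, :] - codebook[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Re-estimate each centroid; keep the old one if its cell is empty.
        for j in range(k):
            if np.any(labels == j):
                codebook[j] = features[labels == j].mean(axis=0)
    return codebook

def quantization_distortion(features, codebook):
    """Mean distance to the nearest codeword; at test time the class
    whose codebook yields the lowest distortion is chosen."""
    d = np.linalg.norm(features[:, None, :] - codebook[None, :, :], axis=2)
    return d.min(axis=1).mean()
```

At test time, an utterance's distortion is computed against every class codebook and the minimum decides the label.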
Proceedings ArticleDOI

Speaker independent continuous speech and isolated digit recognition using VQ and HMM

TL;DR: Perceptual linear predictive cepstrum yields accuracies of 86% and 93% for speaker-independent isolated digit recognition using VQ alone and a combination of VQ and HMM speech models, respectively.
Journal ArticleDOI

Robust emotion recognition from speech: Gamma tone features and models

TL;DR: Energy features are selected effectively and efficiently by passing the speech through gammatone filters spaced on the equivalent rectangular bandwidth (ERB), MEL, and BARK scales; combined with the modelling techniques, these provide complementary evidence for assessing the system's performance.
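As background on ERB spacing: gammatone filterbanks typically place center frequencies at equal steps on the ERB-rate scale, for which the standard Glasberg–Moore approximation is ERB-rate(f) = 21.4 log10(4.37 f/1000 + 1). A sketch using that textbook formula (the paper's exact filterbank parameters are not given here; the frequency range below is an assumption):

```python
import numpy as np

def erb_center_frequencies(n_filters, f_low=100.0, f_high=8000.0):
    """Center frequencies (Hz) equally spaced on the ERB-rate scale.

    Uses the Glasberg-Moore approximation:
        ERB-rate(f) = 21.4 * log10(4.37 * f / 1000 + 1)
    and its closed-form inverse to map equal ERB-rate steps back to Hz.
    """
    def hz_to_erb_rate(f):
        return 21.4 * np.log10(4.37 * f / 1000.0 + 1.0)

    def erb_rate_to_hz(e):
        return (10.0 ** (e / 21.4) - 1.0) * 1000.0 / 4.37

    erb_lo, erb_hi = hz_to_erb_rate(f_low), hz_to_erb_rate(f_high)
    # Equal steps on the ERB-rate axis become increasingly wide steps in Hz,
    # mimicking the ear's coarser frequency resolution at high frequencies.
    return erb_rate_to_hz(np.linspace(erb_lo, erb_hi, n_filters))
```

MEL- and BARK-spaced banks follow the same pattern with their own scale and inverse functions.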
Proceedings ArticleDOI

Text Independent Composite Speaker Identification/Verification Using Multiple Features

TL;DR: In this work, the F-ratio is computed as a theoretical measure to validate the experimental results for both composite speaker identification and verification.
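For reference, the F-ratio in speaker-recognition feature analysis is conventionally the variance of the per-speaker feature means divided by the average within-speaker variance: features with a high F-ratio separate speakers well. A minimal sketch of that standard definition (not necessarily the paper's exact formulation):

```python
import numpy as np

def f_ratio(class_features):
    """F-ratio for one feature dimension across classes (speakers).

    class_features: list of 1-D arrays, one per speaker, holding that
    speaker's values of a single feature. Returns
    variance of class means / mean of within-class variances;
    higher values indicate better speaker-discriminating features.
    """
    means = np.array([c.mean() for c in class_features])
    within = np.array([c.var() for c in class_features])
    return means.var() / within.mean()
```

Computing this per cepstral coefficient gives a cheap theoretical ranking of features to compare against empirical identification/verification accuracy.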
Proceedings ArticleDOI

Speech recognition of deaf and hard of hearing people using hybrid neural network

TL;DR: This paper describes isolated word recognition for deaf students using a hybrid of unsupervised and supervised neural networks, combining a self-organizing feature map (SOFM) with a back-propagation network (BPN) for recognition.