
Sivanand Achanta

Researcher at International Institute of Information Technology, Hyderabad

Publications -  13
Citations -  227

Sivanand Achanta is an academic researcher from the International Institute of Information Technology, Hyderabad. The author has contributed to research on the topics of Recurrent neural network and Time delay neural network. The author has an h-index of 6 and has co-authored 13 publications receiving 191 citations.

Papers
Journal ArticleDOI

Query-by-example spoken term detection using frequency domain linear prediction and non-segmental dynamic time warping

TL;DR: A variant of the DTW-based algorithm, referred to as non-segmental DTW (NS-DTW), is used, with a computational upper bound of O(mn). The performance of QbE-STD is analyzed with Gaussian posteriorgrams obtained from spectral and temporal features of the speech signal, showing that frequency-domain linear prediction cepstral coefficients can be used as an alternative to traditional spectral parameters.
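The NS-DTW variant above builds on standard dynamic time warping. A minimal sketch of the underlying O(mn) DTW recursion, assuming Euclidean frame distances (illustrative only; the paper's non-segmental variant and posteriorgram distances differ):

```python
import numpy as np

def dtw(query, reference):
    """Cumulative DTW alignment cost between two feature sequences.

    query:     (m, d) array of frame-level features
    reference: (n, d) array of frame-level features
    Returns the total alignment cost; fills an (m+1, n+1) cost
    matrix, hence O(mn) time and space.
    """
    m, n = len(query), len(reference)
    # Pairwise frame distances (Euclidean here; posteriorgram-based
    # QbE-STD systems typically use a log-cosine distance instead).
    dist = np.linalg.norm(query[:, None, :] - reference[None, :, :], axis=-1)
    D = np.full((m + 1, n + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            D[i, j] = dist[i - 1, j - 1] + min(
                D[i - 1, j],      # insertion
                D[i, j - 1],      # deletion
                D[i - 1, j - 1],  # match
            )
    return D[m, n]
```

Aligning a sequence with itself costs zero, and the cost grows with how far the two sequences diverge.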
Proceedings ArticleDOI

SFF Anti-Spoofer: IIIT-H Submission for Automatic Speaker Verification Spoofing and Countermeasures Challenge 2017.

TL;DR: The experimental results on the ASVspoof 2017 dataset reveal that SFF-based representation is very effective in detecting replay attacks, and that score-level fusion of the back-end classifiers further improves the performance of the system, which indicates that both classifiers capture complementary information.
Proceedings ArticleDOI

An Investigation of Deep Neural Network Architectures for Language Recognition in Indian Languages.

TL;DR: In this paper, deep neural networks are investigated for language identification in Indian languages, and an attention-mechanism-based DNN architecture is proposed for utterance-level classification, thereby efficiently making use of context.
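The attention mechanism described above pools frame-level network outputs into a single utterance-level representation. A minimal sketch of such attention pooling, assuming a single learned scoring vector (names are illustrative, not the paper's exact architecture):

```python
import numpy as np

def attention_pool(frames, w):
    """Attention-weighted pooling of frame-level features.

    frames: (T, d) frame-level network outputs for one utterance
    w:      (d,) learned attention (scoring) vector
    Returns a (d,) utterance-level embedding: a softmax-weighted
    average of the frames, so informative frames contribute more.
    """
    scores = frames @ w                # (T,) per-frame relevance scores
    scores = scores - scores.max()     # shift for numerical stability
    alpha = np.exp(scores) / np.exp(scores).sum()  # softmax weights
    return alpha @ frames              # weighted sum over time
```

The resulting fixed-size vector can then be passed to a classifier over the language labels, regardless of utterance length.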
Proceedings ArticleDOI

An investigation of recurrent neural network architectures for statistical parametric speech synthesis.

TL;DR: It is shown that the clockwork RNN is equivalent to an Elman RNN with a particular form of leaky integration (LI), and this perspective explains why a simple Elman RNN with LI units performs well on sequential tasks.
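An Elman recurrent unit with leaky integration updates its hidden state as a per-unit convex combination of the previous state and a new candidate; units with small leak rates update slowly, which is the behavior a slow clockwork-RNN module exhibits. A minimal sketch of one such step, with assumed weight shapes (not the paper's exact parameterization):

```python
import numpy as np

def elman_li_step(x, h_prev, Wx, Wh, b, leak):
    """One time step of an Elman RNN with leaky-integration units.

    x:      (d_in,) input at this step
    h_prev: (d_h,) previous hidden state
    Wx:     (d_h, d_in) input weights
    Wh:     (d_h, d_h) recurrent weights
    b:      (d_h,) bias
    leak:   (d_h,) per-unit leak rates in [0, 1]; leak near 0 keeps
            the old state (a slow unit), leak of 1 recovers the
            plain Elman update.
    """
    candidate = np.tanh(Wx @ x + Wh @ h_prev + b)
    return (1.0 - leak) * h_prev + leak * candidate
```

With leak = 1 for every unit this reduces to the standard Elman recurrence, while heterogeneous leak rates give the multi-timescale behavior the equivalence argument relies on.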
Journal ArticleDOI

Deep Elman recurrent neural networks for statistical parametric speech synthesis

TL;DR: Deep Elman RNNs are better suited for acoustic modeling in SPSS than DNNs and perform competitively with deep LSTMs and GRUs, and context representation learning using Elman RNNs improves neural network acoustic models for SPSS.