SciSpace (formerly Typeset)

A. Shahina

Researcher at Sri Sivasubramaniya Nadar College of Engineering

Publications -  31
Citations -  288

A. Shahina is an academic researcher from Sri Sivasubramaniya Nadar College of Engineering. The author has contributed to research topics including throat microphones and microphones. The author has an h-index of 8 and has co-authored 27 publications receiving 191 citations. Previous affiliations of A. Shahina include Anna University and the Indian Institute of Technology Madras.

Papers
Journal ArticleDOI

Deep learning approach to detect seizure using reconstructed phase space images

TL;DR: The results of the proposed approach show the promise of employing reconstructed phase space (RPS) images with a convolutional neural network (CNN) for predicting epileptic seizures; the CNN model outperforms existing statistical approaches on all performance indicators: accuracy, sensitivity, and specificity.
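A reconstructed phase space is built by time-delay embedding of the signal, and the resulting 2-D trajectory can be rasterized into an image for a CNN. The sketch below shows one minimal way to do this; the delay, embedding dimension, and image size here are illustrative placeholders, not the parameters used in the paper.

```python
import numpy as np

def reconstructed_phase_space(signal, delay=4, dim=2):
    """Time-delay embedding: each point is (x[t], x[t+delay], ..., x[t+(dim-1)*delay])."""
    n = len(signal) - (dim - 1) * delay
    return np.stack([signal[i * delay : i * delay + n] for i in range(dim)], axis=1)

def rps_image(signal, delay=4, bins=32):
    """Rasterize the 2-D phase-space trajectory into a bins x bins occupancy image."""
    pts = reconstructed_phase_space(signal, delay=delay, dim=2)
    img, _, _ = np.histogram2d(pts[:, 0], pts[:, 1], bins=bins)
    return img / img.max()  # normalize to [0, 1] before feeding a CNN
```

For a periodic signal the trajectory traces a closed orbit, so the image concentrates mass along a loop; seizure and non-seizure EEG segments produce visibly different occupancy patterns, which is what the CNN learns to discriminate.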
Journal ArticleDOI

Mapping speech spectra from throat microphone to close-speaking microphone: a neural network approach

TL;DR: A neural network model is used to capture the speaker-dependent functional relationship between the feature vectors (cepstral coefficients) of the two speech signals and a method is proposed to ensure the stability of the all-pole synthesis filter.
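When the mapped cepstral coefficients are converted back to an all-pole synthesis filter, the estimated filter can end up with poles outside the unit circle. One standard way to guarantee stability, sketched below with numpy, is to reflect any such poles back inside the circle; whether the paper uses exactly this mechanism is not stated in the abstract, so treat this as an illustrative assumption.

```python
import numpy as np

def stabilize_allpole(a):
    """Return a stable all-pole denominator by reflecting poles inside the unit circle.

    a: denominator coefficients [1, a1, ..., ap] of the synthesis filter 1/A(z).
    Reflecting a pole p to 1/conj(p) preserves the magnitude response shape.
    """
    poles = np.roots(a)
    poles = np.where(np.abs(poles) > 1.0, 1.0 / np.conj(poles), poles)
    return np.real(np.poly(poles))
```

A stable filter is left untouched, since all its poles already lie inside the unit circle.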
Proceedings ArticleDOI

G-Eyenet: A Convolutional Autoencoding Classifier Framework for the Detection of Glaucoma from Retinal Fundus Images

TL;DR: A novel deep learning multi-model network termed G-EyeNet for glaucoma detection from retinal fundus images that is jointly optimized for minimizing both image reconstruction error and the classification error based on a multi-task learning procedure.
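Joint optimization of reconstruction and classification amounts to minimizing a weighted sum of the two losses. The numpy sketch below shows the shape of such a multi-task objective; the MSE/cross-entropy choice and the weighting parameter `alpha` are illustrative assumptions, not the exact losses reported for G-EyeNet.

```python
import numpy as np

def multitask_loss(x, x_hat, y_true, y_prob, alpha=0.5):
    """Weighted sum of autoencoder reconstruction loss and classifier loss.

    x, x_hat: original and reconstructed inputs (reconstruction branch).
    y_true, y_prob: binary labels and predicted probabilities (classifier branch).
    alpha: illustrative trade-off weight between the two tasks.
    """
    recon = np.mean((x - x_hat) ** 2)                     # reconstruction error (MSE)
    ce = -np.mean(y_true * np.log(y_prob)
                  + (1 - y_true) * np.log(1 - y_prob))    # binary cross-entropy
    return alpha * recon + (1 - alpha) * ce
```

Training the encoder against both terms pushes its latent representation to retain enough detail to rebuild the fundus image while staying discriminative for glaucoma vs. normal.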
Proceedings ArticleDOI

Language identification in noisy environments using throat microphone signals

TL;DR: The results of this study show that the throat microphone speech-based language identification system performs well in noisy environments.
Proceedings ArticleDOI

Combining spectral features of standard and Throat Microphones for speaker identification

TL;DR: By combining the evidence from both the NM and TM based systems using late integration, an improvement in performance is observed from about 91% (obtained using NM features alone) to 94% (NM and TM combined).
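Late integration combines the two systems at the score level rather than the feature level: each system scores every enrolled speaker independently, and the per-speaker scores are merged before the final decision. A minimal sketch, assuming a simple weighted sum of scores (the specific combination rule and weight are illustrative, not taken from the paper):

```python
import numpy as np

def late_fusion(scores_nm, scores_tm, weight=0.5):
    """Fuse per-speaker scores from the normal-mic (NM) and throat-mic (TM) systems.

    scores_nm, scores_tm: one score per enrolled speaker from each system.
    weight: illustrative trade-off between the two systems' evidence.
    Returns the index of the speaker with the highest combined score.
    """
    combined = weight * np.asarray(scores_nm) + (1 - weight) * np.asarray(scores_tm)
    return int(np.argmax(combined))
```

Fusion helps when the two systems make uncorrelated errors: a speaker ranked second by the NM system alone can still win once the TM evidence is added in.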