Farshad Almasganj

Researcher at Amirkabir University of Technology

Publications - 78
Citations - 731

Farshad Almasganj is an academic researcher at Amirkabir University of Technology. The author has contributed to research in the topics of feature extraction and hidden Markov models. The author has an h-index of 14 and has co-authored 78 publications receiving 599 citations.

Papers
Journal Article

Optimal selection of wavelet-packet-based features using genetic algorithm in pathological assessment of patients' speech signal with unilateral vocal fold paralysis

TL;DR: The embedded entropy feature, compared with the energy feature, gives a more efficient description of such pathological voices and provides a valuable tool for the clinical diagnosis of unilateral laryngeal paralysis.
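
As a concrete illustration of the energy-versus-entropy comparison above, here is a minimal sketch of computing both feature types over wavelet-packet sub-bands. The use of PyWavelets, the db4 wavelet, a 4-level decomposition, a Shannon-style entropy, and the synthetic test frame are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch (assumptions: PyWavelets, 'db4' wavelet, 4 decomposition levels):
# per-sub-band energy and entropy features from a wavelet-packet decomposition.
import numpy as np
import pywt

def wavelet_packet_features(signal, wavelet="db4", level=4):
    """Return per-sub-band energy and Shannon-style entropy features."""
    wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, maxlevel=level)
    energies, entropies = [], []
    for node in wp.get_level(level, order="freq"):
        coeffs = np.asarray(node.data)
        energy = np.sum(coeffs ** 2)
        # Normalize squared coefficients into a probability-like distribution.
        p = coeffs ** 2 / (energy + 1e-12)
        entropies.append(-np.sum(p * np.log2(p + 1e-12)))
        energies.append(energy)
    return np.array(energies), np.array(entropies)

# Example: features of a noisy synthetic frame standing in for a voice signal.
t = np.linspace(0, 1, 1024, endpoint=False)
frame = np.sin(2 * np.pi * 120 * t) + 0.3 * np.random.randn(t.size)
energy_feats, entropy_feats = wavelet_packet_features(frame)
print(energy_feats.shape, entropy_feats.shape)  # 16 sub-bands each at level 4
```
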
Journal Article

Pathological assessment of patients’ speech signals using nonlinear dynamical analysis

TL;DR: The performance of nonlinear dynamics and acoustical perturbation features is evaluated in order to distinguish patients with vocal fold disorders from normal cases and to compare the effectiveness of the two approaches.
Journal Article

Support vector wavelet adaptation for pathological voice assessment

TL;DR: A wavelet-based method for distinguishing between normal and disordered voices is proposed, and it is observed that a genetic algorithm can find filter-bank parameters that achieve a 100% correct classification rate between normal and pathological voices.
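
To illustrate the role of the genetic algorithm mentioned above, the sketch below uses a simplified stand-in: rather than adapting the wavelet filter bank itself as the paper does, it evolves a binary mask selecting a subset of precomputed wavelet features and scores each mask by the cross-validated accuracy of an SVM. The population size, generation count, mutation rate, and synthetic data are assumptions made purely for illustration.

```python
# Simplified stand-in: GA-driven feature selection scored by an SVM
# (the paper adapts the wavelet filter bank itself; this only evolves a mask).
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def fitness(mask, X, y):
    """Cross-validated SVM accuracy using only the sub-bands selected by `mask`."""
    if not mask.any():
        return 0.0
    clf = SVC(kernel="rbf", C=1.0, gamma="scale")
    return cross_val_score(clf, X[:, mask], y, cv=5).mean()

def ga_select(X, y, pop_size=20, generations=20, p_mut=0.05):
    n_feat = X.shape[1]
    pop = rng.random((pop_size, n_feat)) < 0.5                    # random binary masks
    for _ in range(generations):
        scores = np.array([fitness(ind, X, y) for ind in pop])
        parents = pop[np.argsort(scores)[::-1][: pop_size // 2]]  # keep the best half
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = parents[rng.integers(len(parents), size=2)]    # pick two parents
            cut = rng.integers(1, n_feat)                         # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            child ^= rng.random(n_feat) < p_mut                   # bit-flip mutation
            children.append(child)
        pop = np.vstack([parents, children])
    scores = np.array([fitness(ind, X, y) for ind in pop])
    return pop[np.argmax(scores)], scores.max()

# Synthetic data standing in for wavelet-packet features of voice frames.
X = rng.standard_normal((120, 16))
y = (X[:, 3] + 0.5 * X[:, 7] > 0).astype(int)   # two "informative" sub-bands
mask, acc = ga_select(X, y)
print("selected sub-bands:", np.flatnonzero(mask), "CV accuracy:", round(acc, 3))
```
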
Journal Article

A Gaussian mixture model based cost function for parameter estimation of chaotic biological systems

TL;DR: A novel cost function is proposed to overcome the limitations of conventional cost functions by building a statistical model of the distribution of the real system's attractor in state space: the likelihood score of a Gaussian mixture model (GMM) fitted to the observed attractor generated by the real system is used.
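
The GMM-based cost described above can be pictured with a small sketch: fit a Gaussian mixture to points of the observed attractor, then score candidate parameters by the negative mean log-likelihood of the attractor they generate. The specific choices below, namely the logistic map standing in for the chaotic biological system, a 2-D delay embedding as the state-space attractor, scikit-learn's GaussianMixture with 8 components, and a coarse grid search, are illustrative assumptions rather than the paper's actual setup.

```python
# Minimal sketch: GMM-likelihood cost for parameter estimation of a chaotic map.
# Assumptions (not from the paper): logistic map, 2-D delay embedding, 8-component GMM.
import numpy as np
from sklearn.mixture import GaussianMixture

def logistic_trajectory(r, n=2000, x0=0.3, burn_in=200):
    """Iterate the logistic map x_{k+1} = r * x_k * (1 - x_k)."""
    x, out = x0, []
    for k in range(n + burn_in):
        x = r * x * (1.0 - x)
        if k >= burn_in:
            out.append(x)
    return np.array(out)

def embed(traj):
    """2-D delay embedding (x_k, x_{k+1}) approximating the attractor."""
    return np.column_stack([traj[:-1], traj[1:]])

# "Real" system with unknown parameter r_true; fit a GMM to its observed attractor.
r_true = 3.9
observed = embed(logistic_trajectory(r_true))
gmm = GaussianMixture(n_components=8, covariance_type="full", random_state=0)
gmm.fit(observed)

def cost(r_candidate):
    """Negative mean log-likelihood of the candidate attractor under the GMM."""
    candidate = embed(logistic_trajectory(r_candidate))
    return -gmm.score(candidate)   # score() = mean log-likelihood per sample

# Coarse grid search over the parameter; the minimum should sit near r_true.
grid = np.linspace(3.85, 3.99, 29)
best = grid[np.argmin([cost(r) for r in grid])]
print("estimated r:", round(best, 3), "true r:", r_true)
```
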
Proceedings Article

Lip-reading via a DNN-HMM hybrid system using combination of the image-based and model-based features

TL;DR: The results indicate that the high-level information extracted from the deep layers of the lips ROI can represent the visual modality with the advantage of “a high amount of information in a low-dimensional feature vector”.