
Shrikanth S. Narayanan

Researcher at University of Southern California

Publications - 1164
Citations - 37134

Shrikanth S. Narayanan is an academic researcher from the University of Southern California. He has contributed to research in topics including computer science and speech processing. The author has an h-index of 83 and has co-authored 1087 publications receiving 31812 citations. Previous affiliations of Shrikanth S. Narayanan include the University of Pennsylvania and the Steel Authority of India.

Papers
Journal Article

IEMOCAP: interactive emotional dyadic motion capture database

TL;DR: A new corpus named the "interactive emotional dyadic motion capture database" (IEMOCAP), collected by the Speech Analysis and Interpretation Laboratory at the University of Southern California (USC), provides detailed information about the actors' facial expressions and hand movements during scripted and spontaneous spoken communication scenarios.
Journal Article

The Geneva Minimalistic Acoustic Parameter Set (GeMAPS) for Voice Research and Affective Computing

TL;DR: A basic standard acoustic parameter set is proposed for various areas of automatic voice analysis, such as paralinguistic or clinical speech analysis; it is intended to provide a common baseline for evaluating future research and to eliminate differences caused by varying parameter sets or by different implementations of the same parameters.
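
For illustration, a minimal sketch of extracting GeMAPS-family parameters with audEERING's open-source opensmile Python package, which implements these parameter sets; the audio file name is a placeholder:

```python
# Minimal sketch: extract utterance-level eGeMAPS functionals from a WAV file
# with the opensmile Python package. "utterance.wav" is a placeholder path.
import opensmile

smile = opensmile.Smile(
    feature_set=opensmile.FeatureSet.eGeMAPSv02,       # extended GeMAPS parameter set
    feature_level=opensmile.FeatureLevel.Functionals,  # per-utterance statistics
)

features = smile.process_file("utterance.wav")  # pandas DataFrame, one row per file
print(features.shape)  # eGeMAPSv02 functionals comprise 88 acoustic parameters
```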
Journal Article

Toward detecting emotions in spoken dialogs

TL;DR: This paper explores the detection of domain-specific emotions using language and discourse information in conjunction with acoustic correlates of emotion in speech signals, through a case study of detecting negative and non-negative emotions in spoken language data obtained from a call center application.
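
A hypothetical sketch of the general idea of combining acoustic and language cues for negative / non-negative classification; feature extraction is stubbed with random vectors, and scikit-learn's LogisticRegression stands in for the classifiers actually used:

```python
# Hypothetical sketch: feature-level combination of acoustic and lexical cues.
# Real systems would extract these features from speech and transcripts.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_utts = 200
acoustic = rng.normal(size=(n_utts, 32))  # stand-in for pitch/energy statistics
lexical = rng.normal(size=(n_utts, 50))   # stand-in for salient-word indicators
labels = rng.integers(0, 2, size=n_utts)  # 0 = non-negative, 1 = negative

X = np.hstack([acoustic, lexical])        # concatenate the two feature streams
clf = LogisticRegression(max_iter=1000).fit(X, labels)
print(clf.score(X, labels))               # training accuracy on the toy data
```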
Proceedings Article

Analysis of emotion recognition using facial expressions, speech and multimodal information

TL;DR: Results reveal that the system based on facial expressions performed better than the system based on acoustic information alone for the emotions considered, and that fusing the two modalities measurably improves both the performance and the robustness of the emotion recognition system.
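
A minimal sketch of decision-level fusion of the general kind compared in such systems, assuming per-modality class posteriors are already available; the posterior values and modality weight below are illustrative placeholders:

```python
# Minimal sketch: weighted-sum (late) fusion of per-modality posteriors.
# All numbers here are illustrative, not taken from the paper.
import numpy as np

EMOTIONS = ["anger", "sadness", "happiness", "neutral"]

p_face = np.array([0.10, 0.15, 0.60, 0.15])    # posterior from the face classifier
p_speech = np.array([0.20, 0.30, 0.35, 0.15])  # posterior from the speech classifier

w = 0.6  # weight on the facial modality (assumed; tuned on held-out data)
p_fused = w * p_face + (1 - w) * p_speech      # weighted-sum fusion
p_fused /= p_fused.sum()                       # renormalize to a distribution

print(EMOTIONS[int(np.argmax(p_fused))])       # fused decision: "happiness"
```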
Journal Article

Acoustics of children's speech: developmental changes of temporal and spectral parameters.

TL;DR: The results confirm that the reduction in magnitude and within-subject variability of both temporal and spectral acoustic parameters with age is a major trend associated with speech development in normal children, and support the hypothesis of uniform axial growth of the vocal tract for male speakers.