Journal ArticleDOI

Hearing smiles and frowns in normal and whisper registers.

TLDR
Listeners' abilities to detect facial expression in unfamiliar speech were measured in normal and whisper registers; judgments of frowning and of relative happiness were significantly poorer for lip-rounded vowels, suggesting that listeners may recover lip protrusion in making these judgments.
Abstract
Two experiments measured listeners’ abilities to detect facial expression in unfamiliar speech in normal and whisper registers. Acoustic differences between speech produced with neutral or marked facial expression were also assessed. Experiment 1 showed that in a forced‐choice identification task, listeners could accurately select frowned speech as such, and neutral speech as happier sounding than frowned speech in the same speakers. Listeners were able to judge frowning in the same speakers’ whispered speech. Relative to neutral speech, frowning lowers formant frequencies and increases syllable duration. In both registers, judgments of frowning and its relative happiness were significantly poorer for lip‐rounded vowels, suggesting that listeners may recover lip protrusion in making judgments. Experiment 2 replicated the finding [V. Tartter, Percept. Psychophys. 27, 24–27 (1980)] that listeners can select speech produced with a smile as happier sounding than neutral speech in normal register, and extended the findings to whisper register. Relative to neutral, smiling increased second formant frequency. Results are discussed with respect to nonverbal auditory emotion prototypes and with respect to the direct realist theory of speech perception.


Citations
Journal ArticleDOI

Communication of emotions in vocal expression and music performance: different channels, same code?

TL;DR: A review of 104 studies of vocal expression and 41 studies of music performance reveals similarities between the two channels in (a) the accuracy with which discrete emotions were communicated to listeners and (b) the emotion-specific patterns of acoustic cues used to communicate each emotion.
Journal ArticleDOI

The evolution of speech: a comparative review

TL;DR: Comparative analysis of living species provides a viable alternative to fossil data for understanding the evolution of speech, but the neural basis for vocal mimicry, and for mimesis in general, remains unknown.
Journal ArticleDOI

Vocal tract length and formant frequency dispersion correlate with body size in rhesus macaques

TL;DR: Formant dispersion, the averaged difference between successive formant frequencies, was found to be closely tied to both vocal tract length and body size in macaques, and probably in many other species.
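The definition above reduces to simple arithmetic: averaging the gaps between successive formants. A minimal sketch, using illustrative frequencies rather than measured macaque data:

```python
def formant_dispersion(formants):
    """Mean spacing (Hz) between successive formant frequencies,
    given formants sorted in ascending order."""
    diffs = [hi - lo for lo, hi in zip(formants, formants[1:])]
    return sum(diffs) / len(diffs)

# Four evenly spaced formants at 500, 1500, 2500, 3500 Hz
# (hypothetical values for illustration):
print(formant_dispersion([500.0, 1500.0, 2500.0, 3500.0]))  # 1000.0
```

Note that for evenly indexed formants the mean of successive differences telescopes to (F_n - F_1) / (n - 1), which is why a longer vocal tract, which compresses all formant spacings, lowers dispersion.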
Journal ArticleDOI

The perception of emotions by ear and by eye

TL;DR: In this paper, the authors used a bimodal perception situation modelled after the McGurk paradigm, in which varying degrees of discordance can be created between the affects expressed in a face and in a tone of voice.
Journal ArticleDOI

Calls out of chaos: the adaptive significance of nonlinear phenomena in mammalian vocal production

TL;DR: It is suggested that nonlinear phenomena may subserve individual recognition and the estimation of size or fluctuating asymmetry from vocalizations, and neurally ‘cheap’ unpredictability may serve the valuable adaptive function of making chaotic calls difficult to predict and ignore.