Journal ArticleDOI

The acoustic features of human laughter

TLDR
Recordings of 1024 naturally produced laugh bouts from 97 young adults watching funny video clips revealed evident diversity in production modes, remarkable variability in fundamental frequency characteristics, and a consistent lack of articulation effects in supralaryngeal filtering.
Abstract
Remarkably little is known about the acoustic features of laughter. Here, acoustic outcomes are reported for 1024 naturally produced laugh bouts recorded from 97 young adults as they watched funny video clips. Analyses focused on temporal features, production modes, source- and filter-related effects, and indexical cues to laugher sex and individual identity. Although a number of researchers have previously emphasized stereotypy in laughter, its acoustics were instead found to be variable and complex. Among the variety of findings reported, evident diversity in production modes, remarkable variability in fundamental frequency characteristics, and a consistent lack of articulation effects in supralaryngeal filtering are of particular interest. In addition, formant-related filtering effects were found to be disproportionately important as acoustic correlates of laugher sex and individual identity. These outcomes are examined in light of existing data concerning laugh acoustics, as well as a number of hypotheses and conjectures previously advanced about this species-typical vocal signal.


Citations
Proceedings ArticleDOI

A Multimodal Predictive Model of Successful Debaters or How I Learned to Sway Votes

TL;DR: A database of over 30 debates, with four speakers per debate, is created for public-speaking skill analysis, and multimodal machine learning approaches are shown to reliably predict which individual or team will win the most votes in a debate.
Journal ArticleDOI

Laughter Classification Using Deep Rectifier Neural Networks with a Minimal Feature Subset

TL;DR: This study applies deep neural networks to laughter detection, as this technology is now considered state of the art in similar tasks such as phoneme identification.
Journal ArticleDOI

Hillary Clinton's Laughter in Media Interviews

Daniel C. O’Connell, +1 more
01 Jan 2004
TL;DR: In this paper, a corpus of laughter from four TV and two radio interviews of Hillary Clinton was analyzed by means of the PRAAT software (www.praat.org) for acoustic measurements.

Evaluating automatic laughter segmentation in meetings using acoustic and acoustic-phonetic features

TL;DR: The results show that the acoustic-phonetic features perform relatively well given their sparseness, and it is believed that incorporating phonetic knowledge could improve the automatic laughter detector.
Book ChapterDOI

Coding of Static Information in Terrestrial Mammal Vocal Signals

TL;DR: The body of research reviewed in this chapter illustrates how combining observational, acoustic, experimental, and comparative approaches enables researchers to draw general conclusions about the selection pressures driving the evolution of vocal production and perception in mammals.
References
Journal ArticleDOI

Control Methods Used in a Study of the Vowels

TL;DR: Discusses the control methods used in a vowel study program at Bell Telephone Laboratories to evaluate the effects of language and dialectal backgrounds and of the vocal and auditory characteristics of the individuals concerned.
Book

A course in phonetics

TL;DR: In this book, the authors introduce articulatory phonetics, phonology, and phonetic transcription, covering the consonants of English, English vowels, and English words and sentences, as well as the International Phonetic Alphabet, the feature hierarchy, and performance exercises.
Journal ArticleDOI

Acoustic characteristics of American English vowels

TL;DR: Analysis of the formant data shows numerous differences between the present data and those of PB, both in terms of average frequencies of F1 and F2, and the degree of overlap among adjacent vowels.
Journal ArticleDOI

Sound on the rebound: Bringing form and function back to the forefront in understanding nonhuman primate vocal signaling

TL;DR: This review examines some difficulties engendered by a linguistically inspired, meaning-based view of primate calls, specifically that vocalizations are arbitrarily structured vehicles for transmitting encoded referential information, and suggests two ways in which acoustic structure may be tied to simple, nonlinguistic functions in primate vocalizations.
Journal ArticleDOI

Subharmonics, biphonation, and deterministic chaos in mammal vocalization

TL;DR: It is suggested that a variety of nonlinear phenomena including subharmonics, biphonation, and deterministic chaos are normally occurring phonatory events in mammalian vocalizations.