Open Access

On the Correlation between Perceptual and Contextual Aspects of Laughter in Meetings

TLDR
This article analyzed over 13,000 bouts of laughter in over 65 hours of unscripted, naturally occurring multiparty meetings, to identify discriminative contexts of voiced and unvoiced laughter.
Abstract
We have analyzed over 13,000 bouts of laughter, in over 65 hours of unscripted, naturally occurring multiparty meetings, to identify discriminative contexts of voiced and unvoiced laughter. Our results show that, in meetings, laughter is quite frequent, accounting for almost 10% of all vocal activity effort by time. Approximately a third of all laughter is unvoiced, but meeting participants vary extensively in how often they employ voicing during laughter. In spite of this variability, laughter appears to exhibit robust temporal characteristics. Voiced laughs are on average longer than unvoiced laughs, and appear to correlate with temporally adjacent voiced laughter from other participants, as well as with speech from the laugher. Unvoiced laughter appears to occur independently of vocal activity from other participants.



Citations

Laughter annotations in conversational speech corpora - possibilities and limitations for phonetic analysis

TL;DR: It is found that overlapping laughs are longer in duration and generally more voiced than non-overlapping laughs, and that for a finer-grained acoustic analysis a manual re-labeling of the laughs, adhering to a more standardized laughter annotation protocol, would be optimal.
Proceedings ArticleDOI

Contrasting emotion-bearing laughter types in multiparticipant vocal activity detection for meetings

TL;DR: Evidence is presented which suggests that ignoring unvoiced laughter improves the prediction of emotional involvement in collocated speech, making a case for the distinction between voiced and unvoiced laughter during laughter detection.
Book ChapterDOI

Detection of Laughter-in-Interaction in Multichannel Close-Talk Microphone Recordings of Meetings

TL;DR: A system for the detection of laughter, and its attribution to specific participants, is presented; it relies on simultaneously decoding the vocal activity of all participants given multi-channel recordings, allowing laughter and speech to be disambiguated not only acoustically, but also by constraining the number of simultaneous speakers.

Laughter in Social Robotics – no laughing matter

TL;DR: In this paper, the authors investigated the effects of laughter combined with an android's motion, presented to uninformed participants during playful interaction with another human. They found that the social effect of laughter depends heavily on at least the following three factors: first, the situational context, which is determined not only by the task at hand but also by linguistic content as well as non-verbal expressions; second, the type and quality of laughter synthesis in combination with the artificial laugher's outer appearance; and third, the interaction dynamics, which partly depend on a perce…
References
Journal ArticleDOI

A Coefficient of Agreement for Nominal Scales

TL;DR: In this article, the authors present a procedure for having two or more judges independently categorize a sample of units, and for determining the degree and statistical significance of their agreement while correcting for the agreement expected by chance.
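The chance-corrected agreement statistic this reference introduces is Cohen's kappa. A minimal sketch of how it is computed for two judges assigning nominal labels (the annotator data here are invented for illustration; the `cohens_kappa` helper is not from the paper):

```python
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa for two raters assigning nominal categories to the same units."""
    assert len(ratings_a) == len(ratings_b)
    n = len(ratings_a)
    # Observed agreement: fraction of units both raters labeled identically.
    po = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Expected chance agreement: from each rater's marginal category frequencies.
    freq_a = Counter(ratings_a)
    freq_b = Counter(ratings_b)
    pe = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n ** 2
    # Kappa rescales observed agreement so that chance agreement maps to 0.
    return (po - pe) / (1 - pe)

# Two hypothetical annotators labeling 10 laughter bouts as voiced (V) or unvoiced (U):
a = ["V", "V", "U", "V", "U", "V", "V", "U", "V", "U"]
b = ["V", "V", "U", "U", "U", "V", "V", "U", "V", "V"]
print(round(cohens_kappa(a, b), 3))  # → 0.583
```

Here the raters agree on 8 of 10 bouts (po = 0.8), but with matching 6/4 marginals chance alone predicts pe = 0.52, so kappa lands well below the raw agreement rate.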
Journal ArticleDOI

Induction of Decision Trees

J. R. Quinlan
25 Mar 1986
TL;DR: This paper describes an approach to synthesizing decision trees that has been used in a variety of systems, presents one such system, ID3, in detail, and discusses a reported shortcoming of the basic algorithm.

Proceedings ArticleDOI

The ICSI Meeting Corpus

TL;DR: A corpus of data from natural meetings that occurred at the International Computer Science Institute in Berkeley, California over a three-year period has been collected; it supports work in automatic speech recognition, noise robustness, dialog modeling, prosody, rich transcription, information retrieval, and more.
Book ChapterDOI

The AMI meeting corpus: a pre-announcement

TL;DR: The AMI Meeting Corpus is a multi-modal data set consisting of 100 hours of meeting recordings, created in the context of a project that is developing meeting browsing technology; it will eventually be released publicly.